
Tail bounds for detection times in mobile hyperbolic graphs

Marcos Kiwi, Depto. de Ingeniería Matemática and Centro de Modelamiento Matemático (CNRS IRL2807), Univ. de Chile ([email protected]). Gratefully acknowledges support by ACE210010 and FB210005, BASAL funds for centers of excellence from ANID-Chile, and by GrHyDy ANR-20-CE40-0002.
Amitai Linker, Depto. de Matemáticas, Facultad de Ciencias Exactas, Univ. Andrés Bello ([email protected]). Gratefully acknowledges support by IDEXLYON of Univ. de Lyon (Programme Investissements d'Avenir ANR16-IDEX-0005) and by DFG project number 425842117.
Dieter Mitsche, Institut Camille Jordan, Univ. Jean Monnet, Univ. de Lyon and IMC, Pontificia Univ. Católica de Chile ([email protected]). Gratefully acknowledges support by grant GrHyDy ANR-20-CE40-0002 and by IDEXLYON of Univ. de Lyon (Programme Investissements d'Avenir ANR16-IDEX-0005).
Abstract

Motivated by Krioukov et al.'s model of random hyperbolic graphs [28] for real-world networks, and inspired by the analysis of a dynamic model of graphs in Euclidean space by Peres et al. [37], we introduce a dynamic model of hyperbolic graphs in which vertices are allowed to move according to a Brownian motion that keeps the distribution of vertices in hyperbolic space invariant. For different parameters of the speed of angular and radial motion, we analyze tail bounds for detection times of a fixed target and obtain a complete picture, for very different regimes, of how and when the target is detected: as a function of the elapsed time, we characterize the subset of hyperbolic space where the particles that typically detect the target are initially located. Our analysis shows that the dynamic model exhibits a phase transition as a function of the relation between angular and radial speed.

We overcome several substantial technical difficulties not present in Euclidean space and provide a complete picture of tail bounds. Along the way, we obtain results for a class of one-dimensional continuous processes with drift and reflecting barrier, concerning the time they spend within a given interval. We also derive improved bounds for the tail of sums of independent Pareto random variables.

1 Introduction

Random Geometric Graphs (RGGs) are a family of spatial networks that have been intensively studied as models of communication networks, in particular sensor networks; see for example [2]. In this model, an almost surely finite number of vertices is distributed in a metric space according to some probability distribution, and two vertices are joined by an edge if the distance between them is at most a given parameter called the radius. Typically, the metric space is the $d$-dimensional unit cube or torus (often with $d=2$) and points are chosen according to a Poisson point process of given intensity. While simple, RGGs do capture relevant characteristics of some real-world networks, for instance a non-negligible clustering coefficient. However, RGGs fail to exhibit other important features such as scale-freeness and a non-homogeneous vertex degree distribution, both of which are staple features of a large class of networks loosely referred to as "social networks", meant to encompass networks such as the Internet, citation networks, friendship relations among individuals, etc.

A network model that naturally exhibits clustering and scale-freeness is the Random Hyperbolic Graph (RHG) model introduced by Krioukov et al. [28], where vertices of the network are points in a bounded region of the hyperbolic plane, and connections exist if their hyperbolic distance is small. In [10], a surprisingly good maximum likelihood fit of the hyperbolic model was shown for the embedding of the network corresponding to the autonomous systems of the Internet, drawing much attention, interest, and follow-up work on the model (see the section on related work below).

It has been recognized that in many applications of geometric network models the entities represented by vertices are not fixed in space but mobile. One way in which this has been addressed is to assume that the vertices of the network move according to independent Brownian motions, giving rise to what Peres et al. [37] call the mobile geometric graph model. Mobility, and more generally dynamic models, are even more relevant in the context of social networks. Thus, it is natural to adapt the mobile geometric graph setting to the hyperbolic graph context and assume that the vertices of the latter graphs again move according to independent Brownian motions, but in hyperbolic space. This gives rise, paraphrasing Peres et al., to the mobile hyperbolic graph model. We initiate the study of this new model by focusing on the fundamental problem of detection, that is, the time until a fixed (non-mobile) added target vertex becomes non-isolated in the (evolving) hyperbolic graph. We will do this, but in fact do much more. In order to discuss our contributions in detail, we first need to precisely describe the model we introduce and formalize the main problem we address in our study.

1.1 The mobile hyperbolic graph model

We first introduce the model of Krioukov et al. [28] in its Poissonized version (see also [21] for the same description in the so-called uniform model): for each $n\in\mathbb{N}^{+}$, consider a Poisson point process $\mathcal{P}$ on the hyperbolic disk of radius $R:=2\log(n/\nu)$ for some positive constant $\nu\in\mathbb{R}^{+}$ ($\log$ denotes here and throughout the paper the natural logarithm). The intensity function $\mu$ at polar coordinates $(r,\theta)$ for $0\leq r\leq R$ and $-\pi\leq\theta<\pi$ is equal to $nf(r,\theta)$, where $f(r,\theta)$ is given by

f(r,\theta):=\begin{cases}\dfrac{\alpha\sinh(\alpha r)}{2\pi(\cosh(\alpha R)-1)},&\text{if $0\leq r\leq R$,}\\ 0,&\text{otherwise.}\end{cases}

In other words, the angle and radius are chosen independently; the former uniformly at random in $(-\pi,\pi]$ and the latter with density proportional to $\sinh(\alpha r)$.
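As a concrete illustration, the radial density can be sampled by inverting its CDF $F(r)=(\cosh(\alpha r)-1)/(\cosh(\alpha R)-1)$. The following sketch (function names and default parameter values are ours, not part of the model specification) generates one realization of the Poissonized point process:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_rhg_points(n, alpha=0.75, nu=1.0):
    """Sample the Poisson point process on the disk of radius R = 2*log(n/nu).

    Angles are uniform on (-pi, pi); radii have density proportional to
    sinh(alpha * r) on [0, R], sampled by inverting the CDF
    F(r) = (cosh(alpha*r) - 1) / (cosh(alpha*R) - 1).
    """
    R = 2.0 * np.log(n / nu)
    num_points = rng.poisson(n)          # total mass of the intensity is n
    theta = rng.uniform(-np.pi, np.pi, num_points)
    u = rng.uniform(0.0, 1.0, num_points)
    r = np.arccosh(1.0 + u * (np.cosh(alpha * R) - 1.0)) / alpha
    return R, r, theta
```

Note that, since the radial density concentrates mass near $r=R$, most sampled points lie within distance $O(1)$ of the boundary.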

Next, identify the points of the Poisson process with vertices and define the graph $G_{n}:=(V_{n},E_{n})$ where $V_{n}:=\mathcal{P}$. For $P,P'\in V_{n}$, $P\neq P'$, with polar coordinates $(r,\theta)$ and $(r',\theta')$ respectively, there is an edge in $E_{n}$ with endpoints $P$ and $P'$ provided the hyperbolic distance $d_{H}$ between $P$ and $P'$ satisfies $d_{H}\leq R$, where $d_{H}$ is obtained by solving

\cosh d_{H}:=\cosh r\cosh r'-\sinh r\sinh r'\cos(\theta-\theta'). (1)

In particular, note that $\mathbb{E}(|V_{n}|)=n$.
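The connection rule (1) is straightforward to evaluate numerically; a minimal sketch (helper names are ours):

```python
import numpy as np

def hyperbolic_distance(r1, t1, r2, t2):
    """Distance d_H from the hyperbolic law of cosines, eq. (1)."""
    c = np.cosh(r1) * np.cosh(r2) - np.sinh(r1) * np.sinh(r2) * np.cos(t1 - t2)
    return np.arccosh(np.maximum(c, 1.0))  # clamp guards against rounding below 1

def are_adjacent(r1, t1, r2, t2, R):
    """Edge rule of G_n: connect iff d_H <= R."""
    return hyperbolic_distance(r1, t1, r2, t2) <= R
```

A quick sanity check of the formula: for a point at the origin ($r_1=0$) the distance reduces to the radial coordinate of the other point.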

Henceforth, we denote the point whose radius is $0$ by $O$ and refer to it as the origin. For a point $P$ and $r\geq 0$ we let $B_{P}(r)$ denote the ball centered at $P$ of radius $r$, that is, the set of all points at hyperbolic distance less than $r$ from $P$. Also, we henceforth denote the boundary of $B_{P}(r)$ by $\partial B_{P}(r)$.

To define a dynamic version of Krioukov et al.'s model we consider an initial configuration $\mathcal{P}$ of the Poissonized model in $B_{O}(R)$ and then associate to each $x_{0}:=(r_{0},\theta_{0})\in\mathcal{P}$ a particle that evolves independently of the other particles, following a trajectory given in polar coordinates by $x_{t}:=(r_{t},\theta_{t})$ at time $t$. At a microscopic level, a natural choice for the movement of a particle is that of a random walk in $B_{O}(R)$ with $\partial B_{O}(R)$ acting as a reflecting boundary, where at each step the particle moves either in the radial or the angular direction. Assuming that the movement does not depend on the angle and that there is no angular drift, we conclude that at a macroscopic level particles should move according to a generator of the form

\Delta_{h}=\frac{1}{2}\frac{\partial^{2}}{\partial r^{2}}+\frac{\alpha}{2}\frac{1}{\tanh(\alpha r)}\frac{\partial}{\partial r}+\frac{1}{2}\sigma^{2}_{\theta}(r)\frac{\partial^{2}}{\partial\theta^{2}},

where the drift term in the radial component is chosen so that $f(r)\,dr\,d\theta$ remains the stationary distribution of the process (this can be checked using the Fokker–Planck equation). The function $\sigma^{2}_{\theta}(\cdot)$ is unrestricted and describes the displacement given by the angular movement at a given position $(r,\theta)$. In this sense a natural choice is to take $\sigma^{2}_{\theta}(r)$ proportional to $\sinh^{-2}(r)$, which follows by taking the displacement proportional to the hyperbolic perimeter at that radius. An alternative, however, is to take $\sigma^{2}_{\theta}(r)$ proportional to $\sinh^{-2}(\alpha r)$, corresponding to the (re-scaled) Brownian motion in hyperbolic space. In order to capture both settings, we introduce an additional parameter $\beta$ and work throughout this paper with the following generalized generator:

\Delta_{h}:=\frac{1}{2}\frac{\partial^{2}}{\partial r^{2}}+\frac{\alpha}{2}\frac{1}{\tanh(\alpha r)}\frac{\partial}{\partial r}+\frac{1}{2\sinh^{2}(\beta r)}\frac{\partial^{2}}{\partial\theta^{2}} (2)

where $\beta>0$ is a new parameter related to the velocity of the angular movement, which can alternatively be understood as a deformation of the underlying space. We shall see that this enhancement of the model yields a broader range of behavior and exhibits phase transition phenomena (for detection times this is explained in detail in the next section, where we summarize our main results).
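For completeness, the stationarity claim made after the first generator can be verified in one line via the stationary Fokker–Planck equation for the radial component: writing $J$ for the stationary probability flux (notation ours), the flux vanishes identically for $f(r)=c\sinh(\alpha r)$.

```latex
% Stationary Fokker--Planck (no-flux) check for the radial density
% f(r) = c*sinh(alpha*r):
%   J(r) := (alpha / (2 tanh(alpha r))) f(r) - (1/2) f'(r).
\frac{\alpha}{2\tanh(\alpha r)}\,c\sinh(\alpha r)
  \;=\;\frac{\alpha c}{2}\cosh(\alpha r)
  \;=\;\frac{1}{2}\,\frac{\partial}{\partial r}\big(c\sinh(\alpha r)\big)
  \qquad\Longrightarrow\qquad J(r)\equiv 0,
```

so $f(r)\,dr\,d\theta$ is indeed invariant for the radial dynamics, independently of the choice of $\sigma^{2}_{\theta}(\cdot)$.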

Fix now an initial configuration of particles $\mathcal{P}$ located at points in $B_{O}(R)$. We denote by $\mathbb{P}_{x_{0}}$ the law of a particle initially placed at a given point $x_{0}:=(r_{0},\theta_{0})\in B_{O}(R)$. We have one more fixed target $Q$, located at the boundary of $B_{O}(R)$ (that is, $r_{Q}=R$) and at angular coordinate $\theta_{Q}:=0$. For any $s>0$, let $\mathcal{P}_{s}\subseteq\mathcal{P}$ be the set of points that have detected the target $Q$ by time $s$: that is, for each $P\in\mathcal{P}_{s}$ there exists a time instant $0\leq t\leq s$ so that $x_{t}\in B_{Q}(R)$. Note that if $t=0$, then $P$ might be in the interior of $B_{Q}(R)$, whereas if $t>0$, the first instant at which $P$ detects $Q$ is when $P$ is at the boundary of $B_{Q}(R)$, that is, $x_{t}\in\partial B_{Q}(R)$. This instant $t$ is also called the hitting time of $B_{Q}(R)$ by particle $P$. The detection time of $Q$, denoted by $T_{det}$, is then defined as the minimum hitting time of $B_{Q}(R)$ among all initially placed particles. We are particularly interested in the tail probability $\mathbb{P}(T_{det}\geq\mathfrak{z})$ for different values of $\mathfrak{z}$ (note that $\mathfrak{z}$ is a function of $n$). In words, we are interested in the probability that no particle detects the target $Q$ by time $\mathfrak{z}$.

Observe that any given particle $P$ evolving according to the generator $\Delta_{h}$ specified in (2) will eventually detect the target, so that $\mathbb{P}(T_{det}\geq\mathfrak{z})\to 0$ as $\mathfrak{z}\to\infty$ as soon as there is at least one particle. Following what was done in [37] for a similar model in Euclidean space, our main result determines the speed at which $\mathbb{P}(T_{det}\geq\mathfrak{z})$ tends to zero as a function of $\mathfrak{z}$. We consider the same setting as [37], that is, $\mathfrak{z}/\mathbb{E}(T_{det})\to\infty$, but we have to deal with several additional, both qualitatively and quantitatively different, new issues that arise due to the dependency of $\mathfrak{z}$ on $n$.
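The diffusion generated by (2) can be explored numerically with a standard Euler–Maruyama discretization (for this generator the associated SDE has radial drift $\frac{\alpha}{2}\coth(\alpha r)$, unit radial diffusion, and angular diffusion coefficient $\sinh^{-1}(\beta r)$). The sketch below is ours: the step size, parameter values, and the discrete reflection rule are illustrative choices, not prescribed by the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(r0, theta0, R, alpha, beta, T, dt=1e-3):
    """Euler-Maruyama sketch of one particle driven by the generator (2):
    dr = (alpha / (2 tanh(alpha r))) dt + dB,  reflected at r = R,
    dtheta = dB' / sinh(beta r),              wrapped to (-pi, pi].
    """
    r, theta = r0, theta0
    sqdt = np.sqrt(dt)
    for _ in range(int(T / dt)):
        r += (alpha / (2.0 * np.tanh(alpha * r))) * dt + sqdt * rng.standard_normal()
        r = abs(r)                              # reflect at the origin
        if r > R:
            r = 2.0 * R - r                     # reflect at the boundary r = R
        theta += sqdt * rng.standard_normal() / np.sinh(beta * r)
        theta = (theta + np.pi) % (2.0 * np.pi) - np.pi
    return r, theta
```

Running this with $r_{0}$ well inside the disk illustrates the outward radial drift: the particle quickly concentrates near $r=R$, where the angular diffusion coefficient is tiny.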

1.2 Main results

In this section, we present the main results we obtain regarding tail probabilities for the detection time, both for the mobile hyperbolic graph model introduced in the previous section and for two restricted instances: one where the radial coordinate of particles does not change over time, and another where the angular coordinate does not change. We also discuss the relation between the main results and delve into the insights they provide concerning the dominant mechanism (either angular movement, radial movement, or a combination of both) that explains the different asymptotic regimes, depending on the relation between $\alpha$ and $\beta$. We point out that this section does not contain a complete list of our significant results, in particular those that provide a detailed idea of the initial location of particles that typically detect the target. These last results will be discussed at the start of Sections 3, 4, and 5, where, in fact, we prove slightly stronger results than the ones stated in this section. Neither do we delve here into the results concerning one-dimensional processes with non-constant drift and a reflecting barrier, which might be useful in other settings and are thus of independent interest (these results are found in Section 4).

We begin with the statement describing the (precise) behavior of the detection time tail probability depending on how the model parameters $\alpha$ and $\beta$ relate to each other. Throughout, we use the standard Bachmann–Landau asymptotic (in $n$) notation $O(\cdot)$, $o(\cdot)$, $\Omega(\cdot)$, $\omega(\cdot)$, $\Theta(\cdot)$, with all terms inside asymptotic expressions being positive.

Theorem 1.

Let $\alpha\in(\frac{1}{2},1]$, $\beta>0$, $\mathfrak{z}:=\mathfrak{z}(n)$, and assume that particles move according to the generator $\Delta_{h}$ in (2). Then the following hold:

  (i) For $\beta\leq\frac{1}{2}$, if $\mathfrak{z}=\Omega((e^{\beta R}/n)^{2})\cap O(1)$, then $\mathbb{P}(T_{det}\geq\mathfrak{z})=\exp\big({-}\Theta(ne^{-\beta R}\sqrt{\mathfrak{z}})\big)$.

  (ii) For $\beta\leq\frac{1}{2}$ and $\mathfrak{z}=\Omega(1)$, the tail exponent depends on the relation between $\alpha$ and $2\beta$ as follows:

    (a) For $\alpha<2\beta$, if $\mathfrak{z}=O(e^{\alpha R})$, then $\mathbb{P}(T_{det}\geq\mathfrak{z})=\exp\big({-}\Theta(ne^{-\beta R}\mathfrak{z}^{\frac{\beta}{\alpha}})\big)$.

    (b) For $\alpha=2\beta$, if $\mathfrak{z}=O(e^{\alpha R}/(\alpha R))$, then $\mathbb{P}(T_{det}\geq\mathfrak{z})=\exp\big({-}\Theta(ne^{-\beta R}\sqrt{\mathfrak{z}\log\mathfrak{z}})\big)$.

    (c) For $\alpha>2\beta$, if $\mathfrak{z}=O(e^{2\beta R})$, then $\mathbb{P}(T_{det}\geq\mathfrak{z})=\exp\big({-}\Theta(ne^{-\beta R}\sqrt{\mathfrak{z}})\big)$.

  (iii) For $\beta>\tfrac{1}{2}$, if $\mathfrak{z}=\Omega(1)\cap O(e^{\alpha R})$, then $\mathbb{P}(T_{det}\geq\mathfrak{z})=\exp\big({-}\Theta(\mathfrak{z}^{\frac{1}{2\alpha}})\big)$.

Remark 2.

Since we are working with the Poissonized model, the probability of not having any particle to begin with is of order $e^{-\Theta(n)}$, and on this event the detection time is infinite. The reader may check that substituting the upper bound on $\mathfrak{z}$ in each of the previous theorem's cases gives a probability of this order, which explains the asymptotic upper bounds on $\mathfrak{z}$.

Observe that by Theorem 1, for $\mathfrak{z}=\Theta((e^{\beta R}/n)^{2})$ in the case $\beta\leq\frac{1}{2}$, and for $\mathfrak{z}=\Theta(1)$ in the case $\beta>\frac{1}{2}$, we recover a tail exponent of order $\Theta(1)$, showing that the expected detection time occurs at values of $\mathfrak{z}$ of said order. It follows that at $\beta=\frac{1}{2}$ there is a phase transition in the qualitative behavior of the model: for $\beta<\frac{1}{2}$, since $(e^{\beta R}/n)^{2}=o(1)$, detection becomes asymptotically "immediate", whereas for $\beta>\frac{1}{2}$ the target can remain undetected for an amount of time of order $\Theta(1)$. To explain this change in behavior, notice that even though for any $\beta>0$ the value of $\sigma^{2}_{\theta}(r):=\sinh^{-2}(\beta r)$ is minuscule near the boundary of $B_{O}(R)$ (where particles spend most of the time due to the radial drift), decreasing $\beta$ does increase $\sigma_{\theta}(\cdot)$ dramatically, allowing a number of particles tending to infinity to detect the target immediately. Since for small values of $\mathfrak{z}$ all but a few particles remain "radially still" near the boundary, we deduce that there must be some purely angular movement responsible for the detection of the target, to which we can associate the tail exponent $ne^{-\beta R}\sqrt{\mathfrak{z}}$ seen in (i) of Theorem 1. The same tail exponent appears in Case (iic) of Theorem 1 for large $\mathfrak{z}$ when $\beta$ is again sufficiently small (smaller than $\frac{\alpha}{2}$), and again the purely angular movement is responsible for the detection of the target. To make the understanding of the observed distinct behaviors even more explicit, we study two simplified models where particles are restricted to evolve solely through their angular or radial movement, respectively.

Theorem 3 (angular movement).

Let $\alpha\in(\frac{1}{2},1]$, $\beta>0$, $\mathfrak{z}:=\mathfrak{z}(n)$, and assume that particles move according to the generator

\Delta_{ang}:=\frac{1}{2\sinh^{2}(\beta r)}\frac{\partial^{2}}{\partial\theta^{2}}.

If $\sqrt{\mathfrak{z}}\leq\frac{\pi}{2}(1-o(1))e^{\beta R}$, then

\mathbb{P}(T_{det}\geq\mathfrak{z})=\begin{cases}\exp\Big({-}\Theta\Big(1+n\Big(\frac{\sqrt{\mathfrak{z}}}{e^{\beta R}}\Big)^{1\wedge\frac{\alpha}{\beta}}\Big)\Big),&\text{if $\alpha\neq\beta$,}\\ \exp\Big({-}\Theta\Big(1+n\Big(\frac{\sqrt{\mathfrak{z}}}{e^{\beta R}}\Big)^{1\wedge\frac{\alpha}{\beta}}\log\big(\frac{e^{\beta R}}{1+\sqrt{\mathfrak{z}}}\big)\Big)\Big),&\text{if $\alpha=\beta$.}\end{cases}

Observe that the tail exponent appearing in the case $\alpha>\beta$ of the previous theorem is the same as the one appearing in Case (iic) (where $\beta\leq\frac{1}{2}<\alpha$) and Case (i) (where $\beta<\frac{\alpha}{2}<\alpha$) of Theorem 1: this is no coincidence. Our proof shows that a purely angular movement is responsible for detection in those cases. Note also that the other exponents contemplated in Theorem 1 are not present in Theorem 3.

We consider next the result of our analysis of the second simplified model where particles move only radially:

Theorem 4 (radial movement).

Let $\alpha\in(\tfrac{1}{2},1]$, $\beta>0$, $\mathfrak{z}:=\mathfrak{z}(n)$, and assume that particles move according to the generator

\Delta_{rad}:=\frac{1}{2}\frac{\partial^{2}}{\partial r^{2}}+\frac{\alpha}{2}\frac{1}{\tanh(\alpha r)}\frac{\partial}{\partial r}.

For every $0<c<\frac{\pi}{2}$, there is a $C>0$ such that if $\mathfrak{z}\geq C$ and $\mathfrak{z}^{\frac{1}{2\alpha}}\leq(\frac{\pi}{2}-c)e^{\frac{R}{2}}$, then

\mathbb{P}(T_{det}\geq\mathfrak{z})=\exp\big({-}\Theta(\mathfrak{z}^{\frac{1}{2\alpha}})\big).

Once more observe that the tail exponent in the case $\beta>\frac{1}{2}$ of Theorem 1 is the same as the one in Theorem 4, and once more this is not a coincidence: our proof shows that when $\beta>\frac{1}{2}$ the detection of the target is the result of the radial motion of particles alone. In contrast, Cases (iia) and (iib) in Theorem 1, where $\frac{1}{2}<\alpha\leq 2\beta\leq 1$, yield new tail exponents observed neither in Theorem 3 nor in Theorem 4, and which are larger than those given in said results. Since a larger tail exponent means a larger probability of an early detection, in this case neither the angular nor the radial component of the movement dominates the other; rather, they work together to detect the target more quickly. In Case (iia), the proof reveals that the radial movement is responsible for pulling particles sufficiently far away from the boundary to a radial value where $\sigma^{2}_{\theta}(r):=\sinh^{-2}(\beta r)$ becomes sufficiently large (although still small), so that with constant probability at least one particle has a chance of detecting at this radial value through its angular motion. Finally, for Case (iib) in Theorem 1 we see a tail exponent of the form $ne^{-\beta R}\sqrt{\mathfrak{z}}$ (as expected when taking $\alpha\nearrow 2\beta$ in Case (iia) or $\alpha\searrow 2\beta$ in Case (iic)), but accompanied by a logarithmic correction. This correction becomes clear when inspecting the proof: it results from the balance between the effect of the radial drift and the contribution of the angular movement of particles at distinct radii, whose effect is that the contributions to the total angular movement coming from the time spent by a particle within a narrow band centered at a given radius are all about the same, adding up and giving the additional logarithmic factor.

Our main theorem, that is Theorem 1, provides insight into how unlikely it is for the target $Q$ to remain undetected until time $\mathfrak{z}$, and, as discussed before, the proof strategy as well as our remarks provide additional information about the dominant detection mechanisms (that is, either by angular movement only, by radial movement only, or by a combination of the two). Moreover, through our techniques we can obtain more precise information showing in each case where particles must be initially located in order to detect $Q$ before time $\mathfrak{z}$ with probability bounded away from zero. Specifically, given a parameter $\kappa>0$ (independent of $n$) we can construct a parametric family of sets $\mathcal{D}^{(\mathfrak{z})}(\kappa)$ depending on $\mathfrak{z}:=\mathfrak{z}(n)$ and the parameters of the model, such that for every $x_{0}\in\mathcal{D}^{(\mathfrak{z})}(\kappa)$ and sufficiently large $n$, the probability $\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})$ is at least a constant that depends on the parameter $\kappa$. Even further, we will show that $\mu(\mathcal{D}^{(\mathfrak{z})}(\kappa))$ is of the same order as the tail exponents, which implies that asymptotically the best chance for $Q$ to remain undetected by time $\mathfrak{z}$ is to find these sets initially empty of points. To be precise, we show the following meta-result:

Theorem 5.

Let $\alpha\in(\frac{1}{2},1]$, $\beta>0$ and $\mathfrak{z}:=\mathfrak{z}(n)$. For every case considered in Theorems 1, 3 and 4 with the corresponding hypotheses, and under the additional assumption $\mathfrak{z}=\omega(1)$ in the case of Theorem 1, there is an explicit parametric family of sets $\mathcal{D}^{(\mathfrak{z})}:=\mathcal{D}^{(\mathfrak{z})}(\kappa)$ which is increasing in $\kappa$ and which satisfies:

  (i) (Uniform lower bound) For every $\kappa$ sufficiently large and $n$ sufficiently large,

    \inf_{x_{0}\in\mathcal{D}^{(\mathfrak{z})}\!(\kappa)}\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})

    is bounded from below by a positive expression that depends only on $\kappa$ and the parameters of the model.

  (ii) (Uniform upper bound) For every $\kappa$ sufficiently large and $n$ sufficiently large,

    \sup_{x_{0}\in\overline{\mathcal{D}}^{(\mathfrak{z})}\!(\kappa)}\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})

    is bounded from above by a positive expression that depends only on $\kappa$ and the parameters of the model, and that tends to $0$ as $\kappa\to\infty$.

  (iii) (Integral bound) Fix $\kappa=C$ for some sufficiently large constant $C>0$. For $n$ sufficiently large, we have

    \int_{x_{0}\in\overline{\mathcal{D}}^{(\mathfrak{z})}\!(C)}\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})\,d\mu(x_{0})=O\big(\mu(\mathcal{D}^{(\mathfrak{z})}(C))\big).

The added value of this last theorem is that it reveals that the set $\mathcal{D}^{(\mathfrak{z})}(\kappa)$ can roughly be interpreted as the region where the particles typically detecting the target are initially located. Moreover, Theorems 1, 3, and 4 follow from the instances of Theorem 5 regarding unrestricted, angular and radial movement, respectively, thus revealing the unified approach that we take throughout the paper, as explained in detail in Subsection 1.3.

Establishing the uniform lower bounds for the different cases of Theorem 5 requires, especially in Case (iib) of Theorem 1, a delicate coupling with a discretized auxiliary process. For the integral upper bounds a careful analysis has to be performed: we need to compute hitting times of Brownian motions in the hyperbolic plane with a reflecting barrier, for which we make use of results on tails of sums of independent Pareto random variables (which differ greatly according to the exponent).

Finally, let us point out that from a technical point of view, the additional difficulties we encounter compared to Euclidean space are substantial: in [37] the authors address the Euclidean analogue of the problem studied in this paper by relating the detection probability to the expected volume of the $d$-dimensional Wiener sausage (that is, the neighborhood of the trace of Brownian motion up to a certain time moment; see also the related work section below). To the best of our knowledge, however, the volume of the Wiener sausage of Brownian motion with a reflecting boundary in hyperbolic space is not known. Even further, our proof techniques give significantly more information: we are able to characterize the subregions of hyperbolic space such that particles initially located therein are likely to detect the target up to a certain time moment, whereas particles located elsewhere are not; with the pure volume of the Wiener sausage alone we would not be able to do so. The radial drift towards the boundary of $B_{O}(R)$ entails many additional technical difficulties that do not arise in Euclidean space (see also the related work section below for work done there). Moreover, our setup contains a reflecting barrier at the boundary, thereby adding further difficulty to the analysis.

 

1.3 Structure and strategy of proof

Recall that in our mobile hyperbolic graph model we consider an initial configuration $\mathcal{P}$ of the Poissonized model in $B_{O}(R)$ and then associate to each $x_{0}:=(r_{0},\theta_{0})\in\mathcal{P}$ a particle that evolves independently of the other particles according to the generator $\Delta_{h}$ with a reflecting barrier at $\partial B_{O}(R)$. Denote by $\mathcal{P}_{\mathfrak{z}}$ the Poisson point process obtained from $\mathcal{P}$ by retaining only the particles having detected the target by time $\mathfrak{z}$. Since points move independently of each other, each particle initially placed at $x_{0}\in B_{O}(R)$ belongs to $\mathcal{P}_{\mathfrak{z}}$ independently with probability $\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})$, and hence $\mathcal{P}_{\mathfrak{z}}$ is a thinned Poisson point process on $B_{O}(R)$ with intensity measure

d\mu_{\mathfrak{z}}(x_{0})=\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})\,d\mu(x_{0}).

Noticing that the event $\{T_{det}\geq\mathfrak{z}\}$ is equivalent to $\{\mathcal{P}_{\mathfrak{z}}=\emptyset\}$, we have

\mathbb{P}(T_{det}\geq\mathfrak{z})=\exp\Big({-}\int_{B_{O}(R)}\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})\,d\mu(x_{0})\Big) (3)

and hence, in order to obtain lower and upper bounds on $\mathbb{P}(T_{det}\geq\mathfrak{z})$, we will compute the dependence of this integral on $\mathfrak{z}$. In fact, we can say a bit more about how we go about determining the value of said integral. Specifically, we partition the range of integration $B_{O}(R)$ into $\mathcal{D}^{(\mathfrak{z})}(\kappa)$ and $\overline{\mathcal{D}}^{(\mathfrak{z})}(\kappa)$ and observe that the three parts of Theorem 5 together immediately imply that

\int_{B_{O}(R)}\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})\,d\mu(x_{0})=\Theta\big(\mu(\mathcal{D}^{(\mathfrak{z})}(\kappa))\big),

thus reducing, via (3), the computation of tail bounds for detection times to fixing the parameter $\kappa$ equal to a constant and determining the expected number of particles initially in $\mathcal{D}^{(\mathfrak{z})}(\kappa)$, that is, computing $\mu(\mathcal{D}^{(\mathfrak{z})}(\kappa))$. So the structure of the proof of our main results is the following: we determine a suitable candidate set $\mathcal{D}^{(\mathfrak{z})}(\kappa)$, compute $\mu(\mathcal{D}^{(\mathfrak{z})}(\kappa))$, and establish an adequate version of Theorem 5 for the specific type of movement of particles considered (angular, radial, or a combination of both).
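The reduction above rests on the void-probability formula for a thinned Poisson process, as in (3). A toy numerical check (with an artificial, position-independent detection probability of our choosing, so that the integral collapses to $\lambda p$) illustrates the identity in its simplest form:

```python
import numpy as np

rng = np.random.default_rng(0)

def no_detection_mc(lmbda, p, trials=200_000):
    """Monte Carlo estimate of P(no particle detects) when the number of
    particles is Poisson(lmbda) and each detects independently with prob. p."""
    counts = rng.poisson(lmbda, trials)     # number of particles per trial
    detected = rng.binomial(counts, p)      # thinned (detecting) particles
    return np.mean(detected == 0)

lmbda, p = 3.0, 0.4
estimate = no_detection_mc(lmbda, p)
exact = np.exp(-lmbda * p)  # void probability of the thinned Poisson process
```

Here `lmbda` plays the role of $\mu(B_{O}(R))$ and `p` that of a constant detection probability; in the paper the corresponding quantity is the integral in (3).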

1.4 Related work

After the introduction of random hyperbolic graphs by Krioukov et al. [28], the model was first analyzed in a mathematically rigorous way by Gugelmann et al. [21]: they proved that for $\alpha>\frac{1}{2}$ the distribution of the degrees follows a power law, showed that the clustering coefficient is bounded away from $0$, and also obtained the average degree and maximum degree of such networks. The restriction $\alpha>\frac{1}{2}$ guarantees that the resulting graph has bounded average degree (depending on $\alpha$ and $\nu$ only): if $\alpha<\frac{1}{2}$, then the degree sequence is so heavy tailed that this is impossible (the graph is with high probability connected in this case, as shown in [9]). The power-law exponent of the degree distribution of random hyperbolic graphs is $2\alpha+1$ (see [21, Theorem 2.2]) which, for $\alpha\in(\frac{1}{2},1]$, belongs to the interval $(2,3]$, the interval into which the best-fit power-law exponent of social networks typically falls (see [5, p. 69]). If $\alpha>1$, then as the number of vertices grows, the largest component of a random hyperbolic graph has sublinear size (more precisely, its order is $n^{1/(2\alpha)+o(1)}$, see [8, Theorem 1.4] and [14]). On the other hand, it is known that for $\frac{1}{2}<\alpha<1$, with high probability a hyperbolic random graph of expected order $n$ has a component whose order is linear in $n$ [8, Theorem 1.4], while the second largest component has size $\Theta(\log^{\frac{1}{1-\alpha}}n)$ [24], which justifies referring to the linear size component as the giant component. More precise results, including a law of large numbers for the largest component in these networks, were established in [17].
Further results on the static version of this model include results on the diameter [23, 19, 34], the spectral gap [25], typical distances [1], the clustering coefficient [12, 18], bootstrap percolation [26] and the contact process [29].

No dynamic model of random hyperbolic graphs has been proposed so far. In the Boolean model (which is similar to random geometric graphs, except that the underlying metric space is $\mathbb{R}^{d}$ and, almost surely, there is an infinite number of vertices), the same question of the tail probability of the detection time, as well as the questions of coverage time (the time to detect a certain region of $\mathbb{R}^{d}$), percolation time (the time a certain particle needs to join the infinite component) and broadcast time (the time it takes to broadcast a message to all particles), were studied by Peres et al. [37]: the authors therein show that for a Poisson process of intensity $\lambda$, and a target performing independent Brownian motion, as $t\to\infty$,

(Tdett)=exp(2πλtlogt(1+o(1))),\mathbb{P}{(T_{det}\geq t)}=\exp\Big{(}-2\pi\lambda\frac{t}{\log t}(1+o(1))\Big{)},

where a target is detected if a particle is at Euclidean distance at most rr, for some arbitrary but fixed rr (in the three other mentioned contexts the interpretations are similar: we say that an interval is covered if each point of the interval has been at distance at most rr from some particle at some time instant before tt; we say that a particle joins a component if it is at distance at most rr from another particle of the component, and we say that a message can be sent between two particles if they are at distance at most rr). Note that the probability holds only as tt\to\infty, and in this case the main order term of the tail probability does not depend on rr, as shown in [37]. Stauffer [39] generalized detection to a mobile target moving according to an arbitrary continuous function and showed that for a Poisson process of sufficiently high intensity λ\lambda over d\mathbb{R}^{d} (with the same detection setup) the target will eventually be detected almost surely, whereas for small enough λ\lambda, with positive probability the target can avoid detection forever. The somewhat inverse question of isolation of the target, that is, the time it takes for a (possibly) moving target (again for a Poisson process of intensity λ\lambda in d\mathbb{R}^{d}) until no other vertex is at distance at most rr anymore and the target becomes isolated, was then studied by Peres et al. [36]: it was shown therein that the best strategy for the target to stay isolated as long as possible is in fact to stay put (again with the same setup for being isolated).

Also for the Boolean model, the question of detection time was already addressed by Liu et al. [30] when each particle moves continuously in a fixed, randomly chosen direction: they showed that the time it takes for the set of particles to detect a target is exponentially distributed, with expectation depending on the intensity λ\lambda (where detection again means entering the sensing area of the target). For the case of a stationary target as discussed here, as observed by Kesidis et al. [22] and by Konstantopoulos [27], the detection time can be deduced from classical results on continuum percolation: namely, in this case it follows from Stoyan et al. [40] that

(Tdett)=eλ𝔼(vol(Wr(t))),\mathbb{P}{(T_{det}\geq t)}=e^{-\lambda\mathbb{E}(vol(W_{r}(t)))},

where vol(Wr(t))vol(W_{r}(t)) is the volume of the Wiener sausage of radius rr up to time tt (equivalent to being able to detect at distance at most rr), which in the case of Euclidean space is known quite precisely [38, 6].
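The quantity vol(W_r(t)) above can be illustrated numerically. The following Python sketch (purely illustrative, not part of any formal argument; all function names and parameter values are our own) estimates the area of a planar Wiener sausage for a single discretized Brownian path by counting grid cells within distance r of the path, and then evaluates the single-path analogue of the tail bound:

```python
import math, random

def wiener_sausage_area(t, r, n_steps=2000, grid=0.1, seed=0):
    """Estimate vol(W_r(t)) for one discretized planar Brownian path by
    counting grid cells whose center lies within distance r of the path."""
    rng = random.Random(seed)
    dt = t / n_steps
    x = y = 0.0
    cells = set()
    reach = int(math.ceil(r / grid))
    for _ in range(n_steps + 1):
        cx, cy = round(x / grid), round(y / grid)
        for i in range(-reach, reach + 1):
            for j in range(-reach, reach + 1):
                if (i * grid) ** 2 + (j * grid) ** 2 <= r * r:
                    cells.add((cx + i, cy + j))
        x += rng.gauss(0.0, math.sqrt(dt))
        y += rng.gauss(0.0, math.sqrt(dt))
    return len(cells) * grid * grid

area = wiener_sausage_area(t=5.0, r=0.5)
lam = 1.0
# single-path analogue of exp(-lambda * E(vol(W_r(t))))
print(area, math.exp(-lam * area))
```

The sausage always contains a disk of radius r, so the estimate is bounded below by roughly pi r^2; averaging over many paths (and seeds) would approximate the expectation appearing in the displayed formula.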

A study of dynamic random geometric graphs prior to the paper of Peres et al. was undertaken by Díaz et al. [13]: the authors therein consider a random geometric graph whose radius is close to the threshold of connectivity and analyze in this setup the lengths of time intervals of connectivity and disconnectivity. Even earlier, the model of Brownian motion in infinite random geometric graphs (the "dynamic Boolean model") was studied by van den Berg et al. [7], who showed that for a Poisson intensity above the critical intensity for the existence of an infinite component, almost surely an infinite component exists at all times (here the radii of particles are not fixed to rr, but rather are i.i.d. random variables following a certain distribution). More generally, the question of detecting an immobile target (or escaping from a set of immobile traps) when performing random walks on a square lattice is well studied (here, in contrast to the previous papers, detecting means being at exactly the same position in the lattice); escaping from a set of mobile traps was recently analyzed as well. For both questions we refer to Athreya et al. [4] and the references therein.

1.5 Organization

Section 2 gathers facts and observations that will be used throughout the paper. Section 3 then analyzes the simplified model where the particles’ movement is restricted to angular movement only, at the same time illustrating in the simplest possible setting our general proof strategy. Section 4 then considers the model where the particles’ movement is restricted to radial movement only. In Section 5 we finally address the mixed case, that is, the combined case of both radial and angular movement of particles. Specifically, in Section 5.1, we look at quick detection mechanisms and, in Section 5.2, we show that the detection mechanisms found are asymptotically optimal in all cases, that is, no other strategy can detect the target asymptotically faster. In Section 6 we briefly comment on future work.

1.6 Global conventions

Throughout all of this paper α\alpha is a fixed constant such that 12<α1\frac{1}{2}<\alpha\leq 1 (the reason to consider only this range is that only therein hyperbolic random graphs are deemed to be reasonable models of real-world networks – see the discussion in Section 1.4). Furthermore, in this article both β\beta and ν\nu are strictly positive constants. In order to avoid tedious repetitions and lengthy statements, we will not reiterate these facts. Also, wherever appropriate, we will hide α\alpha, β\beta, and ν\nu within asymptotic notation. Moreover, throughout the paper, we let 𝔷:=𝔷(n)\mathfrak{z}:=\mathfrak{z}(n) be a non-negative function depending on nn.

In order to avoid using ceilings and floors, we will always assume that R:=2ln(n/ν)R:=2\ln(n/\nu) is an integer. Since all our claims hold asymptotically, doing so has no impact on the stated results. All asymptotic expressions are with respect to nn, and wherever it makes sense, inequalities should be understood as valid asymptotically.

2 Preliminaries

In this section we collect basic facts and a few useful observations, together with a bit of additional notation. First, for a region Ω\Omega of BO(R)B_{O}(R), that is, ΩBO(R)\Omega\subseteq B_{O}(R), let μ(Ω)\mu(\Omega) denote the expected number of points of 𝒫\mathcal{P} in Ω\Omega. The first basic fact we state here gives the expected number of points of 𝒫\mathcal{P} at distance at most rr from the origin, as well as the expected number of points at distance at most RR from a given point QBO(R)Q\in B_{O}(R):

Lemma 6 ([21, Lemma 3.2]).

If 0rR0\leq r\leq R, then μ(BO(r))=neα(Rr)(1+o(1))\mu(B_{O}(r))=ne^{-\alpha(R-r)}(1+o(1)). Moreover, if QBO(R)Q\in B_{O}(R) is such that rQ:=rr_{Q}:=r, then for Cα:=2α/(π(α12))C_{\alpha}:=2\alpha/(\pi(\alpha-\frac{1}{2})),

μ(BQ(R)BO(R))=nCαer2(1+O(e(α12)r+er)).\mu(B_{Q}(R)\cap B_{O}(R))=nC_{\alpha}e^{-\frac{r}{2}}(1+O(e^{-(\alpha-\frac{1}{2})r}+e^{-r})).

We also use the following Chernoff bounds for Poisson random variables:

Theorem 7 (Theorem A.1.15 of [3]).

Let PP have Poisson distribution with mean μ\mu. Then, for every ε>0\varepsilon>0,

(Pμ(1ε))eε2μ/2\mathbb{P}(P\leq\mu(1-\varepsilon))\leq e^{-\varepsilon^{2}\mu/2}

and

(Pμ(1+ε))(eε(1+ε)(1+ε))μ.\mathbb{P}(P\geq\mu(1+\varepsilon))\leq\left(e^{\varepsilon}(1+\varepsilon)^{-(1+\varepsilon)}\right)^{\mu}.
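These bounds are elementary to verify numerically. The Python sketch below (illustrative values of mu and epsilon; all names are our own) computes the exact Poisson tails by direct summation of the probability mass function and compares them with both Chernoff bounds:

```python
import math

def poisson_cdf(k, mu):
    """P(P <= k) for P ~ Poisson(mu), by direct summation of the pmf."""
    term = math.exp(-mu)  # pmf at 0
    total = term
    for i in range(1, k + 1):
        term *= mu / i    # pmf at i from pmf at i-1
        total += term
    return total

def lower_chernoff(mu, eps):
    """Bound on P(P <= mu(1 - eps)) from Theorem 7."""
    return math.exp(-eps * eps * mu / 2)

def upper_chernoff(mu, eps):
    """Bound on P(P >= mu(1 + eps)) from Theorem 7."""
    return (math.exp(eps) * (1 + eps) ** (-(1 + eps))) ** mu

mu, eps = 50.0, 0.3
p_lower = poisson_cdf(int(mu * (1 - eps)), mu)            # P(P <= 35)
p_upper = 1.0 - poisson_cdf(int(mu * (1 + eps)) - 1, mu)  # P(P >= 65)
print(p_lower, lower_chernoff(mu, eps))
print(p_upper, upper_chernoff(mu, eps))
```

In both cases the exact tail is dominated by the corresponding Chernoff bound, as the theorem asserts.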

Throughout this paper, we denote by θR(r,r)\theta_{R}(r,r^{\prime}) the maximal angle between two points at (hyperbolic) distance at most RR and radial coordinates 0r,rR0\leq r,r^{\prime}\leq R. By (1), for r+rRr+r^{\prime}\geq R, we have

coshR=coshrcoshrsinhrsinhrcosθR(r,r).\cosh R=\cosh r\cosh r^{\prime}-\sinh r\sinh r^{\prime}\cos\theta_{R}(r,r^{\prime}). (4)

Henceforth, consider the mapping ϕ:[0,R][θR(R,R),π/2]\phi:[0,R]\to[\theta_{R}(R,R),\pi/2] such that ϕ(r):=θR(R,r)\phi(r):=\theta_{R}(R,r). For the sake of future reference, we next collect some simple as well as some known facts concerning ϕ()\phi(\cdot).

Lemma 8.

The following hold:

  1. (i)

    cos(ϕ(r))=cothRtanh(r2)\cos(\phi(r))=\coth R\cdot\tanh(\frac{r}{2}).

  2. (ii)

    ϕ()\phi(\cdot) is differentiable (hence, also continuous) and decreasing.

  3. (iii)

    ϕ(0)=π2\phi(0)=\frac{\pi}{2}.

  4. (iv)

    ϕ()\phi(\cdot) is invertible and its inverse is differentiable (hence, continuous) and decreasing.

  5. (v)

    ϕ(r)=2er2(1±O(er))\phi(r)=2e^{-\frac{r}{2}}(1\pm O(e^{-r})).

Proof.

The first part follows from (1) taking dH:=Rd_{H}:=R, r:=Rr^{\prime}:=R, and by the hyperbolic tangent half angle formula. The second part follows because the derivative with respect to rr of arccos(cothRtanh(r2))\arccos(\coth R\cdot\tanh(\frac{r}{2})) exists and is negative except when r=Rr=R (for details see [24, Remark 2.1]). The third part follows directly from the first part, and the fourth part follows immediately from the preceding two parts. The last part is a particular case of a more general result [21, Lemma 3.1]. ∎
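The identities and estimates of Lemma 8 lend themselves to a quick numerical sanity check. The Python sketch below (with an illustrative choice of n and nu; all names are our own) evaluates phi via Part (i) and checks Parts (ii), (iii) and (v):

```python
import math

# R = 2 ln(n / nu) with illustrative values n = 1000, nu = 1
R = 2 * math.log(1000.0 / 1.0)

def phi(r):
    """theta_R(R, r), computed from Part (i):
    cos(phi(r)) = coth(R) * tanh(r / 2)."""
    return math.acos((1.0 / math.tanh(R)) * math.tanh(r / 2))

# Part (iii): phi(0) = pi / 2
print(phi(0.0), math.pi / 2)
# Part (ii): phi is decreasing
print(phi(1.0) > phi(2.0) > phi(5.0))
# Part (v): phi(r) = 2 e^{-r/2} (1 + O(e^{-r})), already accurate at r = 6
print(phi(6.0), 2 * math.exp(-3.0))
```

At r = 6 the exact value and the approximation 2 e^{-r/2} agree to within a fraction of a percent, consistent with the O(e^{-r}) error term.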

To conclude this section, we introduce some notation that we will use throughout the article. For a subset Ω\Omega of BO(R)B_{O}(R) we let Ω¯\overline{\Omega} denote its complement with respect to BO(R)B_{O}(R), i.e., Ω¯:=BO(R)Ω\overline{\Omega}:=B_{O}(R)\setminus\Omega. Thus, B¯O(r)\overline{B}_{O}(r) denotes BO(R)BO(r)B_{O}(R)\setminus B_{O}(r).

3 Angular movement

In this section we consider angular movement only. As will become evident, restricting the motion to angular movement alone simplifies the analysis considerably, compared to the general case, and in fact also compared to restricting movement radially. The simpler setting considered in this section is ideal for illustrating our general strategy for deriving tail bounds on detection times. It is also handy for introducing some notation and approximations we shall use throughout the rest of the paper. In later sections we will follow the same general strategy but encounter technically more challenging obstacles. In contrast, the calculations involved in the study of the angular-movement-only case can mostly be reduced to ones involving known facts about one-dimensional Brownian motion.

Figure 1: The position x𝔷=(r𝔷,θ𝔷)x_{\mathfrak{z}}=(r_{\mathfrak{z}},\theta_{\mathfrak{z}}) of particle PP at time 𝔷\mathfrak{z} is shown. The lightly shaded area corresponds to 𝒟(𝔷)(κ)BQ(R)\mathcal{D}^{(\mathfrak{z})}(\kappa)\setminus B_{Q}(R) and the strongly shaded region represents BQ(R)B_{Q}(R). The dashed line corresponds to the arc segment where a particle PP at initial radial distance r0r_{0} is, conditional under not having detected the target QQ up to time 𝔷\mathfrak{z}. The depicted dash-dot arc segment has length Cr0C_{r_{0}}. Each of the two dash-dot arc lengths inside the lightly shaded region have (hyperbolic) length κ𝔷\kappa\sqrt{\mathfrak{z}} corresponding to an angle θ0\theta_{0} such that ϕ(r0)|θ0|ϕ(r0)+κϕ(𝔷)\phi(r_{0})\leq|\theta_{0}|\leq\phi(r_{0})+\kappa\phi^{(\mathfrak{z})}. The dashed circle is the boundary of the largest ball centered at the origin which is contained in 𝒟(𝔷)(κ)\mathcal{D}^{(\mathfrak{z})}(\kappa).

Following the proof strategy described in Section 1.3, we first define 𝒟(𝔷)(κ)\mathcal{D}^{(\mathfrak{z})}(\kappa). Recall that, roughly speaking, 𝒟(𝔷)(κ)\mathcal{D}^{(\mathfrak{z})}(\kappa) is the set of starting points from which a particle has a ”good” chance to detect the target by time 𝔷\mathfrak{z}, where ”good” depends on a parameter κ>1\kappa>1 (independent of nn). Let ϕ(𝔷)\phi^{(\mathfrak{z})} be roughly proportional to the angular distance travelled by a particle at radial coordinate r0r_{0} up to time 𝔷\mathfrak{z}, more precisely, let ϕ(𝔷):=𝔷eβr0\phi^{(\mathfrak{z})}:=\sqrt{\mathfrak{z}}e^{-\beta r_{0}} (that is, ϕ(𝔷)\phi^{(\mathfrak{z})} corresponds to the standard deviation of a standard Brownian motion during an interval of time of length 𝔷\mathfrak{z} normalized by a term which is proportional to the perimeter of BO(r0)B_{O}(r_{0}) for r0r_{0} bounded away from 0). We can then define the following region (the shaded region in Figure 1):

𝒟(𝔷)=𝒟(𝔷)(κ):={x0BO(R):|θ0|ϕ(r0)+κϕ(𝔷)}.\mathcal{D}^{(\mathfrak{z})}=\mathcal{D}^{(\mathfrak{z})}(\kappa):=\big{\{}x_{0}\in B_{O}(R):|\theta_{0}|\leq\phi(r_{0})+\kappa\phi^{(\mathfrak{z})}\big{\}}.

In words, 𝒟(𝔷)(κ)\mathcal{D}^{(\mathfrak{z})}(\kappa) is the collection of points x0=(r0,θ0)BO(R)x_{0}=(r_{0},\theta_{0})\in B_{O}(R) which are either contained in BQ(R)B_{Q}(R) or form an angle at the origin of at most κϕ(𝔷)\kappa\phi^{(\mathfrak{z})} with a point that belongs to the boundary of BQ(R)B_{Q}(R) and has radial coordinate exactly r0r_{0}. Since x0BQ(R)x_{0}\in B_{Q}(R) if and only if |θ0|ϕ(r0)|\theta_{0}|\leq\phi(r_{0}), it is clear that BQ(R)B_{Q}(R) is contained in 𝒟(𝔷)(κ)\mathcal{D}^{(\mathfrak{z})}(\kappa). The value κ\kappa is a parameter that tells us at which scaling up to time 𝔷\mathfrak{z} we consider our region 𝒟(𝔷)(κ)\mathcal{D}^{(\mathfrak{z})}(\kappa).

The main goal of this section is to prove the result stated next, which is an instance of Theorem 5 for the case of angular movement only. Thus, Theorem 3 will immediately follow (by the proof strategy discussed in Section 1.3) once we show that μ(𝒟(𝔷)(κ))\mu(\mathcal{D}^{(\mathfrak{z})}(\kappa)) is of the right order:

Theorem 9.

Denote by Φ\Phi the standard normal distribution function. If κ>0\kappa>0 and κ𝔷π2(1o(1))eβR\kappa\sqrt{\mathfrak{z}}\leq\frac{\pi}{2}(1-o(1))e^{\beta R}, then

infx0𝒟(𝔷)(κ)x0(Tdet𝔷)=Ω(Φ(κ))andsupx0𝒟¯(𝔷)(κ)x0(Tdet𝔷)=O(Φ(κ)).\inf_{x_{0}\in\mathcal{D}^{(\mathfrak{z})}(\kappa)}\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})=\Omega(\Phi(-\kappa))\qquad\text{and}\qquad\sup_{x_{0}\in\overline{\mathcal{D}}^{(\mathfrak{z})}(\kappa)}\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})=O(\Phi(-\kappa)).

Furthermore,

𝒟¯(𝔷)(κ)x0(Tdet𝔷)𝑑μ(x0)=O(μ(𝒟(𝔷)(κ))).\int_{\overline{\mathcal{D}}^{(\mathfrak{z})}\!(\kappa)}\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})d\mu(x_{0})=O(\mu(\mathcal{D}^{(\mathfrak{z})}(\kappa))).

Before proving the previous result, we make a few observations and introduce some definitions. First, note that the perimeter of BO(r)B_{O}(r) that is outside BQ(R)B_{Q}(R) has length (see Figure 1)

Cr:=2(πϕ(r))sinh(βr).C_{r}:=2(\pi-\phi(r))\sinh(\beta r).

Since ϕ()\phi(\cdot) is decreasing and continuous (by Part (ii) of Lemma 8), we obtain the following:

Fact 10.

The mapping r12Cr=(πϕ(r))sinh(βr)r\mapsto\frac{1}{2}C_{r}=(\pi-\phi(r))\sinh(\beta r) is increasing and continuous for arguments in [0,R][0,R] and takes values in [0,12CR][0,\frac{1}{2}C_{R}].

In particular, for κ𝔷[0,12CR]\kappa\sqrt{\mathfrak{z}}\in[0,\frac{1}{2}C_{R}] there is a unique value r^\widehat{r} such that

κ𝔷=12Cr^=(πϕ(r^))sinh(βr^).\kappa\sqrt{\mathfrak{z}}=\tfrac{1}{2}C_{\widehat{r}}=(\pi-\phi(\widehat{r}))\sinh(\beta\widehat{r}). (5)

One may think of r^\widehat{r} as chosen so that up to time κ2𝔷\kappa^{2}\mathfrak{z} a particle at distance r^\widehat{r} from the origin has a reasonable chance to detect the target due to its angular movement. Using Fact 10, we immediately obtain the following:

Fact 11.

For r[0,R]r\in[0,R], the following holds: rr^r\geq\widehat{r} if and only if κ𝔷12Cr\kappa\sqrt{\mathfrak{z}}\leq\frac{1}{2}C_{r}. In particular, BO(r^)B_{O}(\widehat{r}) is the largest ball centered at the origin contained in 𝒟(𝔷)(κ)\mathcal{D}^{(\mathfrak{z})}(\kappa).

(See Figure 1 for an illustration of BO(r^)B_{O}(\widehat{r}).)

Fact 12.

If r^>1\widehat{r}>1, then r^=1βlog(κ𝔷)+Θ(1)\widehat{r}=\frac{1}{\beta}\log(\kappa\sqrt{\mathfrak{z}})+\Theta(1). Moreover, if r^1\widehat{r}\leq 1, then κ𝔷=O(1)\kappa\sqrt{\mathfrak{z}}=O(1).
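Since the map of Fact 10 is increasing and continuous, equation (5) can be solved for r-hat numerically by bisection. The Python sketch below (illustrative parameter values only; all names are our own) does so and checks the estimate of Fact 12:

```python
import math

def phi(r, R):
    """theta_R(R, r) via cos(phi(r)) = coth(R) * tanh(r / 2) (Lemma 8(i))."""
    return math.acos((1.0 / math.tanh(R)) * math.tanh(r / 2))

def r_hat(target, R, beta, tol=1e-10):
    """Solve target = (pi - phi(r)) * sinh(beta * r) for r in [0, R] by
    bisection; valid since the right-hand side is increasing (Fact 10).
    Here 'target' plays the role of kappa * sqrt(z) in equation (5)."""
    lo, hi = 0.0, R
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if (math.pi - phi(mid, R)) * math.sinh(beta * mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

R, beta = 2 * math.log(1000.0), 1.0
target = 100.0                       # illustrative value of kappa * sqrt(z)
r = r_hat(target, R, beta)
# Fact 12: r_hat = (1 / beta) * log(kappa * sqrt(z)) + Theta(1)
print(r, math.log(target) / beta)
```

The computed root differs from (1/beta) log(kappa sqrt(z)) by a bounded amount, as Fact 12 predicts.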

We will need the following claim, whose proof is simple although the calculations involved are a bit tedious, consisting mostly of computing integrals and case analysis.

Lemma 13.

If κ>0\kappa>0 and κ𝔷π2(1o(1))eβR\kappa\sqrt{\mathfrak{z}}\leq\frac{\pi}{2}(1-o(1))e^{\beta R}, then

μ(𝒟(𝔷)(κ))={Θ(1+n(κ𝔷eβR)1αβ),if αβ,Θ(1+nκ𝔷eβR(1+log(eβRκ𝔷))),if α=β.\mu(\mathcal{D}^{(\mathfrak{z})}(\kappa))=\begin{cases}\Theta\Big{(}1+n\Big{(}\frac{\kappa\sqrt{\mathfrak{z}}}{e^{\beta R}}\Big{)}^{1\wedge\frac{\alpha}{\beta}}\Big{)},&\text{if $\alpha\neq\beta$,}\\[8.0pt] \Theta\Big{(}1+n\frac{\kappa\sqrt{\mathfrak{z}}}{e^{\beta R}}\Big{(}1+\log\big{(}\frac{e^{\beta R}}{\kappa\sqrt{\mathfrak{z}}}\big{)}\Big{)}\Big{)},&\text{if $\alpha=\beta$.}\end{cases}
Proof.

If r^>RΘ(1)\widehat{r}>R-\Theta(1), by (5) and Part (iii) of Lemma 8, we see that κ𝔷=Θ(eβR)\kappa\sqrt{\mathfrak{z}}=\Theta(e^{\beta R}) which always gives an expression of order Θ(n)\Theta(n) on the right-hand side of the equality in the lemma’s statement. On the other hand, by Fact 11, we know that BO(r^)𝒟(𝔷)(κ)BO(R)B_{O}(\widehat{r})\subseteq\mathcal{D}^{(\mathfrak{z})}(\kappa)\subseteq B_{O}(R), so from Lemma 6 we get that μ(𝒟(𝔷)(κ))=Θ(μ(BO(r^)))=Θ(n)\mu(\mathcal{D}^{(\mathfrak{z})}(\kappa))=\Theta(\mu(B_{O}(\widehat{r})))=\Theta(n), and hence the claim holds for said large values of r^\widehat{r}.

Assume henceforth that r^RΘ(1)\widehat{r}\leq R-\Theta(1) and define A:=𝒟(𝔷)(κ)(BQ(R)BO(r^))A:=\mathcal{D}^{(\mathfrak{z})}(\kappa)\setminus(B_{Q}(R)\cup B_{O}(\widehat{r})). Clearly,

μ(𝒟(𝔷)(κ))=μ(BQ(R)BO(r^))+μ(A).\mu(\mathcal{D}^{(\mathfrak{z})}(\kappa))=\mu(B_{Q}(R)\cup B_{O}(\widehat{r}))+\mu(A). (6)

Now, observe that BQ(R)B_{Q}(R) is completely contained in half of the disk BO(R)B_{O}(R) (see Figure 1), so μ(BQ(R)BO(r^))=μ(BQ(R))+Θ(μ(BO(r^)))\mu(B_{Q}(R)\cup B_{O}(\widehat{r}))=\mu(B_{Q}(R))+\Theta(\mu(B_{O}(\widehat{r}))), and thus by Lemma 6 and Fact 12, for r^>1\widehat{r}>1,

μ(BQ(R)BO(r^))=Θ(n(eR2+(κ𝔷eβR)αβ))=Θ(1+n(κ𝔷eβR)αβ).\mu(B_{Q}(R)\cup B_{O}(\widehat{r}))=\Theta\Big{(}n\Big{(}e^{-\frac{R}{2}}+\Big{(}\frac{\kappa\sqrt{\mathfrak{z}}}{e^{\beta R}}\Big{)}^{\frac{\alpha}{\beta}}\Big{)}\Big{)}=\Theta\Big{(}1+n\Big{(}\frac{\kappa\sqrt{\mathfrak{z}}}{e^{\beta R}}\Big{)}^{\frac{\alpha}{\beta}}\Big{)}. (7)

Moreover, the identity also holds when r^1\widehat{r}\leq 1, since μ(BQ(R))=Θ(1)\mu(B_{Q}(R))=\Theta(1) and μ(BO(r^))=O(neαR)=o(1)\mu(B_{O}(\widehat{r}))=O(ne^{-\alpha R})=o(1) (by Lemma 6, definition of RR and the fact that α>12\alpha>\frac{1}{2}), and n(κ𝔷/eβR)αβ=O(neαR)=o(1)n(\kappa\sqrt{\mathfrak{z}}/e^{\beta R})^{\frac{\alpha}{\beta}}=O(ne^{-\alpha R})=o(1) (by Fact 12, definition of RR and since α>12)\alpha>\frac{1}{2}).

On the other hand, by Fact 11 and our choice of 𝒟(𝔷)(κ)\mathcal{D}^{(\mathfrak{z})}(\kappa), we get

μ(A)=Θ(neαRκ𝔷)r^Reβr0sinh(αr0)𝑑r0\mu(A)=\Theta(ne^{-\alpha R}\kappa\sqrt{\mathfrak{z}})\int_{\widehat{r}}^{R}e^{-\beta r_{0}}\sinh(\alpha r_{0})dr_{0} (8)

The next claim together with (6), (7) and (8) yield the lemma:

neαRκ𝔷r^Reβr0sinh(αr0)𝑑r0={O(n(κ𝔷eβR)1αβ),if αβ,Θ(nκ𝔷eβRlog(eβRκ𝔷)),if α=β.ne^{-\alpha R}\kappa\sqrt{\mathfrak{z}}\int_{\widehat{r}}^{R}e^{-\beta r_{0}}\sinh(\alpha r_{0})dr_{0}=\begin{cases}O\Big{(}n\Big{(}\frac{\kappa\sqrt{\mathfrak{z}}}{e^{\beta R}}\Big{)}^{1\wedge\frac{\alpha}{\beta}}\Big{)},&\text{if $\alpha\neq\beta$,}\\[8.0pt] \Theta\Big{(}n\frac{\kappa\sqrt{\mathfrak{z}}}{e^{\beta R}}\log\big{(}\frac{e^{\beta R}}{\kappa\sqrt{\mathfrak{z}}}\big{)}\Big{)},&\text{if $\alpha=\beta$.}\end{cases}

To prove the claim, note that when α=β\alpha=\beta, the last integral equals Θ(Rr^)\Theta(R-\widehat{r}). If r^1\widehat{r}\leq 1, by Fact 12 we have that κ𝔷=O(1)\kappa\sqrt{\mathfrak{z}}=O(1), so by definition of RR we get that Rr^=Θ(R)=Θ(log(eβR/(κ𝔷)))R-\widehat{r}=\Theta(R)=\Theta(\log(e^{\beta R}/(\kappa\sqrt{\mathfrak{z}}))). If on the other hand r^>1\widehat{r}>1, again by Fact 12, we have that r^=1βlog(κ𝔷)+Θ(1)\widehat{r}=\frac{1}{\beta}\log(\kappa\sqrt{\mathfrak{z}})+\Theta(1), so analogously to the previous calculations we obtain Rr^=Θ(log(eβR/(κ𝔷)))R-\widehat{r}=\Theta(\log(e^{\beta R}/(\kappa\sqrt{\mathfrak{z}}))). Plugging back establishes the claim when α=β\alpha=\beta.

For α>β\alpha>\beta, since r^RΘ(1)\widehat{r}\leq R-\Theta(1), the claim follows because the integral therein equals Θ(e(αβ)R)\Theta(e^{(\alpha-\beta)R}).

Finally, when α<β\alpha<\beta, the integral in the claim is Θ(e(βα)r^)\Theta(e^{-(\beta-\alpha)\widehat{r}}). If r^>1\widehat{r}>1, by Fact 12, e(βα)r^=Θ((κ𝔷)(1αβ))e^{-(\beta-\alpha)\widehat{r}}=\Theta((\kappa\sqrt{\mathfrak{z}})^{-(1-\frac{\alpha}{\beta})}). If r^1\widehat{r}\leq 1, then e(βα)r^=O(1)e^{-(\beta-\alpha)\widehat{r}}=O(1) and, again by Fact 12, also κ𝔷=O(1)\kappa\sqrt{\mathfrak{z}}=O(1). So, no matter the value of r^\widehat{r} the claim holds when α<β\alpha<\beta which concludes the proof of the claim for all cases. ∎
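The three regimes of the integral appearing in the claim can be checked numerically. The Python sketch below (illustrative values of alpha, beta, r-hat and R; all names are our own) compares a midpoint-rule evaluation of the integral with the claimed orders of magnitude:

```python
import math

def integral(r_hat, R, alpha, beta, n=50000):
    """Midpoint-rule evaluation of the integral of
    exp(-beta * r) * sinh(alpha * r) over [r_hat, R]."""
    h = (R - r_hat) / n
    return h * sum(
        math.exp(-beta * (r_hat + (k + 0.5) * h))
        * math.sinh(alpha * (r_hat + (k + 0.5) * h))
        for k in range(n)
    )

R, r_hat = 30.0, 8.0
# alpha > beta: dominated by the upper limit, of order e^{(alpha - beta) R}
r1 = integral(r_hat, R, alpha=1.0, beta=0.5) / math.exp((1.0 - 0.5) * R)
# alpha < beta: dominated by the lower limit, of order e^{-(beta - alpha) r_hat}
r2 = integral(r_hat, R, alpha=0.6, beta=1.0) / math.exp(-(1.0 - 0.6) * r_hat)
# alpha = beta: of order R - r_hat
r3 = integral(r_hat, R, alpha=0.8, beta=0.8) / (R - r_hat)
print(r1, r2, r3)
```

Each ratio is bounded above and below by absolute constants, matching the Theta/O estimates in the three cases of the claim.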

We now have all the required ingredients to prove this section’s main result.

Proof.

(of Theorem 9) Throughout the ensuing discussion we let 𝒟(𝔷):=𝒟(𝔷)(κ)\mathcal{D}^{(\mathfrak{z})}:=\mathcal{D}^{(\mathfrak{z})}(\kappa). We begin by showing the uniform upper and lower bounds on x0(Tdet𝔷)\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z}). To do so, observe first that if x0BQ(R)𝒟(𝔷)x_{0}\in B_{Q}(R)\subseteq\mathcal{D}^{(\mathfrak{z})}, clearly x0(Tdet𝔷)=1\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})=1 for any 𝔷0\mathfrak{z}\geq 0 and hence the uniform lower bound follows directly for said x0x_{0}. Assume henceforth that x0BQ(R)x_{0}\not\in B_{Q}(R) and observe that since there is only angular movement, a particle initially located at (r0,θ0)(r_{0},\theta_{0}) detects QQ if and only if it reaches (r0,ϕ(r0))(r_{0},\phi(r_{0})) or (r0,ϕ(r0))(r_{0},-\phi(r_{0})). Now, recall that the angular movement's law is that of a variance 11 Brownian motion BI(𝔷)B_{I(\mathfrak{z})} with I(𝔷):=0𝔷cosech2(βrs)𝑑s=(ϕ(𝔷))2I(\mathfrak{z}):=\int_{0}^{\mathfrak{z}}\operatorname{cosech}^{2}(\beta r_{s})\,ds=(\phi^{(\mathfrak{z})})^{2}, so

x0(Tdet𝔷)=(H[a,b]I(𝔷))\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})=\mathbb{P}(H_{[-a,b]}\leq I(\mathfrak{z})) (9)

where we have used (with a slight abuse of notation) \mathbb{P} for the law of a standard Brownian motion, and where H[a,b]H_{[-a,b]} is its exit time from the interval [a,b][-a,b], where in this case a:=|θ0|ϕ(r0)a:=|\theta_{0}|-\phi(r_{0}) and b:=2πϕ(r0)|θ0|b:=2\pi-\phi(r_{0})-|\theta_{0}|. This last probability is a well studied function of a,ba,b and I(𝔷)I(\mathfrak{z}) (see [11], formula 3.0.2), which can be bounded using the reflection principle and the fact that aba\leq b, giving

(H[a,b]I(𝔷))=Θ((BI(𝔷)a))=Θ(Φ((ϕ(r0)|θ0|)/ϕ(𝔷))).\mathbb{P}(H_{[-a,b]}\leq I(\mathfrak{z}))=\Theta\big{(}\mathbb{P}(B_{I(\mathfrak{z})}\leq-a)\big{)}=\Theta\big{(}\Phi\big{(}(\phi(r_{0})-|\theta_{0}|)/\phi^{(\mathfrak{z})}\big{)}\big{)}. (10)

From our assumption x0BQ(R)x_{0}\not\in B_{Q}(R) we deduce that |θ0|>ϕ(r0)|\theta_{0}|>\phi(r_{0}), and hence the argument within Φ\Phi above is always negative. Moreover, for θ0>0\theta_{0}>0 the mapping θ0Φ((ϕ(r0)θ0)/ϕ(𝔷))\theta_{0}\mapsto\Phi\big{(}(\phi(r_{0})-\theta_{0})/\phi^{(\mathfrak{z})}\big{)} is decreasing, and so both uniform bounds on x0(Tdet𝔷)\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z}) follow from the definition of 𝒟(𝔷)(κ)\mathcal{D}^{(\mathfrak{z})}(\kappa).
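The exit-time estimate above can be illustrated by simulation. The Python sketch below (a crude Monte Carlo with illustrative parameters, not a substitute for formula 3.0.2 of [11]; all names are our own) estimates the probability that a standard Brownian motion exits [-a, b] by time t and compares it with the one-sided tail Phi(-a / sqrt(t)):

```python
import math, random

def phi_cdf(x):
    """Standard normal distribution function via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def exit_prob(a, b, t, n_paths=5000, n_steps=400, seed=1):
    """Monte Carlo estimate of P(H_{[-a, b]} <= t) for a standard Brownian
    motion started at 0 (discrete monitoring, hence a slight underestimate)."""
    rng = random.Random(seed)
    sd = math.sqrt(t / n_steps)
    hits = 0
    for _ in range(n_paths):
        w = 0.0
        for _ in range(n_steps):
            w += rng.gauss(0.0, sd)
            if w <= -a or w >= b:
                hits += 1
                break
    return hits / n_paths

a, b, t = 1.0, 4.0, 1.0      # a <= b, as in the setting of the proof
est = exit_prob(a, b, t)
ref = phi_cdf(-a / math.sqrt(t))   # P(B_t <= -a)
print(est, ref)
```

The estimate lies between ref and a small constant multiple of ref, as the reflection principle argument behind (10) suggests.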

We next establish the integral bound. From (9) and (10) we observe that

𝒟¯(𝔷)x0(Tdet𝔷)𝑑μ(x0)=Θ(neαR)r^Rϕ(r0)+κϕ(𝔷)πΦ((ϕ(r0)θ0)/ϕ(𝔷))sinh(αr0)𝑑θ0𝑑r0.\displaystyle\int_{\overline{\mathcal{D}}^{(\mathfrak{z})}}\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})d\mu(x_{0})=\Theta(ne^{-\alpha R})\int_{\widehat{r}}^{R}\int_{\phi(r_{0})+\kappa\phi^{(\mathfrak{z})}}^{\pi}\Phi\big{(}(\phi(r_{0})-\theta_{0})/\phi^{(\mathfrak{z})}\big{)}\sinh(\alpha r_{0})d\theta_{0}dr_{0}.

Applying the change of variables y0:=(θ0ϕ(r0))/ϕ(𝔷)y_{0}:=(\theta_{0}-\phi(r_{0}))/\phi^{(\mathfrak{z})} and replacing the upper integration limit π\pi by \infty, we obtain

𝒟¯(𝔷)x0(Tdet𝔷)𝑑μ(x0)\displaystyle\int_{\overline{\mathcal{D}}^{(\mathfrak{z})}}\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})d\mu(x_{0}) =O(neαR)r^RκΦ(y0)sinh(αr0)ϕ(𝔷)𝑑y0𝑑r0\displaystyle=O(ne^{-\alpha R})\int_{\widehat{r}}^{R}\int_{\kappa}^{\infty}\Phi({-}y_{0})\sinh(\alpha r_{0})\phi^{(\mathfrak{z})}dy_{0}dr_{0}
=O(neαR𝔷)r^Reβr0sinh(αr0)𝑑r0.\displaystyle=O(ne^{-\alpha R}\sqrt{\mathfrak{z}})\int_{\widehat{r}}^{R}e^{-\beta r_{0}}\sinh(\alpha r_{0})dr_{0}.

The last expression is the same as one encountered in the proof of Lemma 13. Substituting the values obtained therein, one gets a term which, by Lemma 13, is O(μ(𝒟(𝔷)(κ)))O(\mu(\mathcal{D}^{(\mathfrak{z})}(\kappa))), thus completing the proof of the claimed integral upper bound. ∎

4 Radial movement

The basic structure of this section is similar to Section 3. However, we now consider radial movement only. We define the relevant set 𝒟(𝔷):=𝒟(𝔷)(κ)\mathcal{D}^{(\mathfrak{z})}:=\mathcal{D}^{(\mathfrak{z})}(\kappa) depending on a parameter κ1\kappa\geq 1 independent of nn, then we compute μ(𝒟(𝔷))\mu(\mathcal{D}^{(\mathfrak{z})}) and afterwards separately establish the stated upper and lower bounds. Since the radial movement contains a drift towards the boundary that makes the calculations more involved, we first need to prove basic results on such diffusion processes before actually defining 𝒟(𝔷)\mathcal{D}^{(\mathfrak{z})}. Let us thus start with the definition of the radial movement of a given particle, initially located at x0=(r0,θ0)x_{0}=(r_{0},\theta_{0}). Recall that a particle which at time ss is in position xs=(rs,θs)x_{s}=(r_{s},\theta_{s}) will stay at angle θs=θ0\theta_{s}=\theta_{0} while its radial distance from the origin rsr_{s} evolves according to a diffusion process with a reflecting barrier at RR and generator

Δrad:=122r2+α21tanh(αr)r.\Delta_{rad}:=\frac{1}{2}\frac{\partial^{2}}{\partial r^{2}}+\frac{\alpha}{2}\frac{1}{\tanh(\alpha r)}\frac{\partial}{\partial r}.

We are only concerned with the evolution of the process up until the detection time of the target, which occurs when the particle reaches BQ(R)B_{Q}(R), and since the particles can only move radially, for any x0BQ(R)x_{0}\not\in B_{Q}(R) we can also impose an absorbing barrier for rsr_{s} at the radius 𝔯0\mathfrak{r}_{0} corresponding to the point in BQ(R)\partial B_{Q}(R) with angle θ0\theta_{0}. Recall that by Part (iv) of Lemma 8, the function ϕ:[0,R][ϕ(R),π2]\phi:[0,R]\to[\phi(R),\frac{\pi}{2}] has an inverse ϕ1\phi^{-1} which is also decreasing and continuous, so the absorbing barrier is given by 𝔯0=ϕ1(|θ0|)\mathfrak{r}_{0}=\phi^{-1}(|\theta_{0}|). This means that for values of |θ0|>π2|\theta_{0}|>\frac{\pi}{2} we choose as absorbing barrier the origin OO, that is, 𝔯0=0\mathfrak{r}_{0}=0. Also recall that, whatever the value of θ0\theta_{0}, the point (𝔯0,θ0)(\mathfrak{r}_{0},\theta_{0}) always belongs to the boundary of BQ(R)B_{Q}(R). Since near the origin OO the drift towards the boundary grows to infinity, for a point x0x_{0} such that |θ0|>π2|\theta_{0}|>\frac{\pi}{2} we have x0(Tdett)=0\mathbb{P}_{x_{0}}(T_{det}\leq t)=0 (in other words, at these angles the only way to detect QQ would be by reaching the origin, which the unbounded outward drift makes impossible).

For the case where the absorbing barrier is distinct from the origin we use the following result, which we state as a standalone result since it will be of use in later sections as well (by abuse of notation, since Δrad\Delta_{rad} defined above depends only on the radial (one-dimensional) movement, in the following lemma we also use Δrad\Delta_{rad} to denote the corresponding operator acting on sufficiently smooth functions of one real variable):

Lemma 14.

Let {ys}s0\{y_{s}\}_{s\geq 0} be a diffusion process on (0,Y](0,Y] with generator Δrad\Delta_{rad}, and with a reflecting barrier at YY, and let y\mathbb{P}_{y} denote the law of ysy_{s} with initial position yy. Define also T𝐲0T_{{\bf y}_{0}}, TYT_{Y} the hitting times of 𝐲0{\bf y}_{0} and YY respectively. We have:

  1. (i)

    For any λ>0\lambda>0 and any y[𝐲0,Y]y\in[{\bf y}_{0},Y],

    𝔼y(eλT𝐲0)λ1eλ2(Yy)+λ2eλ1(Yy)λ1eλ2(Y𝐲0)+λ2eλ1(Y𝐲0)\mathbb{E}_{y}(e^{-\lambda T_{{\bf y}_{0}}})\,\leq\,\frac{\lambda_{1}e^{-\lambda_{2}(Y-y)}+\lambda_{2}e^{\lambda_{1}(Y-y)}}{{\lambda_{1}e^{-\lambda_{2}(Y-{\bf y}_{0})}+\lambda_{2}e^{\lambda_{1}(Y-{\bf y}_{0})}}}

    where λ1=α24+2λ+α2\lambda_{1}=\sqrt{\frac{\alpha^{2}}{4}+2\lambda}+\frac{\alpha}{2} and λ2=α24+2λα2\lambda_{2}=\sqrt{\frac{\alpha^{2}}{4}+2\lambda}-\frac{\alpha}{2}.

  2. (ii)

    If y[𝐲0,Y]y\in[{\bf y}_{0},Y], then

    𝔼y(T𝐲0)𝔼Y(T𝐲0)eαYα2log(coth(12α𝐲0)).\mathbb{E}_{y}(T_{{\bf y}_{0}})\,\leq\,\mathbb{E}_{Y}(T_{{\bf y}_{0}})\,\leq\,\frac{e^{\alpha Y}}{\alpha^{2}}\log(\operatorname{coth}(\tfrac{1}{2}\alpha{\bf y}_{0})).

    In particular, if 𝐲0>1αlog2{\bf y}_{0}>\tfrac{1}{\alpha}\log 2, then 𝔼y(T𝐲0)4α2eα(Y𝐲0)\displaystyle\mathbb{E}_{y}(T_{{\bf y}_{0}})\,\leq\,\frac{4}{\alpha^{2}}e^{\alpha(Y-{\bf y}_{0})}.

  3. (iii)

    If y[𝐲0,Y]y\in[{\bf y}_{0},Y], then

    G𝐲0(y):=y(T𝐲0<TY)=log(tanh(αY/2)tanh(αy/2))log(tanh(αY/2)tanh(α𝐲0/2)).G_{{\bf y}_{0}}(y):=\mathbb{P}_{y}(T_{{\bf y}_{0}}<T_{Y})=\frac{\log\big{(}\tfrac{\tanh(\alpha Y/2)}{\tanh(\alpha y/2)}\big{)}}{\log\big{(}\tfrac{\tanh(\alpha Y/2)}{\tanh(\alpha{\bf y}_{0}/2)}\big{)}}.
  4. (iv)

    If y[𝐲0,Y)y\in[{\bf y}_{0},Y), then 𝔼y(T𝐲0|T𝐲0<TY)2α(y𝐲0)+2α2(1G𝐲0(y))\displaystyle\mathbb{E}_{y}(T_{{\bf y}_{0}}\,\big{|}\,T_{{\bf y}_{0}}<T_{Y})\leq\frac{2}{\alpha}(y-{\bf y}_{0})+\frac{2}{\alpha^{2}}(1-G_{{\bf y}_{0}}(y)).

Proof.

We begin with the proof of (i) by observing that tanh(x)1\tanh(x)\leq 1 for all positive values of xx and hence we can couple the trajectory of a particle PP with that of an auxiliary particle P~\widetilde{P} starting with the same initial position as PP, but whose radius y~𝔷\widetilde{y}_{\mathfrak{z}} evolves according to the diffusion with generator

Δ~rad(f):=12f′′+α2f,\widetilde{\Delta}_{rad}(f):=\frac{1}{2}f^{\prime\prime}+\frac{\alpha}{2}f^{\prime},

in such a way that y~𝔷y𝔷\widetilde{y}_{\mathfrak{z}}\leq y_{\mathfrak{z}} for all 𝔷\mathfrak{z}. It follows that the detection time T~𝐲0\widetilde{T}_{{\bf y}_{0}} of this auxiliary particle is smaller than the one of PP so in particular 𝔼y(eλT𝐲0)𝔼y(eλT~𝐲0)\mathbb{E}_{y}(e^{-\lambda T_{{\bf y}_{0}}})\leq\mathbb{E}_{y}(e^{-\lambda\widetilde{T}_{{\bf y}_{0}}}), and it suffices to prove the inequality for the auxiliary process. Let now gg be the solution of the following O.D.E.,

12g′′(y)+α2g(y)λg(y)=0\frac{1}{2}g^{\prime\prime}(y)+\frac{\alpha}{2}g^{\prime}(y)-\lambda g(y)=0 (11)

on [𝐲0,Y][{\bf y}_{0},Y] with boundary conditions g(𝐲0)=1g({\bf y}_{0})=1 and g(Y)=0g^{\prime}(Y)=0, which is equal to

g(y)=λ1eλ2(Yy)+λ2eλ1(Yy)λ1eλ2(Y𝐲0)+λ2eλ1(Y𝐲0)g(y)=\frac{\lambda_{1}e^{-\lambda_{2}(Y-y)}+\lambda_{2}e^{\lambda_{1}(Y-y)}}{{\lambda_{1}e^{-\lambda_{2}(Y-{\bf y}_{0})}+\lambda_{2}e^{\lambda_{1}(Y-{\bf y}_{0})}}}

where λ1\lambda_{1} and λ2\lambda_{2} are as in the statement of the lemma. It follows from Itô’s lemma that {eλsg(y~s)}s0\{e^{-\lambda s}g(\widetilde{y}_{s})\}_{s\geq 0} is a bounded martingale, and hence we can apply Doob’s optional stopping theorem to deduce g(y)=𝔼y(eλ0g(y~0))=𝔼y(eλT~𝐲0)g(y)=\mathbb{E}_{y}(e^{-\lambda\cdot 0}g(\widetilde{y}_{0}))=\mathbb{E}_{y}(e^{-\lambda\widetilde{T}_{{\bf y}_{0}}}), giving the result. To obtain the bound in (ii), we go back to the original process {ys}s0\{y_{s}\}_{s\geq 0} which evolves according to Δrad\Delta_{rad}, and define F(y)F(y) as the solution of the O.D.E. on [𝐲0,Y][{\bf y}_{0},Y]

1=12F′′(y)+α2coth(αy)F(y)-1=\frac{1}{2}F^{\prime\prime}(y)\,+\,\frac{\alpha}{2}\operatorname{coth}(\alpha y)F^{\prime}(y) (12)

with boundary conditions F(Y)=0F^{\prime}(Y)=0 and F(𝐲0)=0F({\bf y}_{0})=0. We note in advance that the solution is smooth and bounded and deduce from Itô's lemma that {F(ys)+s}s0\{F(y_{s})+s\}_{s\geq 0} is a martingale, so applying Doob's optional stopping theorem we deduce F(y)=𝔼y(F(y0)+0)=𝔼y(F(ytT𝐲0)+tT𝐲0)F(y)=\mathbb{E}_{y}(F(y_{0})+0)=\mathbb{E}_{y}(F(y_{t\wedge T_{{\bf y}_{0}}})+t\wedge T_{{\bf y}_{0}}) for every t>0t>0. Choosing any c>0c>0, a simple argument obtained by restarting the process at YY every cc units of time gives

y(T𝐲0>t)Y(T𝐲0>t)(Y(T𝐲0>c))tc,\mathbb{P}_{y}(T_{{\bf y}_{0}}>t)\leq\mathbb{P}_{Y}(T_{{\bf y}_{0}}>t)\leq(\mathbb{P}_{Y}(T_{{\bf y}_{0}}>c))^{\lfloor\frac{t}{c}\rfloor},

and hence \lim_{t\to\infty}t\,\mathbb{P}_{y}(T_{{\bf y}_{0}}>t)=0. We then deduce that \lim_{t\to\infty}\mathbb{E}_{y}(t\wedge T_{{\bf y}_{0}})=\mathbb{E}_{y}(T_{{\bf y}_{0}}) and, since F is bounded, \lim_{t\to\infty}\mathbb{E}_{y}(F(y_{t\wedge T_{{\bf y}_{0}}}))=\mathbb{E}_{y}(F(y_{T_{{\bf y}_{0}}}))=0. Thus, F(y)=\mathbb{E}_{y}(T_{{\bf y}_{0}}), and it remains to solve the O.D.E. To do so, we multiply (12) by 2\sinh(\alpha y) to obtain

2sinh(αy)=sinh(αy)F′′(y)+αcosh(αy)F(y)=(sinh(αy)F(y)).-2\sinh(\alpha y)=\sinh(\alpha y)F^{\prime\prime}(y)+\alpha\cosh(\alpha y)F^{\prime}(y)=(\sinh(\alpha y)F^{\prime}(y))^{\prime}.

Thus, integrating from yy to YY and using that F(Y)=0F^{\prime}(Y)=0 we have

2α(cosh(αY)cosh(αy))=sinh(αy)F(y),\frac{2}{\alpha}(\cosh(\alpha Y)-\cosh(\alpha y))=\sinh(\alpha y)F^{\prime}(y),

which in particular proves directly that F(y) is an increasing function, so that \mathbb{E}_{y}(T_{{\bf y}_{0}})\leq\mathbb{E}_{Y}(T_{{\bf y}_{0}}). Integrating from {\bf y}_{0} to Y, together with the boundary condition F({\bf y}_{0})=0, gives

𝔼Y(T𝐲0)=F(Y)=2α2(log(sinh(α𝐲0)sinh(αY))cosh(αY)log(tanh(12α𝐲0)tanh(12αY)))\mathbb{E}_{Y}(T_{{\bf y}_{0}})=F(Y)=\frac{2}{\alpha^{2}}\Big{(}\log\Big{(}\frac{\sinh(\alpha{\bf y}_{0})}{\sinh(\alpha Y)}\Big{)}-\cosh(\alpha Y)\log\Big{(}\frac{\tanh(\tfrac{1}{2}\alpha{\bf y}_{0})}{\tanh(\tfrac{1}{2}\alpha Y)}\Big{)}\Big{)}

and hence the general bound appearing in (ii) follows by noticing that the first term is negative, and by bounding cosh(αY)\cosh(\alpha Y) by 12eαY\frac{1}{2}e^{\alpha Y}. To obtain 𝔼y(T𝐲0)4α2eα(Y𝐲0)\mathbb{E}_{y}(T_{{\bf y}_{0}})\leq\frac{4}{\alpha^{2}}e^{\alpha(Y-{\bf y}_{0})} observe that if we assume 𝐲0>1αlog2{\bf y}_{0}>\tfrac{1}{\alpha}\log 2 then coth(12α𝐲0)1+4eα𝐲0\operatorname{coth}(\tfrac{1}{2}\alpha{\bf y}_{0})\leq 1+4e^{-\alpha{\bf y}_{0}} so the result follows from bounding log(1+4eα𝐲0)4eα𝐲0\log(1+4e^{-\alpha{\bf y}_{0}})\leq 4e^{-\alpha{\bf y}_{0}}.
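The closed-form expression for \mathbb{E}_{Y}(T_{{\bf y}_{0}}) can be checked numerically against a direct quadrature of F^{\prime}(y)=\frac{2}{\alpha}\frac{\cosh(\alpha Y)-\cosh(\alpha y)}{\sinh(\alpha y)}; a minimal sketch in Python, with illustrative parameter values:

```python
import math

def expected_hitting_time_closed_form(alpha, y0, Y):
    # F(Y) from the proof of (ii)
    return (2 / alpha**2) * (
        math.log(math.sinh(alpha * y0) / math.sinh(alpha * Y))
        - math.cosh(alpha * Y)
        * math.log(math.tanh(alpha * y0 / 2) / math.tanh(alpha * Y / 2))
    )

def expected_hitting_time_quadrature(alpha, y0, Y, n=200000):
    # F(Y) = int_{y0}^{Y} F'(y) dy with F(y0) = 0, where
    # F'(y) = (2/alpha) * (cosh(alpha*Y) - cosh(alpha*y)) / sinh(alpha*y)
    h = (Y - y0) / n
    total = 0.0
    for i in range(n):
        y = y0 + (i + 0.5) * h  # midpoint rule
        total += (2 / alpha) * (math.cosh(alpha * Y) - math.cosh(alpha * y)) / math.sinh(alpha * y)
    return total * h

alpha, y0, Y = 0.8, 1.0, 5.0   # illustrative values; y0 > log(2)/alpha
cf = expected_hitting_time_closed_form(alpha, y0, Y)
qd = expected_hitting_time_quadrature(alpha, y0, Y)
assert abs(cf - qd) / cf < 1e-6
# the bound E_y(T) <= (4/alpha^2) e^{alpha(Y - y0)} from the end of (ii)
assert cf <= 4 / alpha**2 * math.exp(alpha * (Y - y0))
```

The quadrature agrees with the closed form to high precision, and the final bound of part (ii) holds for these values.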

To establish (iii) and (iv) it will be enough to work with a diffusion evolving on (0,)(0,\infty) according to the original generator Δrad\Delta_{rad}, but without any barriers. Abusing notation we still call the process {ys}s0\{y_{s}\}_{s\geq 0}. To deduce (iii), observe that the solution of the O.D.E.

0=\frac{1}{2}G_{{\bf y}_{0}}^{\prime\prime}(y)+\frac{\alpha}{2}\operatorname{coth}(\alpha y)G_{{\bf y}_{0}}^{\prime}(y)

with conditions G_{{\bf y}_{0}}({\bf y}_{0})=1 and G_{{\bf y}_{0}}(Y)=0 is given by the closed-form expression in (iii). It follows from Itô’s lemma that \{G_{{\bf y}_{0}}(y_{s})\}_{s\geq 0} is a bounded martingale, so applying Doob’s optional stopping theorem we deduce G_{{\bf y}_{0}}(y)=\mathbb{E}_{y}(G_{{\bf y}_{0}}(y_{0}))=\mathbb{E}_{y}(G_{{\bf y}_{0}}(y_{T_{{\bf y}_{0}}\wedge T_{Y}}))=\mathbb{P}_{y}(T_{{\bf y}_{0}}<T_{Y}).
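As a sanity check, one can verify by finite differences that the candidate closed form for G_{{\bf y}_{0}} solves the O.D.E. with the stated boundary conditions. The expression below is reconstructed from the identity g(\widetilde{\mathfrak{r}}_{0})/g(\mathfrak{r}_{0})=G_{\mathfrak{r}_{0}}(\widetilde{\mathfrak{r}}_{0}) used later in the proof of Fact 17 (the statement of (iii) itself is not repeated in this section), so treat it as an assumption of the sketch:

```python
import math

def G(y, alpha, y0, Y):
    # candidate closed form for part (iii): ratio of log(tanh) terms
    num = math.log(math.tanh(alpha * Y / 2) / math.tanh(alpha * y / 2))
    den = math.log(math.tanh(alpha * Y / 2) / math.tanh(alpha * y0 / 2))
    return num / den

alpha, y0, Y = 0.8, 1.0, 5.0  # illustrative values
# boundary conditions G(y0) = 1 and G(Y) = 0
assert abs(G(y0, alpha, y0, Y) - 1) < 1e-12
assert abs(G(Y, alpha, y0, Y)) < 1e-12
# finite-difference residual of 0 = (1/2) G'' + (alpha/2) coth(alpha y) G'
h = 1e-5
for y in [1.5, 2.5, 4.0]:
    d1 = (G(y + h, alpha, y0, Y) - G(y - h, alpha, y0, Y)) / (2 * h)
    d2 = (G(y + h, alpha, y0, Y) - 2 * G(y, alpha, y0, Y) + G(y - h, alpha, y0, Y)) / h**2
    residual = 0.5 * d2 + (alpha / 2) / math.tanh(alpha * y) * d1
    assert abs(residual) < 1e-4
```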

Finally, to prove (iv) define the function H(y)H(y) as the solution of the ordinary differential equation

G𝐲0(y)=12H′′(y)+α2coth(αy)H(y)-G_{{\bf y}_{0}}(y)=\frac{1}{2}H^{\prime\prime}(y)\,+\,\frac{\alpha}{2}\operatorname{coth}(\alpha y)H^{\prime}(y)

with boundary conditions H(𝐲0)=H(Y)=0H({\bf y}_{0})=H(Y)=0. It can be checked directly that the last equation is satisfied by

H(y)=\frac{2}{\alpha}G_{{\bf y}_{0}}(y)\int_{{\bf y}_{0}}^{y}\!\sinh(\alpha l)G_{{\bf y}_{0}}(l)\log\Big{(}\frac{\tanh(\frac{\alpha l}{2})}{\tanh(\frac{\alpha{\bf y}_{0}}{2})}\Big{)}dl+\frac{2}{\alpha}(1{-}G_{{\bf y}_{0}}(y))\int_{y}^{Y}\!\sinh(\alpha l)G_{{\bf y}_{0}}(l)\log\Big{(}\frac{\tanh(\frac{\alpha Y}{2})}{\tanh(\frac{\alpha l}{2})}\Big{)}dl, (13)

which is smooth. It follows once again from Itô’s lemma that {H(ys)+0sG𝐲0(yu)𝑑u}s0\{H(y_{s})+\int_{0}^{s}G_{{\bf y}_{0}}(y_{u})du\}_{s\geq 0} is a martingale. Since in the proof of (iii) we already showed that {G𝐲0(ys)}s0\{G_{{\bf y}_{0}}(y_{s})\}_{s\geq 0} is a martingale, it follows that {0sG𝐲0(yu)𝑑usG𝐲0(ys)}s0\{\int_{0}^{s}G_{{\bf y}_{0}}(y_{u})du-sG_{{\bf y}_{0}}(y_{s})\}_{s\geq 0} is also a martingale. We conclude that {H(ys)+sG𝐲0(ys)}s0\{H(y_{s})+sG_{{\bf y}_{0}}(y_{s})\}_{s\geq 0} is a martingale, so applying Doob’s optional stopping theorem we deduce

H(y)=𝔼y(H(y0)+0G𝐲0(y0))=𝔼y(H(ytT𝐲0TY)+(tT𝐲0TY)G𝐲0(ytT𝐲0TY))H(y)=\mathbb{E}_{y}(H(y_{0})+0\cdot G_{{\bf y}_{0}}(y_{0}))=\mathbb{E}_{y}(H(y_{t\wedge T_{{\bf y}_{0}}\wedge T_{Y}})+(t\wedge T_{{\bf y}_{0}}\wedge T_{Y})\cdot G_{{\bf y}_{0}}(y_{t\wedge T_{{\bf y}_{0}}\wedge T_{Y}}))

for every t>0t>0. Reasoning as in the proof of (ii) we can take the limit as tt\to\infty to obtain

H(y)=𝔼y(H(yT𝐲0TY)+(T𝐲0TY)G𝐲0(yT𝐲0TY))=𝔼y(T𝐲0𝟏{T𝐲0<TY})H(y)=\mathbb{E}_{y}(H(y_{T_{{\bf y}_{0}}\wedge T_{Y}})+(T_{{\bf y}_{0}}\wedge T_{Y})\cdot G_{{\bf y}_{0}}(y_{T_{{\bf y}_{0}}\wedge T_{Y}}))=\mathbb{E}_{y}(T_{{\bf y}_{0}}{\bf 1}_{\{T_{{\bf y}_{0}}<T_{Y}\}})

where we used that H({\bf y}_{0})=H(Y)=G_{{\bf y}_{0}}(Y)=0 and G_{{\bf y}_{0}}({\bf y}_{0})=1. Observing that \mathbb{E}_{y}(T_{{\bf y}_{0}}\,|\,T_{{\bf y}_{0}}<T_{Y})=\frac{H(y)}{G_{{\bf y}_{0}}(y)}, to obtain the inequality in (iv) we only need to bound H(y). To do so, write u(r):=\log(\tanh(\tfrac{1}{2}\alpha Y)/\tanh(\tfrac{1}{2}\alpha r)), so that G_{{\bf y}_{0}}(l)=u(l)/u({\bf y}_{0}) and, for {\bf y}_{0}\leq l\leq Y, G_{{\bf y}_{0}}(l)\log\Big{(}\frac{\tanh(\frac{\alpha l}{2})}{\tanh(\frac{\alpha{\bf y}_{0}}{2})}\Big{)}=u(l)\Big{(}1-\frac{u(l)}{u({\bf y}_{0})}\Big{)}\leq u(l)\leq\log(\operatorname{coth}(\tfrac{1}{2}\alpha l)). This allows us to bound from above the integrand of the first integral of (13) by \sinh(\alpha l)\log(\operatorname{coth}(\tfrac{1}{2}\alpha l))=\frac{2\operatorname{coth}(\frac{1}{2}\alpha l)}{\operatorname{coth}^{2}(\frac{1}{2}\alpha l)-1}\log(\operatorname{coth}(\tfrac{1}{2}\alpha l))=f(\operatorname{coth}(\tfrac{1}{2}\alpha l)), where f(z)=\frac{2z}{z^{2}-1}\log z is bounded from above by 1 on [1,\infty). Using the same argument we can bound the integrand of the second integral of (13) by G_{{\bf y}_{0}}(l), so that

H(y)G𝐲0(y)2α(y𝐲0)+2α(1G𝐲0(y))yYG𝐲0(l)G𝐲0(y)𝑑l.\frac{H(y)}{G_{{\bf y}_{0}}(y)}\leq\frac{2}{\alpha}(y-{\bf y}_{0})+\frac{2}{\alpha}(1-G_{{\bf y}_{0}}(y))\int_{y}^{Y}\frac{G_{{\bf y}_{0}}(l)}{G_{{\bf y}_{0}}(y)}dl.

Using the fact that G𝐲0(l)G𝐲0(y)=Gy(l)\frac{G_{{\bf y}_{0}}(l)}{G_{{\bf y}_{0}}(y)}=G_{y}(l), the second integral becomes yYGy(l)𝑑l\int_{y}^{Y}G_{y}(l)dl which we control by studying the function yyYGy(l)𝑑ly\mapsto\int_{y}^{Y}G_{y}(l)dl on (0,Y)(0,Y). Notice first that any critical point yy^{\prime} of said function satisfies

yYGy(l)𝑑l=1αlog(tanh(12αY)tanh(12αy))sinh(αy)1α,\int_{y^{\prime}}^{Y}G_{y^{\prime}}(l)dl=\frac{1}{\alpha}\log\Big{(}\frac{\tanh(\frac{1}{2}\alpha Y)}{\tanh(\frac{1}{2}\alpha y^{\prime})}\Big{)}\sinh(\alpha y^{\prime})\leq\frac{1}{\alpha},

so it will be sufficient to control the integral when either y=Y or y\to 0. For the first case we have \int_{Y}^{Y}G_{Y}(l)dl=0, and for the second one, by definition we have \lim_{y\to 0}G_{y}(l)=0 for any l>0. Since G_{y}(l) is monotone increasing in y, by the monotone convergence theorem, \lim_{y\to 0}\int_{y}^{Y}G_{y}(l)dl=\int_{0}^{Y}\lim_{y\to 0}G_{y}(l)dl=0, so putting all these cases together we conclude \int_{y}^{Y}G_{y}(l)dl\leq\frac{1}{\alpha}. ∎
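The final bound \int_{y}^{Y}G_{y}(l)dl\leq\frac{1}{\alpha} can be probed numerically on a grid of starting points; the sketch below again uses the log(tanh) closed form for G (reconstructed from the proof of Fact 17, an assumption here), with illustrative parameters:

```python
import math

def G(y0, l, alpha, Y):
    # hitting probability P_l(T_{y0} < T_Y), reconstructed closed form
    num = math.log(math.tanh(alpha * Y / 2) / math.tanh(alpha * l / 2))
    den = math.log(math.tanh(alpha * Y / 2) / math.tanh(alpha * y0 / 2))
    return num / den

def integral_G(y, alpha, Y, n=2000):
    # midpoint-rule approximation of int_y^Y G_y(l) dl
    h = (Y - y) / n
    return sum(G(y, y + (i + 0.5) * h, alpha, Y) for i in range(n)) * h

for alpha in [0.6, 1.0, 2.0]:
    Y = 6.0
    for y in [0.05, 0.5, 1.0, 3.0, 5.5]:
        # the claimed uniform bound 1/alpha
        assert integral_G(y, alpha, Y) <= 1 / alpha + 1e-6
```

For \alpha=1 and starting points of order a constant the integral is close to 1, so the bound 1/\alpha is essentially tight in this regime.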

Before we define 𝒟(𝔷)(κ)\mathcal{D}^{(\mathfrak{z})}(\kappa), let

ϕ(𝔷):=(𝔷1αeR)12.\phi^{(\mathfrak{z})}:=\Big{(}\frac{\mathfrak{z}^{\frac{1}{\alpha}}}{e^{R}}\Big{)}^{\frac{1}{2}}.

Intuitively, one may think of \phi^{(\mathfrak{z})} as the angular scale below which a point is so close (in angle) to the target that, even if initially at the boundary of B_{O}(R), a particle located there has a reasonable chance to detect the target by time \mathfrak{z} through its radial movement alone. We define \mathcal{D}^{(\mathfrak{z})}(\kappa) as the collection of points such that a particle initially located there can detect the target before time \mathfrak{z} with a not too small probability (depending on \kappa). From our discussion preceding Lemma 14, we will always assume that any point x_{0}\in\mathcal{D}^{(\mathfrak{z})} satisfies |\theta_{0}|\leq\frac{\pi}{2}. Since \mathbb{P}_{r}(T_{\mathfrak{r}_{0}}<T_{R})=G_{\mathfrak{r}_{0}}(r), with G_{\mathfrak{r}_{0}}(r) as defined in Lemma 14 with y:=r, {\bf y}_{0}:=\mathfrak{r}_{0} and Y:=R, it follows that G_{\mathfrak{r}_{0}}(r) is decreasing as a function of r, continuous, and takes values in [0,1]. In particular, for \kappa\geq 1 and \mathfrak{z}>0 there is a unique \widetilde{\mathfrak{r}}_{0}\in[\mathfrak{r}_{0},R] such that

𝔯~0(T𝔯0<TR)=δ(κ,𝔷), where δ=δ(κ,𝔷):=1+𝔷(1+κ𝔷12α)2α.\mathbb{P}_{\widetilde{\mathfrak{r}}_{0}}(T_{\mathfrak{r}_{0}}<T_{R})=\delta(\kappa,\mathfrak{z}),\quad\text{ where }\quad\delta=\delta(\kappa,\mathfrak{z}):=\frac{1+\mathfrak{z}}{(1+\kappa\mathfrak{z}^{\frac{1}{2\alpha}})^{2\alpha}}. (14)

Note that 0<\delta(\kappa,\mathfrak{z})<1 for \mathfrak{z}>0 (the upper bound holds because we assume \kappa\geq 1). Also observe that \delta(\kappa,0)=1 and that \delta(\kappa,\mathfrak{z}) tends to \kappa^{-2\alpha} as \mathfrak{z} tends to infinity. Furthermore, \delta(\kappa,\mathfrak{z})=\Theta(\kappa^{-2\alpha}) if \mathfrak{z}=\Omega(1). Now, define (see Figure 2)

\mathcal{D}^{(\mathfrak{z})}=\mathcal{D}^{(\mathfrak{z})}(\kappa):=\big{\{}x_{0}\in B_{O}(R):|\theta_{0}|\leq\tfrac{\pi}{2}\wedge\big{[}|\theta_{0}|\leq\phi(R)+\kappa\phi^{(\mathfrak{z})}\vee r_{0}\leq\widetilde{\mathfrak{r}}_{0}\big{]}\big{\}},

which contains B_{Q}(R), since every x_{0}\in B_{Q}(R) satisfies r_{0}<\mathfrak{r}_{0}\leq\widetilde{\mathfrak{r}}_{0}. To better understand the motivation for defining \widetilde{\mathfrak{r}}_{0} and \delta as above, we consider the most interesting regime, i.e., \mathfrak{z}=\Omega(1). Under this condition the drift has enough time to move the particle far away from its initial position and towards the boundary, so that the event \{T_{\mathfrak{r}_{0}}<\mathfrak{z},\,T_{\mathfrak{r}_{0}}<T_{R}\} is mostly explained by the particle’s initial trajectory. In particular, by Part (iii) of Lemma 14, we have \mathbb{P}_{r_{0}}(T_{\mathfrak{r}_{0}}<\mathfrak{z},\,T_{\mathfrak{r}_{0}}<T_{R})\approx\mathbb{P}_{r_{0}}(T_{\mathfrak{r}_{0}}<T_{R})=G_{\mathfrak{r}_{0}}(r_{0}), so the condition r_{0}\leq\widetilde{\mathfrak{r}}_{0} aims to include in \mathcal{D}^{(\mathfrak{z})} all points whose probability of detecting the target before reaching the boundary of B_{O}(R) is not too small. To exhaust all possibilities, we must also include in \mathcal{D}^{(\mathfrak{z})} all points which have a sufficiently large probability of detecting the target even after reaching the boundary of B_{O}(R). Said points are captured by the condition |\theta_{0}|\leq\phi(R)+\kappa\phi^{(\mathfrak{z})}, which gives a lower bound of order \delta(\kappa,\mathfrak{z}) for the detection probability, thus explaining our choice of said function.
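The defining equation (14) for \widetilde{\mathfrak{r}}_{0} can be solved by bisection, since G_{\mathfrak{r}_{0}} is continuous and decreasing. The sketch below treats \mathfrak{r}_{0} as a free parameter and uses the closed form of G_{\mathfrak{r}_{0}} reconstructed from the proof of Fact 17 (an assumption of the sketch; parameter values are illustrative):

```python
import math

def delta(kappa, z, alpha):
    # delta(kappa, z) as in (14)
    return (1 + z) / (1 + kappa * z**(1 / (2 * alpha)))**(2 * alpha)

def G(r, alpha, r0, R):
    # candidate closed form for P_r(T_{r0} < T_R)
    num = math.log(math.tanh(alpha * R / 2) / math.tanh(alpha * r / 2))
    den = math.log(math.tanh(alpha * R / 2) / math.tanh(alpha * r0 / 2))
    return num / den

def r_tilde(alpha, r0, R, d, iters=80):
    # unique solution of G(r~) = d in [r0, R]; G decreases from 1 to 0
    lo, hi = r0, R
    for _ in range(iters):
        mid = (lo + hi) / 2
        if G(mid, alpha, r0, R) > d:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

alpha, r0, R, kappa, z = 0.8, 2.0, 12.0, 2.0, 5.0  # illustrative values
d = delta(kappa, z, alpha)
rt = r_tilde(alpha, r0, R, d)
assert r0 <= rt <= R
assert abs(G(rt, alpha, r0, R) - d) < 1e-9
```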

Before moving to the main theorem of this section, we spend some time building some intuition about the geometric shape of 𝒟(𝔷)\mathcal{D}^{(\mathfrak{z})}. Since 𝔯0\mathfrak{r}_{0} goes to 0 when θ0\theta_{0} tends to π2\frac{\pi}{2}, from (14) which is used to define 𝔯~0\widetilde{\mathfrak{r}}_{0}, it is not hard to see that 𝔯~0\widetilde{\mathfrak{r}}_{0} also goes to 0 (and hence 𝔯~0𝔯0\widetilde{\mathfrak{r}}_{0}-\mathfrak{r}_{0} as well) when θ0\theta_{0} tends to π2\frac{\pi}{2}. It requires a fair amount of additional work to show that 𝔯~0𝔯0\widetilde{\mathfrak{r}}_{0}-\mathfrak{r}_{0}, as a function of θ0>ϕ(R)+κϕ(𝔷)\theta_{0}>\phi(R)+\kappa\phi^{(\mathfrak{z})}, first increases very slowly, then reaches a maximum value of roughly 1αlog1δ\frac{1}{\alpha}\log\frac{1}{\delta} and finally decreases rapidly (we omit the details since we will not rely on this observation). In fact, for all practical purposes, one might think of 𝔯~0𝔯0\widetilde{\mathfrak{r}}_{0}-\mathfrak{r}_{0} as being essentially constant up to the point when 𝔯0\mathfrak{r}_{0} is smaller than a constant (equivalently, θ0\theta_{0} is at least a constant).

The main goal of this section is to prove the following result from which Theorem 4 immediately follows (by the proof strategy discussed in Section 1.3) once we show that μ(𝒟(𝔷)(κ))\mu(\mathcal{D}^{(\mathfrak{z})}(\kappa)) is of the right order:

Theorem 15.

The following hold:

(i)

    If κ1\kappa\geq 1 and ϕ(R)+κϕ(𝔷)π2\phi(R)+\kappa\phi^{(\mathfrak{z})}\leq\frac{\pi}{2}, then supx0𝒟¯(𝔷)x0(Tdet𝔷)=O(δ(κ,𝔷))\sup_{x_{0}\in\overline{\mathcal{D}}^{(\mathfrak{z})}}\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})=O(\delta(\kappa,\mathfrak{z})) and

    \int_{\overline{\mathcal{D}}^{(\mathfrak{z})}\!(\kappa)}\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})d\mu(x_{0})=O(\mu(\mathcal{D}^{(\mathfrak{z})}(\kappa))).
(ii)

    For every c>0c>0 there is a κ0>0\kappa_{0}>0 such that if κκ0\kappa\geq\kappa_{0} and 𝔷=16αlogκ+Ω(1)\mathfrak{z}=\frac{16}{\alpha}\log\kappa+\Omega(1) satisfy ϕ(R)+κϕ(𝔷)π2c\phi(R)+\kappa\phi^{(\mathfrak{z})}\leq\frac{\pi}{2}-c, then infx0𝒟(𝔷)x0(Tdet𝔷)=Ω(δ(κ,𝔷))\inf_{x_{0}\in\mathcal{D}^{(\mathfrak{z})}}\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})=\Omega(\delta(\kappa,\mathfrak{z})).

Remark 16.

The extra hypotheses on \kappa and \mathfrak{z} needed for the lower bounds of Part (ii) of Theorem 15 are key for our methods to work, since for small times \mathfrak{z}=o(1) the detection probability of a point always tends to 0 unless it is already in B_{Q}(R) or very close to it (but the latter set is of smaller measure than B_{Q}(R)). Similarly, if a particle starts very close to the origin (that is, at angles very close to \frac{\pi}{2}), the explosion of the drift towards the boundary near the origin also implies an extra penalization for the probability of detection. Observe nevertheless that the extra hypotheses in Part (ii) are automatically satisfied if \mathfrak{z}=\omega(1)\cap o(e^{\alpha R}). Furthermore, the hypothesis \mathfrak{z}=\Omega(1) is natural since in the case of radial movement only we will show that \mathbb{E}(T_{det})=\Theta(1), and we are interested in tail behaviors of the detection time.

Among all facts regarding the intuition of 𝒟(𝔷)\mathcal{D}^{(\mathfrak{z})}, we will only need to prove rigorously (see the next result) that if ϕ(R)+κϕ(𝔷)|θ0|π2\phi(R)+\kappa\phi^{(\mathfrak{z})}\leq|\theta_{0}|\leq\frac{\pi}{2}, then 𝔯~0𝔯01αlog1δ+O(1)\widetilde{\mathfrak{r}}_{0}-\mathfrak{r}_{0}\leq\frac{1}{\alpha}\log\frac{1}{\delta}+O(1).

Fact 17.

For 𝔷>0\mathfrak{z}>0 and κ1\kappa\geq 1, if θ0[ϕ(R)+κϕ(𝔷),π2]\theta_{0}\in[\phi(R)+\kappa\phi^{(\mathfrak{z})},\frac{\pi}{2}], then e𝔯~0𝔯0=O(1/δ1α)e^{\widetilde{\mathfrak{r}}_{0}-\mathfrak{r}_{0}}=O(1/\delta^{\frac{1}{\alpha}}) where δ:=δ(κ,𝔷)\delta:=\delta(\kappa,\mathfrak{z}).

Proof.

We first handle some simple cases. If \widetilde{\mathfrak{r}}_{0}\leq\frac{1}{\alpha}\log\frac{2}{\delta}, then e^{\widetilde{\mathfrak{r}}_{0}-\mathfrak{r}_{0}}\leq e^{\widetilde{\mathfrak{r}}_{0}}\leq(2/\delta)^{\frac{1}{\alpha}} and we are done. Similarly, if \mathfrak{r}_{0}\geq R-\frac{1}{\alpha}\log\frac{3}{\delta}, since \widetilde{\mathfrak{r}}_{0}\leq R, we have that e^{\widetilde{\mathfrak{r}}_{0}-\mathfrak{r}_{0}}\leq e^{R-\mathfrak{r}_{0}}\leq(3/\delta)^{\frac{1}{\alpha}} and we obtain again the claimed bound. Henceforth, assume that \widetilde{\mathfrak{r}}_{0}>\frac{1}{\alpha}\log\frac{2}{\delta} and that \mathfrak{r}_{0}<R-\frac{1}{\alpha}\log\frac{3}{\delta}.

Let g(r):=log(tanh(12αR)/tanh(12αr))g(r):=\log(\tanh(\frac{1}{2}\alpha R)/\tanh(\frac{1}{2}\alpha r)) and note that since tanh(x)1\tanh(x)\leq 1, coth(x)=1+2e2x1e2x\operatorname{coth}(x)=1+\frac{2e^{-2x}}{1-e^{-2x}} and 1+yey1+y\leq e^{y},

g(𝔯~0)log(coth(12α𝔯~0))2eα𝔯~01eα𝔯~04eα𝔯~0,g(\widetilde{\mathfrak{r}}_{0})\leq\log(\operatorname{coth}(\tfrac{1}{2}\alpha\widetilde{\mathfrak{r}}_{0}))\leq\frac{2e^{-\alpha\widetilde{\mathfrak{r}}_{0}}}{1-e^{-\alpha\widetilde{\mathfrak{r}}_{0}}}\leq 4e^{-\alpha\widetilde{\mathfrak{r}}_{0}},

where in the last inequality we have used that e^{-\alpha\widetilde{\mathfrak{r}}_{0}}\leq\delta/2\leq\tfrac{1}{2}, which follows from our assumption on \widetilde{\mathfrak{r}}_{0} and \delta\leq 1. Moreover, since \tanh(x)=1-\frac{2e^{-2x}}{1+e^{-2x}}, \operatorname{coth}(x)=1+\frac{2e^{-2x}}{1-e^{-2x}} and using twice that \log(1+y)\geq\frac{y}{1+y} for y>-1, we have

g(𝔯0)=log(coth(12α𝔯0))+log(tanh(12αR))2eα𝔯01+eα𝔯02eαR1eαReα𝔯0(2+o(1))eαR.g(\mathfrak{r}_{0})=\log(\coth(\tfrac{1}{2}\alpha\mathfrak{r}_{0}))+\log(\tanh(\tfrac{1}{2}\alpha R))\geq\frac{2e^{-\alpha\mathfrak{r}_{0}}}{1+e^{-\alpha\mathfrak{r}_{0}}}-\frac{2e^{-\alpha R}}{1-e^{-\alpha R}}\geq e^{-\alpha\mathfrak{r}_{0}}-(2+o(1))e^{-\alpha R}.

Since δ1\delta\leq 1, by our assumption on 𝔯0\mathfrak{r}_{0}, we conclude that g(𝔯0)=Ω(eα𝔯0)g(\mathfrak{r}_{0})=\Omega(e^{-\alpha\mathfrak{r}_{0}}). Observing that g(𝔯~0)/g(𝔯0)=G𝔯0(𝔯~0)=δg(\widetilde{\mathfrak{r}}_{0})/g(\mathfrak{r}_{0})=G_{\mathfrak{r}_{0}}(\widetilde{\mathfrak{r}}_{0})=\delta, it follows from the previously derived bounds on g(𝔯~0)g(\widetilde{\mathfrak{r}}_{0}) and g(𝔯0)g(\mathfrak{r}_{0}) that eα(𝔯~0𝔯0)=O(δ1)e^{\alpha(\widetilde{\mathfrak{r}}_{0}-\mathfrak{r}_{0})}=O(\delta^{-1}) and thus e𝔯~0𝔯0=O(1/δ1α)e^{\widetilde{\mathfrak{r}}_{0}-\mathfrak{r}_{0}}=O(1/\delta^{\frac{1}{\alpha}}) as desired. ∎
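Fact 17 can also be probed numerically: solving (14) by bisection (with the closed form of G used in the proof above) and checking that e^{\alpha(\widetilde{\mathfrak{r}}_{0}-\mathfrak{r}_{0})}\delta stays bounded over a small parameter grid. The constant 20 below is illustrative, not the constant hidden in the O(\cdot):

```python
import math

def delta(kappa, z, alpha):
    return (1 + z) / (1 + kappa * z**(1 / (2 * alpha)))**(2 * alpha)

def g(r, alpha, R):
    # g(r) = log(tanh(alpha R / 2) / tanh(alpha r / 2)) as in the proof
    return math.log(math.tanh(alpha * R / 2) / math.tanh(alpha * r / 2))

def r_tilde(alpha, r0, R, d, iters=200):
    # solve g(rt)/g(r0) = d by bisection; the ratio decreases from 1 to 0
    lo, hi = r0, R
    for _ in range(iters):
        mid = (lo + hi) / 2
        if g(mid, alpha, R) / g(r0, alpha, R) > d:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

R = 14.0  # illustrative
for alpha in [0.7, 1.0]:
    for r0 in [2.0, 4.0]:
        for kappa in [1.0, 3.0]:
            for z in [1.0, 10.0]:
                d = delta(kappa, z, alpha)
                rt = r_tilde(alpha, r0, R, d)
                # e^{alpha(rt - r0)} * delta stays bounded, as Fact 17 claims
                assert math.exp(alpha * (rt - r0)) * d <= 20
```

On this grid the product e^{\alpha(\widetilde{\mathfrak{r}}_{0}-\mathfrak{r}_{0})}\delta is in fact close to 1.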

For practical purposes one may view 𝒟(𝔷)\mathcal{D}^{(\mathfrak{z})} as the collection of points of BO(R)B_{O}(R) which belong either to a sector of angle 2(ϕ(R)+κϕ(𝔷))2(\phi(R)+\kappa\phi^{(\mathfrak{z})}) whose bisector contains QQ, or to the ball BQ(R)B_{Q}(R), or to those points with angular coordinate |θ0|ϕ(R)+κϕ(𝔷)|\theta_{0}|\geq\phi(R)+\kappa\phi^{(\mathfrak{z})} which are within radial distance roughly 1αlog1δ\frac{1}{\alpha}\log\frac{1}{\delta} of BQ(R)B_{Q}(R). This picture would be accurate except for the fact that it places into 𝒟(𝔷)\mathcal{D}^{(\mathfrak{z})} points with angular coordinate close to π2\frac{\pi}{2} and fairly close to the origin OO, say at distance 1αlog1δΩ(1)\frac{1}{\alpha}\log\frac{1}{\delta}-\Omega(1). However, particles initially located at such points are extremely unlikely to reach BQ(R)B_{Q}(R) and detect QQ, since the drift towards the boundary tends to infinity close to the origin. This partly justifies why 𝒟(𝔷)\mathcal{D}^{(\mathfrak{z})} is defined so that in the case where θ0\theta_{0} goes to π2\frac{\pi}{2} the expression 𝔯~0𝔯0\widetilde{\mathfrak{r}}_{0}-\mathfrak{r}_{0} tends to 0, thus leaving out of 𝒟(𝔷)\mathcal{D}^{(\mathfrak{z})} the previously mentioned problematic points.

Figure 2: Half of disk BO(R)B_{O}(R). The position x𝔷:=(r𝔷,θ𝔷)x_{\mathfrak{z}}:=(r_{\mathfrak{z}},\theta_{\mathfrak{z}}) of particle PP at time 𝔷\mathfrak{z} is shown. The lightly shaded area corresponds to 𝒟(𝔷)(κ)BQ(R)\mathcal{D}^{(\mathfrak{z})}(\kappa)\setminus B_{Q}(R) and the strongly shaded region represents BQ(R)B_{Q}(R). The dashed line corresponds to the radial segment where a particle PP at initial angle θ0\theta_{0} is, conditional under not having detected the target QQ at time 𝔷\mathfrak{z}. The radial coordinates of A0A_{0} and B0B_{0} are 𝔯0:=𝔯0(θ0)\mathfrak{r}_{0}:=\mathfrak{r}_{0}(\theta_{0}) and 𝔯~0:=𝔯~0(θ0)\widetilde{\mathfrak{r}}_{0}:=\widetilde{\mathfrak{r}}_{0}(\theta_{0}), respectively.

We next determine μ(𝒟(𝔷))\mu(\mathcal{D}^{(\mathfrak{z})}), which simply amounts to performing an integral.

Lemma 18.

If \kappa\geq 1 is such that \phi(R)+\kappa\phi^{(\mathfrak{z})}\leq\frac{\pi}{2}, then

μ(𝒟(𝔷)(κ))=Θ(n(ϕ(R)+κϕ(𝔷)))=Θ(1+κ𝔷12α).\mu(\mathcal{D}^{(\mathfrak{z})}(\kappa))=\Theta\big{(}n(\phi(R)+\kappa\phi^{(\mathfrak{z})})\big{)}=\Theta(1+\kappa\mathfrak{z}^{\frac{1}{2\alpha}}).
Proof.

For the lower bound, simply observe that 𝒟(𝔷):=𝒟(𝔷)(κ)\mathcal{D}^{(\mathfrak{z})}:=\mathcal{D}^{(\mathfrak{z})}(\kappa) contains a sector ΥQ\Upsilon_{Q} of angle 2(ϕ(R)+κϕ(𝔷))2(\phi(R)+\kappa\phi^{(\mathfrak{z})}) on whose bisector lies the target QQ, so μ(𝒟(𝔷))μ(ΥQ)=Θ(n(ϕ(R)+κϕ(𝔷)))\mu(\mathcal{D}^{(\mathfrak{z})})\geq\mu(\Upsilon_{Q})=\Theta(n(\phi(R)+\kappa\phi^{(\mathfrak{z})})). Now, we address the upper bound. It suffices to show that μ(𝒟(𝔷)ΥQ)=O(μ(ΥQ))\mu(\mathcal{D}^{(\mathfrak{z})}\setminus\Upsilon_{Q})=O(\mu(\Upsilon_{Q})). Indeed, observe that

μ(𝒟(𝔷)ΥQ)=O(neαR)ϕ(R)+κϕ(𝔷)π20𝔯~012πsinh(αr0)dr0dθ0=O(neαR)ϕ(R)+κϕ(𝔷)π2eα𝔯~0dθ0.\mu(\mathcal{D}^{(\mathfrak{z})}\setminus\Upsilon_{Q})=O(ne^{-\alpha R})\int_{\phi(R)+\kappa\phi^{(\mathfrak{z})}}^{\frac{\pi}{2}}\int_{0}^{\widetilde{\mathfrak{r}}_{0}}\frac{1}{2\pi}\sinh(\alpha r_{0})dr_{0}d\theta_{0}=O(ne^{-\alpha R})\int_{\phi(R)+\kappa\phi^{(\mathfrak{z})}}^{\frac{\pi}{2}}e^{\alpha\widetilde{\mathfrak{r}}_{0}}d\theta_{0}.

Thus, since eα𝔯~0=eα𝔯0eα(𝔯~0𝔯0)e^{\alpha\widetilde{\mathfrak{r}}_{0}}=e^{\alpha\mathfrak{r}_{0}}e^{\alpha(\widetilde{\mathfrak{r}}_{0}-\mathfrak{r}_{0})}, by Fact 17,

μ(𝒟(𝔷)ΥQ)=O(neαRδ1)ϕ(R)+κϕ(𝔷)π2eα𝔯0dθ0.\mu(\mathcal{D}^{(\mathfrak{z})}\setminus\Upsilon_{Q})=O(ne^{-\alpha R}\delta^{-1})\int_{\phi(R)+\kappa\phi^{(\mathfrak{z})}}^{\frac{\pi}{2}}e^{\alpha\mathfrak{r}_{0}}d\theta_{0}.

Recall that by definition |θ0|=ϕ(𝔯0)|\theta_{0}|=\phi(\mathfrak{r}_{0}), so that by Part (v) of Lemma 8, we have |θ0|=ϕ(𝔯0)=Θ(e12𝔯0)|\theta_{0}|=\phi(\mathfrak{r}_{0})=\Theta(e^{-\frac{1}{2}\mathfrak{r}_{0}}), and

μ(𝒟(𝔷)ΥQ)=O(neαRδ1)ϕ(R)+κϕ(𝔷)π2θ02αdθ0=O(neαRδ1(ϕ(R)+κϕ(𝔷))(2α1)).\mu(\mathcal{D}^{(\mathfrak{z})}\setminus\Upsilon_{Q})=O(ne^{-\alpha R}\delta^{-1})\int_{\phi(R)+\kappa\phi^{(\mathfrak{z})}}^{\frac{\pi}{2}}\theta_{0}^{-2\alpha}d\theta_{0}=O(ne^{-\alpha R}\delta^{-1}(\phi(R)+\kappa\phi^{(\mathfrak{z})})^{-(2\alpha-1)}).

By our choices of ϕ(𝔷)\phi^{(\mathfrak{z})} and δ\delta we have eαRδ1=11+𝔷(ϕ(R)+κϕ(𝔷))2α(ϕ(R)+κϕ(𝔷))2αe^{-\alpha R}\delta^{-1}=\frac{1}{1+\mathfrak{z}}(\phi(R)+\kappa\phi^{(\mathfrak{z})})^{2\alpha}\leq(\phi(R)+\kappa\phi^{(\mathfrak{z})})^{2\alpha}. Thus, μ(𝒟(𝔷)ΥQ)=O(n(ϕ(R)+κϕ(𝔷)))=O(μ(ΥQ))\mu(\mathcal{D}^{(\mathfrak{z})}\setminus\Upsilon_{Q})=O(n(\phi(R)+\kappa\phi^{(\mathfrak{z})}))=O(\mu(\Upsilon_{Q})) as claimed. ∎
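The identity used in the last step is exact if one takes \phi(R)=e^{-R/2}; the paper only guarantees \phi(R)=\Theta(e^{-R/2}) (Part (v) of Lemma 8), so this normalization is an assumption of the following sketch:

```python
import math

def phi_z(z, alpha, R):
    # phi^{(z)} = (z^{1/alpha} / e^R)^{1/2} = z^{1/(2 alpha)} e^{-R/2}
    return (z**(1 / alpha) / math.exp(R))**0.5

def delta(kappa, z, alpha):
    return (1 + z) / (1 + kappa * z**(1 / (2 * alpha)))**(2 * alpha)

alpha, R = 0.75, 10.0  # illustrative values
for kappa in [1.0, 2.0, 5.0]:
    for z in [0.5, 1.0, 8.0]:
        lhs = math.exp(-alpha * R) / delta(kappa, z, alpha)
        # assuming phi(R) = e^{-R/2} exactly (only Theta(e^{-R/2}) in the paper)
        rhs = (math.exp(-R / 2) + kappa * phi_z(z, alpha, R))**(2 * alpha) / (1 + z)
        assert abs(lhs - rhs) / lhs < 1e-12
```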

4.1 Uniform upper bound

The goal of this subsection is to prove the following result from which the upper bounds of Theorem 15 immediately follow.

Proposition 19.

If κ1\kappa\geq 1, then

supx0𝒟¯(𝔷)(κ),|θ0|<π2x0(Tdet𝔷)=O(δ(κ,𝔷)) and supx0𝒟¯(𝔷)(κ),|θ0|π2x0(Tdet𝔷)=0.\sup_{x_{0}\in\overline{\mathcal{D}}^{(\mathfrak{z})}\!(\kappa),|\theta_{0}|<\frac{\pi}{2}}\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})=O(\delta(\kappa,\mathfrak{z}))\quad\text{ and }\quad\sup_{x_{0}\in\overline{\mathcal{D}}^{(\mathfrak{z})}\!(\kappa),|\theta_{0}|\geq\frac{\pi}{2}}\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})=0.

Furthermore, if ϕ(R)+κϕ(𝔷)π2\phi(R)+\kappa\phi^{(\mathfrak{z})}\leq\frac{\pi}{2}, then

𝒟¯(𝔷)(κ)x0(Tdet𝔷)dμ(x0)=O(μ(𝒟(𝔷)(κ))).\int_{\overline{\mathcal{D}}^{(\mathfrak{z})}\!(\kappa)}\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})d\mu(x_{0})=O(\mu(\mathcal{D}^{(\mathfrak{z})}(\kappa))).
Proof.

By the discussion preceding Lemma 14, we have \mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})=0 when x_{0}=(r_{0},\theta_{0})\in\overline{\mathcal{D}}^{(\mathfrak{z})} is such that |\theta_{0}|\geq\frac{\pi}{2}, as claimed. Henceforth, let x_{0}=(r_{0},\theta_{0})\in\overline{\mathcal{D}}^{(\mathfrak{z})} be such that |\theta_{0}|<\frac{\pi}{2}. As in Lemma 14, let \mathbb{P}_{r_{0}} denote the law of the radial process \{r_{s}\}_{s\geq 0} with initial position r_{0}, and let T_{\mathfrak{r}} be the hitting time of \mathfrak{r}.

Observe that a particle initially at x_{0} reaches the boundary of B_{Q}(R) either without ever visiting the boundary of B_{O}(R) (which happens with probability at most \mathbb{P}_{r_{0}}(T_{\mathfrak{r}_{0}}<T_{R})), or after visiting said boundary for the first time (in which case the time to reach the boundary of B_{Q}(R) is dominated by the time needed by a particle starting at the boundary of B_{O}(R) with the same angular coordinate as x_{0}). Thus,

r0(Tdet𝔷)r0(T𝔯0<TR)+R(T𝔯0𝔷).\mathbb{P}_{r_{0}}(T_{\det}\leq\mathfrak{z})\leq\mathbb{P}_{r_{0}}(T_{\mathfrak{r}_{0}}<T_{R})+\mathbb{P}_{R}(T_{\mathfrak{r}_{0}}\leq\mathfrak{z}).

We will show that each of the right hand side terms above is O(δ)O(\delta) where δ:=δ(κ,𝔷)\delta:=\delta(\kappa,\mathfrak{z}). This immediately implies the first part of the proposition’s statement.

By our choice of 𝔯~0\widetilde{\mathfrak{r}}_{0} and since r0𝔯~0r_{0}\geq\widetilde{\mathfrak{r}}_{0}, we have

r0(T𝔯0<TR)𝔯~0(T𝔯0<TR)=δ.\mathbb{P}_{r_{0}}(T_{\mathfrak{r}_{0}}<T_{R})\leq\mathbb{P}_{\widetilde{\mathfrak{r}}_{0}}(T_{\mathfrak{r}_{0}}<T_{R})=\delta.

By Part (i) of Lemma 14 with y:=ry:=r, 𝐲0:=𝔯0{\bf y}_{0}:=\mathfrak{r}_{0} and Y:=RY:=R, and Markov’s inequality, for λ>0\lambda>0, λ1:=α24+2λ+α2>0\lambda_{1}:=\sqrt{\frac{\alpha^{2}}{4}+2\lambda}+\frac{\alpha}{2}>0 and λ2:=α24+2λα2\lambda_{2}:=\sqrt{\frac{\alpha^{2}}{4}+2\lambda}-\frac{\alpha}{2}, we get that

r0(T𝔯0𝔷)=r0(eλT𝔯0eλ𝔷)λ1eλ2(Rr0)+λ2eλ1(Rr0)λ2eλ1(R𝔯0)eλ𝔷.\mathbb{P}_{r_{0}}(T_{\mathfrak{r}_{0}}\leq\mathfrak{z})=\mathbb{P}_{r_{0}}(e^{-\lambda T_{\mathfrak{r}_{0}}}\geq e^{-\lambda\mathfrak{z}})\leq\frac{\lambda_{1}e^{-\lambda_{2}(R-r_{0})}+\lambda_{2}e^{\lambda_{1}(R-r_{0})}}{\lambda_{2}e^{\lambda_{1}(R-\mathfrak{r}_{0})}}e^{\lambda\mathfrak{z}}. (15)

Observing that λ1λ2=2λ\lambda_{1}\lambda_{2}=2\lambda, that λ2>0\lambda_{2}>0 and λ1>α\lambda_{1}>\alpha, and taking λ=11+𝔷<1\lambda=\frac{1}{1+\mathfrak{z}}<1 (since by hypothesis 𝔷>0\mathfrak{z}>0) it follows that

R(T𝔯0𝔷)λ1+λ2λ2eλ1(R𝔯0)eλ𝔷=O(eλ𝔷λeλ1(R𝔯0))=O((1+𝔷)eα(R𝔯0)).\mathbb{P}_{R}(T_{\mathfrak{r}_{0}}\leq\mathfrak{z})\leq\frac{\lambda_{1}+\lambda_{2}}{\lambda_{2}e^{\lambda_{1}(R-\mathfrak{r}_{0})}}\cdot e^{\lambda\mathfrak{z}}=O\Big{(}\frac{e^{\lambda\mathfrak{z}}}{\lambda}e^{-\lambda_{1}(R-\mathfrak{r}_{0})}\Big{)}=O((1+\mathfrak{z})e^{-\alpha(R-\mathfrak{r}_{0})}). (16)

By our choice of \delta, we have 1+\mathfrak{z}\leq\delta(1+\kappa\mathfrak{z}^{\frac{1}{2\alpha}})^{2\alpha}=\Theta(\delta e^{\alpha R}(\phi(R)+\kappa\phi^{(\mathfrak{z})})^{2\alpha}). Moreover, by Part (v) of Lemma 8, we have |\theta_{0}|=\phi(\mathfrak{r}_{0})=\Theta(e^{-\frac{1}{2}\mathfrak{r}_{0}}). Since by hypothesis x_{0}\in\overline{\mathcal{D}}^{(\mathfrak{z})}, we know that |\theta_{0}|\geq\phi(R)+\kappa\phi^{(\mathfrak{z})}, and it follows that e^{-\frac{1}{2}\mathfrak{r}_{0}}=\Omega(\phi(R)+\kappa\phi^{(\mathfrak{z})}). Thus,

\mathbb{P}_{R}(T_{\mathfrak{r}_{0}}\leq\mathfrak{z})=O\big{(}\delta(\phi(R)+\kappa\phi^{(\mathfrak{z})})^{2\alpha}e^{\alpha\mathfrak{r}_{0}}\big{)}=O(\delta)

which establishes the first part of our stated result.
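The algebraic identities \lambda_{1}\lambda_{2}=2\lambda and \lambda_{1}-\lambda_{2}=\alpha, and the comparison of the bound preceding (16) with O((1+\mathfrak{z})e^{-\alpha(R-\mathfrak{r}_{0})}), can be checked numerically; the constant 20 below is illustrative:

```python
import math

def lambdas(lam, alpha):
    # lambda_1 and lambda_2 as in the statement of Lemma 14
    s = math.sqrt(alpha**2 / 4 + 2 * lam)
    return s + alpha / 2, s - alpha / 2

alpha = 0.8  # illustrative
for z in [0.5, 2.0, 50.0]:
    lam = 1 / (1 + z)
    l1, l2 = lambdas(lam, alpha)
    assert abs(l1 * l2 - 2 * lam) < 1e-12    # lambda_1 lambda_2 = 2 lambda
    assert abs(l1 - l2 - alpha) < 1e-12      # lambda_1 - lambda_2 = alpha
    assert l1 > alpha and l2 > 0
    for gap in [1.0, 3.0, 10.0]:             # gap plays the role of R - r0
        bound = (l1 + l2) / l2 * math.exp(lam * z) * math.exp(-l1 * gap)
        # comparison with O((1+z) e^{-alpha * gap}); 20 is illustrative
        assert bound <= 20 * (1 + z) * math.exp(-alpha * gap)
```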

Next, we consider the second stated claim (the integral bound). By (15), denoting by μθ0\mu_{\theta_{0}} the measure induced by μ\mu for the angle θ0\theta_{0}, recalling that λ1λ2=α\lambda_{1}-\lambda_{2}=\alpha, it follows that

𝔯~0Rx0(Tdet𝔷)dμθ0(r0)\displaystyle\int_{\widetilde{\mathfrak{r}}_{0}}^{R}\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})d\mu_{\theta_{0}}(r_{0}) =O(n)𝔯~0Rλ1eλ2(Rr0)+λ2eλ1(Rr0)λ2eλ1(R𝔯0)eλ𝔷eα(Rr0)dr0\displaystyle=O(n)\int_{\widetilde{\mathfrak{r}}_{0}}^{R}\frac{\lambda_{1}e^{-\lambda_{2}(R-r_{0})}+\lambda_{2}e^{\lambda_{1}(R-r_{0})}}{{\lambda_{2}e^{\lambda_{1}(R-\mathfrak{r}_{0})}}}\cdot e^{\lambda\mathfrak{z}}\cdot e^{-\alpha(R-r_{0})}dr_{0}
=O(n)eλ2(R𝔯~0)eλ1(R𝔯~0)λ2eλ1(R𝔯0)eλ𝔷\displaystyle=O(n)\frac{e^{\lambda_{2}(R-\widetilde{\mathfrak{r}}_{0})}-e^{-\lambda_{1}(R-\widetilde{\mathfrak{r}}_{0})}}{{\lambda_{2}e^{\lambda_{1}(R-\mathfrak{r}_{0})}}}\cdot e^{\lambda\mathfrak{z}}
=O(n)eλ2(R𝔯~0)λ2eλ1(R𝔯0)eλ𝔷.\displaystyle=O(n)\frac{e^{\lambda_{2}(R-\widetilde{\mathfrak{r}}_{0})}}{\lambda_{2}e^{\lambda_{1}(R-\mathfrak{r}_{0})}}\cdot e^{\lambda\mathfrak{z}}.

Using once more that λ1λ2=α\lambda_{1}-\lambda_{2}=\alpha and λ1λ2=2λ\lambda_{1}\lambda_{2}=2\lambda, taking again λ=11+𝔷<1\lambda=\frac{1}{1+\mathfrak{z}}<1, we obtain

𝔯~0Rx0(Tdet𝔷)dμθ0(r0)=O(n(1+𝔷)eα(R𝔯0)eλ2(𝔯~0𝔯0)).\int_{\widetilde{\mathfrak{r}}_{0}}^{R}\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})d\mu_{\theta_{0}}(r_{0})=O(n(1+\mathfrak{z})e^{-\alpha(R-\mathfrak{r}_{0})}e^{-\lambda_{2}(\widetilde{\mathfrak{r}}_{0}-\mathfrak{r}_{0})}).

Therefore, since \widetilde{\mathfrak{r}}_{0}\geq\mathfrak{r}_{0} and \lambda_{2}>0 imply that \lambda_{2}(\widetilde{\mathfrak{r}}_{0}-\mathfrak{r}_{0}) is nonnegative, it follows that

𝒟¯(𝔷)x0(Tdet𝔷)dμ(x0)=O(n(1+𝔷)eαR)ϕ(R)+κϕ(𝔷)π2eα𝔯0dθ0.\int_{\overline{\mathcal{D}}^{(\mathfrak{z})}}\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})d\mu(x_{0})=O(n(1+\mathfrak{z})e^{-\alpha R})\int_{\phi(R)+\kappa\phi^{(\mathfrak{z})}}^{\frac{\pi}{2}}e^{\alpha\mathfrak{r}_{0}}d\theta_{0}.

Using that |θ0|=ϕ(𝔯0)=Θ(e12𝔯0)|\theta_{0}|=\phi(\mathfrak{r}_{0})=\Theta(e^{-\frac{1}{2}\mathfrak{r}_{0}}), since α>12\alpha>\frac{1}{2}, by definition of δ\delta we get

𝒟¯(𝔷)x0(Tdet𝔷)dμ(x0)=O(n(1+𝔷)eαR)ϕ(R)+κϕ(𝔷)π2θ02αdθ0=O(nδ(ϕ(R)+κϕ(𝔷))).\int_{\overline{\mathcal{D}}^{(\mathfrak{z})}}\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})d\mu(x_{0})=O(n(1+\mathfrak{z})e^{-\alpha R})\int_{\phi(R)+\kappa\phi^{(\mathfrak{z})}}^{\frac{\pi}{2}}\theta_{0}^{-2\alpha}d\theta_{0}=O(n\delta(\phi(R)+\kappa\phi^{(\mathfrak{z})})).

Since δ1\delta\leq 1, the claimed integral bound follows from Lemma 18. ∎
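The explicit integration carried out in the proof can be double-checked by quadrature: integrating the bound from (15) (without its constant prefactors) over [\widetilde{\mathfrak{r}}_{0},R] indeed yields e^{\lambda_{2}(R-\widetilde{\mathfrak{r}}_{0})}-e^{-\lambda_{1}(R-\widetilde{\mathfrak{r}}_{0})}; a sketch with illustrative values:

```python
import math

alpha, z = 0.8, 4.0            # illustrative values
lam = 1 / (1 + z)
s = math.sqrt(alpha**2 / 4 + 2 * lam)
l1, l2 = s + alpha / 2, s - alpha / 2
R, rt = 12.0, 7.0              # rt plays the role of r~_0

def integrand(r):
    # (lambda_1 e^{-lambda_2 (R-r)} + lambda_2 e^{lambda_1 (R-r)}) e^{-alpha (R-r)}
    return (l1 * math.exp(-l2 * (R - r)) + l2 * math.exp(l1 * (R - r))) * math.exp(-alpha * (R - r))

n = 200000
h = (R - rt) / n
quad = sum(integrand(rt + (i + 0.5) * h) for i in range(n)) * h  # midpoint rule
closed = math.exp(l2 * (R - rt)) - math.exp(-l1 * (R - rt))
assert abs(quad - closed) / closed < 1e-7
```

The agreement rests on the identity \lambda_{1}-\lambda_{2}=\alpha, which collapses the integrand to \lambda_{1}e^{-\lambda_{1}(R-r)}+\lambda_{2}e^{\lambda_{2}(R-r)}.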

4.2 Uniform lower bound

The goal of this subsection is to prove the following lower bound matching the upper bound of Proposition 19 under the assumption that at least a constant amount of time has passed since the process started (as stated in Theorem 15).

Proposition 20.

For every fixed arbitrarily small constant c>0c>0 there are large enough constants κ0,C1\kappa_{0},C\geq 1 such that if κκ0\kappa\geq\kappa_{0} and 𝔷16αlogκ+C\mathfrak{z}\geq\frac{16}{\alpha}\log\kappa+C satisfy ϕ(R)+κϕ(𝔷)π2c\phi(R)+\kappa\phi^{(\mathfrak{z})}\leq\frac{\pi}{2}-c, then

infx0𝒟(𝔷)(κ)x0(Tdet𝔷)=Ω(κ2α).\inf_{x_{0}\in\mathcal{D}^{(\mathfrak{z})}(\kappa)}\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})=\Omega(\kappa^{-2\alpha}).
Proof.

Let 𝒟(𝔷):=𝒟(𝔷)(κ)\mathcal{D}^{(\mathfrak{z})}:=\mathcal{D}^{(\mathfrak{z})}(\kappa). To obtain the lower bound on the detection probability recall that any x0𝒟(𝔷)x_{0}\in\mathcal{D}^{(\mathfrak{z})} either satisfies |θ0|ϕ(R)+κϕ(𝔷)|\theta_{0}|\leq\phi(R)+\kappa\phi^{(\mathfrak{z})} or r0𝔯~0r_{0}\leq\widetilde{\mathfrak{r}}_{0} where 𝔯~0\widetilde{\mathfrak{r}}_{0} by definition is such that G𝔯0(𝔯~0)=δ(κ,𝔷)G_{\mathfrak{r}_{0}}(\widetilde{\mathfrak{r}}_{0})=\delta(\kappa,\mathfrak{z}). We deal first with the case 𝔯0<r0𝔯~0\mathfrak{r}_{0}<r_{0}\leq\widetilde{\mathfrak{r}}_{0} (the case r0𝔯0r_{0}\leq\mathfrak{r}_{0} is trivial since it implies that x0BQ(R)x_{0}\in B_{Q}(R) and thus Tdet=0T_{det}=0) by noticing that

x0(Tdet𝔷)\displaystyle\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z}) x0(T𝔯0𝔷|T𝔯0<TR)x0(T𝔯0<TR)\displaystyle\geq\,\mathbb{P}_{x_{0}}(T_{\mathfrak{r}_{0}}\leq\mathfrak{z}\,\big{|}\,T_{\mathfrak{r}_{0}}<T_{R})\mathbb{P}_{x_{0}}(T_{\mathfrak{r}_{0}}<T_{R})
(11𝔷𝔼x0(T𝔯0|T𝔯0<TR))x0(T𝔯0<TR).\displaystyle\geq\,\big{(}1-\tfrac{1}{\mathfrak{z}}\mathbb{E}_{x_{0}}(T_{\mathfrak{r}_{0}}\,\big{|}\,T_{\mathfrak{r}_{0}}<T_{R})\big{)}\mathbb{P}_{x_{0}}(T_{\mathfrak{r}_{0}}<T_{R}).

Using Part (iv) of Lemma 14 with $y:=r$, ${\bf y}_{0}:=\mathfrak{r}_{0}$ and $Y:=R$, recalling that $G_{\mathfrak{r}_{0}}(r)=\mathbb{P}_{r}(T_{\mathfrak{r}_{0}}<T_{R})$, that by case assumption $r_{0}\leq\widetilde{\mathfrak{r}}_{0}$, and that by definition of $\widetilde{\mathfrak{r}}_{0}$ we have $G_{\mathfrak{r}_{0}}(\widetilde{\mathfrak{r}}_{0})=\delta:=\delta(\kappa,\mathfrak{z})$, we get

x0(Tdet𝔷)(11𝔷[2α(𝔯~0𝔯0)+2α2(1δ)])δ.\mathbb{P}_{x_{0}}\big{(}T_{det}\leq\mathfrak{z}\big{)}\geq\Big{(}1-\frac{1}{\mathfrak{z}}\Big{[}\frac{2}{\alpha}(\widetilde{\mathfrak{r}}_{0}-\mathfrak{r}_{0})+\frac{2}{\alpha^{2}}(1-\delta)\Big{]}\Big{)}\delta.

From Fact 17 it follows that 𝔯~0𝔯0=1αlog1δ+O(1)\widetilde{\mathfrak{r}}_{0}-\mathfrak{r}_{0}=\frac{1}{\alpha}\log\tfrac{1}{\delta}+O(1), and since we can bound from above 1δ1-\delta by log1δ\log\tfrac{1}{\delta}, we get that the term inside the brackets above is at most 4α2log1δ+O(1)\frac{4}{\alpha^{2}}\log\frac{1}{\delta}+O(1). By hypothesis, 𝔷C1\mathfrak{z}\geq C\geq 1, so δ(1+κ)2α=Ω(κ2α)\delta\geq(1+\kappa)^{-2\alpha}=\Omega(\kappa^{-2\alpha}) and hence, using again the hypothesis regarding 𝔷\mathfrak{z}, we see that 𝔷>16αlogκ+C8α2log1δ+12C\mathfrak{z}>\frac{16}{\alpha}\log\kappa+C\geq\frac{8}{\alpha^{2}}\log\frac{1}{\delta}+\frac{1}{2}C for CC large enough, so taking CC even larger if needed we conclude that

x0(Tdet𝔷)=Ω(δ)=Ω(κ2α).\mathbb{P}_{x_{0}}\big{(}T_{det}\leq\mathfrak{z}\big{)}=\Omega(\delta)=\Omega(\kappa^{-2\alpha}).

We now handle the points x0:=(r0,θ0)𝒟(𝔷)x_{0}:=(r_{0},\theta_{0})\in\mathcal{D}^{(\mathfrak{z})} for which |θ0|ϕ(R)+κϕ(𝔷)|\theta_{0}|\leq\phi(R)+\kappa\phi^{(\mathfrak{z})}. Observe that for any fixed θ0\theta_{0} the detection probability is minimized when r0=Rr_{0}=R, and for points such that r0=Rr_{0}=R the probability decreases with |θ0||\theta_{0}|, so it will be enough to consider the case where r0=Rr_{0}=R and |θ0|=ϕ(R)+κϕ(𝔷)|\theta_{0}|=\phi(R)+\kappa\phi^{(\mathfrak{z})}. To obtain lower bounds on the detection probability we will couple the radial movement rtr_{t} of the particle starting at x0x_{0} with a similar one, denoted by r~t\widetilde{r}_{t}, evolving as follows:

  • r~0=R\widetilde{r}_{0}=R and we let r~t\widetilde{r}_{t} evolve through the generator Δrad\Delta_{rad} on (0,R+1](0,R+1].

  • Every time r~t\widetilde{r}_{t} hits R+1R+1 it is immediately restarted at RR.

It follows that rtr_{t} and r~t\widetilde{r}_{t} can be coupled so that rtr~tr_{t}\leq\widetilde{r}_{t} for every t0t\geq 0, and hence the detection time T~𝔯0\widetilde{T}_{\mathfrak{r}_{0}} of the new process bounds the original Tdet=T𝔯0T_{det}=T_{\mathfrak{r}_{0}} from above. Thus, it will be enough to bound x0(T~𝔯0𝔷)\mathbb{P}_{x_{0}}(\widetilde{T}_{\mathfrak{r}_{0}}\leq\mathfrak{z}) from below. Observe that the trajectories of r~t\widetilde{r}_{t} are naturally divided into excursions from RR to R+1R+1, which we use to define a sequence of Bernoulli random variables {Ei}i1\{E_{i}\}_{i\geq 1} where Ei=1E_{i}=1 if rt~\widetilde{r_{t}} reaches 𝔯0\mathfrak{r}_{0} on the ii-th excursion. We also use this division to define a sequence {τi}i1\{\tau_{i}\}_{i\geq 1} of random variables, where τi\tau_{i} is the time it takes r~t\widetilde{r}_{t} to reach either 𝔯0\mathfrak{r}_{0} or R+1R+1 in the ii-th excursion. It follows from the strong Markov property that all excursions are independent of one another, and so are the EiE_{i}’s and τi\tau_{i}’s.

Fix m:=α24𝔷m:=\lfloor\tfrac{\alpha}{24}\mathfrak{z}\rfloor which from our hypothesis on 𝔷\mathfrak{z} satisfies m1m\geq 1, and observe that, since the EiE_{i}’s are i.i.d., the event :={1im,Ei=1}\mathcal{E}:=\{\exists 1\leq i\leq m,\,E_{i}=1\} has probability

()=1(1(E1))m1em(E1).\mathbb{P}(\mathcal{E})=1-(1-\mathbb{P}(E_{1}))^{m}\geq 1-e^{-m\mathbb{P}(E_{1})}.
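The elementary inequality $1-(1-x)^{m}\geq 1-e^{-mx}$ used in this display follows from $1-x\leq e^{-x}$; a quick numerical sanity check (our own, not part of the proof):

```python
import math

# 1 - (1-x)^m >= 1 - e^{-mx}  is equivalent to  (1-x)^m <= e^{-mx},
# which follows from 1 - x <= e^{-x} by raising both sides to the m-th power.
for x in [1e-4, 1e-2, 0.1, 0.5, 0.9]:
    for m in [1, 5, 50, 500]:
        assert (1 - x) ** m <= math.exp(-m * x)
```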

From Part (iii) of Lemma 14 with $y:=R$, ${\bf y}_{0}:=\mathfrak{r}_{0}$ and $Y:=R+1$, using $\log(1+x)\geq\frac{x}{2}$ for $0\leq x\leq 1$ and $\log(\operatorname{coth}(\frac{x}{2}))\leq\frac{2e^{-x}}{1-e^{-x}}$, we have

(E1)=log(tanh(α(R+1)/2)tanh(αR/2))log(tanh(αR/2)tanh(α𝔯0/2))log(1+2eαR(1eα)(1+eα(R+1))(1eαR))log(coth(α𝔯02))=Ω(eα(R𝔯0)(1eα𝔯0)).\mathbb{P}(E_{1})=\frac{\log\big{(}\tfrac{\tanh(\alpha(R+1)/2)}{\tanh(\alpha R/2)}\big{)}}{\log\big{(}\tfrac{\tanh(\alpha R/2)}{\tanh(\alpha\mathfrak{r}_{0}/2)}\big{)}}\geq\frac{\log\big{(}1+\frac{2e^{-\alpha R}(1-e^{-\alpha})}{(1+e^{-\alpha(R+1)})(1-e^{-\alpha R})}\big{)}}{\log\big{(}\operatorname{coth}(\alpha\frac{\mathfrak{r}_{0}}{2})\big{)}}=\Omega(e^{-\alpha(R-\mathfrak{r}_{0})}(1-e^{-\alpha\mathfrak{r}_{0}})).

Since $\phi(R)+\kappa\phi^{(\mathfrak{z})}\leq\frac{\pi}{2}-c$, we get $\mathfrak{r}_{0}=\Omega(1)$ and deduce that $\mathbb{P}(E_{1})=\Omega(e^{-\alpha(R-\mathfrak{r}_{0})})$. Recalling that $\phi(R)+\kappa\phi^{(\mathfrak{z})}=|\theta_{0}|=\phi(\mathfrak{r}_{0})$, by Part (v) of Lemma 8, we have that $e^{-\frac{1}{2}\mathfrak{r}_{0}}=\Theta(\phi(R)+\kappa\phi^{(\mathfrak{z})})$ and $\phi(R)=\Theta(e^{-\frac{1}{2}R})$. Since by hypothesis $\mathfrak{z},\kappa\geq 1$, it follows that $e^{\frac{1}{2}(R-\mathfrak{r}_{0})}=\Theta(e^{\frac{1}{2}R}(\phi(R)+\kappa\phi^{(\mathfrak{z})}))=\Theta(\kappa\mathfrak{z}^{\frac{1}{2\alpha}})$. Thus, $\mathbb{P}(E_{1})=\Omega(\kappa^{-2\alpha}/\mathfrak{z})$, so setting $\kappa_{0}$ sufficiently large we get

()1eΩ(κ2α)=Ω(κ2α).\mathbb{P}(\mathcal{E})\geq 1-e^{-\Omega(\kappa^{-2\alpha})}=\Omega(\kappa^{-2\alpha}).

Clearly, if iDi_{D} is the first index such that EiD=1E_{i_{D}}=1, then ={1iDm}\mathcal{E}=\{1\leq i_{D}\leq m\}. Hence,

x0(Tdet𝔷)x0(Tdet𝔷)x0()=x0(i=1iDτi𝔷1iDm)(),\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})\geq\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z}\mid\mathcal{E})\mathbb{P}_{x_{0}}(\mathcal{E})=\mathbb{P}_{x_{0}}\Big{(}\sum_{i=1}^{i_{D}}\tau_{i}\leq\mathfrak{z}\mid 1\leq i_{D}\leq m\Big{)}\mathbb{P}(\mathcal{E}),

where the τi\tau_{i} were defined above. Using Markov’s inequality we obtain

x0(i=1iDτi>𝔷| 1iDm)1𝔷(m𝔼x0(τ1E1=0)+𝔼x0(τ1E1=1)),\mathbb{P}_{x_{0}}\Big{(}\sum_{i=1}^{i_{D}}\tau_{i}>\mathfrak{z}\,\big{|}\,1\leq i_{D}\leq m\Big{)}\leq\frac{1}{\mathfrak{z}}\big{(}m\mathbb{E}_{x_{0}}(\tau_{1}\mid E_{1}=0)+\mathbb{E}_{x_{0}}(\tau_{1}\mid E_{1}=1)\big{)},

since there are at most mm excursions where the particle fails to reach 𝔯0\mathfrak{r}_{0}, followed by a single excursion where it succeeds (here we are conditioning on these events through the value of E1E_{1}). For the first term, we have

𝔼x0(τ1E1=0)=𝔼R(T~R+1T~R+1T~𝔯0)6α,\mathbb{E}_{x_{0}}(\tau_{1}\mid E_{1}=0)=\mathbb{E}_{R}(\widetilde{T}_{R+1}\mid\widetilde{T}_{R+1}\leq\widetilde{T}_{\mathfrak{r}_{0}})\,\leq\,\frac{6}{\alpha},

as can be seen in [11] (formulas 3.0.4 and 3.0.6), by comparing the process to one with constant drift equal to $\tfrac{\alpha}{2}$. For the second term we use Part (iv) of Lemma 14 with $y:=R$, ${\bf y}_{0}:=\mathfrak{r}_{0}$ and $Y:=R+1$ to obtain

𝔼x0(τ1E1=1)=𝔼R(T~𝔯0T~𝔯0<T~R+1)2α(R+1𝔯0)+2α2.\mathbb{E}_{x_{0}}(\tau_{1}\mid E_{1}=1)=\mathbb{E}_{R}(\widetilde{T}_{\mathfrak{r}_{0}}\mid\widetilde{T}_{\mathfrak{r}_{0}}<\widetilde{T}_{R+1})\leq\frac{2}{\alpha}(R+1-\mathfrak{r}_{0})+\frac{2}{\alpha^{2}}.

Recalling that e12(R𝔯0)=Θ(κ𝔷12α)e^{\frac{1}{2}(R-\mathfrak{r}_{0})}=\Theta(\kappa\mathfrak{z}^{\frac{1}{2\alpha}}), we have R𝔯0=log(κ𝔷12α)+Θ(1)R-\mathfrak{r}_{0}=\log(\kappa\mathfrak{z}^{\frac{1}{2\alpha}})+\Theta(1). Summarizing,

\mathbb{P}_{x_{0}}\Big{(}\sum_{i=1}^{i_{D}}\tau_{i}>\mathfrak{z}\mid 1\leq i_{D}\leq m\Big{)}\leq\frac{6m}{\alpha\mathfrak{z}}+\frac{1}{\mathfrak{z}}\Big{(}\frac{2}{\alpha}\log(\kappa\mathfrak{z}^{\frac{1}{2\alpha}})+O(1)\Big{)}.

By our choice of mm it is immediate that 6mα𝔷14\frac{6m}{\alpha\mathfrak{z}}\leq\frac{1}{4}. To bound the last term of the displayed equation recall that by hypothesis 𝔷16αlogκ+C\mathfrak{z}\geq\frac{16}{\alpha}\log\kappa+C, so that we have 2αlogκ18𝔷\frac{2}{\alpha}\log\kappa\leq\frac{1}{8}\mathfrak{z} and moreover we can choose CC large enough so that 1α2log𝔷18𝔷\frac{1}{\alpha^{2}}\log\mathfrak{z}\leq\frac{1}{8}\mathfrak{z} and finally conclude that

\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})\geq\Big{(}1-\mathbb{P}_{x_{0}}\Big{(}\sum_{i=1}^{i_{D}}\tau_{i}>\mathfrak{z}\,\big{|}\,1\leq i_{D}\leq m\Big{)}\Big{)}\mathbb{P}(\mathcal{E})=\Omega(\kappa^{-2\alpha}).\ ∎

Observe that the bounds established in the preceding proposition are exactly those claimed in the first part of Theorem 15.

4.3 Time spent within an interval containing the origin

In this section we consider a class of one-dimensional diffusion processes taking place in $[0,Y]$, with $Y$ a positive integer, and with a given stationary distribution. Our study is motivated by the process $\{y_{s}\}_{s\geq 0}$ in $(0,Y]$ with generator $\Delta_{h}$ (see (4)) and reflecting barrier at $Y$. This section's main result will later in the paper be applied precisely to this latter process. However, the arguments of this section apply to less constrained diffusion processes and might even be further generalized to a larger class of processes. We have therefore opted for an exposition that, instead of tailoring the proofs to the specific process with generator $\Delta_{h}$, states the conditions under which our results hold, so that they apply to our specific process as a corollary.

Our goal is to formalize a rather intuitive fact. Specifically, assume that $\mathfrak{z}$ is not too small and that $\{y_{s}\}_{s\geq 0}$ is a diffusion process with stationary distribution $\pi(\cdot)$ on $[0,Y]$. Roughly speaking, we show that, under some general hypotheses, during a time interval of length $\mathfrak{z}$, when starting from the stationary distribution, with constant probability the process spends within $(0,k]$ (for $k$ larger than a constant) a period of time proportional to the expected time spent there, that is, proportional to $\mathfrak{z}\pi((0,k])$. We remark that $k$ only needs to be larger than a sufficiently large constant $C$, which does not depend on $Y$ (otherwise the result would follow directly from Markov's inequality), and $k$ does not depend on $\pi$ either.

We start by establishing two lemmas. The first one states that a diffusion process {ys}s0\{y_{s}\}_{s\geq 0} in [0,Y][0,Y] with stationary distribution π()\pi(\cdot), henceforth referred to as the original process, can be coupled with a process 𝔇\mathfrak{D} that is continuous in time but discretized in space with values in \mathbb{Z} and defined next. First, let πm:=π((m1,m])\pi_{m}:=\pi((m-1,m]) where 1mY1\leq m\leq Y. Let p1=0p^{\prime}_{1}=0 (this restriction could be relaxed, but we impose it for the sake of simplicity of presentation), and let p2,,pYp^{\prime}_{2},\ldots,p^{\prime}_{Y} be the unique solution to the following system of equations:

πm={pmπm+pm+1πm+1,if m=1,(1pm1)πm1+pm+1πm+1,if 1<m<Y,(1pm1)πm1+(1pm)πm,if m=Y.\pi_{m}=\begin{cases}p^{\prime}_{m}\pi_{m}+p^{\prime}_{m+1}\pi_{m+1},&\text{if $m=1$,}\\ (1-p^{\prime}_{m-1})\pi_{m-1}+p^{\prime}_{m+1}\pi_{m+1},&\text{if $1<m<Y$,}\\ (1-p^{\prime}_{m-1})\pi_{m-1}+(1-p^{\prime}_{m})\pi_{m},&\text{if $m=Y$.}\end{cases} (17)
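Since $p^{\prime}_{1}=0$, each equation in (17) determines the next unknown, so the system can be solved forward (which in particular shows uniqueness). The following Python sketch, our own illustration with a hypothetical stationary measure $\pi_{m}\propto 3^{m}$, solves and verifies the system:

```python
def solve_p_prime(pi):
    """Given masses pi = [pi_1, ..., pi_Y], solve (17) with p'_1 = 0.

    Forward substitution: each equation determines the next unknown via
    p'_{m+1} = (1 - p'_m) * pi_m / pi_{m+1}."""
    Y = len(pi)
    pp = [0.0] * Y  # pp[0] is p'_1 = 0 by assumption
    for m in range(1, Y):
        pp[m] = (1.0 - pp[m - 1]) * pi[m - 1] / pi[m]
    return pp

def satisfies_17(pi, pp, tol=1e-12):
    """Check that (pi, pp) satisfies all three cases of (17)."""
    Y = len(pi)
    for m in range(Y):
        if m == 0:
            rhs = pp[m] * pi[m] + pp[m + 1] * pi[m + 1]
        elif m < Y - 1:
            rhs = (1 - pp[m - 1]) * pi[m - 1] + pp[m + 1] * pi[m + 1]
        else:
            rhs = (1 - pp[m - 1]) * pi[m - 1] + (1 - pp[m]) * pi[m]
        if abs(rhs - pi[m]) > tol:
            return False
    return True

# Hypothetical measure with drift towards Y (pi_m proportional to 3^m),
# for which sup_m p'_m stays bounded away from 1/2, as required by (H1).
pi = [3.0 ** m for m in range(1, 8)]
pp = solve_p_prime(pi)
assert satisfies_17(pi, pp)
assert max(pp) < 0.5
```

The forward substitution is exactly the relation $q^{\prime}_{m-1}\pi_{m-1}=p^{\prime}_{m}\pi_{m}$ established in Fact 26 below.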

Intuitively, one can think of $p^{\prime}_{m}$ as the probability that the process hits $m-1$ before it hits $m$ when starting from the stationary distribution conditioned on starting in the interval $(m-1,m]$. Denote next, for $0\leq m<Y$, by $\widehat{p}_{m}$ the probability that the original process starting at $m$ hits $m-1$ before hitting $m+1$ (observe that $\widehat{p}_{0}=0$). We then define

p:=sup{supm:1mYpm,supm:0m<Yp^m},p:=\sup\{\sup_{m\in\mathbb{N}:1\leq m\leq Y}p^{\prime}_{m},\sup_{m\in\mathbb{N}:0\leq m<Y}\widehat{p}_{m}\},

and let q:=1pq:=1-p. We are exclusively interested in processes for which the following holds:

(H1) p[c¯,c¯]p\in[\underline{c},\overline{c}], 0<c¯c¯<1/20<\underline{c}\leq\overline{c}<1/2, for constants c¯,c¯>0\underline{c},\overline{c}>0 that do not depend on YY.

In other words, we are interested in the case where the original process is subject to a drift towards the boundary YY.

Note however that condition (H1) does not preclude that the process {ys}s0\{y_{s}\}_{s\geq 0}, being at position yy (for some real 0yY0\leq y\leq Y), in expectation stays within an interval (y1,y+1][0,Y](y-1,y+1]\cap[0,Y] an amount of time that increases with YY. This explains why we focus on processes that satisfy the next stated additional hypothesis concerning the maximal time Ty±1T_{y\pm 1} the process stays within said interval:

(H2) There is a constant L>0L>0 (independent of YY) such that for all reals 0yY0\leq y\leq Y it holds that 𝔼y(Ty±1)L\mathbb{E}_{y}(T_{y\pm 1})\leq L.

The preceding hypothesis says that the process, when starting at position $y$, stays in expectation at most a constant time within any given interval $(y-1,y+1]$. A last assumption that we make in order to derive this section's main result is the following:

(H3) There is a constant C~\widetilde{C}, independent of YY, such that if C~kY\widetilde{C}\leq k\leq Y, 𝔷1/π((0,k])\mathfrak{z}\geq 1/\pi((0,k]) and ysky_{s}\leq k at some time moment s0s\geq 0, then with constant probability ytky_{t}\leq k for all sts+1s\leq t\leq s+1.

The preceding hypothesis says that only in the vicinity of $0$ there might exist an infinite drift (towards $Y$). Once we reach a point that is a constant away from $0$, we stay at roughly this value for at least one unit of time with constant probability (this is a technical assumption that allows us to focus only on the probability of reaching such a value when considering the expected time spent roughly there).

Now, we are ready to define the process $\mathfrak{D}$: set the initial position of $\mathfrak{D}$ as $y_{0}^{\mathfrak{D}}:=\lfloor y_{0}\rfloor$. Suppose the original process starts in an interval $(m-1,m]$ for some integer $1\leq m\leq Y$. If the original process hits $m$ before hitting $m-1$, then $\mathfrak{D}$ stays put and the coupling phase defined below starts. If the original process hits $m-1$ before hitting $m$, then $\mathfrak{D}$ jumps to $m-2$ and the coupling phase starts as well. In the coupling phase, iteratively the following happens: suppose the original process is initially at integer $m$, for some $1\leq m<Y$, and $\mathfrak{D}$ is at some integer $m^{\prime}$ (if $m=Y$, directly go to the move-independent phase defined below). Mark $m$ as the center of the current phase, and do the following: as long as the original process neither hits $m-1$ nor $m+1$, $\mathfrak{D}$ stays put. If the original process hits $m-1$, then $\mathfrak{D}$ jumps to $m^{\prime}-1$ (note that this happens with probability $\widehat{p}_{m}\leq p$). If the original process hits $m+1$, then $\mathfrak{D}$ jumps to $m^{\prime}-1$ with probability $\frac{p-\widehat{p}_{m}}{\widehat{q}_{m}}$, where $\widehat{q}_{m}:=1-\widehat{p}_{m}$, and with probability $1-\frac{p-\widehat{p}_{m}}{\widehat{q}_{m}}$, $\mathfrak{D}$ jumps to $m^{\prime}+1$. Note that the total probability that $\mathfrak{D}$ jumps to $m^{\prime}-1$ is exactly $p$. Moreover, if $m+1=Y$, the move-independent phase starts. In either case, $m$ is unmarked as the center of the current phase, the new location of the original process (either $m-1$ or $m+1$, respectively; note that $m-1=0$ could also happen, and this is treated in the same way) is marked as the new center, and the next iteration of the coupling phase (move-independent phase, respectively) starts with this new location playing the role of $m$.

In the move-independent phase only the instants of movements of the original process and $\mathfrak{D}$ are coupled, but not the movements themselves: suppose that at the beginning of this phase the original process is at position $m$ for some integer $0\leq m\leq Y$, whereas $\mathfrak{D}$ is at some integer $m^{\prime}$. At the instant when the original process hits $m-1$ or $m+1$, $\mathfrak{D}$ jumps to $m^{\prime}-1$ with probability $p$ and to $m^{\prime}+1$ with probability $q$. Then, the next iteration of the move-independent phase starts with the new location of the original process (that is, either $m-1$ or $m+1$) playing the role of $m$.

By construction, it is clear that from the beginning of the coupling phase on, $\mathfrak{D}$ jumps from $m^{\prime}$ to $m^{\prime}-1$ with probability $p$ and from $m^{\prime}$ to $m^{\prime}+1$ with probability $1-p$. We also have the following observation, which follows immediately by construction:

Observation 21.

As long as the original process {ys}s0\{y_{s}\}_{s\geq 0} does not hit YY, at every time instant ss, ys𝔇ysy_{s}^{\mathfrak{D}}\leq y_{s}.

We next show that, for $T$ at least a sufficiently large constant $C_{L}$ that depends on $L$ only, conditionally on not yet having hit the boundary, $\mathfrak{D}$ is likely to have already performed quite a few steps by time $T$:

Lemma 22.

Assume (H2) holds. Let $C_{L}$ be a large enough constant depending only on $L$, with $L$ as in (H2). For every constant $c_{1}>0$, there exists a constant $c_{2}>0$, depending only on $c_{1}$ and $L$, such that for any integer $T\geq C_{L}$, with probability at least $1-e^{-c_{1}T}$, at least $c_{2}T$ steps are performed by $\mathfrak{D}$ up to time $T$.

Proof.

Fix any real $0\leq y\leq Y$, and recall that $T_{y\pm 1}$ denotes the time the original process remains in the interval $(y-1,y+1]\cap[0,Y]$. By (H2) and Markov's inequality, with probability at least $1/2$, the original process exits $(y-1,y+1]$ before time $2L$. Formally, for all $0\leq y\leq Y$, we have $\mathbb{P}_{y}(T_{y\pm 1}\geq 2L)\leq 1/2$. Since this bound is uniform over the starting position $y$, we can restart the process every $2L$ units of time, thus obtaining

(Ty±12kL)2k,\mathbb{P}(T_{y\pm 1}\geq 2kL)\leq 2^{-k}, (18)

for every integer $k\geq 1$, regardless of $y$. Hence, if at the beginning of an iteration of the coupling phase the original process is at some integer $0\leq m\leq Y$, the waiting time of $\mathfrak{D}$ until its next step from that moment on is bounded in the same way as $T_{m\pm 1}$, and the time until the first step entering the coupling phase can be bounded as in (18). Now, for $i\geq 1$, let $T^{(i)}$ denote the time spent between the instants $T_{i-1}$ and $T_{i}$ that mark the beginning and end of the $i$-th iteration of the coupling phase (the time before the start of the first iteration of the coupling phase in case $i=1$). By (18), the random variable $\lfloor\frac{T^{(i)}}{2L}\rfloor$ is stochastically bounded from above by a geometric random variable with success probability $\frac{1}{2}$ and support $\{1,2,3,\ldots\}$. The random variables $T^{(1)},T^{(2)},\ldots$ are independent but not identically distributed, as the time spent in some intervals might be longer than in others, but since (18) holds for all $y$, we may bound $\sum_{i}\lfloor\frac{T^{(i)}}{2L}\rfloor$ by a sum of i.i.d. geometric random variables with success probability $\frac{1}{2}$. Hence, for any constant $c_{1}$, there is a sufficiently small $c_{2}>0$ such that

(i=0c2T1T(i)>T)(i=0c2T1T(i)2L>T3L)j=T3L+1(j1c2T1)2jec1T,\mathbb{P}\Big{(}\sum_{i=0}^{c_{2}T-1}T^{(i)}>T\Big{)}\leq\mathbb{P}\Big{(}\sum_{i=0}^{c_{2}T-1}\left\lfloor\frac{T^{(i)}}{2L}\right\rfloor>\frac{T}{3L}\Big{)}\leq\sum_{j=\lfloor\frac{T}{3L}\rfloor+1}^{\infty}\binom{j-1}{c_{2}T-1}2^{-j}\leq e^{-c_{1}T},

where we used that the sum of $c_{2}T$ i.i.d. geometric random variables has a negative binomial distribution, and where $c_{2}$ is chosen (depending on $c_{1}$ and $L$ only) so that the last inequality holds. Hence, with probability at least $1-e^{-c_{1}T}$, at least $c_{2}T$ steps are performed by $\mathfrak{D}$ during time $T$, and the lemma follows. ∎
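As a sanity check of the negative binomial tail used above (our own verification, not part of the proof), one can exploit the classical identity that the sum of $k$ i.i.d. geometric variables with success probability $\frac{1}{2}$ exceeds $n$ exactly when a Binomial$(n,\frac{1}{2})$ is at most $k-1$:

```python
from fractions import Fraction
from math import comb

def negbin_tail(k, n):
    """P(sum of k i.i.d. Geometric(1/2) variables on {1,2,...} exceeds n),
    computed exactly from the negative binomial mass function."""
    half = Fraction(1, 2)
    return 1 - sum(comb(j - 1, k - 1) * half ** j for j in range(k, n + 1))

def binom_tail(n, k):
    """P(Binomial(n, 1/2) <= k - 1), computed exactly."""
    half = Fraction(1, 2)
    return sum(comb(n, i) for i in range(k)) * half ** n

# The k-th success occurs after trial n exactly when the first n trials
# contain at most k-1 successes, so the two quantities agree.
for k in range(1, 6):
    for n in range(k, 20):
        assert negbin_tail(k, n) == binom_tail(n, k)
```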

From the previous lemma we have the following immediate corollary:

Corollary 23.

Assume (H2) holds. Let $C^{\prime}_{L}$ be a sufficiently large constant depending on $L$ only (with $L$ as in (H2)), and let $s\geq C^{\prime}_{L}$ denote the number of steps performed by $\mathfrak{D}$. For every constant $c^{\prime}_{1}>0$, there exists $C^{\prime}_{1}$ depending only on $L$ and on $c^{\prime}_{1}$, such that with probability at least $1-e^{-c^{\prime}_{1}s}$, the time elapsed for the $s$ steps is at most $C^{\prime}_{1}s$.

When $p(\{y_{s}\}_{s\geq 0})<1/2$, the process $\{y_{s}\}_{s\geq 0}$ is subject to a drift towards the boundary $Y$, so intuitively the process $\mathfrak{D}$ will also hit $Y$ in a number of steps that is proportional to $Y-y_{0}^{\mathfrak{D}}$ (the initial distance of the process $\mathfrak{D}$ to the boundary $Y$) and depends on the intensity of the drift. Moreover, one should expect that the probability that the said number of steps exceeds its expectation by much decreases rapidly. The next result is a quantitative version of this intuition.

Lemma 24.

Let {ys}s0\{y_{s}\}_{s\geq 0} be a diffusion process on [0,Y][0,Y] and suppose that 𝔇\mathfrak{D} is such that y0𝔇my_{0}^{\mathfrak{D}}\geq m for some mm\in\mathbb{N}. Let p:=p({ys}s0)p:=p(\{y_{s}\}_{s\geq 0}) and assume (H1) holds. Then, for any 0<δ<12p10<\delta<\frac{1}{2p}-1 and all C(12(1+δ)p)1C\geq(1-2(1+\delta)p)^{-1}, with probability at least 1exp(13δ2pτ)1-\exp(-\frac{1}{3}\delta^{2}p\tau), the process 𝔇\mathfrak{D} hits YY in at most τ=τ(m):=C(Ym)\tau=\tau(m):=\lfloor C(Y-m)\rfloor steps.

Proof.

Denote by UU the random variable counting the number of time steps up to step τ\tau where 𝔇\mathfrak{D} decreases its value (that is, jumps from some integer mm^{\prime} to m1m^{\prime}-1). If the process 𝔇\mathfrak{D} does not hit YY during τ\tau steps, then it must have decreased for UU steps and increased during τU\tau-U steps (so y𝔇τ=τ2U+y𝔇0y^{\mathfrak{D}}_{\tau}=\tau-2U+y^{\mathfrak{D}}_{0}), and moreover y𝔇τy^{\mathfrak{D}}_{\tau} would have to be smaller than YY. Thus, it will suffice to bound from above the probability that τ2U=y𝔇τy𝔇0<Ym\tau-2U=y^{\mathfrak{D}}_{\tau}-y^{\mathfrak{D}}_{0}<Y-m. Since 𝔼(U)=pτ\mathbb{E}(U)=p\tau and by Chernoff bounds (see [32, Theorem 4.4(2)]), for any 0<δ<10<\delta<1,

(U(1+δ)𝔼(U))e13δ2𝔼(U),\mathbb{P}(U\geq(1+\delta)\mathbb{E}(U))\leq e^{-\frac{1}{3}\delta^{2}\mathbb{E}(U)},

the claim follows observing that 12(C1)(Ym)(1+δ)𝔼(U)\frac{1}{2}(C-1)(Y-m)\geq(1+\delta)\mathbb{E}(U) (by hypothesis on δ\delta and CC) and τ2U<Ym\tau-2U<Y-m if and only if U>12(C1)(Ym)U>\tfrac{1}{2}(C-1)(Y-m). ∎
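The statement of Lemma 24 can be illustrated by a small Monte Carlo sketch (our own, with hypothetical parameters $p=0.3$, $\delta=0.1$ and $C=5$ chosen to satisfy the hypotheses of the lemma):

```python
import random

def hits_within(p, gap, tau, rng):
    """One run of the walk D: down with prob p, up with prob 1-p.
    Returns True iff it gains `gap` (i.e. reaches Y) within tau steps."""
    pos = 0
    for _ in range(tau):
        pos += -1 if rng.random() < p else 1
        if pos >= gap:
            return True
    return False

rng = random.Random(0)
p, gap = 0.3, 20     # p < 1/2, boundary at distance Y - m = 20
delta = 0.1          # needs 0 < delta < 1/(2p) - 1 = 2/3
C = 5                # needs C >= 1/(1 - 2*(1+delta)*p) ~ 2.94
tau = C * gap
trials = 2000
hits = sum(hits_within(p, gap, tau, rng) for _ in range(trials))
# Empirically the hit probability is high; Lemma 24 bounds the failure
# probability by exp(-delta^2 * p * tau / 3).
assert hits / trials > 0.9
```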

Next, we show that, with high probability over the choice of the starting position, the boundary YY is hit quickly:

Lemma 25.

Let $\{y_{s}\}_{s\geq 0}$ be a diffusion process on $[0,Y]$ with stationary distribution $\pi(\cdot)$. Assume (H1) and (H2) hold. Then, there is a $\widehat{C}>0$ such that if $0<k\leq Y$, then the original process has not hit $Y$ by time $\widehat{C}\log(1/\pi((0,k]))$ with probability at most $3\pi((0,k])$.

Proof.

Define the event 1:={y0(0,k]}\mathcal{E}_{1}:=\{y_{0}\in(0,k]\} and observe that

π(1)=π((0,k]).\mathbb{P}_{\pi}(\mathcal{E}_{1})=\pi((0,k]).

Next, fix 0<δ10<\delta\leq 1, pp as before, and let C:=C(δ)3/(δ2p)C^{\prime}:=C^{\prime}(\delta)\geq 3/(\delta^{2}p) be a constant which is at least as large as the constant CC of Lemma 24 (this last result is applicable because (H1) holds). Define τ:=Clog(1/π((0,k]))\tau:=\lceil C^{\prime}\log(1/\pi((0,k]))\rceil and let 2\mathcal{E}_{2} be the event that the process 𝔇\mathfrak{D} does not hit YY during τ\tau steps. Conditioned on ¯1\overline{\mathcal{E}}_{1} (so y0𝔇ky_{0}^{\mathfrak{D}}\geq k since y0𝔇>y01>k1y_{0}^{\mathfrak{D}}>y_{0}-1>k-1), by our choice of CC^{\prime} and Lemma 24 with m:=km:=k,

π(2¯1)exp(13δ2pτ)π((0,k]).\mathbb{P}_{\pi}(\mathcal{E}_{2}\mid\overline{\mathcal{E}}_{1})\leq\exp\big{(}{-}\tfrac{1}{3}\delta^{2}p\tau\big{)}\leq\pi((0,k]).

Choosing CC^{\prime} sufficiently large, by Corollary 23, applied with c1:=1c^{\prime}_{1}:=1, there exists C1C^{\prime}_{1} large enough, so that with probability at least 1eτ1-e^{-\tau}, the time elapsed for τ\tau steps is at most C1τC^{\prime}_{1}\tau.

Let 3\mathcal{E}_{3} be the event that the original process has not hit YY by time C1τC^{\prime}_{1}\tau. Since δ,p1\delta,p\leq 1, by definition of CC^{\prime}, we have C>1C^{\prime}>1, so it follows by definition of τ\tau that τlog(1/π((0,k]))\tau\geq\log(1/\pi((0,k])), and thus our preceding discussion establishes that

π(3¯1,¯2)π((0,k]).\mathbb{P}_{\pi}(\mathcal{E}_{3}\mid\overline{\mathcal{E}}_{1},\overline{\mathcal{E}}_{2})\leq\pi((0,k]).

The lemma follows by a union bound over events 1\mathcal{E}_{1}, 2\mathcal{E}_{2}, 3\mathcal{E}_{3}, and by setting C^:=CC1\widehat{C}:=C^{\prime}C^{\prime}_{1}. ∎

In order to establish our main result we next state the following fact:

Fact 26.

For all integers 1<m<Y1<m<Y, qπm1pπmq\pi_{m-1}\leq p\pi_{m}.

Proof.

Let p1,,pYp^{\prime}_{1},...,p^{\prime}_{Y} be defined as in this subsection’s introduction and let qi:=1piq^{\prime}_{i}:=1-p^{\prime}_{i}, and recall that p1=0p^{\prime}_{1}=0 by assumption. We claim that qm1πm1=pmπmq^{\prime}_{m-1}\pi_{m-1}=p^{\prime}_{m}\pi_{m} for 1<mY1<m\leq Y. Indeed, by (17), we have that π1=p1π1+p2π2\pi_{1}=p^{\prime}_{1}\pi_{1}+p^{\prime}_{2}\pi_{2}, so q1π1=p2π2q^{\prime}_{1}\pi_{1}=p^{\prime}_{2}\pi_{2}. Assume the claim holds for 1m<Y1\leq m<Y. By (17) and the inductive hypothesis, we have pm+1πm+1=πmqm1πm1=πmpmπm=qmπmp^{\prime}_{m+1}\pi_{m+1}=\pi_{m}-q^{\prime}_{m-1}\pi_{m-1}=\pi_{m}-p^{\prime}_{m}\pi_{m}=q^{\prime}_{m}\pi_{m}, which concludes the inductive proof of our claim.

By definition of pp, we have pmpp^{\prime}_{m}\leq p and qmqq^{\prime}_{m}\geq q for all 1m<Y1\leq m<Y. Hence, by the preceding paragraph’s claim, for 1<m<Y1<m<Y we get qπm1qm1πm1=pmπmpπmq\pi_{m-1}\leq q^{\prime}_{m-1}\pi_{m-1}=p^{\prime}_{m}\pi_{m}\leq p\pi_{m}, and the fact follows. ∎

We now use the previous lemmata to show this section’s main result:

Proposition 27.

Let {ys}s0\{y_{s}\}_{s\geq 0} be a diffusion process on [0,Y][0,Y] with stationary distribution π()\pi(\cdot). Assume (H1), (H2) and (H3) hold. Then, there are constants C~>0\widetilde{C}>0, c~(0,1)\widetilde{c}\in(0,1), η~>0\widetilde{\eta}>0 all independent of YY such that for all C~kY\widetilde{C}\leq k\leq Y and 𝔷1/π((0,k])\mathfrak{z}\geq 1/\pi((0,k]) we have

π(Ikc~𝔷π((0,k]))η~,where Ik:=0𝔷𝟏(0,k](ys)ds.\mathbb{P}_{\pi}\big{(}I_{k}\geq\widetilde{c}\mathfrak{z}\pi((0,k])\big{)}\geq\widetilde{\eta},\qquad\text{where $I_{k}:=\int_{0}^{\mathfrak{z}}{\bf 1}_{(0,k]}(y_{s})ds$.}
Proof.

First, we establish that there is an integer Δ>0\Delta>0 depending on pp alone such that

π((0,kΔ])112π((0,k])for all 1kY.\pi((0,k-\Delta])\leq\frac{1}{12}\pi((0,k])\qquad\text{for all $1\leq k\leq Y$.} (19)

The claim is a consequence of (H1) and Fact 26: Indeed, let p,qp,q be as therein. For any kk, summing over mm with 1<mk1<m\leq k we get qπ((0,k1])pπ((0,k])q\pi((0,k-1])\leq p\pi((0,k]) and thus π((0,km])(p/q)mπ((0,k])\pi((0,k-m])\leq(p/q)^{m}\pi((0,k]) for any mm\in\mathbb{N}. By (H1) we know that p/qp/q is bounded away from 11 by a constant independent of YY. Taking mm large enough so that (p/q)m1/12(p/q)^{m}\leq 1/12 the claim follows setting Δ:=m\Delta:=m.
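A minimal numerical illustration of (19) (our own sketch, using a hypothetical geometric stationary measure for which the relation of Fact 26 holds with equality):

```python
# If q*pi((0, m-1]) <= p*pi((0, m]) for every m, then cumulative masses
# decay geometrically towards 0: pi((0, k-D]) <= (p/q)^D * pi((0, k]).
p, q = 0.3, 0.7                                # (H1): p bounded away from 1/2
masses = [(q / p) ** m for m in range(1, 30)]  # hypothetical pi_m, drift to Y
cum = []                                       # cum[m-1] = pi((0, m])
s = 0.0
for x in masses:
    s += x
    cum.append(s)
k = len(masses)
for D in range(1, 10):
    assert cum[k - 1 - D] <= (p / q) ** D * cum[k - 1]
# Delta = 3 already suffices for the 1/12 factor in (19) at these values:
assert (p / q) ** 3 <= 1 / 12
```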

Next, we split the time period $[0,\mathfrak{z}]$ into time intervals of one unit (discarding the remaining non-integer part, if any): for the $i$-th time interval let $X_{i}$ be the indicator random variable equal to $1$ if at time instant $i-1$ (that is, at the beginning of the $i$-th time interval) the process $\{y_{s}\}_{s\geq 0}$ is within $(0,k]$, and $0$ otherwise. Since $\pi$ is stationary, $\mathbb{P}_{\pi}(X_{i}=1)=\pi((0,k])$ for any $i$. Thus, setting $X:=\sum_{i=1}^{\lfloor\mathfrak{z}\rfloor}X_{i}$, we have $(\mathfrak{z}-1)\mathbb{P}_{\pi}(X_{1}=1)<\mathbb{E}_{\pi}(X)\leq\mathfrak{z}\mathbb{P}_{\pi}(X_{1}=1)\leq 2\mathbb{E}_{\pi}(X)$. Since (H3) holds, for $\widetilde{C}$ the constant therein, if $X_{i}=1$ and $k\geq\widetilde{C}$, then with constant probability the process $\{y_{s}\}_{s\geq 0}$ stays within $(0,k]$ throughout the entire $i$-th time interval. So, by our previous discussion, to establish the desired result it suffices to show that for some $\xi>0$ the probability that $X$ is smaller than $\xi\mathbb{E}_{\pi}(X)$ is at most a constant strictly less than $1$. We rely on Chebyshev's inequality to do so, and thus need an upper bound on the variance of $X$, which is why we now derive an upper bound for $\mathbb{E}_{\pi}(X^{2})$. Since there are at most $P_{d}\leq\mathfrak{z}$ pairs of random variables $X_{i}$ and $X_{j}$ such that $|i-j|=d$,

𝔼π(X2)=i,jπ(Xi=1,Xj=1)d=0𝔷Pdπ(X1=1,X1+d=1).\mathbb{E}_{\pi}(X^{2})=\sum_{i,j}\mathbb{P}_{\pi}(X_{i}=1,X_{j}=1)\leq\sum_{d=0}^{\lfloor\mathfrak{z}\rfloor}P_{d}\mathbb{P}_{\pi}(X_{1}=1,X_{1+d}=1).

In order to bound π(X1=1,X1+d=1)\mathbb{P}_{\pi}(X_{1}=1,X_{1+d}=1) we construct two processes {y~s}s0\{\widetilde{y}_{s}\}_{s\geq 0} and {y^s}s0\{\widehat{y}_{s}\}_{s\geq 0} as follows: Initially, y~0\widetilde{y}_{0} is sampled with distribution π((0,k])π((0,k])\frac{\pi(\cdot\cap(0,k])}{\pi((0,k])}. Also, with probability π((0,k])\pi((0,k]) we let y^0=y~0\widehat{y}_{0}=\widetilde{y}_{0}, and otherwise we sample y^0\widehat{y}_{0} with distribution π((k,Y])π((k,Y])\frac{\pi(\cdot\cap(k,Y])}{\pi((k,Y])}. Then, both processes evolve independently according to the same law as the original process until they first meet. Afterward, they still move as the original process but coupled together. It follows directly that {y^s}s0\{\widehat{y}_{s}\}_{s\geq 0} is a copy of the original process, while {y~s}s0\{\widetilde{y}_{s}\}_{s\geq 0} is a version of the original process conditioned to start within (0,k](0,k]. Let T~Y\widetilde{T}_{Y} be the first time that {y~s}s0\{\widetilde{y}_{s}\}_{s\geq 0} hits YY, and define X~i\widetilde{X}_{i} and X^i\widehat{X}_{i} analogously to the variables XiX_{i} with y~s\widetilde{y}_{s} and y^s\widehat{y}_{s} playing the role of ysy_{s}, respectively. Since π(X1=1,X1+d=1)=(X~1+d=1)π((0,k])\mathbb{P}_{\pi}(X_{1}=1,X_{1+d}=1)=\mathbb{P}(\widetilde{X}_{1+d}=1)\pi((0,k]), it follows that

π(X1=1,X1+d=1)=(X~1+d=1,T~Yd)π((0,k])+(X1=1,X1+d=1,T~Y>d).\mathbb{P}_{\pi}(X_{1}=1,X_{1+d}=1)\,=\,\mathbb{P}(\widetilde{X}_{1+d}=1,\widetilde{T}_{Y}\leq d)\pi((0,k])+\mathbb{P}(X_{1}=1,X_{1+d}=1,\widetilde{T}_{Y}>d).

For the first term on the right-hand side above observe that y~0y^0\widetilde{y}_{0}\leq\widehat{y}_{0}, hence on the event T~Yd\widetilde{T}_{Y}\leq d both processes have met before time dd, and so by definition X~1+d=X^1+d\widetilde{X}_{1+d}=\widehat{X}_{1+d}, yielding

(X~1+d=1,T~Yd)π((0,k])(X^1+d=1)π((0,k])=(π(X1=1))2.\mathbb{P}(\widetilde{X}_{1+d}=1,\widetilde{T}_{Y}\leq d)\pi((0,k])\leq\mathbb{P}(\widehat{X}_{1+d}=1)\pi((0,k])=\big{(}\mathbb{P}_{\pi}(X_{1}=1)\big{)}^{2}.

It remains then to bound the term $\mathbb{P}(X_{1}=1,X_{1+d}=1,\widetilde{T}_{Y}>d)$. To do so, recall that $\{\widetilde{y}_{s}\}_{s\geq 0}$ is a version of the original process conditioned to start in $(0,k]$, so calling $T_{Y}$ the first time $\{y_{s}\}_{s\geq 0}$ hits $Y$, we have

(X1=1,X1+d=1,T~Y>d)=π(X1=1,X1+d=1,TY>d).\mathbb{P}(X_{1}=1,X_{1+d}=1,\widetilde{T}_{Y}>d)=\mathbb{P}_{\pi}(X_{1}=1,X_{1+d}=1,T_{Y}>d).

First, recall that by Lemma 22, if dCLd\geq C_{L} for some large enough constant depending on LL only, for every c1>0c_{1}>0 there exists c2>0c_{2}>0, depending only on c1c_{1} and on LL, so that the process 𝔇\mathfrak{D} performs at least c2dc_{2}d steps during the time interval [0,d][0,d] with probability at least 1ec1d1-e^{-c_{1}d}. Note that c1c_{1} and c2c_{2} do not depend on C~\widetilde{C}. Let \mathcal{E} be the event that 𝔇\mathfrak{D} performed at least c2dc_{2}d steps up to time dd. Now,

π(X1=1,X1+d=1,TY>d)\displaystyle\mathbb{P}_{\pi}(X_{1}=1,X_{1+d}=1,T_{Y}>d)
π(X1=1,X1+d=1,TY>d,)+π(X1=1,¯)\displaystyle\qquad\leq\mathbb{P}_{\pi}(X_{1}=1,X_{1+d}=1,T_{Y}>d,\mathcal{E})+\mathbb{P}_{\pi}(X_{1}=1,\overline{\mathcal{E}})
π(X1=1,X1+d=1,TY>d,)+ec1dπ(X1=1).\displaystyle\qquad\leq\mathbb{P}_{\pi}(X_{1}=1,X_{1+d}=1,T_{Y}>d,\mathcal{E})+e^{-c_{1}d}\mathbb{P}_{\pi}(X_{1}=1).

Next, recall that as long as the original process does not hit the boundary, by Observation 21, the auxiliary process satisfies ys𝔇ysy_{s}^{\mathfrak{D}}\leq y_{s} for every instant ss. Formally,

π(X1=1,X1+d=1,TY>d)(y0𝔇k,yd𝔇k)=π(X1=1)(yd𝔇ky0𝔇k),\mathbb{P}_{\pi}(X_{1}=1,X_{1+d}=1,T_{Y}>d)\leq\mathbb{P}(y_{0}^{\mathfrak{D}}\leq k,y_{d}^{\mathfrak{D}}\leq k)=\mathbb{P}_{\pi}(X_{1}=1)\mathbb{P}(y_{d}^{\mathfrak{D}}\leq k\mid y_{0}^{\mathfrak{D}}\leq k),

and also

π(X1=1,X1+d=1,TY>d,)\displaystyle\mathbb{P}_{\pi}(X_{1}=1,X_{1+d}=1,T_{Y}>d,\mathcal{E}) (y0𝔇k,yd𝔇k,)\displaystyle\leq\mathbb{P}(y_{0}^{\mathfrak{D}}\leq k,y_{d}^{\mathfrak{D}}\leq k,\mathcal{E})
π(X1=1)(yd𝔇k,y0𝔇k),\displaystyle\leq\mathbb{P}_{\pi}(X_{1}=1)\mathbb{P}(y_{d}^{\mathfrak{D}}\leq k\mid\mathcal{E},y_{0}^{\mathfrak{D}}\leq k),

and our goal is thus to bound (yd𝔇k,y0𝔇k)\mathbb{P}(y_{d}^{\mathfrak{D}}\leq k\mid\mathcal{E},y_{0}^{\mathfrak{D}}\leq k). Recalling that p<1/2p<1/2 (by hypothesis) is the probability that 𝔇\mathfrak{D} makes a decreasing step, and q:=1pq:=1-p, we have for some large enough constant C1>0C_{1}>0

(yd𝔇=ky0𝔇=k,):2c2d(2)(pq):2c2d(4pq)C1(4pq)c2d/2.\mathbb{P}(y_{d}^{\mathfrak{D}}=k\mid y_{0}^{\mathfrak{D}}=k,\mathcal{E})\leq\sum_{\ell:2\ell\geq c_{2}d}\binom{2\ell}{\ell}(pq)^{\ell}\leq\sum_{\ell:2\ell\geq c_{2}d}(4pq)^{\ell}\leq C_{1}(4pq)^{c_{2}d/2}.
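Both elementary bounds used in the display above — the central binomial estimate \binom{2\ell}{\ell}\leq 4^{\ell} and the geometric tail estimate \sum_{\ell\geq m}x^{\ell}=x^{m}/(1-x) — can be sanity-checked numerically. The following sketch uses illustrative values p=0.3 and m=25 (our choice, not from the text):

```python
from math import comb

p, q = 0.3, 0.7   # any p < 1/2 with q = 1 - p; illustrative values
x = 4 * p * q     # 4pq < 1 whenever p != 1/2

# central binomial bound: C(2l, l) (pq)^l <= (4pq)^l, since C(2l, l) <= 4^l
for l in range(1, 60):
    assert comb(2 * l, l) * (p * q) ** l <= x ** l

# geometric tail: sum_{l >= m} x^l <= x^m / (1 - x), so C_1 = 1/(1 - x) works
m = 25            # plays the role of c_2 d / 2
tail = sum(x ** l for l in range(m, 400))
assert tail <= x ** m / (1 - x)
```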

The same upper bound holds for (yd𝔇=k1y0𝔇=k,)\mathbb{P}(y_{d}^{\mathfrak{D}}=k{-}1\mid y_{0}^{\mathfrak{D}}=k,\mathcal{E}). For (yd𝔇=k2y0𝔇=k,)\mathbb{P}(y_{d}^{\mathfrak{D}}=k{-}2\mid y_{0}^{\mathfrak{D}}=k,\mathcal{E}) and (yd𝔇=k3y0𝔇=k,)\mathbb{P}(y_{d}^{\mathfrak{D}}=k{-}3\mid y_{0}^{\mathfrak{D}}=k,\mathcal{E}) note that 𝔇\mathfrak{D} has to make one more decreasing step and one less increasing step, yielding an upper bound of C1(4pq)c2d/2pqC_{1}(4pq)^{c_{2}d/2}\frac{p}{q}. In the same way, (yd𝔇=k2iy0𝔇=k,)C1(4pq)c2d/2(pq)i\mathbb{P}(y_{d}^{\mathfrak{D}}=k{-}2i\mid y_{0}^{\mathfrak{D}}=k,\mathcal{E})\leq C_{1}(4pq)^{c_{2}d/2}(\frac{p}{q})^{i}, so that for C2:=C2(C1)>0C_{2}:=C_{2}(C_{1})>0 large enough, (yd𝔇ky0𝔇=k,)C2(4pq)c2d/2\mathbb{P}(y_{d}^{\mathfrak{D}}\leq k\mid y_{0}^{\mathfrak{D}}=k,\mathcal{E})\leq C_{2}(4pq)^{c_{2}d/2}. We can get the same upper bound also for (yd𝔇ky0𝔇=k1,)\mathbb{P}(y_{d}^{\mathfrak{D}}\leq k\mid y_{0}^{\mathfrak{D}}=k{-}1,\mathcal{E}). Moreover, we have

(yd𝔇=ky0𝔇=k2,)\displaystyle\mathbb{P}(y_{d}^{\mathfrak{D}}=k\mid y_{0}^{\mathfrak{D}}=k{-}2,\mathcal{E}) :2c2d(21)p1q+1qp:2c2d(2)(pq)qpC1(4pq)c2d2.\displaystyle\leq\sum_{\ell:2\ell\geq c_{2}d}\binom{2\ell}{\ell-1}p^{\ell{-}1}q^{\ell+1}\leq\frac{q}{p}\sum_{\ell:2\ell\geq c_{2}d}\binom{2\ell}{\ell}(pq)^{\ell}\leq\frac{q}{p}C_{1}(4pq)^{\frac{c_{2}d}{2}}.

By the argument from before, we get \mathbb{P}(y_{d}^{\mathfrak{D}}\leq k\mid y_{0}^{\mathfrak{D}}=k{-}2,\mathcal{E})\leq(q/p)C_{2}(4pq)^{\frac{c_{2}d}{2}} and, iterating the argument, we also obtain \mathbb{P}(y_{d}^{\mathfrak{D}}\leq k\mid y_{0}^{\mathfrak{D}}=k{-}2i,\mathcal{E})\leq(q/p)^{i}C_{2}(4pq)^{\frac{c_{2}d}{2}}. Hence, if k is even,

(yd𝔇k,y0𝔇k)\displaystyle\mathbb{P}(y_{d}^{\mathfrak{D}}\leq k,y_{0}^{\mathfrak{D}}\leq k\mid\mathcal{E})
=j=0k(yd𝔇ky0𝔇=kj,)(y0𝔇=kj)\displaystyle\quad=\sum_{j=0}^{k}\mathbb{P}(y_{d}^{\mathfrak{D}}\leq k\mid y_{0}^{\mathfrak{D}}=k{-}j,\mathcal{E})\mathbb{P}(y_{0}^{\mathfrak{D}}=k{-}j)
(yd𝔇ky0𝔇=0,)(y0𝔇=0)+i=0k/21(yd𝔇ky0𝔇=k2i,)(y0𝔇{k2i,k2i1})\displaystyle\quad\leq\mathbb{P}(y_{d}^{\mathfrak{D}}\leq k\mid y_{0}^{\mathfrak{D}}=0,\mathcal{E})\mathbb{P}(y_{0}^{\mathfrak{D}}=0)+\!\!\sum_{i=0}^{k/2-1}\mathbb{P}(y_{d}^{\mathfrak{D}}\leq k\mid y_{0}^{\mathfrak{D}}=k{-}2i,\mathcal{E})\mathbb{P}(y_{0}^{\mathfrak{D}}\in\{k{-}2i,k{-}2i{-}1\})
(q/p)k/2C2(4pq)c2d2π((0,1])+2i=0k/21(qp)iC2(4pq)c2d2π((k2i2,k2i])\displaystyle\quad\leq(q/p)^{k/2}C_{2}(4pq)^{\frac{c_{2}d}{2}}\pi((0,1])+2\sum_{i=0}^{k/2-1}\Big{(}\frac{q}{p}\Big{)}^{i}C_{2}(4pq)^{\frac{c_{2}d}{2}}\pi((k{-}2i{-}2,k{-}2i])
4C3(4pq)c2d2π((0,k]),\displaystyle\quad\leq 4C_{3}(4pq)^{\frac{c_{2}d}{2}}\pi((0,k]),

where for the second-to-last inequality we used Fact 26. If k is odd, the summation index i above instead ranges over \{0,\ldots,\frac{1}{2}(k-1)\}; the remaining bounds are still valid, and we obtain the same bound as for k even. Thus, recalling that P_{d} denotes the number of pairs of random variables X_{i} and X_{j} with |i-j|=d (so that P_{d}\leq\mathfrak{z}), that \overline{\mathcal{E}} holds with probability at most e^{-c_{1}d} (in which case the joint probability is bounded simply by the probability of X_{1}=1), and adding also the joint probability (\mathbb{P}(X_{1}=1))^{2} in case \widetilde{T}_{Y}\leq d, we get

𝔼π(X2)\displaystyle\mathbb{E}_{\pi}(X^{2}) 𝔷CLπ(X1=1)+dCL𝔷Pd((4C3(4pq)c2d2+ec1d)π(X1=1)+(π(X1=1))2)\displaystyle\leq\mathfrak{z}C_{L}\mathbb{P}_{\pi}(X_{1}=1)+\sum_{d\geq C_{L}}^{\lfloor\mathfrak{z}\rfloor}P_{d}\Big{(}\big{(}4C_{3}(4pq)^{\frac{c_{2}d}{2}}+e^{-c_{1}d}\big{)}\mathbb{P}_{\pi}(X_{1}=1)+\big{(}\mathbb{P}_{\pi}(X_{1}=1)\big{)}^{2}\Big{)}
\displaystyle\leq C_{L}\mathbb{E}_{\pi}(X)+2C^{*}\mathbb{E}_{\pi}(X)+(\mathbb{E}_{\pi}(X))^{2},

where for the second inequality we used that P_{d}\leq\mathfrak{z} and then that \mathfrak{z}\mathbb{P}_{\pi}(X_{1}=1)\leq 2\mathbb{E}_{\pi}(X), and that \sum_{d\geq 0}(4C_{3}(4p(1-p))^{c_{2}d/2}+e^{-c_{1}d})\leq C^{*} for some C^{*} depending on c_{1} and c_{2} only. Thus, since \mathbb{E}_{\pi}(X)\geq C_{4} for some large constant C_{4} by our hypothesis \mathfrak{z}\geq 1/\pi((0,k]), we conclude that \mathbb{E}_{\pi}(X^{2})\leq\frac{4}{3}(\mathbb{E}_{\pi}(X))^{2}, and so \mathbb{V}_{\pi}(X)\leq\frac{1}{3}(\mathbb{E}_{\pi}(X))^{2}. Thus, by Chebyshev’s inequality,

(|X𝔼π(X)|34𝔼π(X))𝕍π(X)/(34𝔼π(X))21627,\mathbb{P}(|X-\mathbb{E}_{\pi}(X)|\geq\tfrac{3}{4}\mathbb{E}_{\pi}(X))\leq\mathbb{V}_{\pi}(X)/(\tfrac{3}{4}\mathbb{E}_{\pi}(X))^{2}\leq\tfrac{16}{27},

and the statement follows taking c~:=14\widetilde{c}:=\frac{1}{4} and η~:=1127\widetilde{\eta}:=\frac{11}{27}. ∎
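The constants at the end of the proof can be verified with exact rational arithmetic; the following sketch normalizes \mathbb{E}_{\pi}(X)=1:

```python
from fractions import Fraction

E = Fraction(1)                  # normalize E_pi(X) = 1
second_moment = Fraction(4, 3)   # E_pi(X^2) <= (4/3) (E_pi(X))^2
variance = second_moment - E ** 2
assert variance == Fraction(1, 3)

# Chebyshev's inequality with deviation (3/4) E_pi(X):
chebyshev_bound = variance / (Fraction(3, 4) * E) ** 2
assert chebyshev_bound == Fraction(16, 27)
# hence X >= (1/4) E_pi(X) with probability at least 1 - 16/27 = 11/27
assert 1 - chebyshev_bound == Fraction(11, 27)
```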

To conclude this section, we show that the process studied in Sections 4.1 and 4.2 satisfies (H1) through (H3), and hence that Proposition 27 holds for said process. Reaching this conclusion was the motivation for this section: it provides the result on which the analysis of the combined radial and angular processes, treated in the next section, relies.

Corollary 28.

Let {ys}s0\{y_{s}\}_{s\geq 0} be the diffusion process on (0,Y](0,Y] with generator Δh\Delta_{h} and a reflecting barrier at YY. Then, there are constants C~>0\widetilde{C}>0, c~,η~(0,1)\widetilde{c},\widetilde{\eta}\in(0,1), such that for all C~kY\widetilde{C}\leq k\leq Y and 𝔷1/π((0,k])\mathfrak{z}\geq 1/\pi((0,k]) we have

π(Ikc~𝔷π((0,k]))η~,where Ik:=0𝔷𝟏(0,k](ys)ds.\mathbb{P}_{\pi}\big{(}I_{k}\geq\widetilde{c}\mathfrak{z}\pi((0,k])\big{)}\geq\widetilde{\eta},\qquad\text{where $I_{k}:=\int_{0}^{\mathfrak{z}}{\bf 1}_{(0,k]}(y_{s})ds$.}
Proof.

It will be enough to verify that (H1)-(H3) hold and apply Proposition 27 with C~\widetilde{C} as therein and satisfying our just stated condition.

Let Cα,Y:=(cosh(αY)1)1C_{\alpha,Y}:=(\cosh(\alpha Y)-1)^{-1}. By definition πm:=π((m1,m])=Cα,Y(cosh(αm)cosh(α(m1)))\pi_{m}:=\pi((m{-}1,m])=C_{\alpha,Y}(\cosh(\alpha m)-\cosh(\alpha(m{-}1))). Using coshxcoshy=2sinh(12(x+y))sinh(12(xy))\cosh x-\cosh y=2\sinh(\frac{1}{2}(x+y))\sinh(\frac{1}{2}(x-y)), we get

πm=2Cα,Ysinh(α(m12))sinh(12α)for all 1mY.\pi_{m}=2C_{\alpha,Y}\sinh(\alpha(m-\tfrac{1}{2}))\sinh(\tfrac{1}{2}\alpha)\qquad\text{for all $1\leq m\leq Y$}. (20)

To show that p is bounded away from 0 as required by (H1), it suffices to observe that by Fact 26, p/q\geq p_{2}/q_{1}=\pi_{1}/\pi_{2}=\sinh(\frac{1}{2}\alpha)/\sinh(\frac{3}{2}\alpha). To establish that p is also bounded from above by a constant strictly smaller than 1/2, recall the definition of p^{\prime}_{m} from this subsection’s introduction and let q^{\prime}_{m}:=1-p^{\prime}_{m}. Note that by definition of q^{\prime}_{i}, we have q^{\prime}_{i}\pi_{i}=\pi_{i}-p^{\prime}_{i}\pi_{i}, so from the proof of Fact 26, p^{\prime}_{m}\pi_{m}=q^{\prime}_{m-1}\pi_{m-1}=\pi_{m-1}-p^{\prime}_{m-2}\pi_{m-2}=\pi_{m-1}-\pi_{m-2}+p^{\prime}_{m-3}\pi_{m-3}. Hence, p^{\prime}_{m}\pi_{m}\leq\pi_{m-1}-\pi_{m-2}+\pi_{m-3} for 3<m\leq Y. By (20), the resulting bound on p^{\prime}_{m} is, as a function of \alpha, maximized at \alpha=1/2; moreover, for every \alpha it is increasing in m, and letting m\to\infty we see that it is bounded from above by 0.4618. By direct calculation, p^{\prime}_{2}=\pi_{1}/\pi_{2}=\sinh(\alpha/2)/\sinh(3\alpha/2)\leq 0.3072 and p^{\prime}_{3}=(1-p^{\prime}_{2})\pi_{2}/\pi_{3}\leq 0.3557, where we used that both expressions for p^{\prime}_{2} and p^{\prime}_{3} are decreasing as functions of \alpha. We conclude that p^{\prime}_{m}\leq 0.4618 for all 1<m\leq Y.
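The numerical constants in this step can be checked directly. The sketch below assumes \alpha\geq 1/2 (so that \alpha=1/2 is the extremal value, an assumption of the model) and uses the m\to\infty limit e^{-\alpha}-e^{-2\alpha}+e^{-3\alpha} of the ratio bound, which is our reformulation:

```python
from math import sinh, cosh, exp

# product form cosh x - cosh y = 2 sinh((x+y)/2) sinh((x-y)/2), used for (20)
for x, y in [(2.0, 0.5), (3.3, 1.1)]:
    assert abs((cosh(x) - cosh(y)) - 2 * sinh((x + y) / 2) * sinh((x - y) / 2)) < 1e-12

a = 0.5  # extremal value of alpha under the assumption alpha >= 1/2
p2 = sinh(a / 2) / sinh(3 * a / 2)                 # pi_1 / pi_2 via (20)
p3 = (1 - p2) * sinh(3 * a / 2) / sinh(5 * a / 2)  # (1 - p'_2) pi_2 / pi_3
assert p2 <= 0.3072 and p3 <= 0.3557
# limiting value of the bound on p'_m as m -> infinity
assert exp(-a) - exp(-2 * a) + exp(-3 * a) <= 0.4618
```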
To conclude that p<1/2, we also need to bound \widehat{p}_{m} for 1\leq m<Y: to do so, observe that the diffusion process defined by (2) can be coupled with a process of constant drift towards m+1 in such a way that the original process is always to the right of the process of constant drift; for a process with constant drift towards m+1 started at m, it is well known that the probability of reaching m-1 before m+1 is at most 1/2-\varepsilon for some \varepsilon>0 (see [11], formula 3.0.4). Thus \widehat{p}_{m}\leq\frac{1}{2}-\varepsilon for any 1\leq m<Y, and hence p<1/2, as desired.

Next, we establish (H2). Recall that T_{(Y-1)\pm 1} is the random variable counting the maximal time the process spends in the interval (Y-2,Y] starting at Y-1. By Part (ii) of Lemma 14 applied with {\bf y}_{0}:=Y-2, for any y\in(Y-2,Y] we get \mathbb{E}_{y}(T_{{\bf y}_{0}})\leq\frac{4}{\alpha^{2}}e^{2\alpha}, and since \sup_{Y-2\leq y\leq Y}\mathbb{E}_{y}(T_{y\pm 1})\leq\sup_{Y-2<y\leq Y}\mathbb{E}_{y}(T_{{\bf y}_{0}}), condition (H2) is satisfied for y>Y-2. In order to bound \mathbb{E}_{y}(T_{y\pm 1}) for y<Y-1, note that the time to exit such an interval can only increase when imposing a reflecting barrier at y+1, and we can then apply Part (ii) of Lemma 14 to the process with this reflecting barrier, applied with {\bf y}_{0}:=y-1. Hence, for any starting position y, the expected time spent in the interval (y-1,y+1] is at most a constant L, and (H2) is satisfied.

Finally, to check that (H3) holds, simply observe that for all \widetilde{C}\leq k\leq Y (where \widetilde{C} is at least a constant strictly greater than 1), the process \{y_{s}\}_{s\geq 0} dominates a process of constant drift towards Y which, whenever conditioned on y_{s}\leq k, has a constant probability of staying within (0,k] during the unit-length time interval [s,s+1], thus establishing the claim and concluding the proof of the corollary. ∎

5 General case: strategies for detection

In this section we define the set \mathcal{D}^{(\mathfrak{z})} similarly as in the radial section, but without the restriction |\theta_{0}|\leq\tfrac{\pi}{2}, since points belonging to the half disk of B_{O}(R) opposite to Q are also starting positions from which particles can detect (mainly thanks to the angular movement component). As in previous sections, on a high level, the set \mathcal{D}^{(\mathfrak{z})} is chosen so that at least a constant fraction of the overall detection probability comes from \mathcal{D}^{(\mathfrak{z})}. The precise definition of \mathcal{D}^{(\mathfrak{z})} (given in the two following theorems) depends on whether \mathfrak{z} is small or large. For \mathfrak{z} large, we can moreover provide uniform lower bounds of detection for every element of \mathcal{D}^{(\mathfrak{z})} and matching uniform upper bounds of detection for every element of \overline{\mathcal{D}}^{(\mathfrak{z})}. To make this intuition precise, we have the following two theorems:

Theorem 29.

Assume β1/2\beta\leq 1/2, 𝔷=Ω((eβR/n)2)O(1)\mathfrak{z}=\Omega((e^{\beta R}/n)^{2})\cap O(1). Let

𝒟(𝔷):={x0=(r0,θ0)BO(R)|θ0|ϕ(r0)+𝔷eβr0}.\mathcal{D}^{(\mathfrak{z})}:=\{x_{0}=(r_{0},\theta_{0})\in B_{O}(R)\mid|\theta_{0}|\leq\phi(r_{0})+\sqrt{\mathfrak{z}}e^{-\beta r_{0}}\}. (21)

Then

𝒟(𝔷)x0(Tdet𝔷)dμ(x0)=Θ(μ(𝒟(𝔷)))=Θ(neβR𝔷)\int_{\mathcal{D}^{(\mathfrak{z})}}\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})d\mu(x_{0})=\Theta(\mu(\mathcal{D}^{(\mathfrak{z})}))=\Theta(ne^{-\beta R}\sqrt{\mathfrak{z}})

and

𝒟¯(𝔷)x0(Tdet𝔷)dμ(x0)=O(μ(𝒟(𝔷))).\int_{\overline{\mathcal{D}}^{(\mathfrak{z})}}\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})d\mu(x_{0})=O(\mu(\mathcal{D}^{(\mathfrak{z})})).
Theorem 30.

Let \kappa_{A}>1. Then define \kappa_{R}:=\kappa_{A} if \alpha<2\beta and \kappa_{R}:=e^{\kappa_{A}^{2}} if \alpha\geq 2\beta. Let

𝒟(𝔷)(κA):={x0=(r0,θ0)BO(R)|θ0|ϕ(R)+κAϕ(𝔷)|θ0|ϕ(r0)+κRe(β12)r0},\mathcal{D}^{(\mathfrak{z})}(\kappa_{A}):=\{x_{0}=(r_{0},\theta_{0})\in B_{O}(R)\mid|\theta_{0}|\leq\phi(R)+\kappa_{A}\phi^{(\mathfrak{z})}\vee|\theta_{0}|\leq\phi(r_{0})+\kappa_{R}e^{-(\beta\wedge\frac{1}{2})r_{0}}\},

where

ϕ(𝔷):={(𝔷1α/eR)β12,if α<2β,eβR𝔷log𝔷,if α=2β,eβR𝔷,if α>2β.\phi^{(\mathfrak{z})}:=\begin{cases}(\mathfrak{z}^{\frac{1}{\alpha}}/e^{R})^{\beta\wedge\frac{1}{2}},&\text{if $\alpha<2\beta$},\\[2.0pt] e^{-\beta R}\sqrt{\mathfrak{z}\log\mathfrak{z}},&\text{if $\alpha=2\beta$},\\[2.0pt] e^{-\beta R}\sqrt{\mathfrak{z}},&\text{if $\alpha>2\beta$}.\end{cases}

Furthermore, define

:={eαR,if α<2β,eαR/(αR),if α=2β,e2βR,if α>2β.\mathfrak{Z}:=\begin{cases}e^{\alpha R},&\text{if $\alpha<2\beta$,}\\ e^{\alpha R}/(\alpha R),&\text{if $\alpha=2\beta$,}\\ e^{2\beta R},&\text{if $\alpha>2\beta$.}\end{cases}

Then, for 𝔷=Ω(1)O()\mathfrak{z}=\Omega(1)\cap O(\mathfrak{Z}) we have

𝒟¯(𝔷)(κA)x0(Tdet𝔷)dμ(x0)=O(μ(𝒟(𝔷)(κA))).\int_{\overline{\mathcal{D}}^{(\mathfrak{z})}(\kappa_{A})}\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})d\mu(x_{0})=O(\mu(\mathcal{D}^{(\mathfrak{z})}(\kappa_{A}))).

Furthermore, under the additional assumption 𝔷=ω(1)\mathfrak{z}=\omega(1) we have

infx0𝒟(𝔷)(κA)x0(Tdet𝔷)={eO(κA2), if α2β,Ω(κAα/(β12)), if α<2β,\inf_{x_{0}\in\mathcal{D}^{(\mathfrak{z})}(\kappa_{A})}\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})=\begin{cases}e^{-O(\kappa_{A}^{2})},&\text{ if $\alpha\geq 2\beta$,}\\[2.0pt] \Omega(\kappa_{A}^{-\alpha/(\beta\wedge\frac{1}{2})}),&\text{ if $\alpha<2\beta$,}\end{cases}

and

supx0𝒟¯(𝔷)(κA)x0(Tdet𝔷)={eΩ(κA2), if α2β,O(κAα/(β12)), if α<2β.\sup_{x_{0}\in\overline{\mathcal{D}}^{(\mathfrak{z})}(\kappa_{A})}\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})=\begin{cases}e^{-\Omega(\kappa_{A}^{2})},&\text{ if $\alpha\geq 2\beta$,}\\[2.0pt] O(\kappa_{A}^{-\alpha/(\beta\wedge\frac{1}{2})}),&\text{ if $\alpha<2\beta$.}\end{cases}
Figure 3: (a) The lightly shaded area corresponds to 𝒟(𝔷)(κA)BQ(R)\mathcal{D}^{(\mathfrak{z})}(\kappa_{A})\setminus B_{Q}(R) as defined in Theorem 30 and the strongly shaded region represents BQ(R)B_{Q}(R). (b) The smallest (respectively largest) circumference whose boundary is a dashed line is of radius rr^{\prime} (rr^{\prime\prime}, respectively). (c) The region whose boundary is the dashed-dot line corresponds to the box (x0):=[r^0,R]×[θ0θ^0,θ0+θ^0]\mathcal{B}(x_{0}):=[\widehat{r}_{0},R]\times[\theta_{0}-\widehat{\theta}_{0},\theta_{0}+\widehat{\theta}_{0}] when 𝔷\mathfrak{z} is large.

We refer to Figure 3(a) for an illustration of 𝒟(𝔷)(κA)\mathcal{D}^{(\mathfrak{z})}(\kappa_{A}) as defined in the last theorem.
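For reference, the case distinctions defining \phi^{(\mathfrak{z})} and \mathfrak{Z} in Theorem 30 can be transcribed directly (a sketch; the function and parameter names are ours):

```python
from math import exp, log, sqrt

def phi_z(z: float, alpha: float, beta: float, R: float) -> float:
    """The angular scale phi^(z) of Theorem 30."""
    if alpha < 2 * beta:
        return (z ** (1 / alpha) / exp(R)) ** min(beta, 0.5)
    if alpha == 2 * beta:
        return exp(-beta * R) * sqrt(z * log(z))
    return exp(-beta * R) * sqrt(z)

def Z_max(alpha: float, beta: float, R: float) -> float:
    """The time horizon Z of Theorem 30 up to which the bounds apply."""
    if alpha < 2 * beta:
        return exp(alpha * R)
    if alpha == 2 * beta:
        return exp(alpha * R) / (alpha * R)
    return exp(2 * beta * R)
```

For instance, with \alpha>2\beta the calls phi_z(z, 1.0, 0.25, R) and Z_max(1.0, 0.25, R) return e^{-R/4}\sqrt{z} and e^{R/2}, respectively.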

The following subsection is dedicated to proving the lower bounds of the two theorems just stated (in fact, as can be seen from the proof, the assumptions about 𝔷\mathfrak{z} are slightly milder in the proofs). The subsequent subsection then deals with the corresponding upper bounds.

5.1 Lower bounds

In this subsection we prove the lower bounds of Theorems 29 and 30.

5.1.1 The case 𝔷\mathfrak{z} small

We start with the lower bound of Theorem 29, which is established by the following proposition.

Proposition 31.

Fix β12\beta\leq\frac{1}{2} and take 𝒟(𝔷)\mathcal{D}^{(\mathfrak{z})} as in (21). If 𝔷=O(1)\mathfrak{z}=O(1), then

𝒟(𝔷)x0(Tdet𝔷)dμ(x0)=Ω(neβR𝔷).\int_{\mathcal{D}^{(\mathfrak{z})}}\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})d\mu(x_{0})\;=\;\Omega(ne^{-\beta R}\sqrt{\mathfrak{z}}).
Proof.

Consider x_{0} with |\theta_{0}|\leq\phi(r_{0})+\sqrt{\mathfrak{z}}e^{-\beta r_{0}} and r_{0}\geq c for some constant c>0. Note that for such a choice of r_{0} we have \coth(r_{0})\leq\coth(c)=O(1), and thus within time O(1), with probability bounded away from zero, the radial coordinate changes by at most 1. Conditioning on this, we consider the variance-1 Brownian motion B_{I(\mathfrak{z})} describing the angular movement, where now I(\mathfrak{z}):=\int_{0}^{\mathfrak{z}}\operatorname{cosech}^{2}(\beta r_{s})ds\geq\mathfrak{z}/\sinh^{2}(\beta(r_{0}+1)). As in the proof of Theorem 9, define the exit time H_{[-a,b]} from the interval [-a,b], where a:=\phi(r_{0}+1)-|\theta_{0}| and b:=2\pi-\phi(r_{0}+1)-|\theta_{0}|, and as in (10) in the proof of Theorem 9,

x0(Tdet𝔷)(H[a,b]I(𝔷))=Ω(Φ((ϕ(r0+1)|θ0|)sinh(β(r0+1))/𝔷)).\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})\geq\mathbb{P}(H_{[-a,b]}\leq I(\mathfrak{z}))=\Omega\big{(}\Phi\big{(}(\phi(r_{0}+1)-|\theta_{0}|)\sinh(\beta(r_{0}+1))/\sqrt{\mathfrak{z}}\big{)}\big{)}.

Since ϕ()>0\phi(\cdot)>0 and sinh(x)ex\sinh(x)\leq e^{x} we conclude that x0(Tdet𝔷)=Ω(Φ(|θ0|eβ(r0+1)/𝔷))\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})=\Omega(\Phi(-|\theta_{0}|e^{\beta(r_{0}+1)}/\sqrt{\mathfrak{z}})). Integrating over the elements of 𝒟(𝔷)\mathcal{D}^{(\mathfrak{z})} satisfying |θ0|𝔷eβr0|\theta_{0}|\leq\sqrt{\mathfrak{z}}e^{-\beta r_{0}}, we have

𝒟(𝔷)x0(Tdet𝔷)dμ(x0)=Ω(neαR)1R0𝔷eβr^0Φ(θ0eβr^0/𝔷)sinh(αr^0)dθ0dr^0.\int_{\mathcal{D}^{(\mathfrak{z})}}\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})d\mu(x_{0})=\Omega(ne^{-\alpha R})\int_{1}^{R}\int_{0}^{\sqrt{\mathfrak{z}}e^{-\beta\widehat{r}_{0}}}\Phi(-\theta_{0}e^{\beta\widehat{r}_{0}}/\sqrt{\mathfrak{z}})\sinh(\alpha\widehat{r}_{0})d\theta_{0}d\widehat{r}_{0}.

Performing the change of variables y0:=θ0eβr^0/𝔷y_{0}:=\theta_{0}e^{\beta\widehat{r}_{0}}/\sqrt{\mathfrak{z}}, since α>β\alpha>\beta, and using sinh(x)=Θ(ex)\sinh(x)=\Theta(e^{x}) for x=Ω(1)x=\Omega(1), we get for the latter

Ω(neαR)01Φ(y0)𝔷dy01Re(αβ)r^0dr^0=Ω(neβR𝔷)01Φ(y0)dy0.\Omega(ne^{-\alpha R})\int_{0}^{1}\Phi(-y_{0})\sqrt{\mathfrak{z}}dy_{0}\int_{1}^{R}e^{(\alpha-\beta)\widehat{r}_{0}}d\widehat{r}_{0}=\Omega(ne^{-\beta R}\sqrt{\mathfrak{z}})\int_{0}^{1}\Phi(-y_{0})dy_{0}.

Note that the last integral is Ω(1)\Omega(1), and thus the stated result follows. ∎
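Indeed, integration by parts gives \int_{0}^{1}\Phi(-y_{0})dy_{0}=\Phi(-1)+\varphi(0)-\varphi(1)\approx 0.3156, a positive absolute constant; a quick numerical check:

```python
from math import erf, sqrt

def Phi(t: float) -> float:
    """Standard normal CDF."""
    return 0.5 * (1 + erf(t / sqrt(2)))

# midpoint-rule estimate of the integral of Phi(-y) over [0, 1]
n = 10_000
val = sum(Phi(-(i + 0.5) / n) for i in range(n)) / n
assert 0.31 < val < 0.32   # the integral is indeed Omega(1)
```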

5.1.2 The case 𝔷\mathfrak{z} large

We now prove the lower bound of Theorem 30. In fact, we will only need to assume the milder condition 𝔷Ce2κA2\mathfrak{z}\geq Ce^{2\kappa_{A}^{2}} with CC being a large enough constant. We start by showing that particles that start close to the origin detect quickly with at least constant probability:

Lemma 32.

Let τ>0\tau>0, and let c:=C/βc^{*}:=C^{*}/\beta for some arbitrarily large constant C>0C^{*}>0. Then,

infx0BO(c)x0(Tdetτ)=Ω(1).\inf_{x_{0}\in B_{O}(c^{*})}\mathbb{P}_{x_{0}}(T_{det}\leq\tau)=\Omega(1).
Proof.

Let ThT_{h} be the smallest time tt such that rt=2cr_{t}=2c^{*} and notice that, calling r\mathbb{P}^{r} the law of the radial component of the process, for any point x0=(r0,θ0)x_{0}=(r_{0},\theta_{0}) with r0cr_{0}\leq c^{*}, it holds that rr0(Thτ)rc(Thτ).\mathbb{P}^{r}_{r_{0}}(T_{h}\geq\tau)\,\geq\,\mathbb{P}^{r}_{c^{*}}(T_{h}\geq\tau). Now, fix any realization {rs}s0\{r_{s}\}_{s\geq 0} of the radial component of the process such that ThτT_{h}\geq\tau and observe that for any such realization, detection is guaranteed if |θsθ0|>2π|\theta_{s}-\theta_{0}|>2\pi for some 0sτ0\leq s\leq\tau. Under the event ThτT_{h}\geq\tau the radial coordinate at any time sτs\leq\tau is at most 2c2c^{*}. Thus, the angular movement θτθ0\theta_{\tau}-\theta_{0} has a normal distribution centered at zero with variance at least

0τsinh2(βrs)dsτsinh2(2C)=Ω(1).\int_{0}^{\tau}\sinh^{-2}(\beta r_{s})ds\,\geq\,\tau\sinh^{-2}(2C^{*})=\Omega(1).

Thus, with constant probability, within time τ\tau the angular movement covers an angle of 2π2\pi. The lemma follows. ∎

We now deal with the remaining starting points x0𝒟(𝔷)x_{0}\in\mathcal{D}^{(\mathfrak{z})}. Before doing so we establish a simple fact concerning the random variables defined for nonnegative integer values of kk as follows:

Ik:=0𝔷𝟏(0,k](rs)ds.I_{k}:=\int_{0}^{\mathfrak{z}}{\bf 1}_{(0,k]}(r_{s})ds. (22)

Recall that π(r):=αsinh(αr)cosh(αR)1\pi(r):=\frac{\alpha\sinh(\alpha r)}{\cosh(\alpha R)-1}, 0rR0\leq r\leq R, is the stationary distribution of the process {rs}s0\{r_{s}\}_{s\geq 0}.

Fact 33.

If k1αlog2k\geq\frac{1}{\alpha}\log 2, then 𝔼π(Ik)14𝔷eα(Rk)\mathbb{E}_{\pi}(I_{k})\geq\tfrac{1}{4}\mathfrak{z}e^{-\alpha(R-k)}.

Proof.

Suppose r0r_{0} is distributed according to the stationary distribution π\pi. Since cosh(x)112ex\cosh(x)-1\leq\frac{1}{2}e^{x}, by definition of IkI_{k}, we have

𝔼π(Ik)=0𝔷π(rsk)ds=𝔷π((0,k])=𝔷cosh(αk)1cosh(αR)12𝔷eαR(cosh(αk)1).\mathbb{E}_{\pi}(I_{k})=\int_{0}^{\mathfrak{z}}\mathbb{P}_{\pi}(r_{s}\leq k)ds=\mathfrak{z}\pi((0,k])=\mathfrak{z}\cdot\frac{\cosh(\alpha k)-1}{\cosh(\alpha R)-1}\geq 2\mathfrak{z}e^{-\alpha R}(\cosh(\alpha k)-1).

The desired conclusion follows observing that the map x\mapsto 1+e^{-2x}-2e^{-x} is non-decreasing, and thus for x\geq\log 2, we have \cosh(x)-1=\frac{1}{2}e^{x}(1+e^{-2x}-2e^{-x})\geq\frac{1}{8}e^{x}. ∎
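The statement of Fact 33 can also be checked numerically; the sketch below verifies \pi((0,k])=\frac{\cosh(\alpha k)-1}{\cosh(\alpha R)-1}\geq\frac{1}{4}e^{-\alpha(R-k)} for illustrative parameter values (our choice) with k\geq\frac{1}{\alpha}\log 2:

```python
from math import cosh, exp

alpha, R = 0.75, 40.0   # illustrative values; we only need k >= log(2)/alpha
for k in [1.0, 5.0, 20.0, 40.0]:
    pi_k = (cosh(alpha * k) - 1) / (cosh(alpha * R) - 1)   # pi((0, k])
    assert pi_k >= 0.25 * exp(-alpha * (R - k))
```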

We are now ready to state and prove the proposition dealing with particles whose initial position are points satisfying the first condition in the definition of 𝒟(𝔷)\mathcal{D}^{(\mathfrak{z})}:

Proposition 34.

Let κA>1\kappa_{A}>1. There is a sufficiently large constant C>0C>0 such that if 𝔷Ce2κA2\mathfrak{z}\geq Ce^{2\kappa_{A}^{2}} and 𝔷=O()\mathfrak{z}=O(\mathfrak{Z}), then for any x0=(r0,θ0)x_{0}=(r_{0},\theta_{0}) with |θ0|ϕ(R)+κAϕ(𝔷)|\theta_{0}|\leq\phi(R)+\kappa_{A}\phi^{(\mathfrak{z})}, we have the following:

  1. (i)

    If α2β\alpha\geq 2\beta, then x0(Tdet𝔷)=Ω(1κAe12κA2)=Ω(e0.7κA2)\displaystyle\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})=\Omega(\tfrac{1}{\kappa_{A}}e^{-\frac{1}{2}\kappa_{A}^{2}})=\Omega(e^{-0.7\kappa_{A}^{2}}).

  2. (ii)

    If α<2β\alpha<2\beta, then x0(Tdet𝔷)=Ω(κAα/(β12))\displaystyle\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})=\Omega(\kappa_{A}^{-\alpha/(\beta\wedge\frac{1}{2})}).

To prove the just stated proposition we use the following standard fact several times, so we state it explicitly.

Fact 35 ([20]).

Given the radial component \{r_{s}\}_{s\geq 0} of a particle’s trajectory, the law of the angular component \{\theta_{s}\}_{s\geq 0} is that of a Brownian motion indexed by I(\mathfrak{z}):=\int_{0}^{\mathfrak{z}}\operatorname{cosech}^{2}(\beta r_{s})ds. If I(\mathfrak{z})\geq\sigma^{2}>0 and \kappa>0, then

(sup0s𝔷BI(s)κσ{rs}s0)κ2π(κ2+1)e12κ2=Ω(1κe12κ2).\mathbb{P}(\sup_{0\leq s\leq\mathfrak{z}}B_{I(s)}\geq\kappa\sigma\mid\{r_{s}\}_{s\geq 0})\geq\frac{\kappa}{\sqrt{2\pi}(\kappa^{2}+1)}e^{-\frac{1}{2}\kappa^{2}}=\Omega\Big{(}\frac{1}{\kappa}e^{-\frac{1}{2}\kappa^{2}}\Big{)}.
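Fact 35 combines the reflection principle \mathbb{P}(\sup_{t\leq\sigma^{2}}B_{t}\geq\kappa\sigma)=2\overline{\Phi}(\kappa) with the standard Gaussian lower tail bound \overline{\Phi}(\kappa)\geq\frac{\kappa}{\sqrt{2\pi}(\kappa^{2}+1)}e^{-\kappa^{2}/2}; the latter can be checked numerically:

```python
from math import erfc, exp, pi, sqrt

def upper_tail(k: float) -> float:
    """P(N(0,1) >= k), computed via the complementary error function."""
    return 0.5 * erfc(k / sqrt(2))

# Gaussian lower tail bound (a Mills-ratio estimate), checked on a grid
for k in [0.5, 1.0, 1.5, 2.0, 3.0, 5.0]:
    mills = k / (sqrt(2 * pi) * (k * k + 1)) * exp(-k * k / 2)
    assert upper_tail(k) >= mills
```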

Now we proceed with the pending proof.

Proof of Proposition 34.

Assume first \beta\geq 1/2 (and thus necessarily \alpha<2\beta). In this case the radial movement dominates and the proof of (ii) is very similar to the one given for x_{0}\in\mathcal{D}^{(\mathfrak{z})} with |\theta_{0}|\leq\phi(R)+\kappa\phi^{(\mathfrak{z})}: assume first that \mathfrak{z} (and \kappa_{A}) is such that |\theta_{0}|\leq\phi(R)+\kappa_{A}\phi^{(\mathfrak{z})}\leq\pi/2-c for some c>0. Define \overline{r}_{0} to be as \mathfrak{r}_{0} in the proof of Proposition 20 (that is, \overline{r}_{0} corresponds to the absorption radius in case there was radial movement only; since \phi(R)+\kappa_{A}\phi^{(\mathfrak{z})}\leq\pi/2-c, we have \overline{r}_{0}=\Omega(1)). By the same argument as given in the proof of Proposition 20, with \kappa_{A} playing the role of \kappa therein, with probability \Omega(\kappa_{A}^{-2\alpha}) there exists a time moment T\leq\mathfrak{z} at which a radial value of \overline{r}_{0} is reached. In this case, by symmetry of the angular movement, with probability at least 1/2, either the angle at time T satisfies |\theta_{T}|\leq|\theta_{0}|, or there exists t\leq T with \theta_{t}=0 and we have already detected by time t; hence in this case Q is detected by time T with probability \Omega(\kappa_{A}^{-2\alpha}). Assume then that \mathfrak{z} (and \kappa_{A}) is such that \pi/2-c\leq\phi(R)+\kappa_{A}\phi^{(\mathfrak{z})}. In this case, let \overline{r}_{0} be equal (assuming there were radial movement only) to the absorption radius \mathfrak{r}_{0} that would correspond to an angle of exactly |\theta_{0}|=\pi/2-c. Note that in this case \overline{r}_{0}=\Theta(1). By the proof of Proposition 20, with probability \Omega(\kappa_{A}^{-2\alpha}) a radial value of \overline{r}_{0} is reached at a time moment T\leq\frac{1}{2}\mathfrak{z}.
With probability at least 1/2, as before, either there existed a moment t with \theta_{t}=0, in which case we have already detected Q, or |\theta_{T}|\leq|\theta_{0}|. In the latter case, by Lemma 32, since \overline{r}_{0}=\Theta(1), with constant probability we detect Q within time \frac{1}{2}\mathfrak{z}=\Omega(1), and hence also in this case Q is detected by time \mathfrak{z} with probability \Omega(\kappa_{A}^{-2\alpha}).

Consider thus x0𝒟(𝔷)x_{0}\in\mathcal{D}^{(\mathfrak{z})} with |θ0|ϕ(R)+κAϕ(𝔷)|\theta_{0}|\leq\phi(R)+\kappa_{A}\phi^{(\mathfrak{z})} under the assumption β<1/2\beta<1/2. Recall that we also may assume that r0cr_{0}\geq c^{*} for cc^{*} as in the statement of Lemma 32 since other values of r0r_{0} were already dealt with in said lemma. First, note that for x0BQ(R)x_{0}\in B_{Q}(R) the lower bound trivially holds. So, henceforth let x0𝒟(𝔷)BQ(R)x_{0}\in\mathcal{D}^{(\mathfrak{z})}\setminus B_{Q}(R). Since by hypothesis 𝔷O()=O(eαR)\mathfrak{z}\in O(\mathfrak{Z})=O(e^{\alpha R}), we may assume that 𝔷<exp(α(R1αlog21+c))\mathfrak{z}<\exp(\alpha(R-\frac{1}{\alpha}\log 2-1+c)) for c>0c>0 sufficiently large. Let ρ:=R1αlog𝔷+c\rho:=R-\frac{1}{\alpha}\log\mathfrak{z}+c and note that 1αlog2+1<ρ\frac{1}{\alpha}\log 2+1<\rho by the previous assumption on 𝔷\mathfrak{z}. Denote by Tρ1T_{\rho-1} the random variable corresponding to the first time the process reaches BO(ρ1)B_{O}(\rho-1). Recall from Part (ii) of Lemma 14, with 𝐲0=ρ1{\bf y}_{0}=\rho-1 and Y=RY=R, that for the radial component {rs}s0\{r_{s}\}_{s\geq 0} of the movement, and for any r0[ρ1,R]r_{0}\in[\rho-1,R],

𝔼r0(Tρ1)4α2eα(Rρ+1)=4α2𝔷eα(c1).\mathbb{E}_{r_{0}}(T_{\rho-1})\leq\frac{4}{\alpha^{2}}e^{\alpha(R-\rho+1)}=\frac{4}{\alpha^{2}}\mathfrak{z}e^{-\alpha(c-1)}.

By Markov’s inequality, for cc sufficiently large so that 4α2eα(c1)14\frac{4}{\alpha^{2}}e^{-\alpha(c-1)}\leq\frac{1}{4}, it follows that the event 𝒜\mathcal{A} corresponding to the process reaching BO(ρ1)B_{O}(\rho-1) before time 12𝔷\frac{1}{2}\mathfrak{z} happens with probability

(𝒜)=1r0(Tρ112𝔷)12.\mathbb{P}(\mathcal{A})=1-\mathbb{P}_{r_{0}}(T_{\rho-1}\geq\tfrac{1}{2}\mathfrak{z})\geq\tfrac{1}{2}.

Now, let \overline{\rho}:=\max\{\rho-\frac{1}{\beta}\log\kappa_{A},c^{*}\}. Moreover, define \mathcal{B} as the event that, starting from radius \rho-1, we hit the radial value \overline{\rho} before hitting the radial value R. Define g(r):=\log(\tanh(\frac{1}{2}\alpha R)/\tanh(\frac{1}{2}\alpha r)) as in Fact 17 and observe that, as argued therein, since \rho-1\leq R-\Omega(1) we have g(\rho-1)=\Omega(e^{-\alpha\rho}), and since \overline{\rho}=\Omega(1) we have g(\overline{\rho})=O(e^{-\alpha\overline{\rho}}). Recall from Part (iii) of Lemma 14 that g(\rho-1)/g(\overline{\rho}) is the probability that, starting from radius \rho-1, we hit the radial value \overline{\rho} before hitting the radial value R, and hence

()=g(ρ1)g(ρ¯)=Ω(eα(ρ¯ρ))=Ω(κAα/β).\mathbb{P}(\mathcal{B})=\frac{g(\rho-1)}{g(\overline{\rho})}=\Omega(e^{\alpha(\overline{\rho}-\rho)})=\Omega(\kappa_{A}^{-\alpha/\beta}).
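The asymptotics of g used here can be illustrated numerically: away from both boundaries, g(r)=\log(\tanh(\frac{1}{2}\alpha R)/\tanh(\frac{1}{2}\alpha r)) is within constant factors of 2e^{-\alpha r}, as the sketch below checks for illustrative parameters (our choice):

```python
from math import exp, log, tanh

alpha, R = 0.8, 60.0   # illustrative values

def g(r: float) -> float:
    return log(tanh(alpha * R / 2) / tanh(alpha * r / 2))

# for 1 << r << R, g(r) is within constant factors of 2 e^{-alpha r}
for r in [2.0, 5.0, 10.0, 20.0]:
    assert 0.5 < g(r) / (2 * exp(-alpha * r)) < 2.0
```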

From Part (iv) of Lemma 14 we obtain

\mathbb{E}_{\rho-1}(T_{\overline{\rho}}\mid T_{\overline{\rho}}<T_{R})\leq\frac{2}{\alpha}\Big{(}\frac{1}{\beta}\log\kappa_{A}+1\Big{)}+\frac{2}{\alpha^{2}}\leq\frac{\mathfrak{z}}{8},

where the last inequality follows (with room to spare) from our assumption on \mathfrak{z}. Thus, by Markov’s inequality, conditionally on \mathcal{A}\cap\mathcal{B}, with probability at least \frac{1}{2} we reach \overline{\rho} before time \frac{3}{4}\mathfrak{z}. Let \mathcal{C} be the corresponding event. Conditionally on \mathcal{C}, with constant probability the particle stays one unit of time inside B_{O}(\overline{\rho}+1) before time \mathfrak{z}; call this event \mathcal{D}. Conditionally on \mathcal{D}, the angular component’s law during one unit of time inside B_{O}(\overline{\rho}+1) is B_{I(\mathfrak{z})} with

I(𝔷)cosech2(β(ρ¯+1))e2β(ρ¯+1)e2β(R1αlog𝔷+c1βlogκA+1)=:σ2.I(\mathfrak{z})\geq\operatorname{cosech}^{2}(\beta(\overline{\rho}+1))\geq e^{-2\beta(\overline{\rho}+1)}\geq e^{-2\beta(R-\frac{1}{\alpha}\log\mathfrak{z}+c-\frac{1}{\beta}\log\kappa_{A}+1)}=:\sigma^{2}.

Note that \sigma=e^{-\beta R}\mathfrak{z}^{\frac{\beta}{\alpha}}\kappa_{A}e^{-\beta(c+1)} for the absolute constant c>0 from above (independent of \kappa_{A}), and recall also that |\theta_{0}|\leq\phi(R)+\kappa_{A}\mathfrak{z}^{\frac{\beta}{\alpha}}e^{-\beta R}\leq 2\kappa_{A}\mathfrak{z}^{\frac{\beta}{\alpha}}e^{-\beta R} by our assumptions on \beta<1/2 and \mathfrak{z}, with room to spare. Let \mathcal{E} be the event that when reaching B_{O}(\overline{\rho}+1) the angle at the origin spanned by the particle and Q is (in absolute value) at most |\theta_{0}|, or that there was a moment t before reaching B_{O}(\overline{\rho}+1) with \theta_{t}=0 (and thus we detected at time t). Note that by symmetry of the angular movement, \mathbb{P}(\mathcal{E})\geq\frac{1}{2}. Conditionally on \mathcal{D}\cap\mathcal{E}, by Fact 35, since \kappa_{A}>1, for some c_{1}=c_{1}(c) we have

\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})\geq\mathbb{P}\big(\sup_{0\leq s\leq\mathfrak{z}}B_{I(s)}\geq c_{1}\sigma\mid\{r_{s}\}_{s\geq 0}\big)=\Omega(1),

and since $\mathcal{D}\cap\mathcal{E}$ holds with probability $\Omega(\kappa_{A}^{-\alpha/\beta})$, we have $\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})=\Omega(\kappa_{A}^{-\alpha/\beta})$, which finishes the proof of the case $\alpha<2\beta$ and $\beta<1/2$ (and thus the proof of (ii)).

We now deal with the remaining case $\alpha\geq 2\beta$ (and therefore necessarily $\beta<1/2$). Assume first $\alpha>2\beta$. The argument is analogous to the one for the angular movement; we repeat it for convenience. Given the trajectory of $\{r_{s}\}_{s\geq 0}$, recall that the angular component's law is that of a Brownian motion $B_{I(\mathfrak{z})}$, where $I(\mathfrak{z}):=\int_{0}^{\mathfrak{z}}\operatorname{cosech}^{2}(\beta r_{s})\,ds\geq 4\mathfrak{z}e^{-2\beta R}=:\sigma^{2}$. Hence, using Fact 35, since $\kappa_{A}>1$, and using that $0.2x^{2}\geq\log x$ for all $x\geq 1$,

\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})\geq\mathbb{P}\big(\sup_{0\leq s\leq\mathfrak{z}}B_{I(s)}\geq\kappa_{A}\sigma\mid\{r_{s}\}_{s\geq 0}\big)=\Omega\big(\tfrac{1}{\kappa_{A}}e^{-\frac{1}{2}\kappa_{A}^{2}}\big)=\Omega(e^{-0.7\kappa_{A}^{2}}),

showing the result in the case $\alpha>2\beta$.

Finally, suppose $\alpha=2\beta$ (and therefore $\beta>\frac{1}{4}$). First, suppose that the starting point $r_{0}$ of the radial component is distributed according to the stationary distribution of the process, that is, $\pi(r):=\frac{\alpha\sinh(\alpha r)}{\cosh(\alpha R)-1}$ with $0\leq r\leq R$. We will see that the contributions to $I(\mathfrak{z}):=\int_{0}^{\mathfrak{z}}\operatorname{cosech}^{2}(\beta r_{s})\,ds$ of the time spent around different radial values are roughly the same, forcing a logarithmic correction. To bound $I(\mathfrak{z})$ from below, let $\overline{k}=R-\lfloor\tfrac{1}{\alpha}\log(\mathfrak{z}/4)\rfloor$, which by our hypothesis $\mathfrak{z}=O(\mathfrak{Z})=O(e^{\alpha R}/(\alpha R))$ implies $\overline{k}=\omega(1)$ (in particular, it also implies $\overline{k}\geq\frac{1}{\alpha}\log 2$). Note that

I(\mathfrak{z})=\int_{0}^{\overline{k}}\operatorname{cosech}^{2}(\beta r_{s})\,ds+\sum_{k=\overline{k}+1}^{R}\int_{k-1}^{k}\operatorname{cosech}^{2}(\beta r_{s})\,ds\geq 4\Big(I_{\overline{k}}e^{-2\beta\overline{k}}+\sum_{k=\overline{k}+1}^{R}(I_{k}-I_{k-1})e^{-2\beta k}\Big).

Hence, using that $\beta\geq\frac{1}{4}$ implies $4(1-e^{-2\beta})\geq 1$,

I(\mathfrak{z})\geq 4\Big(\sum_{k=\overline{k}}^{R-1}I_{k}e^{-2\beta k}(1-e^{-2\beta})+I_{R}e^{-2\beta R}\Big)\geq\sum_{k=\overline{k}}^{R}e^{-2\beta k}I_{k}. (23)

So, recalling that $\overline{k}\geq\frac{1}{\alpha}\log 2$, by Fact 33, for all $k\in\{\overline{k},\ldots,R\}$ we have $\mathbb{E}_{\pi}(I_{k})\geq\frac{1}{4}\mathfrak{z}e^{-\alpha(R-k)}$, which gives an estimate for the values of the $I_{k}$ variables. Moreover, since $k\geq\overline{k}$, by definition of $\overline{k}$ we have $\mathfrak{z}\geq 4e^{\alpha(R-\overline{k})}\geq 4e^{\alpha(R-k)}$. Since $\cosh(x)-1=\frac{1}{2}e^{x}(1-e^{-x})^{2}\leq\frac{1}{2}e^{x}$, using once more that $k\geq\overline{k}\geq\frac{1}{\alpha}\log 2$, we obtain $\pi((0,k])\geq e^{-\alpha(R-k)}(1-e^{-\alpha k})^{2}\geq\frac{1}{4}e^{-\alpha(R-k)}$, so $\mathfrak{z}\geq 1/\pi((0,k])$ and the assumptions of Corollary 28 are satisfied. Hence, by Corollary 28, there exist $\widetilde{c},\widetilde{\eta}\in(0,1)$ such that, for each $\overline{k}\leq k\leq R$, the expectation of the indicator $Z_{k}$ of the event $\{I_{k}\geq\widetilde{c}\mathfrak{z}e^{-\alpha(R-k)}\}$ is at least $\widetilde{\eta}$. Let $Z:=\sum_{k=\overline{k}}^{R}Z_{k}$ and note that $\mathbb{E}(Z)\geq(R-\overline{k}+1)\widetilde{\eta}$. Define the event $\mathcal{E}:=\{Z>\frac{\widetilde{\eta}}{2}(R-\overline{k}+1)\}$. Since $\mathbb{E}(Z)\leq(\mathbb{P}(\mathcal{E})+\frac{\widetilde{\eta}}{2}\mathbb{P}(\overline{\mathcal{E}}))(R-\overline{k}+1)$, it must be the case that $\mathbb{P}(\mathcal{E})\geq\frac{\widetilde{\eta}}{2}/(1-\frac{\widetilde{\eta}}{2})\geq\frac{\widetilde{\eta}}{2}$. Thus, for a fixed realization of $\{r_{s}\}_{s\geq 0}$ satisfying $\mathcal{E}$, since $\alpha=2\beta$, we have

\sum_{k=\overline{k}}^{R}e^{-2\beta k}I_{k}\geq\sum_{k=\overline{k}}^{R}e^{-2\beta k}\,\widetilde{c}\mathfrak{z}e^{-\alpha(R-k)}Z_{k}=\widetilde{c}\mathfrak{z}e^{-2\beta R}\sum_{k=\overline{k}}^{R}Z_{k}>\widetilde{c}\mathfrak{z}e^{-2\beta R}\frac{\widetilde{\eta}}{2}(R-\overline{k}+1).

Note that $R-\overline{k}+1\geq\frac{1}{\alpha}\log(\mathfrak{z}/4)$ by definition of $\overline{k}$, so $\sum_{k=\overline{k}}^{R}e^{-2\beta k}I_{k}\geq\frac{\widetilde{c}}{2\alpha}\widetilde{\eta}\mathfrak{z}e^{-2\beta R}\log(\mathfrak{z}/4)\geq\frac{\widetilde{c}}{3\alpha}\widetilde{\eta}\mathfrak{z}e^{-2\beta R}\log\mathfrak{z}$, where we used the assumption that $\mathfrak{z}$ is at least a sufficiently large constant. Thus, under $\mathcal{E}$, the angular movement stochastically dominates a Brownian motion $B_{\frac{\widetilde{c}}{3\alpha}\widetilde{\eta}\mathfrak{z}e^{-2\beta R}\log\mathfrak{z}}$ with standard deviation $e^{-\beta R}\sqrt{(\widetilde{c}\mathfrak{z}\widetilde{\eta}/(3\alpha))\log\mathfrak{z}}=:\sigma$. By Fact 35,

\mathbb{P}\big(\sup_{0\leq s\leq\mathfrak{z}}B_{I(s)}\geq\kappa_{A}\sigma\mid\{r_{s}\}_{s\geq 0}\big)=\Omega\big(\tfrac{1}{\kappa_{A}}e^{-\frac{1}{2}\kappa_{A}^{2}}\big).

Thus, for $x_{0}\in\mathcal{D}^{(\mathfrak{z})}$ with $|\theta_{0}|\leq\phi(R)+\kappa_{A}\phi^{(\mathfrak{z})}\leq 2\kappa_{A}e^{-\beta R}\sqrt{\mathfrak{z}\log\mathfrak{z}}$, since $\mathbb{P}_{x_{0}}(\mathcal{E})\geq\frac{\widetilde{\eta}}{2}$ and using that $0.2x^{2}\geq\log x$ for all $x\geq 1$,

\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})\geq\tfrac{\widetilde{\eta}}{2}\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z}\mid\mathcal{E})=\Omega\big(\tfrac{1}{\kappa_{A}}e^{-\frac{1}{2}\kappa_{A}^{2}}\big)=\Omega(e^{-0.7\kappa_{A}^{2}}). (24)

It thus remains to show that, with positive probability, the trajectory starting from a fixed initial radius for $x_{0}\in\mathcal{D}^{(\mathfrak{z})}$ can be coupled in such a way that the probability of detecting the target by time $\mathfrak{z}$ is bounded from below by the detection probability when the radius is chosen according to the stationary distribution $\pi(\cdot)$. Denote by $\widehat{r}_{t}$ the radial component at time $t$ when starting from the stationary distribution. Consider the event $\mathcal{A}$ that the initial radial value $\widehat{r}_{0}$ is at most one unit away from the boundary of $B_{O}(R)$, that is, $\mathcal{A}:=\{\widehat{r}_{0}\in[R-1,R]\}$. Clearly, $\mathbb{P}(\mathcal{A})=\pi([R-1,R])=\Omega(1)$. Let $\mathcal{B}$ be the event that $\{\widehat{r}_{s}\}_{s\geq 0}$ started from $R-1$ hits $R$ by time $\frac{1}{2}\mathfrak{z}$. Conditionally on $\mathcal{A}$, since the time to hit $R$ is clearly dominated by the time a standard Brownian motion (corresponding to a one-dimensional radial movement) started from $R-1$ hits $R$, by our lower bound on $\mathfrak{z}$ we have $\mathbb{P}(\mathcal{B}\mid\mathcal{A})=\Omega(1)$. Note that, conditionally on $\mathcal{A}\cap\mathcal{B}$, either the trajectories starting from the fixed value $r_{0}$ on the one hand and from $\widehat{r}_{0}$ distributed according to $\pi(\cdot)$ on the other hand cross by time $\frac{1}{2}\mathfrak{z}$ (and they can be coupled naturally from then on for a time interval of length $\frac{1}{2}\mathfrak{z}$), or $r_{t}\leq\widehat{r}_{t}$ for all $0\leq t\leq\frac{1}{2}\mathfrak{z}$, in which case during an initial time period of length $\frac{1}{2}\mathfrak{z}$ the detection probability starting from $r_{0}$ stochastically dominates the one starting from $\widehat{r}_{0}$.
Thus, with probability $\mathbb{P}(\mathcal{A}\cap\mathcal{B})=\Omega(1)$, the process $\{r_{s}\}_{s\geq 0}$ can be successfully coupled with $\{\widehat{r}_{s}\}_{s\geq 0}$, and since we aim for a lower bound, we may assume that $\mathcal{A}\cap\mathcal{B}$ holds. Conditionally on $\mathcal{A}\cap\mathcal{B}$, we may thus apply the reasoning yielding (24) with $\mathfrak{z}$ replaced by $\frac{1}{2}\mathfrak{z}$, and the result follows. ∎
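The monotone coupling used above can be illustrated by a small simulation: two copies of the radial diffusion are driven by the same Brownian increments and merged once they cross. This is only a discretized sketch under illustrative parameters (in particular, the reflecting barrier at $R$ is replaced by a crude truncation), not the construction used in the proof.

```python
import math
import random

def coupled_radii(r_a, r_b, alpha, R, z, dt=1e-3, seed=0):
    """Drive two radial diffusions (generator Delta_rad) with the SAME
    Brownian increments.  Their order is preserved, and once the lower
    trajectory catches up with the upper one, the two copies merge."""
    rng = random.Random(seed)
    for _ in range(int(z / dt)):
        dw = rng.gauss(0.0, math.sqrt(dt))
        # drift (alpha/2) coth(alpha r) for each copy, same noise dw
        r_a += 0.5 * alpha / math.tanh(alpha * r_a) * dt + dw
        r_b += 0.5 * alpha / math.tanh(alpha * r_b) * dt + dw
        # crude boundary handling (truncation instead of true reflection)
        r_a = min(max(r_a, 1e-3), R)
        r_b = min(max(r_b, 1e-3), R)
        if r_a >= r_b:  # trajectories crossed: couple them from now on
            r_a = r_b
    return r_a, r_b
```

By construction the copy started lower stays below the other one until they merge, which is the ordering used to compare detection probabilities above.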

We still have to deal with particles whose initial locations satisfy the second condition in the definition of $\mathcal{D}^{(\mathfrak{z})}$. The next proposition does this.

Proposition 36.

Let $\kappa_{R}>1$ and assume that $\mathfrak{z}\geq Ce^{2\kappa_{A}^{2}}$ for $C$ large enough. Then, for any $x_{0}=(r_{0},\theta_{0})$ with $(|\theta_{0}|-\phi(r_{0}))e^{(\beta\wedge\frac{1}{2})r_{0}}\leq\kappa_{R}$, we have the following:

  (i) If $\alpha\geq 2\beta$, then $\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})=\Omega(\kappa_{R}^{-\frac{\alpha}{\beta}})=\Omega(e^{-\frac{\alpha}{\beta}\kappa_{A}^{2}})$.

  (ii) If $\alpha<2\beta$, then $\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})=\Omega(\kappa_{R}^{-\alpha/(\beta\wedge\frac{1}{2})})=\Omega(\kappa_{A}^{-\alpha/(\beta\wedge\frac{1}{2})})$.

Proof.

Thanks to Lemma 32 we may, and will, assume throughout the proof that $r_{0}>c^{*}$ for an arbitrarily large $c^{*}$. Under this condition, the proof formalizes the following intuitive fact: either there is a chance for the particle to move radially towards the origin far enough that the boundary of $B_{Q}(R)$ is reached and detection happens, or there is a chance to move radially towards the origin and remain long enough in a region sufficiently close to the origin that the relatively large angle the particle traverses there makes detection probable.

We first assume $\beta<1/2$. Observe that we may assume $|\theta_{0}|\geq 2\phi(r_{0})$: otherwise, during one time unit there is constant probability that the radial value is at most $\min\{R,r_{0}+1\}$, and conditionally on this event, during this time unit with constant probability an angular movement of standard deviation at least $e^{-\beta(r_{0}+1)}$ is performed, thus covering with constant probability an angular distance of $e^{-\beta(r_{0}+1)}\geq 2\phi(r_{0})$ in the case $\beta<1/2$ for $r_{0}>c^{*}$. We may and will thus replace the condition $(|\theta_{0}|-\phi(r_{0}))e^{(\beta\wedge\frac{1}{2})r_{0}}\leq\kappa_{R}$ below by $|\theta_{0}|e^{(\beta\wedge\frac{1}{2})r_{0}}\leq 2\kappa_{R}$. We consider the worst-case scenario, i.e., $|\theta_{0}|=2\kappa_{R}e^{-\beta r_{0}}$, or equivalently $r_{0}=\frac{1}{\beta}\log(2\kappa_{R}/|\theta_{0}|)$. We restrict our discussion to vertices with $r_{0}<R-\log\kappa_{A}$, as other vertices were already considered before: indeed, for values of $r_{0}\geq R-\log\kappa_{A}$, in case (i) we have $2\kappa_{R}e^{-\beta r_{0}}=2e^{\kappa_{A}^{2}}e^{-\beta r_{0}}\leq e^{\kappa_{A}^{2}}e^{-\beta R}\kappa_{A}\leq\kappa_{A}\sqrt{\mathfrak{z}}e^{-\beta R}=\kappa_{A}\phi^{(\mathfrak{z})}$, where the second inequality follows from our assumption $\mathfrak{z}\geq Ce^{2\kappa_{A}^{2}}$ with $C$ large. Hence such values of $x_{0}=(r_{0},\theta_{0})$ satisfy the first condition of $\mathcal{D}^{(\mathfrak{z})}$ and were treated in Proposition 34.
In case (ii), for values of $r_{0}\geq R-\log\kappa_{A}$, we have $2\kappa_{R}e^{-\beta r_{0}}=2\kappa_{A}e^{-\beta r_{0}}\leq 2\kappa_{A}^{2}e^{-\beta R}\leq\kappa_{A}\phi^{(\mathfrak{z})}$, where again the second inequality follows from the same assumption $\mathfrak{z}\geq Ce^{2\kappa_{A}^{2}}$ (again with room to spare), and again this case was already dealt with in Proposition 34. Define $g(r):=\log(\tanh(\frac{1}{2}\alpha R)/\tanh(\frac{1}{2}\alpha r))$ as in Fact 17 and let $\overline{r}_{0}:=\max\{r_{0}-\frac{1}{\beta}\log\kappa_{R},c^{*}\}$. By arguments given in said fact, since we have restricted our discussion to the case $r_{0}=R-\Omega(1)$, we have $g(r_{0})=\Omega(e^{-\alpha r_{0}})$ and, since $\overline{r}_{0}\geq c^{*}$ with $c^{*}$ large, we also have $g(\overline{r}_{0})=O(e^{-\alpha\overline{r}_{0}})$. Recall from Part (iii) of Lemma 14 that $g(r_{0})/g(\overline{r}_{0})$ is the probability that, starting from radius $r_{0}$, we hit the radial value $\overline{r}_{0}$ before hitting the radial value $R$, and we have

\frac{g(r_{0})}{g(\overline{r}_{0})}=\Omega(e^{\alpha(\overline{r}_{0}-r_{0})})=\Omega\big(e^{\alpha(\max\{r_{0}-\frac{1}{\beta}\log\kappa_{R},c^{*}\}-r_{0})}\big)=\Omega(\kappa_{R}^{-\alpha/\beta}).

Thus, by Part (iv) of Lemma 14 with $y_{0}:=\overline{r}_{0}$ and $Y:=R$, we have $\mathbb{E}_{r_{0}}(T_{\overline{r}_{0}}\mid T_{\overline{r}_{0}}<T_{R})\leq\frac{2}{\alpha\beta}\log\kappa_{R}+\frac{2}{\alpha^{2}}\leq\frac{1}{4}\mathfrak{z}$ by our lower bound hypothesis on $\mathfrak{z}$ with $C:=C(\beta)$ sufficiently large. By Markov's inequality, conditionally on $T_{\overline{r}_{0}}<T_{R}$, starting at radius $r_{0}$, with probability at least $1/2$ the value $\overline{r}_{0}$ is hit by time $\frac{1}{2}\mathfrak{z}$. In this case, if $\overline{r}_{0}=c^{*}$, then by Lemma 32 the target is detected with constant probability in constant time, and we are done. If $\overline{r}_{0}=r_{0}-\frac{1}{\beta}\log\kappa_{R}>c^{*}$, then with constant probability, during the ensuing unit time interval, the radial coordinate is always at most $\overline{r}_{0}+1$. Call this event $\mathcal{A}$. By symmetry of the angular movement, with probability at least $1/2$, either the angle $\theta$ at the hitting time of $\overline{r}_{0}$ is in absolute value at most $|\theta_{0}|$, or there was a moment $t$ with $\theta_{t}=0$ before, in which case we had already detected $Q$ by time $t$. Call this event $\mathcal{B}$. Conditionally on $\mathcal{A}\cap\mathcal{B}$, the variance of the angular movement during the unit time interval after reaching $\overline{r}_{0}$ is at least $e^{-2\beta(r_{0}-\frac{1}{\beta}\log\kappa_{R}+1)}=e^{-2\beta(r_{0}+1)}\kappa_{R}^{2}$, so recalling that by our worst-case assumption $|\theta_{0}|=2\kappa_{R}e^{-\beta r_{0}}$, the standard deviation is at least $e^{-\beta(r_{0}+1)}\kappa_{R}=e^{-\beta}|\theta_{0}|/2$.
Hence, with constant probability, independently of $\kappa_{R}$ (and also independently of $\beta$), an angle of $|\theta_{0}|$ is covered during a unit time interval, and by our lower bound assumption on $\mathfrak{z}$, the particle is detected by time $\mathfrak{z}$, finishing this case as well and establishing the proposition when $\beta<1/2$, by plugging in the respective relations between $\kappa_{A}$ and $\kappa_{R}$ to obtain the last equality in the two parts of the statement.

Next, we consider the remaining case $\beta\geq\frac{1}{2}$, which encompasses a case similar to the one analyzed in Section 4, so we only give a short sketch of how it is handled. By hypothesis and the case assumption, $|\theta_{0}|\leq\phi(r_{0})+\kappa_{R}e^{-\frac{1}{2}r_{0}}$. Since $\kappa_{R}>1$, we thus have $|\theta_{0}|\leq 3\kappa_{R}e^{-\frac{1}{2}r_{0}}$. Again, it suffices to show the statement for the worst-case scenario, that is, we may assume $|\theta_{0}|=3\kappa_{R}e^{-\frac{1}{2}r_{0}}$. We may assume that $r_{0}<R-\kappa_{R}$, since for vertices with $r_{0}\geq R-\kappa_{R}$, using our assumption $\mathfrak{z}\geq Ce^{2\kappa_{A}^{2}}$ (with room to spare) we have $3\kappa_{R}e^{-\frac{1}{2}r_{0}}\leq 3\kappa_{R}e^{\kappa_{R}}e^{-\frac{1}{2}R}\leq\kappa_{A}\mathfrak{z}^{1/(2\alpha)}e^{-\frac{1}{2}R}=\kappa_{A}\phi^{(\mathfrak{z})}$, and such cases were already dealt with in the proof of Part (ii) of Proposition 34. By Lemma 32 we may also assume $r_{0}>c^{*}$ for an arbitrarily large $c^{*}$. Define $\overline{r}_{0}:=\max\{r_{0}-2\log\kappa_{R}-2,c^{*}\}$. As before, recall from Part (iii) of Lemma 14 that $g(r_{0})/g(\overline{r}_{0})$ is the probability that, starting from radius $r_{0}$, we hit the radial value $\overline{r}_{0}$ before hitting the radial value $R$. Since $r_{0}=R-\Omega(1)$ and since $\overline{r}_{0}\geq c^{*}$, we have

\frac{g(r_{0})}{g(\overline{r}_{0})}=\Omega(e^{\alpha(\overline{r}_{0}-r_{0})})=\Omega\big(e^{\alpha(\max\{r_{0}-2\log\kappa_{R}-2,c^{*}\}-r_{0})}\big)=\Omega(\kappa_{R}^{-2\alpha}),

and we let $\mathcal{A}$ be the event that this indeed happens. By Part (iv) of Lemma 14 with $y_{0}:=\overline{r}_{0}$ and $Y:=R$, we have $\mathbb{E}_{x_{0}}(T_{\overline{r}_{0}}\mid\mathcal{A})\leq\frac{4}{\alpha}\log\kappa_{R}+\frac{4}{\alpha}+\frac{2}{\alpha^{2}}\leq\frac{1}{4}\mathfrak{z}$ by our lower bound hypothesis on $\mathfrak{z}$. By Markov's inequality, with probability at least $1/2$, $T_{\overline{r}_{0}}<\frac{1}{2}\mathfrak{z}$. Let $\mathcal{B}$ be the event that the radius $\overline{r}_{0}$ is reached by time $\frac{1}{2}\mathfrak{z}$. We have

\mathbb{P}_{x_{0}}(\mathcal{B})\geq\mathbb{P}_{x_{0}}(\mathcal{B}\mid\mathcal{A})\mathbb{P}_{x_{0}}(\mathcal{A})=\Omega(\kappa_{R}^{-2\alpha}).

Let $\mathcal{C}$ be the event that one of the following occurs: either at the moment $T$ of reaching $\overline{r}_{0}$ the angle at the origin between the particle and $Q$ is in absolute value at most $|\theta_{0}|$, or there was a moment $t\leq T$ with $\theta_{t}=0$ (in which case we already detected $Q$ by time $t$). Note that by symmetry of the angular movement, $\mathbb{P}_{x_{0}}(\mathcal{C}\mid\mathcal{B})=\mathbb{P}_{x_{0}}(\mathcal{C})\geq 1/2$. Hence, $\mathbb{P}_{x_{0}}(\mathcal{C}\cap\mathcal{B})=\Omega(\kappa_{R}^{-2\alpha})$. To conclude, observe first the following: if $\overline{r}_{0}=c^{*}$, then conditionally on $\mathcal{C}\cap\mathcal{B}$, by Lemma 32, with constant probability the particle $Q$ is detected in time $O(1)$, and since $\frac{1}{2}\mathfrak{z}+O(1)\leq\mathfrak{z}$, we are done. In particular, if we had $|\theta_{0}|>\pi/2-c$ for some sufficiently small $c>0$, then, since $|\theta_{0}|\leq 3\kappa_{R}e^{-\frac{1}{2}r_{0}}$, we must have $r_{0}<c^{*}$, and therefore clearly also $\overline{r}_{0}=c^{*}$. If, on the other hand, we had $|\theta_{0}|\leq\pi/2-c$ for some arbitrarily small $c>0$ and also $\overline{r}_{0}=r_{0}-2\log\kappa_{R}-2$, then observe that since $\overline{r}_{0}>c^{*}$ with $c^{*}$ large enough, we have $\phi(\overline{r}_{0})\geq 1.99\,e\,\kappa_{R}e^{-r_{0}/2}\geq 3\kappa_{R}e^{-r_{0}/2}\geq|\theta_{0}|$, and $Q$ is detected by time $T$, since $\phi(\overline{r}_{0})$ is the maximum angle at the origin between a particle at radius $\overline{r}_{0}$ and $Q$ that guarantees detection. ∎

The uniform lower bound of Theorem 30 now follows directly by combining Propositions 34 and 36. The integral lower bound of Theorem 30 follows directly from Proposition 34, applied with $\kappa_{A}$ being any fixed constant, together with the fact that $\mu(\mathcal{D}^{(\mathfrak{z})})=\Omega(n\phi^{(\mathfrak{z})})$ (obtained by considering only the condition $|\theta_{0}|\leq\phi(R)+\kappa_{A}\phi^{(\mathfrak{z})}$). This finishes the proofs of the lower bounds of Theorems 29 and 30.

5.2 Upper bounds

In this section we show the corresponding upper bounds of Theorems 29 and 30: that is, we show that particles initially placed outside $\mathcal{D}^{(\mathfrak{z})}$, with $\mathcal{D}^{(\mathfrak{z})}$ as in the corresponding theorems, have only a small chance of detecting the target by time $\mathfrak{z}$. For all values of $\mathfrak{z}$, we will show that

\int_{\overline{\mathcal{D}}^{(\mathfrak{z})}}\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})\,d\mu(x_{0})=O(\mu(\mathcal{D}^{(\mathfrak{z})})),

thus establishing that the significant contribution to the tail probabilities comes from particles inside $\mathcal{D}^{(\mathfrak{z})}$; for large values of $\mathfrak{z}$ we will show uniform upper bounds on the detection probability for every point outside $\mathcal{D}^{(\mathfrak{z})}$. In order to analyze the trajectory of a particle, we will make use of the fact that the generator driving its motion separates into a radial and an angular component, given respectively by

\Delta_{rad}=\frac{1}{2}\frac{\partial^{2}}{\partial r^{2}}+\frac{\alpha}{2}\frac{1}{\tanh(\alpha r)}\frac{\partial}{\partial r}\quad\text{and}\quad\Delta_{ang}=\frac{1}{2\sinh^{2}(\beta r)}\frac{\partial^{2}}{\partial\theta^{2}}.

Since the generator driving the radial part of the motion does not depend on $\theta$, our approach consists in sampling the radial component $\{r_{s}\}_{s\geq 0}$ of the trajectories first and then, conditionally on a given radial trajectory, sampling the angular component $\{\theta_{s}\}_{s\geq 0}$. With this approach we can use the results obtained in Section 4 to study $\{r_{s}\}_{s\geq 0}$, while the angular component is distributed according to a time-changed Brownian motion $\theta_{s}=B_{I(s)}$, where

I(s):=\int_{0}^{s}\operatorname{cosech}^{2}(\beta r_{u})\,du

relates the radial trajectory to the angular variance of $\{\theta_{s}\}_{s\geq 0}$, as already seen in Section 3. To apply the insight obtained by studying each component separately, we will need to replace the hitting time of the target (which translates into hitting $B_{Q}(R)$, whose boundary defines a curve relating $r$ and $\theta$) by the exit time of a simpler set: let $x_{0}\in\overline{\mathcal{D}}^{(\mathfrak{z})}$ be any fixed initial location with $\theta_{0}\in(0,\pi)$ and define a box $\mathcal{B}(x_{0})$ containing $x_{0}$ of the form

\mathcal{B}(x_{0}):=[\widehat{r}_{0},R]\times[\theta_{0}-\widehat{\theta}_{0},\theta_{0}+\widehat{\theta}_{0}]\subseteq\overline{B}_{Q}(R) (25)

where $\widehat{\theta}_{0}$ and $\widehat{r}_{0}$ will be defined later on. Since $\mathcal{B}(x_{0})\subseteq\overline{B}_{Q}(R)$, in order for a particle starting at $x_{0}$ to detect the target, it must first exit the box through either its upper boundary or one of its side boundaries. Denoting by $T^{rad}_{0}$ and $T^{ang}_{0}$ the respective hitting times of said boundaries, this implies $T^{rad}_{0}\wedge T^{ang}_{0}\leq T_{det}$, and we obtain the bound

\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})\,\leq\,\mathbb{P}_{x_{0}}(T^{rad}_{0}\leq\mathfrak{z})+\mathbb{P}_{x_{0}}(T^{ang}_{0}\leq\mathfrak{z}<T^{rad}_{0}). (26)

The advantage of addressing the exit time of $\mathcal{B}(x_{0})$ instead of $T_{det}$ is clear: the first event $\{T_{0}^{rad}\leq\mathfrak{z}\}$ is independent of the angular component of the trajectory, allowing us to bound its probability from above with the tools developed in Section 4, while the treatment of the second event $\{T_{0}^{ang}\leq\mathfrak{z}<T_{0}^{rad}\}$ will follow from standard results on Brownian motion together with control of $I(\mathfrak{z})$. The following result allows us to bound the second term on the right-hand side of (26):
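The sampling scheme just described, radial component first and then the angular component as a Brownian motion run at the clock $I(s)$, can be sketched in a few lines. This is a discretized illustration with made-up parameters and a crude Euler step, not part of the argument.

```python
import math
import random

def sample_trajectory(r0, theta0, alpha, beta, R, z, dt=1e-3, seed=0):
    """Euler-Maruyama sketch: first sample the radial diffusion with
    generator (1/2) d^2/dr^2 + (alpha/2) coth(alpha r) d/dr, reflected
    at R, accumulating the clock I(z) = int_0^z cosech^2(beta r_u) du.
    Conditionally on the radial path, theta_z is Gaussian with variance I(z)."""
    rng = random.Random(seed)
    r, I = r0, 0.0
    for _ in range(int(z / dt)):
        I += dt / math.sinh(beta * r) ** 2  # cosech^2(beta r_u) du
        r += 0.5 * alpha / math.tanh(alpha * r) * dt + rng.gauss(0.0, math.sqrt(dt))
        if r > R:                           # reflecting barrier at R
            r = 2 * R - r
    theta = theta0 + rng.gauss(0.0, math.sqrt(I))  # theta_z = theta_0 + B_{I(z)}
    return r, theta, I
```

Conditionally on the sampled radial path, the angular position at time $\mathfrak{z}$ is Gaussian with variance $I(\mathfrak{z})$, which is exactly the two-stage sampling exploited throughout this section.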

Proposition 37.

Define $J(r):=\operatorname{cosech}^{2}(\beta r)$. Then,

\mathbb{P}_{x_{0}}(T^{ang}_{0}\leq\mathfrak{z}\leq T^{rad}_{0})\,\leq\,4\Phi\Big({-}\frac{\widehat{\theta}_{0}}{\sqrt{\mathfrak{z}J(\widehat{r}_{0})}}\Big),

where $\Phi$ stands for the standard normal distribution function. Furthermore,

\mathbb{P}_{x_{0}}(T^{ang}_{0}\leq\mathfrak{z})\leq\sqrt{\frac{2}{\pi}}\int_{0}^{\infty}\frac{\widehat{\theta}_{0}}{\sigma^{3/2}}e^{-\frac{\widehat{\theta}_{0}^{2}}{2\sigma}}\,\mathbb{P}_{x_{0}}\big(I(\mathfrak{z})\geq\sigma\big)\,d\sigma.
Proof.

Denote by $\mathbb{P}_{x_{0}}(\cdot\mid\{r_{u}\}_{u\geq 0})$ the law of the angular process given a realization of its radial component. Given such a realization, we already know that the trajectory $\{\theta_{u}\}_{u\geq 0}$ is equal in law to that of the time-changed Brownian motion $\{B_{I(u)}+\theta_{0}\}_{u\geq 0}$, for which $T^{ang}_{0}$ equals the exit time of $B_{I(u)}$ from $[-\widehat{\theta}_{0},\widehat{\theta}_{0}]$. Since the exit time of a Brownian motion is well understood, we proceed as in Section 3, using the reflection principle to obtain

\mathbb{P}_{x_{0}}(T^{ang}_{0}\leq\mathfrak{z}\mid\{r_{u}\}_{u\geq 0})\leq 4\,\mathbb{P}_{x_{0}}(B_{I(\mathfrak{z})}\leq-\widehat{\theta}_{0}\mid\{r_{u}\}_{u\geq 0})=4\Phi\Big({-}\frac{\widehat{\theta}_{0}}{\sqrt{I(\mathfrak{z})}}\Big),

where the factor $4$ is obtained since the probability of exiting through one of the two angles is at most twice the probability of hitting one fixed angle, which in turn, by the reflection principle, is twice the probability on the right-hand side. Taking expectations with respect to the law of $\{r_{u}\}_{u\geq 0}$ we obtain

\mathbb{P}_{x_{0}}(T^{ang}_{0}\leq\mathfrak{z}\leq T^{rad}_{0})\leq 4\,\mathbb{E}_{x_{0}}\Big(\Phi\Big({-}\frac{\widehat{\theta}_{0}}{\sqrt{I(\mathfrak{z})}}\Big){\bf 1}_{\{\mathfrak{z}\leq T^{rad}_{0}\}}\Big),

where the expression inside the expectation depends on the radial movement alone. To apply this last bound, observe that for any realization of $\{r_{u}\}_{u\geq 0}$ such that $T^{rad}_{0}\geq\mathfrak{z}$ we have $\inf_{0\leq u\leq\mathfrak{z}}r_{u}\geq\widehat{r}_{0}$, so $I(\mathfrak{z})\leq\mathfrak{z}J(\widehat{r}_{0})$, which gives the first part of the statement. For the second part we repeat the same computation without taking into account the event $\{T^{rad}_{0}\geq\mathfrak{z}\}$, giving

\mathbb{P}_{x_{0}}(T^{ang}_{0}\leq\mathfrak{z})\,\leq\,4\,\mathbb{E}_{x_{0}}\Big(\Phi\Big({-}\frac{\widehat{\theta}_{0}}{\sqrt{I(\mathfrak{z})}}\Big)\Big)\,=\,-4\int_{0}^{\infty}\Phi\Big({-}\frac{\widehat{\theta}_{0}}{\sqrt{\sigma}}\Big)\,d\mathbb{P}_{x_{0}}\big(I(\mathfrak{z})\geq\sigma\big),

and the result follows after integration by parts, since $\frac{d}{d\sigma}\Phi(-\frac{\widehat{\theta}_{0}}{\sqrt{\sigma}})=\frac{1}{2\sqrt{2\pi}}\frac{\widehat{\theta}_{0}}{\sigma^{3/2}}e^{-\frac{\widehat{\theta}_{0}^{2}}{2\sigma}}$. ∎
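As a sanity check on the reflection-principle bound $4\Phi(-\widehat{\theta}_{0}/\sqrt{I(\mathfrak{z})})$ (with $\Phi$ the standard normal distribution function), one can compare a Monte Carlo estimate of the two-sided exit probability of a standard Brownian motion with the bound. Sample sizes and parameters below are arbitrary choices for illustration.

```python
import math
import random

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def exit_prob_estimate(a, t, n_paths=2000, n_steps=400, seed=0):
    """Estimate P(sup_{s<=t} |B_s| >= a) for a standard Brownian motion
    by simulating discretized paths."""
    rng = random.Random(seed)
    dt = t / n_steps
    hits = 0
    for _ in range(n_paths):
        b = 0.0
        for _ in range(n_steps):
            b += rng.gauss(0.0, math.sqrt(dt))
            if abs(b) >= a:
                hits += 1
                break
    return hits / n_paths

estimate = exit_prob_estimate(1.0, 1.0)
bound = 4 * phi(-1.0)  # the bound 4*Phi(-theta_hat/sqrt(variance))
```

The discretized estimate stays below the bound, consistent with the factor 4 being generous (it double-counts paths hitting both sides).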

On a high level, the bound for $\mathbb{P}_{x_{0}}(T^{ang}_{0}\leq\mathfrak{z}<T^{rad}_{0})$ in the proposition above will be useful as long as $\mathfrak{z}=O(1)$, since in this case the inequality $I(\mathfrak{z})\leq\mathfrak{z}J(\widehat{r}_{0})$ is not too loose. However, when $\mathfrak{z}=\omega(1)$ we need to control this term using the second part of Proposition 37, which relies heavily on bounding probabilities of the form $\mathbb{P}(I(\mathfrak{z})\geq\sigma)$. To provide these bounds, observe that when $\inf_{0\leq u\leq\mathfrak{z}}r_{u}$ is not too close to zero, the drift function is almost constant, so the trajectories of $\{r_{u}\}_{u\geq 0}$ resemble those of a Brownian motion with drift $\tfrac{\alpha}{2}$ (with a reflecting boundary at $R$). Even further, on the event where $\inf_{0\leq u\leq\mathfrak{z}}r_{u}$ is not too close to zero, we also have

I(\mathfrak{z})\approx\int_{0}^{\mathfrak{z}}e^{-2\beta r_{u}}\,du, (27)

where the integral on the right-hand side has been widely studied in the case when $\{r_{u}\}_{u\geq 0}$ is an (unbounded) Brownian motion with drift (see [31, 41, 16, 15]), revealing that its distribution is heavy-tailed, with a tail exponent given analytically in terms of $\beta$ and the drift of $\{r_{u}\}_{u\geq 0}$. In order to make use of known results we must first address the change in behavior arising from the reflecting boundary of our process. To do so, we consider an auxiliary process $\{\widetilde{r}_{u}\}_{u\geq 0}$ akin to the one introduced in the proof of Proposition 20:

  • $\{\widetilde{r}_{u}\}_{u\geq 0}$ begins at $r_{0}$ and evolves according to $\Delta_{rad}$ up until hitting $R$.

  • Every time $\{\widetilde{r}_{u}\}_{u\geq 0}$ hits $R$, it is immediately restarted at $R-1$ and continues to evolve as before.
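A discretized sketch of this restarted process (the Euler step size and all parameters are illustrative assumptions):

```python
import math
import random

def restarted_radial(r0, alpha, R, z, dt=1e-3, seed=0):
    """Simulate the auxiliary process: evolve with the radial drift
    (alpha/2) coth(alpha r) plus Brownian noise, restarting at R-1
    whenever R is hit.  Returns the hitting times T_R^(1) < T_R^(2) < ...
    observed up to time z; their count is M(z)."""
    rng = random.Random(seed)
    r, hits = r0, []
    for k in range(int(z / dt)):
        r += 0.5 * alpha / math.tanh(alpha * r) * dt + rng.gauss(0.0, math.sqrt(dt))
        if r >= R:
            hits.append((k + 1) * dt)  # a hit of R at (approximately) this time
            r = R - 1.0                # immediate restart at R - 1
    return hits
```

The pieces of trajectory between consecutive restarts play the role of the independent excursions from $R-1$ to $R$.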

It is not hard to see that $\{\widetilde{r}_{u}\}_{u\geq 0}$ is stochastically dominated by $\{r_{u}\}_{u\geq 0}$, in the sense that the radius of the auxiliary process is always smaller. Thus, in particular, $I(\mathfrak{z})\leq\widetilde{I}(\mathfrak{z}):=\int_{0}^{\mathfrak{z}}\operatorname{cosech}^{2}(\beta\widetilde{r}_{u})\,du$, and hence it will be enough to bound probabilities of the form $\mathbb{P}(\widetilde{I}(\mathfrak{z})\geq\sigma)$ from above. Now, from the definition of $\widetilde{r}_{u}$, it is natural to use the successive hitting times of $R$, say $0<T_{R}^{(1)}<T_{R}^{(2)}<\ldots$, to divide the trajectory of $\{\widetilde{r}_{u}\}_{u\geq 0}$ into excursions from $R-1$ to $R$ (or from $r_{0}$ to $R$, in the case of the first one), giving

\widetilde{I}(\mathfrak{z})\,\leq\,\int_{0}^{T^{(1)}_{R}}\operatorname{cosech}^{2}(\beta\widetilde{r}_{u})\,du\,+\,\sum_{i=1}^{M(\mathfrak{z})}\int_{T^{(i)}_{R}}^{T^{(i+1)}_{R}}\operatorname{cosech}^{2}(\beta\widetilde{r}_{u})\,du,

where $M(\mathfrak{z})$ is the random number of times $\{\widetilde{r}_{u}\}_{u\geq 0}$ has hit $R$ by time $\mathfrak{z}$. Let $\{\widetilde{r}^{(0)}_{u}\}_{u\geq 0},\{\widetilde{r}^{(1)}_{u}\}_{u\geq 0},\{\widetilde{r}^{(2)}_{u}\}_{u\geq 0},\ldots$ be independent diffusion processes (corresponding to the different excursions), each evolving on $(0,\infty)$ according to the generator $\Delta_{rad}$ without the reflecting boundary at $R$, and such that $\{\widetilde{r}^{(0)}_{u}\}_{u\geq 0}$ starts at $r_{0}$ while all the other $\{\widetilde{r}^{(i)}_{u}\}_{u\geq 0}$ start at $R-1$. From the definition of the auxiliary process it follows that within each time interval of the form $[T^{(i)}_{R},T^{(i+1)}_{R}]$ (or $[0,T^{(1)}_{R}]$ in the case of the first excursion) the trajectory of $\{\widetilde{r}_{u}\}_{u\geq 0}$ equals that of $\{\widetilde{r}^{(i)}_{u^{\prime}}\}_{u^{\prime}\geq 0}$ for $u^{\prime}\leq T^{(i+1)}_{R}-T^{(i)}_{R}$. Hence,

\[
\widetilde{I}(\mathfrak{z})\,\leq\,\int_{0}^{T^{(1)}_{R}}\operatorname{cosech}^{2}(\beta\widetilde{r}^{(0)}_{u})\,du\,+\,\sum_{i=1}^{M(\mathfrak{z})}\int_{0}^{T^{(i+1)}_{R}-T^{(i)}_{R}}\operatorname{cosech}^{2}(\beta\widetilde{r}^{(i)}_{u})\,du\,\leq\,\sum_{i=0}^{M(\mathfrak{z})}\widetilde{I}^{(i)},
\]

where for $i=0,\ldots,M(\mathfrak{z})$,

\[
\widetilde{I}^{(i)}:=\int_{0}^{\infty}\operatorname{cosech}^{2}(\beta\widetilde{r}^{(i)}_{u})\,du.
\]

Observe that the bound on $\widetilde{I}(\mathfrak{z})$ involves a sum of $M(\mathfrak{z})+1$ random variables. Intuitively speaking, the number of times the process hits $R$ should be proportional to $\mathfrak{z}$, so we introduce an auxiliary parameter $v_{0}>0$ depending on $x_{0}$, to be fixed later, such that the event $M(\mathfrak{z})>\lceil v_{0}\mathfrak{z}\rceil$ occurs with very small probability. Using this new parameter we obtain

\[
\mathbb{P}_{x_{0}}(I(\mathfrak{z})\geq\sigma)\,\leq\,\mathbb{P}_{x_{0}}\big(M(\mathfrak{z})>\lceil v_{0}\mathfrak{z}\rceil\big)+\mathbb{P}_{x_{0}}\big(\widetilde{I}^{(0)}\geq\tfrac{1}{2}\sigma\big)+\mathbb{P}_{x_{0}}\Big(\sum_{i=1}^{\lceil v_{0}\mathfrak{z}\rceil}\widetilde{I}^{(i)}\geq\tfrac{1}{2}\sigma\Big). \tag{28}
\]

The advantage of the bound above is that it involves a sum of independent random variables $\widetilde{I}^{(i)}$ which, aside from $\widetilde{I}^{(0)}$, are also identically distributed. The bound also gives valuable intuition regarding the detection of the target: roughly speaking, in order for detection to take place, either $x_{0}$ must be sufficiently close to the origin so that the angular variance coming from $\widetilde{I}^{(0)}$ becomes significant, or $\mathfrak{z}$ must be large enough so that many excursions starting near the boundary occur, allowing the contribution $\sum_{i=1}^{M(\mathfrak{z})}\widetilde{I}^{(i)}$ to be large enough. As mentioned above (see the discussion following (27)), the distributions of the $\widetilde{I}^{(i)}$ variables are heavy-tailed. In order to control their sum we make use of the following lemma, whose proof closely mimics what was done in [35]: since our result is slightly more precise than the one in [35], we provide the proof in the appendix for the sake of completeness.

Lemma 38.

Let $S_{m}:=\sum_{i=1}^{m}Z_{i}$, where $\{Z_{i}\}_{i\in\mathbb{N}}$ is a sequence of i.i.d. absolutely continuous random variables taking values in $[1,\infty)$ for which there are $V,\gamma>0$ such that for all $x\geq 0$,

\[
1-F_{Z_{i}}(x)=\mathbb{P}(Z_{i}\geq x)\leq Vx^{-\gamma}.
\]

Then, there are $\mathfrak{C},\mathfrak{L}>0$ depending on $V$, $\gamma$ and $\mathbb{E}(Z_{1})$ (if it exists) alone such that:

  • If $\gamma<1$ and $L>\mathfrak{L}$, then $\mathbb{P}(S_{m}\geq Lm^{\frac{1}{\gamma}})\leq\mathfrak{C}L^{-\gamma}$.

  • If $\gamma=1$ and $L>\mathfrak{L}$, then $\mathbb{P}(S_{m}\geq Lm\log(m))\leq\Big(\frac{\mathfrak{C}}{L\log(m)}\Big)^{1-\frac{\mathfrak{L}}{L}}$.

  • If $\gamma>1$ and $L>\mathfrak{L}$, then $\mathbb{P}(S_{m}\geq Lm)\,\leq\,\mathfrak{C}L^{-\gamma}m^{-((\gamma-1)\wedge\frac{\gamma}{2})}$.
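Although the proof of Lemma 38 is deferred to the appendix, its content in the heavy-tailed regime $\gamma<1$ is easy to probe numerically: the sum $S_{m}$ is then governed by its largest summand, and $\mathbb{P}(S_{m}\geq Lm^{1/\gamma})$ decays like $L^{-\gamma}$, uniformly in $m$. The following Monte Carlo sketch is purely illustrative and not used in the proofs; the choices $\gamma=1/2$, $m=100$, the sample sizes and the two test values of $L$ are arbitrary.

```python
import random

def pareto(gamma, rng):
    # Pareto with scale 1 and shape gamma: P(Z >= x) = x^(-gamma) for x >= 1.
    return (1.0 - rng.random()) ** (-1.0 / gamma)

def tail_freq(gamma, m, L, trials, rng):
    # Empirical estimate of P(S_m >= L * m^(1/gamma)).
    threshold = L * m ** (1.0 / gamma)
    hits = 0
    for _ in range(trials):
        s = sum(pareto(gamma, rng) for _ in range(m))
        if s >= threshold:
            hits += 1
    return hits / trials

rng = random.Random(0)
gamma, m, trials = 0.5, 100, 2000
f10 = tail_freq(gamma, m, 10.0, trials, rng)
f40 = tail_freq(gamma, m, 40.0, trials, rng)
# Lemma 38 (case gamma < 1) predicts a decay of order L^(-gamma) in L.
print(f10, f40)
```

For these parameters the one-big-jump heuristic suggests values near $L^{-1/2}$, i.e. roughly $0.32$ and $0.16$, which is what the simulation returns up to sampling noise.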

We can now prove the following result, which will allow us to control $\mathbb{P}_{x_{0}}(I(\mathfrak{z})\geq\sigma)$:

Proposition 39.

For any $0<c<1$ there are $\mathfrak{C}^{\prime},\mathfrak{L}^{\prime}$ large, depending on $\alpha$ and $\beta$ alone, such that for any $\mathfrak{z}\geq c$, $v_{0}>\frac{4}{c}$, and $x_{0}\in B_{O}(R)$ with $r_{0}>1$, defining $\sigma_{0}$ as

\[
\sigma_{0}:=\begin{cases}\mathfrak{L}^{\prime}e^{-2\beta R}(v_{0}\mathfrak{z})^{1\vee\frac{2\beta}{\alpha}},&\text{if $\alpha\neq 2\beta$,}\\ \mathfrak{L}^{\prime}e^{-2\beta R}v_{0}\mathfrak{z}\log(v_{0}\mathfrak{z}),&\text{if $\alpha=2\beta$,}\end{cases}
\]

the following statements hold for all $\sigma>\sigma_{0}$ and all $R$ sufficiently large:

  1. (i)

    $\mathbb{P}_{x_{0}}(M(\mathfrak{z})>\lceil v_{0}\mathfrak{z}\rceil)\leq e^{-\frac{1}{16}v_{0}^{2}\mathfrak{z}}$,

  2. (ii)

    $\mathbb{P}_{x_{0}}(\widetilde{I}^{(0)}\geq\tfrac{1}{2}\sigma)\leq\frac{\log(\tanh(\alpha r_{0}/2))}{\log(\tanh(\alpha/2))}+\mathfrak{C}^{\prime}\sigma^{-\frac{\alpha}{2\beta}}e^{-\alpha r_{0}}$, and

  3. (iii)

    $\mathbb{P}_{x_{0}}\big(\sum_{i=1}^{\lceil v_{0}\mathfrak{z}\rceil}\widetilde{I}^{(i)}\geq\tfrac{1}{2}\sigma\big)\leq 2v_{0}\mathfrak{z}\frac{\log(\tanh(\alpha(R-1)/2))}{\log(\tanh(\alpha/2))}+\big(\mathfrak{C}^{\prime}(v_{0}\mathfrak{z})^{1\vee\frac{\alpha}{4\beta}}\sigma^{-\frac{\alpha}{2\beta}}e^{-\alpha R}\big)^{1-\mathfrak{e}}$, where

    \[
    \mathfrak{e}:=\begin{cases}0,&\text{ if }\alpha\neq 2\beta,\\ \frac{1}{\sigma e^{2\beta R}}\mathfrak{L}^{\prime}v_{0}\mathfrak{z}\log(v_{0}\mathfrak{z}),&\text{ if }\alpha=2\beta.\end{cases}
    \]
Proof.

We begin with the upper bound for the term $\mathbb{P}_{x_{0}}(M(\mathfrak{z})>\lceil v_{0}\mathfrak{z}\rceil)$. By definition, the event $\{M(\mathfrak{z})>\lceil v_{0}\mathfrak{z}\rceil\}$ equals $\{T^{(\lceil v_{0}\mathfrak{z}\rceil)}_{R}\leq\mathfrak{z}\}$. Writing $T^{(\lceil v_{0}\mathfrak{z}\rceil)}_{R}=T^{(1)}_{R}+\sum_{i=2}^{\lceil v_{0}\mathfrak{z}\rceil}(T^{(i)}_{R}-T^{(i-1)}_{R})$, by Markov's inequality, for any $\lambda>0$ we deduce

\[
\mathbb{P}_{x_{0}}\big(M(\mathfrak{z})>\lceil v_{0}\mathfrak{z}\rceil\big)\,\leq\,\mathbb{P}_{x_{0}}\Big(\sum_{i=2}^{\lceil v_{0}\mathfrak{z}\rceil}(T^{(i)}_{R}-T^{(i-1)}_{R})\leq\mathfrak{z}\Big)\,\leq\,e^{\lambda\mathfrak{z}}\mathbb{E}_{x_{0}}\Big(\exp\big({-}\lambda\sum_{i=2}^{\lceil v_{0}\mathfrak{z}\rceil}(T^{(i)}_{R}-T^{(i-1)}_{R})\big)\Big),
\]

where the differences $T^{(i)}_{R}-T^{(i-1)}_{R}$ are i.i.d. random variables giving the time it takes for excursion $\widetilde{r}^{(i-1)}$, starting at $R-1$, to reach $R$. Since each $\{\widetilde{r}^{(i)}_{u}\}_{u\geq 0}$ is stochastically bounded from below by a Brownian motion with constant drift $\tfrac{\alpha}{2}$, we can use Formula 2.0.1 in [11] to obtain

\[
\mathbb{P}_{x_{0}}\big(M(\mathfrak{z})>\lceil v_{0}\mathfrak{z}\rceil\big)\,\leq\,e^{\lambda\mathfrak{z}}\mathbb{E}\big(e^{-\lambda(T^{(2)}_{R}-T^{(1)}_{R})}\big)^{\lceil v_{0}\mathfrak{z}\rceil-1}\,\leq\,e^{\lambda\mathfrak{z}}\big(e^{\frac{\alpha}{2}-\sqrt{\frac{\alpha^{2}}{4}+2\lambda}}\big)^{\lceil v_{0}\mathfrak{z}\rceil-1}.
\]

The exponent $\lambda\mathfrak{z}+(\lceil v_{0}\mathfrak{z}\rceil-1)(\frac{\alpha}{2}-\sqrt{\frac{\alpha^{2}}{4}+2\lambda})$ appearing in the last term is minimized as a function of $\lambda$ when $\mathfrak{z}\sqrt{\tfrac{\alpha^{2}}{4}+2\lambda}=\lceil v_{0}\mathfrak{z}\rceil-1$, which defines a positive $\lambda$ since $v_{0}\mathfrak{z}>4$ and $\alpha,c<1$, giving for the exponent of the right-hand side the expression

\[
-\frac{1}{2\mathfrak{z}}(\lceil v_{0}\mathfrak{z}\rceil-1)^{2}-\frac{1}{8}\alpha^{2}\mathfrak{z}+\frac{1}{2}\alpha(\lceil v_{0}\mathfrak{z}\rceil-1)\leq\frac{1}{2}(\lceil v_{0}\mathfrak{z}\rceil-1)\Big(\alpha-\frac{1}{\mathfrak{z}}(\lceil v_{0}\mathfrak{z}\rceil-1)\Big)\,\leq\,-\frac{1}{16}\,v_{0}^{2}\mathfrak{z},
\]

where in the last inequality we used that $\lceil v_{0}\mathfrak{z}\rceil-1\geq\frac{1}{2}v_{0}\mathfrak{z}$ and that $\alpha-\frac{1}{2}v_{0}\leq-\frac{1}{4}v_{0}$. This proves the bound in (i). In order to control the probabilities appearing in (ii) and (iii), it is computationally convenient to work under the assumption that the $\widetilde{r}^{(i)}$ processes never get too close to $0$. To do so, observe that by our assumption $r_{0}>1$, all the $\widetilde{r}^{(i)}$ start in $(1,\infty)$, and denote by $\tau^{(i)}_{1}$ the hitting time of radius $1$ by $\widetilde{r}^{(i)}$. Observe that

\[
\mathbb{P}_{x_{0}}(\widetilde{I}^{(0)}\geq\tfrac{1}{2}\sigma)\,\leq\,\mathbb{P}_{x_{0}}(\tau_{1}^{(0)}<\infty)+\mathbb{P}_{x_{0}}(\widetilde{I}^{(0)}\geq\tfrac{1}{2}\sigma,\tau^{(0)}_{1}=\infty).
\]

Using Part (iii) from Lemma 14, with ${\bf y}_{0}:=1$ and $Y:=R$, we obtain

\[
\mathbb{P}_{x_{0}}(\tau_{1}^{(0)}<\infty)=\lim_{R\to\infty}\mathbb{P}_{x_{0}}(\tau_{1}^{(0)}<T_{R})=\frac{\log(\tanh(\alpha r_{0}/2))}{\log(\tanh(\alpha/2))},
\]

which gives the first term of the bound for $R$ (and thus $n$) sufficiently large. For the second term, observe that on the event $\tau^{(0)}_{1}=\infty$ we have $\widetilde{r}^{(0)}_{u}>1$ for all $u\geq 0$, so there is some constant $C$ depending on $\beta$ alone such that for all $u\geq 0$ we have $\operatorname{cosech}^{2}(\beta\widetilde{r}^{(0)}_{u})\leq Ce^{-2\beta\widetilde{r}^{(0)}_{u}}$, and hence

\[
\mathbb{P}_{x_{0}}(\widetilde{I}^{(0)}\geq\tfrac{1}{2}\sigma,\tau^{(0)}_{1}=\infty)\,\leq\,\mathbb{P}_{x_{0}}\Big(C\int_{0}^{\infty}e^{-2\beta\widetilde{r}^{(0)}_{u}}\,du\geq\tfrac{1}{2}\sigma\Big).
\]

Now, notice that $\{\widetilde{r}_{u}^{(0)}-r_{0}\}_{u\geq 0}$ is stochastically bounded from below by $X_{u}$, a Brownian motion with constant drift $\tfrac{\alpha}{2}$, so we can bound the integral within the last term from above by $\int_{0}^{\infty}e^{-2\beta(X_{u}+r_{0})}\,du$. This particular functional of Brownian motion was studied in [15, Proposition 4.4.4], where it was shown that

\[
\int_{0}^{\infty}e^{-2\beta X_{u}}\,du\,=\,W,\quad\text{where }W\text{ is a positive r.v. with density}\quad f_{W}(x):=\frac{(2\beta^{2})^{\frac{\alpha}{2\beta}}}{\Gamma(\frac{\alpha}{2\beta})}x^{-\frac{\alpha}{2\beta}-1}e^{-\frac{2\beta^{2}}{x}},
\]

that is, $W$ is distributed according to an inverse gamma distribution with parameters $\frac{\alpha}{2\beta}$ and $2\beta^{2}$. Of the density of $W$ above, the only feature relevant for our purposes is its heavy tail: in order to ease calculations we bound $W$ stochastically from above by a random variable $Z$ following a Pareto distribution with shape $\frac{\alpha}{2\beta}$ and a sufficiently large scale $\omega$ depending on $\alpha$ and $\beta$ alone (in particular we will assume $\omega>1$). That is, $W\preccurlyeq Z$ with

\[
f_{Z}(x)\,=\,\frac{\alpha}{2\beta\omega}\Big(\frac{\omega}{x}\Big)^{\frac{\alpha}{2\beta}+1}{\bf 1}_{(\omega,\infty)}(x).
\]

Using this auxiliary variable we readily obtain

\[
\mathbb{P}_{x_{0}}\Big(C\int_{0}^{\infty}e^{-2\beta\widetilde{r}^{(0)}_{u}}\,du\geq\tfrac{1}{2}\sigma\Big)\leq\mathbb{P}\big(Z\geq\tfrac{\sigma}{2C}e^{2\beta r_{0}}\big)\leq\Big(\frac{\sigma e^{2\beta r_{0}}}{2C\omega}\Big)^{-\frac{\alpha}{2\beta}},
\]

giving the last term of (ii). To obtain the bound in (iii), define the event $\mathcal{E}:=\{\exists\, 1\leq i\leq\lceil v_{0}\mathfrak{z}\rceil:\,\tau_{1}^{(i)}<\infty\}$, where $\tau_{1}^{(i)}$ was defined previously as the hitting time of $r=1$ on the $i$-th excursion. By the same analysis as in the previous paragraph, we get

\begin{align*}
\mathbb{P}_{x_{0}}\Big(\sum_{i=1}^{\lceil v_{0}\mathfrak{z}\rceil}\widetilde{I}^{(i)}\geq\tfrac{1}{2}\sigma\Big)&\leq\,\mathbb{P}_{x_{0}}(\mathcal{E})+\mathbb{P}_{x_{0}}\Big(\sum_{i=1}^{\lceil v_{0}\mathfrak{z}\rceil}\widetilde{I}^{(i)}\geq\tfrac{1}{2}\sigma,\,\overline{\mathcal{E}}\Big)\\
&\leq\,\lceil v_{0}\mathfrak{z}\rceil\mathbb{P}_{x_{0}}(\tau_{1}^{(1)}<\infty)+\mathbb{P}_{x_{0}}\Big(C\sum_{i=1}^{\lceil v_{0}\mathfrak{z}\rceil}\int_{0}^{\infty}e^{-2\beta\widetilde{r}^{(i)}_{u}}\,du\geq\tfrac{1}{2}\sigma\Big)\\
&\leq\,\lceil v_{0}\mathfrak{z}\rceil\mathbb{P}_{x_{0}}(\tau_{1}^{(1)}<\infty)+\mathbb{P}_{x_{0}}\Big(\sum_{i=1}^{\lceil v_{0}\mathfrak{z}\rceil}Z^{(i)}\geq\tfrac{\sigma}{2C}e^{2\beta(R-1)}\Big),
\end{align*}

where in this case we have used that each $\widetilde{r}^{(i)}$ begins at $R-1$, and the $Z^{(i)}$'s stand for i.i.d. Pareto random variables with scale $\omega$ and shape $\frac{\alpha}{2\beta}$ as in the previous case. For the first term we use the same treatment as for $\mathbb{P}_{x_{0}}(\tau_{1}^{(0)}<\infty)$ to obtain

\[
\lceil v_{0}\mathfrak{z}\rceil\mathbb{P}_{x_{0}}(\tau_{1}^{(1)}<\infty)\,\leq\,2v_{0}\mathfrak{z}\frac{\log(\tanh(\alpha(R-1)/2))}{\log(\tanh(\alpha/2))},
\]

where we have used that every excursion after the first starts at radius $R-1$, and that $v_{0}\mathfrak{z}\geq 4$. For the second term recall that the $Z^{(i)}$ variables are bounded from below by $\omega\geq 1$, and that $1-F_{Z}(x)=(\frac{\omega}{x})^{\frac{\alpha}{2\beta}}$ for any $x\geq\omega$. Hence, we can apply Lemma 38 with $m:=\lceil v_{0}\mathfrak{z}\rceil$ and $S_{m}:=\sum_{i=1}^{\lceil v_{0}\mathfrak{z}\rceil}Z^{(i)}$, according to the value of $\gamma:=\frac{\alpha}{2\beta}$, as follows:

  • If $\frac{\alpha}{2\beta}<1$, then we can take $L:=\tfrac{\sigma}{2C}\lceil v_{0}\mathfrak{z}\rceil^{-\frac{2\beta}{\alpha}}e^{2\beta(R-1)}$ in the lemma, so that

    \[
    \mathbb{P}_{x_{0}}\Big(\sum_{i=1}^{\lceil v_{0}\mathfrak{z}\rceil}Z^{(i)}\geq\tfrac{\sigma}{2C}e^{2\beta(R-1)}\Big)=\mathbb{P}_{x_{0}}\Big(S_{\lceil v_{0}\mathfrak{z}\rceil}\geq L\lceil v_{0}\mathfrak{z}\rceil^{\frac{2\beta}{\alpha}}\Big)\leq\mathfrak{C}L^{-\frac{\alpha}{2\beta}}
    \]

    as soon as $L>\mathfrak{L}$, for some $\mathfrak{L}$ and $\mathfrak{C}$ depending on $\alpha,\beta$ and $\omega$ alone. Observing that $L^{-\frac{\alpha}{2\beta}}=(2C)^{\frac{\alpha}{2\beta}}\lceil v_{0}\mathfrak{z}\rceil\sigma^{-\frac{\alpha}{2\beta}}e^{-\alpha(R-1)}$ and that the condition $L>\mathfrak{L}$ is equivalent to $\sigma>2\mathfrak{L}C\lceil v_{0}\mathfrak{z}\rceil^{\frac{2\beta}{\alpha}}e^{-2\beta(R-1)}$, the result follows by defining $\mathfrak{L}^{\prime}$ and $\mathfrak{C}^{\prime}$ accordingly.

  • If $\frac{\alpha}{2\beta}=1$, we can take $L:=\tfrac{\sigma}{2C}(\lceil v_{0}\mathfrak{z}\rceil\log\lceil v_{0}\mathfrak{z}\rceil)^{-1}e^{2\beta(R-1)}$ in the lemma, so that

    \[
    \mathbb{P}_{x_{0}}\Big(\sum_{i=1}^{\lceil v_{0}\mathfrak{z}\rceil}Z^{(i)}\geq\tfrac{\sigma}{2C}e^{2\beta(R-1)}\Big)=\mathbb{P}_{x_{0}}\big(S_{\lceil v_{0}\mathfrak{z}\rceil}\geq L\lceil v_{0}\mathfrak{z}\rceil\log\lceil v_{0}\mathfrak{z}\rceil\big)\leq\Big(\frac{\mathfrak{C}}{L\log\lceil v_{0}\mathfrak{z}\rceil}\Big)^{1-\frac{\mathfrak{L}}{L}}
    \]

    as soon as $L>\mathfrak{L}$, for some $\mathfrak{L}$ and $\mathfrak{C}$ depending on $\alpha,\beta$ and $\omega$ alone, which is satisfied since by hypothesis $\sigma>\mathfrak{L}^{\prime}e^{-2\beta R}v_{0}\mathfrak{z}\log(v_{0}\mathfrak{z})$ and we can choose $\mathfrak{L}^{\prime}>2Ce^{2\beta}\mathfrak{L}$. Since $\alpha=2\beta$, we have $\tfrac{\mathfrak{C}}{L\log\lceil v_{0}\mathfrak{z}\rceil}=\frac{\mathfrak{C}^{\prime}}{\sigma}e^{-\alpha R}\lceil v_{0}\mathfrak{z}\rceil$ for $\mathfrak{C}^{\prime}:=2C\mathfrak{C}e^{\alpha}$, giving the result.

  • Finally, assume that $\frac{\alpha}{2\beta}>1$ and take $L:=\tfrac{\sigma}{2C\lceil v_{0}\mathfrak{z}\rceil}e^{2\beta(R-1)}$ in the lemma, so that

    \[
    \mathbb{P}_{x_{0}}\Big(\sum_{i=1}^{\lceil v_{0}\mathfrak{z}\rceil}Z^{(i)}\geq\tfrac{\sigma}{2C}e^{2\beta(R-1)}\Big)=\mathbb{P}_{x_{0}}\big(S_{\lceil v_{0}\mathfrak{z}\rceil}\geq L\lceil v_{0}\mathfrak{z}\rceil\big)\leq\mathfrak{C}L^{-\frac{\alpha}{2\beta}}\lceil v_{0}\mathfrak{z}\rceil^{-((\frac{\alpha}{2\beta}-1)\wedge\frac{\alpha}{4\beta})}
    \]

    as soon as $L>\mathfrak{L}$, for some $\mathfrak{L}$ and $\mathfrak{C}$ depending on $\alpha,\beta$ and $\omega$ alone. The condition follows directly from the hypothesis $\sigma>\mathfrak{L}^{\prime}e^{-2\beta R}(v_{0}\mathfrak{z})$ for $\mathfrak{L}^{\prime}$ adequately chosen, and the bound in (iii) is obtained from the previous inequality by noticing that $L^{-\frac{\alpha}{2\beta}}=(2C\lceil v_{0}\mathfrak{z}\rceil)^{\frac{\alpha}{2\beta}}\sigma^{-\frac{\alpha}{2\beta}}e^{-\alpha(R-1)}$ and defining $\mathfrak{C}^{\prime}$ accordingly. ∎
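The probabilistic input behind part (i) is the classical Laplace transform $\mathbb{E}(e^{-\lambda T})=e^{\frac{\alpha}{2}-\sqrt{\frac{\alpha^{2}}{4}+2\lambda}}$ for the time $T$ at which a Brownian motion with drift $\frac{\alpha}{2}$, started at $0$, first hits level $1$ (Formula 2.0.1 in [11]). As an illustration only, not part of the proof, this formula can be checked by a crude Euler scheme; the values of $\alpha$ and $\lambda$ and the discretization parameters below are arbitrary choices, and monitoring the path on a grid inflates the hitting time slightly.

```python
import math
import random

def mc_laplace(alpha, lam, paths, dt, horizon, rng):
    # Estimate E[exp(-lam * T)], where T is the first time a Brownian motion
    # with drift alpha/2, started at 0, hits level 1 (Euler discretization).
    sd = math.sqrt(dt)
    total = 0.0
    for _ in range(paths):
        x, t = 0.0, 0.0
        while x < 1.0 and t < horizon:
            x += 0.5 * alpha * dt + sd * rng.gauss(0.0, 1.0)
            t += dt
        if x >= 1.0:          # paths stopped by the horizon contribute ~0 anyway
            total += math.exp(-lam * t)
    return total / paths

rng = random.Random(1)
alpha, lam = 0.8, 1.0
exact = math.exp(alpha / 2.0 - math.sqrt(alpha ** 2 / 4.0 + 2.0 * lam))
approx = mc_laplace(alpha, lam, paths=2500, dt=0.004, horizon=60.0, rng=rng)
print(exact, approx)
```

The Monte Carlo estimate agrees with the closed form up to sampling noise and a small discretization bias.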

5.2.1 The case $\mathfrak{z}$ small

In this subsection we obtain an upper bound for $\int_{\overline{\mathcal{D}}^{(\mathfrak{z})}}\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})\,d\mu(x_{0})$ under the assumptions $\beta\leq 1/2$ and $\mathfrak{z}=O(1)$, as required by Theorem 29. Since for $\beta=1/2$ we always have $\mathfrak{z}=\Omega(1)$ and the next subsection deals with all such values, here we may even assume $\beta<1/2$. The main result of this subsection is the following:

Proposition 40.

Let $\beta<\frac{1}{2}$. If $\mathcal{D}^{(\mathfrak{z})}$ is as defined in Theorem 29 and $\mathfrak{z}=\Omega((e^{\beta R}/n)^{2})\cap O(1)$, then

\[
\int_{\overline{\mathcal{D}}^{(\mathfrak{z})}}\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})\,d\mu(x_{0})\;=\;O\big(ne^{-\beta R}\sqrt{\mathfrak{z}}\big).
\]
Proof.

Notice first that for any $C^{\prime}>0$, Lemma 6 gives $\mu(B_{O}(C^{\prime}))=O(ne^{-\alpha R})=o(1)$, so by our assumption $\mathfrak{z}=\Omega((e^{\beta R}/n)^{2})$ the contribution of points in $\overline{\mathcal{D}}^{(\mathfrak{z})}\cap B_{O}(C^{\prime})$ to the upper bound is $o(ne^{-\beta R}\sqrt{\mathfrak{z}})$, and hence such points can be neglected. Take then $C^{\prime}>0$ large (to be fixed later), so that we only need to consider particles initially located at points with radial component in $[C^{\prime},R]$. For such points we follow the proof argument described in the previous section, bounding the detection time of the target by the exit time of the box $\mathcal{B}(x_{0})$, which allows us to bound $\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})$ as in (26). More in detail, given $x_{0}\in\overline{\mathcal{D}}^{(\mathfrak{z})}$ we construct the box $\mathcal{B}(x_{0})$ by choosing

\[
\widehat{\theta}_{0}:=\tfrac{1}{3}(|\theta_{0}|-\phi(r_{0}))\qquad\text{and}\qquad\widehat{r}_{0}:=r_{0}-\log\big(1+(|\theta_{0}|-\phi(r_{0}))e^{\beta r_{0}}\big).
\]

Notice first that $\widehat{\theta}_{0}>0$, $\widehat{r}_{0}<r_{0}$, and assuming $C^{\prime}>13$ we also have $\widehat{r}_{0}>\frac{1}{3}r_{0}$: indeed, this last statement is equivalent to $e^{\frac{2}{3}r_{0}}>1+(|\theta_{0}|-\phi(r_{0}))e^{\beta r_{0}}$, which is satisfied for $r_{0}>2/(\frac{2}{3}-\beta)+1\geq 13$ since $\beta\leq 1/2$ and $|\theta_{0}|\leq\pi$. The previous inequalities show that the box is well defined and contains the point $(r_{0},\theta_{0})$, but we still need to prove that $\mathcal{B}(x_{0})\subseteq\overline{B}_{Q}(R)$. To do so assume, without loss of generality, that $\theta_{0}>0$, and observe that it is enough to show that the upper-left corner of the box, $(\widehat{r}_{0},\theta_{0}-\widehat{\theta}_{0})$, is in $\overline{B}_{Q}(R)$, or equivalently, that $\phi(\widehat{r}_{0})<\theta_{0}-\widehat{\theta}_{0}=\frac{2}{3}(\theta_{0}-\phi(r_{0}))+\phi(r_{0})$. Using that $\widehat{r}_{0}<r_{0}$ and Part (i) of Lemma 8 we obtain

\begin{align*}
\cos(\phi(r_{0}))-\cos(\phi(\widehat{r}_{0}))&=(\tanh(\tfrac{r_{0}}{2})-\tanh(\tfrac{\widehat{r}_{0}}{2}))\coth(R)=\coth(R)\frac{2(e^{r_{0}}-e^{\widehat{r}_{0}})}{(e^{r_{0}}+1)(e^{\widehat{r}_{0}}+1)}\\
&\leq 4(e^{-\widehat{r}_{0}}-e^{-r_{0}})=4e^{(\beta-1)r_{0}}(|\theta_{0}|-\phi(r_{0})).
\end{align*}

Moreover, for $C^{\prime}$ sufficiently large, since $\cos(x)-\cos(x^{\prime})=2\sin(\frac{1}{2}(x+x^{\prime}))\sin(\frac{1}{2}(x^{\prime}-x))$,

\[
\cos(\phi(r_{0}))-\cos(\phi(\widehat{r}_{0}))\geq 2\sin(\tfrac{1}{2}\phi(r_{0}))\sin(\tfrac{1}{2}(\phi(\widehat{r}_{0})-\phi(r_{0})))\geq\tfrac{1}{8}\phi(r_{0})(\phi(\widehat{r}_{0})-\phi(r_{0})).
\]

Using Part (v) of Lemma 8 we have $\phi(r_{0})\geq e^{-\frac{r_{0}}{2}}$ for $r_{0}>C^{\prime}$, so we conclude that

\[
\phi(\widehat{r}_{0})-\phi(r_{0})\,\leq\,32e^{(\beta-\frac{1}{2})r_{0}}(|\theta_{0}|-\phi(r_{0})),
\]

and since $\beta<\frac{1}{2}$, taking $C^{\prime}$ sufficiently large the last term is bounded by $\frac{2}{3}(\theta_{0}-\phi(r_{0}))$, which proves that $\mathcal{B}(x_{0})\subseteq\overline{B}_{Q}(R)$ for all such $r_{0}$.

Now, we follow the proof strategy presented in the previous section by splitting the detection probability as in (26) and using the first bound in Proposition 37, which, setting $\Omega:=\overline{\mathcal{D}}^{(\mathfrak{z})}\cap\overline{B}_{O}(C^{\prime})$ and recalling that $J(\widehat{r}_{0}):=\operatorname{cosech}^{2}(\beta\widehat{r}_{0})$, gives

\[
\int_{\Omega}\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})\,d\mu(x_{0})\leq\int_{\Omega}\mathbb{P}_{x_{0}}(T_{0}^{rad}\leq\mathfrak{z})\,d\mu(x_{0})+\int_{\Omega}4\Phi\Big({-}\frac{\widehat{\theta}_{0}}{\sqrt{\mathfrak{z}J(\widehat{r}_{0})}}\Big)\,d\mu(x_{0}), \tag{29}
\]

where $T_{0}^{rad}$ is the hitting time of $\widehat{r}_{0}$. For the first term on the right of (29), we can replace the trajectory of $\{r_{s}\}_{s\geq 0}$ by that of a simple Brownian motion with a reflecting barrier at $R$ (thus removing the drift towards the boundary), which stochastically decreases the hitting time of $\widehat{r}_{0}$. Since hitting $\widehat{r}_{0}$ implies moving outside of an interval of length $2(r_{0}-\widehat{r}_{0})$, by the reflection principle we obtain

\[
\mathbb{P}_{x_{0}}(T_{0}^{rad}\leq\mathfrak{z})\,\leq\,4\Phi\Big({-}\frac{1}{\sqrt{\mathfrak{z}}}(r_{0}-\widehat{r}_{0})\Big)\,=\,4\Phi\Big({-}\frac{1}{\sqrt{\mathfrak{z}}}\log\Big(1+\frac{|\theta_{0}|-\phi(r_{0})}{e^{-\beta r_{0}}}\Big)\Big).
\]

Integrating this bound over the angular component, with the change of variables $w:=\frac{\theta_{0}-\phi(r_{0})}{\sqrt{\mathfrak{z}}e^{-\beta r_{0}}}$, gives

\[
\int_{\phi(r_{0})+\sqrt{\mathfrak{z}}e^{-\beta r_{0}}}^{\infty}\mathbb{P}_{x_{0}}(T_{0}^{rad}\leq\mathfrak{z})\,d\theta_{0}=4\sqrt{\mathfrak{z}}e^{-\beta r_{0}}\int_{1}^{\infty}\Phi\Big({-}\frac{1}{\sqrt{\mathfrak{z}}}\log(1+\sqrt{\mathfrak{z}}w)\Big)\,dw,
\]

but since the function $x\mapsto\frac{1}{\sqrt{x}}\log(1+\sqrt{x}w)$ is decreasing and $\mathfrak{z}=O(1)$, the integral in $w$ is $O(1)$ and the right-hand side is $O(\sqrt{\mathfrak{z}}e^{-\beta r_{0}})$, so

\begin{align*}
\int_{\Omega}\mathbb{P}_{x_{0}}(T_{0}^{rad}\leq\mathfrak{z})\,d\mu(x_{0})&=O\Big(n\int_{C^{\prime}}^{R}\int_{\phi(r_{0})+\sqrt{\mathfrak{z}}e^{-\beta r_{0}}}^{\infty}\mathbb{P}_{x_{0}}(T_{0}^{rad}\leq\mathfrak{z})\,d\theta_{0}\,e^{-\alpha(R-r_{0})}\,dr_{0}\Big)\\
&=O\Big(n\int_{C^{\prime}}^{R}\sqrt{\mathfrak{z}}e^{-\beta r_{0}}e^{-\alpha(R-r_{0})}\,dr_{0}\Big)=O\big(n\sqrt{\mathfrak{z}}e^{-\beta R}\big),
\end{align*}

where in the last equality we have used that $\beta<\frac{1}{2}<\alpha$. Next, we must bound the second term on the right of (29). Observe first that since $\widehat{r}_{0}>\frac{1}{3}r_{0}$, assuming $C^{\prime}>0$ large we have $\sinh(\beta\widehat{r}_{0})>\frac{1}{4}e^{\beta\widehat{r}_{0}}$, so that

\[
{-}\frac{\widehat{\theta}_{0}}{\sqrt{\mathfrak{z}J(\widehat{r}_{0})}}={-}\frac{\theta_{0}-\phi(r_{0})}{3\sqrt{\mathfrak{z}J(\widehat{r}_{0})}}\leq{-}\frac{\theta_{0}-\phi(r_{0})}{12\sqrt{\mathfrak{z}}e^{-\beta r_{0}}}\Big(1+\frac{\theta_{0}-\phi(r_{0})}{e^{-\beta r_{0}}}\Big)^{-\beta}.
\]

Evaluating at $\Phi(\cdot)$, integrating over $\Omega$ and using the change of variables $w:=\frac{\theta_{0}-\phi(r_{0})}{\sqrt{\mathfrak{z}}e^{-\beta r_{0}}}$ gives

\[
\int_{\Omega}\Phi\Big({-}\frac{\widehat{\theta}_{0}}{\sqrt{\mathfrak{z}J(\widehat{r}_{0})}}\Big)\,d\mu(x_{0})\;=\;O\Big(n\int_{C^{\prime}}^{R}\sqrt{\mathfrak{z}}e^{-\beta r_{0}}\Big(\int_{1}^{\infty}\Phi\Big({-}\frac{w}{12(1+\sqrt{\mathfrak{z}}w)^{\beta}}\Big)\,dw\Big)e^{-\alpha(R-r_{0})}\,dr_{0}\Big),
\]

but since $\mathfrak{z}=O(1)$ we have $\int_{1}^{\infty}\Phi\big({-}\tfrac{w}{12(1+\sqrt{\mathfrak{z}}w)^{\beta}}\big)\,dw=O(1)$ (the integral is finite since $\beta<1/2$), and hence

\[
\int_{\Omega}\Phi\Big({-}\frac{\widehat{\theta}_{0}}{\sqrt{\mathfrak{z}J(\widehat{r}_{0})}}\Big)\,d\mu(x_{0})\,=\,O\Big(n\sqrt{\mathfrak{z}}\int_{C^{\prime}}^{R}e^{-\beta r_{0}-\alpha(R-r_{0})}\,dr_{0}\Big)\,=\,O\big(n\sqrt{\mathfrak{z}}e^{-\beta R}\big),
\]

where again we used that $\beta<\frac{1}{2}<\alpha$ in the last equality. ∎
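The reflection-principle estimate used for $\mathbb{P}_{x_{0}}(T_{0}^{rad}\leq\mathfrak{z})$ in the proof above reduces to the classical bound $\mathbb{P}(\sup_{s\leq t}|B_{s}|\geq d)\leq 4\Phi(-d/\sqrt{t})$ for a standard Brownian motion $B$. The following sketch, illustrative only and with arbitrary choices of $d$, $t$ and the discretization, checks this bound on a random-walk approximation; monitoring only at grid times can only miss crossings, so the empirical frequency underestimates the continuous probability.

```python
import math
import random

def Phi(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def exit_freq(d, t, steps, paths, rng):
    # Empirical P(max over the grid of |B_s| >= d) for a discretized
    # standard Brownian motion on [0, t].
    sd = math.sqrt(t / steps)
    hits = 0
    for _ in range(paths):
        x = 0.0
        for _ in range(steps):
            x += sd * rng.gauss(0.0, 1.0)
            if abs(x) >= d:
                hits += 1
                break
    return hits / paths

rng = random.Random(2)
d, t = 1.0, 1.0
bound = 4.0 * Phi(-d / math.sqrt(t))
freq = exit_freq(d, t, steps=400, paths=6000, rng=rng)
print(freq, bound)
```

For $d=t=1$ the bound $4\Phi(-1)\approx 0.635$ is already quite close to the true exit probability, so the simulated frequency lands only slightly below it.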

5.2.2 The case $\mathfrak{z}$ large

In this subsection we prove the upper bounds in Theorem 30, both for $\int_{\overline{\mathcal{D}}^{(\mathfrak{z})}}\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})\,d\mu(x_{0})$ and for $\sup_{x_{0}\in\overline{\mathcal{D}}^{(\mathfrak{z})}}\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})$, where for convenience we use throughout this section the shorthand notation $\overline{\mathcal{D}}^{(\mathfrak{z})}=\overline{\mathcal{D}}^{(\mathfrak{z})}(\kappa_{A})$. Recall that we may assume $\mathfrak{z}=\Omega(1)$ for this theorem. We handle this case with the help of Propositions 37 and 39, as well as the results obtained in Section 4. The main result of this section is the following, which directly implies the corresponding upper bounds of Theorem 30:

Proposition 41.

Let $\mathcal{D}^{(\mathfrak{z})}$ be as in Theorem 30. Then there is a $C>0$ independent of $\mathfrak{z}$ and $n$ such that for $\mathfrak{z}=\Omega(1)$ satisfying the conditions of Theorem 30, fixing $\kappa_{A}:=C$ gives

\[
\int_{\overline{\mathcal{D}}^{(\mathfrak{z})}}\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})\,d\mu(x_{0})\;=\;O(n\phi^{(\mathfrak{z})}).
\]

Furthermore, for $\kappa_{A}\geq C$ and under the additional assumption $\mathfrak{z}=\omega(1)$ we have

\[
\sup_{x_{0}\in\overline{\mathcal{D}}^{(\mathfrak{z})}}\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})=\begin{cases}e^{-\Omega(\kappa_{A}^{2})},&\text{ if $\alpha\geq 2\beta$,}\\ O(\kappa_{A}^{-\alpha(\frac{1}{\beta}\vee 2)}),&\text{ if $\alpha<2\beta$.}\end{cases}
\]

We begin by introducing some notation and deriving a few facts that will come in handy throughout this section. The points in $\partial\overline{\mathcal{D}}^{(\mathfrak{z})}$ with a positive angle define a curve in polar coordinates which can be parameterized by the radius as $(r,\gamma(r))$ with $\gamma(r):=\max\{\phi(R)+\kappa_{A}\phi^{(\mathfrak{z})},\phi(r)+\kappa_{R}e^{-(\beta\wedge\frac{1}{2})r}\}$, where $r$ takes values between $r^{\prime}$ and $R$, for $r^{\prime}$ the solution of (see Figure 3(b) for a depiction of $r^{\prime}$)

\[
\phi(r^{\prime})+\kappa_{R}e^{-(\beta\wedge\frac{1}{2})r^{\prime}}=\pi. \tag{30}
\]

Since $\gamma$ is the maximum of two functions, a major role will be played by their intersection point $(r^{\prime\prime},\gamma(r^{\prime\prime}))$, which satisfies (see Figure 3(b) for a depiction of $r^{\prime\prime}$)

\[
\phi(R)+\kappa_{A}\phi^{(\mathfrak{z})}=\phi(r^{\prime\prime})+\kappa_{R}e^{-(\beta\wedge\frac{1}{2})r^{\prime\prime}}. \tag{31}
\]

Next we derive several inequalities that will be useful in the proof of Proposition 41:

Fact 42.

If the constant $C>0$ appearing in the statement of Proposition 41 is sufficiently large, then under the assumption $\mathfrak{z}=\Omega(1)$ the following hold:

  1. (i)

    $\phi^{(\mathfrak{z})}=\Omega(\phi(R))$ and, for all $0<r_{0}\leq R$, $e^{-(\beta\wedge\frac{1}{2})r_{0}}=\Omega(\phi(r_{0}))$. In particular, for any $C^{\prime}>0$, by taking $C$ large we have $\kappa_{A}\phi^{(\mathfrak{z})}\geq C^{\prime}\phi(R)$ and $\kappa_{R}e^{-(\beta\wedge\frac{1}{2})r_{0}}\geq C^{\prime}\phi(r_{0})$.

  2. (ii)

    $(\beta\wedge\frac{1}{2})^{-1}\log(\frac{1}{\pi}\kappa_{R})\leq r^{\prime}\leq(\beta\wedge\frac{1}{2})^{-1}\log(\frac{2}{\pi}\kappa_{R})$.

  3. (iii)

    $e^{-(\beta\wedge\frac{1}{2})r^{\prime\prime}}=\Theta(\tfrac{\kappa_{A}}{\kappa_{R}}\phi^{(\mathfrak{z})})$.

  4. (iv)

    $\gamma(r)-\phi(r)$ is minimized at $r^{\prime\prime}$, and in particular $\gamma(r)-\phi(r)\geq(\kappa_{R}e^{-(\beta\wedge\frac{1}{2})r}\vee\tfrac{1}{2}\kappa_{A}\phi^{(\mathfrak{z})})$ for all $r^{\prime}\leq r\leq R$.

  5. (v)

    For any fixed value of $\kappa_{A}$ and $\kappa_{R}$, $\displaystyle\int_{r^{\prime}}^{R}(\gamma(r)-\phi(r))e^{-\alpha(R-r)}\,dr=O(\phi^{(\mathfrak{z})})$.

Proof.

The first part follows directly from Part (v) of Lemma 8, the second one from the definition of rr^{\prime} and the fact that 0ϕ()π20\leq\phi(\cdot)\leq\frac{\pi}{2}, and the third part follows from (i) and the definition of rr^{\prime\prime}. To prove part (iv) observe that in [r,r][r^{\prime},r^{\prime\prime}] the function γ(r)ϕ(r)=κRe(β12)r\gamma(r)-\phi(r)=\kappa_{R}e^{-(\beta\wedge\frac{1}{2})r} is decreasing while in [r,R][r^{\prime\prime},R] it is increasing (since γ\gamma is constant and ϕ\phi is decreasing) so the function is minimized at rr^{\prime\prime}. The inequality γ(r)ϕ(r)(κRe(β12)r12κAϕ(𝔷))\gamma(r)-\phi(r)\geq(\kappa_{R}e^{-(\beta\wedge\frac{1}{2})r}\vee\tfrac{1}{2}\kappa_{A}\phi^{(\mathfrak{z})}) follows from

γ(r)ϕ(r)=κRe(β12)r12(κRe(β12)r+ϕ(r))=12(κAϕ(𝔷)+ϕ(R))12κAϕ(𝔷).\gamma(r^{\prime\prime})-\phi(r^{\prime\prime})=\kappa_{R}e^{-(\beta\wedge\frac{1}{2})r^{\prime\prime}}\geq\tfrac{1}{2}(\kappa_{R}e^{-(\beta\wedge\frac{1}{2})r^{\prime\prime}}+\phi(r^{\prime\prime}))=\tfrac{1}{2}(\kappa_{A}\phi^{(\mathfrak{z})}+\phi(R))\geq\tfrac{1}{2}\kappa_{A}\phi^{(\mathfrak{z})}.

To prove (v) we fix κA=CA\kappa_{A}=C_{A} and κR=CR\kappa_{R}=C_{R} equal to some constants and split the range of integration into two intervals, specifically [r,r][r^{\prime},r^{\prime\prime}] and [r,R][r^{\prime\prime},R]. For the first one we have γ(r)=ϕ(r)+CRe(β12)r\gamma(r)=\phi(r)+C_{R}e^{-(\beta\wedge\frac{1}{2})r} and since α>12\alpha>\frac{1}{2}, using (iii), we get

rr(γ(r)ϕ(r))eα(Rr)dr=O(eαRe(α(β12))r)=O(ϕ(𝔷)eα(Rr))=O(ϕ(𝔷)).\int_{r^{\prime}}^{r^{\prime\prime}}(\gamma(r)-\phi(r))e^{-\alpha(R-r)}dr=O\big{(}e^{-\alpha R}e^{(\alpha-(\beta\wedge\frac{1}{2}))r^{\prime\prime}}\big{)}=O\big{(}\phi^{(\mathfrak{z})}e^{-\alpha(R-r^{\prime\prime})}\big{)}=O(\phi^{(\mathfrak{z})}).
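In more detail, the first step in the display above is the following routine computation, using that on $[r^{\prime},r^{\prime\prime}]$ we have $\gamma(r)-\phi(r)=C_{R}e^{-(\beta\wedge\frac{1}{2})r}$ exactly, and that $\alpha>\frac{1}{2}\geq\beta\wedge\frac{1}{2}$:

```latex
\int_{r'}^{r''}(\gamma(r)-\phi(r))e^{-\alpha(R-r)}\,dr
  = C_R\, e^{-\alpha R}\int_{r'}^{r''} e^{(\alpha-(\beta\wedge\frac{1}{2}))r}\,dr
  \;\le\; \frac{C_R}{\alpha-(\beta\wedge\frac{1}{2})}\,e^{-\alpha R}\,e^{(\alpha-(\beta\wedge\frac{1}{2}))r''},
```

and $e^{-\alpha R}e^{(\alpha-(\beta\wedge\frac{1}{2}))r''}=e^{-\alpha(R-r'')}e^{-(\beta\wedge\frac{1}{2})r''}$, which is $O(\phi^{(\mathfrak{z})}e^{-\alpha(R-r'')})$ by Part (iii).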

To conclude the proof of (v) observe that for the second interval (i) gives

rR(γ(r)ϕ(r))eα(Rr)dr=O(ϕ(𝔷)).\int_{r^{\prime\prime}}^{R}(\gamma(r)-\phi(r))e^{-\alpha(R-r)}dr=O(\phi^{(\mathfrak{z})}). ∎

Following the strategy presented at the beginning of Section 5.2, for each x0𝒟¯(𝔷)x_{0}\in\overline{\mathcal{D}}^{(\mathfrak{z})} with θ0>0\theta_{0}>0 (the case θ0<0\theta_{0}<0 is analogous) we define a box (x0)B¯Q(R)\mathcal{B}(x_{0})\subseteq\overline{B}_{Q}(R) where this time we set (see Figure 3(c) for an illustration of (x0)\mathcal{B}(x_{0}))

θ^0=23(θ0ϕ(r0)) and ϕ(r^0)=θ0θ^0.\widehat{\theta}_{0}=\tfrac{2}{3}(\theta_{0}-\phi(r_{0}))\qquad\text{ and }\qquad\phi(\widehat{r}_{0})=\theta_{0}-\widehat{\theta}_{0}. (32)

Since by Part (ii) of Lemma 8 the function ϕ()\phi(\cdot) is decreasing, it can be easily checked that x0(x0)B¯Q(R)x_{0}\in\mathcal{B}(x_{0})\subseteq\overline{B}_{Q}(R) so in order to detect the target, a particle must first exit its respective box. In particular, by (26) and the second bound in Proposition 37, we obtain

x0(Tdet𝔷)x0(Trad0𝔷)+2π0θ^0σ3/2eθ^022σx0(I(𝔷)σ)dσ.\mathbb{P}_{x_{0}}(T_{det}\leq\mathfrak{z})\,\leq\,\mathbb{P}_{x_{0}}(T^{rad}_{0}\leq\mathfrak{z})+\sqrt{\frac{2}{\pi}}\int_{0}^{\infty}\frac{\widehat{\theta}_{0}}{\sigma^{3/2}}e^{-\frac{\widehat{\theta}_{0}^{2}}{2\sigma}}\mathbb{P}_{x_{0}}(I(\mathfrak{z})\geq\sigma)d\sigma. (33)

It thus suffices to bound the right-hand side. We first address the term x0(Trad0𝔷)\mathbb{P}_{x_{0}}(T^{rad}_{0}\leq\mathfrak{z}), which depends only on the radial trajectory of the particle, and hence we can make use of the results obtained in Section 4.

Proposition 43.

Under the conditions of Proposition 41 we have:

  • If κA=C\kappa_{A}=C, then 𝒟¯(𝔷)x0(Trad0𝔷)dμ(x0)=O(nϕ(𝔷))\displaystyle\int_{\overline{\mathcal{D}}^{(\mathfrak{z})}}\mathbb{P}_{x_{0}}(T^{rad}_{0}\leq\mathfrak{z})d\mu(x_{0})=O(n\phi^{(\mathfrak{z})}).

  • If κAC\kappa_{A}\geq C and 𝔷=ω(1)\mathfrak{z}=\omega(1), then

    supx0𝒟¯(𝔷)x0(Trad0𝔷)={eΩ(κA2), if α2β,O(κAα(1β2)), if α<2β.\sup_{x_{0}\in\overline{\mathcal{D}}^{(\mathfrak{z})}}\mathbb{P}_{x_{0}}(T^{rad}_{0}\leq\mathfrak{z})=\begin{cases}e^{-\Omega(\kappa_{A}^{2})},&\text{ if $\alpha\geq 2\beta$,}\\[2.0pt] O(\kappa_{A}^{-\alpha(\frac{1}{\beta}\vee 2)}),&\text{ if $\alpha<2\beta$.}\end{cases}
Proof.

Throughout this proof, let x0𝒟¯(𝔷)x_{0}\in\overline{\mathcal{D}}^{(\mathfrak{z})} be such that θ0>0\theta_{0}>0 (the case θ0<0\theta_{0}<0 can be dealt with analogously). As defined in Part (iii) of Lemma 14, with y=r0y=r_{0}, 𝐲0=r^0{\bf y}_{0}=\widehat{r}_{0} and Y=RY=R, let

Gr^0(r0):=g(r0)g(r^0),whereg(r):=log(tanh(12αR)/tanh(12αr)).G_{\widehat{r}_{0}}(r_{0}):=\frac{g(r_{0})}{g(\widehat{r}_{0})},\qquad\text{where}\qquad g(r):=\log(\tanh(\tfrac{1}{2}\alpha R)/\tanh(\tfrac{1}{2}\alpha r)).

Using the same arguments as in the proof of Proposition 19 in Section 4, bounding the term R(Tr^0𝔷)\mathbb{P}_{R}(T_{\widehat{r}_{0}}\leq\mathfrak{z}) analogously to what was done in (16), we have

x0(Trad0𝔷)r0(Tr^0<TR)+R(Tr^0𝔷)=Gr^0(r0)+O(𝔷eα(Rr^0)).\mathbb{P}_{x_{0}}(T^{rad}_{0}\leq\mathfrak{z})\;\leq\;\mathbb{P}_{r_{0}}(T_{\widehat{r}_{0}}<T_{R})+\mathbb{P}_{R}(T_{\widehat{r}_{0}}\leq\mathfrak{z})\;=\;G_{\widehat{r}_{0}}(r_{0})+O(\mathfrak{z}e^{-\alpha(R-\widehat{r}_{0})}). (34)

To bound from below the denominator of Gr^0(r0)G_{\widehat{r}_{0}}(r_{0}), note that the largest possible value of r^0\widehat{r}_{0} among points in 𝒟¯(𝔷)\overline{\mathcal{D}}^{(\mathfrak{z})} is obtained at r0=Rr_{0}=R and θ0=ϕ(R)+κAϕ(𝔷)\theta_{0}=\phi(R)+\kappa_{A}\phi^{(\mathfrak{z})}, so it follows from the definition of r^0\widehat{r}_{0}, θ^0\widehat{\theta}_{0}, ϕ()\phi(\cdot), and Part (i) in Fact 42, that

ϕ(r^0)ϕ(R)+13(θ0ϕ(R))=ϕ(R)+13κAϕ(𝔷)=Ω(κAeR/2).\phi(\widehat{r}_{0})\geq\phi(R)+\tfrac{1}{3}(\theta_{0}-\phi(R))=\phi(R)+\tfrac{1}{3}\kappa_{A}\phi^{(\mathfrak{z})}=\Omega(\kappa_{A}e^{-R/2}).

Also note that ϕ(r^0)=O(er^0/2)\phi(\widehat{r}_{0})=O(e^{-\widehat{r}_{0}/2}), so taking κA\kappa_{A} larger than a fixed constant gives Rr^0=Ω(1)R-\widehat{r}_{0}=\Omega(1), and hence

log(tanh(αR/2)tanh(αr^0/2))=log(1+22eα(r^0R)(eαR+1)(eαr^01))=log(1+Θ(eαr^0))=Θ(eαr^0).\log\Big{(}\frac{\tanh(\alpha R/2)}{\tanh(\alpha\widehat{r}_{0}/2)}\Big{)}\;=\;\log\Big{(}1+\frac{2-2e^{\alpha(\widehat{r}_{0}-R)}}{(e^{-\alpha R}+1)(e^{\alpha\widehat{r}_{0}}-1)}\Big{)}\,=\,\log\big{(}1+\Theta(e^{-\alpha\widehat{r}_{0}})\big{)}=\Theta(e^{-\alpha\widehat{r}_{0}}).
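For completeness, the first equality in the display above follows by writing $\tanh x=(e^{2x}-1)/(e^{2x}+1)$:

```latex
\frac{\tanh(\alpha R/2)}{\tanh(\alpha\widehat{r}_0/2)}
  =\frac{(e^{\alpha R}-1)(e^{\alpha\widehat{r}_0}+1)}{(e^{\alpha R}+1)(e^{\alpha\widehat{r}_0}-1)}
  =1+\frac{2(e^{\alpha R}-e^{\alpha\widehat{r}_0})}{(e^{\alpha R}+1)(e^{\alpha\widehat{r}_0}-1)}
  =1+\frac{2-2e^{\alpha(\widehat{r}_0-R)}}{(e^{-\alpha R}+1)(e^{\alpha\widehat{r}_0}-1)},
```

where the last step divides the numerator and the denominator of the fraction by $e^{\alpha R}$.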

From Part (ii) in Fact 42 it follows that r=Ω(logκR)=Ω(1)r^{\prime}=\Omega(\log\kappa_{R})=\Omega(1), and since r0rr_{0}\geq r^{\prime} (because x0𝒟¯(𝔷)x_{0}\in\overline{\mathcal{D}}^{(\mathfrak{z})}) we can argue as in the proof of Fact 17 that g(r0)=O(eαr0)g(r_{0})=O(e^{-\alpha r_{0}}), and hence combining with the previous bound we get Gr^0(r0)=O(eα(r0r^0))G_{\widehat{r}_{0}}(r_{0})=O(e^{-\alpha(r_{0}-\widehat{r}_{0})}). Notice that both terms bounding x0(Trad0𝔷)\mathbb{P}_{x_{0}}(T^{rad}_{0}\leq\mathfrak{z}) in (34) involve a factor eαr^0e^{\alpha\widehat{r}_{0}}, which by definition of r^0\widehat{r}_{0} and Part (v) of Lemma 8 is O((θ0θ^0)2α)O((\theta_{0}-\widehat{\theta}_{0})^{-2\alpha}), giving

x0(Trad0𝔷)=O((θ0θ^0)2αeαr0+𝔷(θ0θ^0)2αeαR).\mathbb{P}_{x_{0}}(T^{rad}_{0}\leq\mathfrak{z})\;=\;O\big{(}(\theta_{0}-\widehat{\theta}_{0})^{-2\alpha}e^{-\alpha r_{0}}+\mathfrak{z}(\theta_{0}-\widehat{\theta}_{0})^{-2\alpha}e^{-\alpha R}\big{)}. (35)

To obtain a uniform upper bound for x0(Trad0𝔷)\mathbb{P}_{x_{0}}(T^{rad}_{0}\leq\mathfrak{z}) on 𝒟¯(𝔷)\overline{\mathcal{D}}^{(\mathfrak{z})}, observe that for any fixed r0r_{0}, since x0𝒟¯(𝔷)x_{0}\in\overline{\mathcal{D}}^{(\mathfrak{z})}, we have θ0γ(r0)\theta_{0}\geq\gamma(r_{0}), where γ\gamma was defined at the beginning of this section, giving

θ0θ^0=13θ0+23ϕ(r0)13γ(r0)+23ϕ(r0)13κRe(β12)r0+ϕ(r0)13κRe(β12)r0\theta_{0}-\widehat{\theta}_{0}=\tfrac{1}{3}\theta_{0}+\tfrac{2}{3}\phi(r_{0})\geq\tfrac{1}{3}\gamma(r_{0})+\tfrac{2}{3}\phi(r_{0})\geq\tfrac{1}{3}\kappa_{R}e^{-(\beta\wedge\frac{1}{2})r_{0}}+\phi(r_{0})\geq\tfrac{1}{3}\kappa_{R}e^{-(\beta\wedge\frac{1}{2})r_{0}} (36)

and hence (θ0θ^0)2αeαr0=O((κR2e(12(β12))r0)α)(\theta_{0}-\widehat{\theta}_{0})^{-2\alpha}e^{-\alpha r_{0}}=O((\kappa_{R}^{2}e^{(1-2(\beta\wedge\frac{1}{2}))r_{0}})^{-\alpha}). For the case β12\beta\geq\frac{1}{2} this bound is O(κR2α)O(\kappa_{R}^{-2\alpha}), while for the case β<12\beta<\frac{1}{2} the exponential term is equal to e(12β)r0e^{(1-2\beta)r_{0}} which is at least e(12β)re^{(1-2\beta)r^{\prime}} since as already observed r0rr_{0}\geq r^{\prime}. By Part (ii) in Fact 42 we have r=1βlog(κR)O(1)r^{\prime}=\frac{1}{\beta}\log(\kappa_{R})-O(1). Thus,

κR2αeα(12β)r0κR2αeα(12β)r=O(κR2ακRαβ(12β))=O(κRαβ)\kappa_{R}^{-2\alpha}e^{-\alpha(1-2\beta)r_{0}}\leq\kappa_{R}^{-2\alpha}e^{-\alpha(1-2\beta)r^{\prime}}=O\big{(}\kappa_{R}^{-2\alpha}\kappa_{R}^{-\frac{\alpha}{\beta}(1-2\beta)}\big{)}=O\big{(}\kappa_{R}^{-\frac{\alpha}{\beta}}\big{)}

so putting both cases together we obtain a bound of the form O(κRα/(β12))O(\kappa_{R}^{-\alpha/(\beta\wedge\frac{1}{2})}). Next we address the term 𝔷(θ0θ^0)2αeαR\mathfrak{z}(\theta_{0}-\widehat{\theta}_{0})^{-2\alpha}e^{-\alpha R} appearing in (35). For rr0rr^{\prime}\leq r_{0}\leq r^{\prime\prime}, by (36), we have

𝔷(θ0θ^0)2αeαR=O(𝔷κR2αe2α(β12)r0eαR)\mathfrak{z}(\theta_{0}-\widehat{\theta}_{0})^{-2\alpha}e^{-\alpha R}=O\big{(}\mathfrak{z}\kappa_{R}^{-2\alpha}e^{2\alpha(\beta\wedge\frac{1}{2})r_{0}}e^{-\alpha R}\big{)}

which is an increasing function of r0r_{0} and hence within [r,r][r^{\prime},r^{\prime\prime}] the bound is maximized at rr^{\prime\prime}. Thus, it is enough to deal with points with radial component r0r_{0} in [r,R][r^{\prime\prime},R]. Now, for said points a similar argument to the one in (36) gives θ0θ^013κAϕ(𝔷)\theta_{0}-\widehat{\theta}_{0}\geq\tfrac{1}{3}\kappa_{A}\phi^{(\mathfrak{z})} and hence

𝔷(θ0θ^0)2αeαR=O(𝔷κA2α(eR2ϕ(𝔷))2α).\mathfrak{z}(\theta_{0}-\widehat{\theta}_{0})^{-2\alpha}e^{-\alpha R}=O\big{(}\mathfrak{z}\kappa_{A}^{-2\alpha}\big{(}e^{\frac{R}{2}}\phi^{(\mathfrak{z})}\big{)}^{-2\alpha}\big{)}.

The analysis of this term depends on α\alpha and β\beta, and we begin by addressing the case β<12\beta<\frac{1}{2} and α<2β\alpha<2\beta since it is the most delicate. Notice that in this case ϕ(𝔷)=eβR𝔷βα\phi^{(\mathfrak{z})}=e^{-\beta R}\mathfrak{z}^{\frac{\beta}{\alpha}} so in particular 𝔷(eR2ϕ(𝔷))2α=(𝔷eαR)12β\mathfrak{z}(e^{\frac{R}{2}}\phi^{(\mathfrak{z})})^{-2\alpha}=(\mathfrak{z}e^{-\alpha R})^{1-2\beta} and hence the upper bound is an increasing function of 𝔷\mathfrak{z}. Now, in order for 𝒟¯(𝔷)\overline{\mathcal{D}}^{(\mathfrak{z})} to be non-empty we need ϕ(R)+κAϕ(𝔷)π\phi(R)+\kappa_{A}\phi^{(\mathfrak{z})}\leq\pi, from which we deduce that 𝔷eαR=O(κAαβ)\mathfrak{z}e^{-\alpha R}=O(\kappa_{A}^{-\frac{\alpha}{\beta}}), which gives

𝔷κA2α(eR2ϕ(𝔷))2α=O(κA2α(𝔷eαR)12β)=O(κA2ακAαβ(12β))=O(κAαβ).\mathfrak{z}\kappa_{A}^{-2\alpha}(e^{\frac{R}{2}}\phi^{(\mathfrak{z})})^{-2\alpha}=O\big{(}\kappa_{A}^{-2\alpha}\big{(}\mathfrak{z}e^{-\alpha R}\big{)}^{1-2\beta}\big{)}=O\big{(}\kappa_{A}^{-2\alpha}\kappa_{A}^{-\frac{\alpha}{\beta}(1-2\beta)}\big{)}=O(\kappa_{A}^{-\frac{\alpha}{\beta}}).

The remaining cases are easier: If β12\beta\geq\frac{1}{2}, then eR2ϕ(𝔷)=𝔷12αe^{\frac{R}{2}}\phi^{(\mathfrak{z})}=\mathfrak{z}^{\frac{1}{2\alpha}} so the term 𝔷κA2α(eR2ϕ(𝔷))2α\mathfrak{z}\kappa_{A}^{-2\alpha}(e^{\frac{R}{2}}\phi^{(\mathfrak{z})})^{-2\alpha} is O(κA2α)O(\kappa_{A}^{-2\alpha}), while for the cases α=2β\alpha=2\beta and α>2β\alpha>2\beta it can be easily checked that since 𝔷=O(e2βR/log(e2βR))\mathfrak{z}=O(e^{2\beta R}/\log(e^{2\beta R})) and 𝔷=O(e2βR)\mathfrak{z}=O(e^{2\beta R}) respectively, we obtain 𝔷(eR2ϕ(𝔷))2α=o(1)\mathfrak{z}(e^{\frac{R}{2}}\phi^{(\mathfrak{z})})^{-2\alpha}=o(1). Since κR=κA\kappa_{R}=\kappa_{A} whenever α<2β\alpha<2\beta, putting together the bounds for both terms in (35) we finally obtain the sought-after bound:

x0(Trad0𝔷)=O(κRα/(β12))={eΩ(κA2), if α2β,O(κAα/(β12)), if α<2β.\mathbb{P}_{x_{0}}(T^{rad}_{0}\leq\mathfrak{z})\;=\;O(\kappa_{R}^{-\alpha/(\beta\wedge\frac{1}{2})})\;=\;\begin{cases}e^{-\Omega(\kappa_{A}^{2})},&\text{ if $\alpha\geq 2\beta$,}\\[2.0pt] O(\kappa_{A}^{-\alpha/(\beta\wedge\frac{1}{2})}),&\text{ if $\alpha<2\beta$.}\end{cases}

Next, we show that for κA=C\kappa_{A}=C we have 𝒟¯(𝔷)x0(Trad0𝔷)dμ(x0)=O(nϕ(𝔷))\int_{\overline{\mathcal{D}}^{(\mathfrak{z})}}\mathbb{P}_{x_{0}}(T^{rad}_{0}\leq\mathfrak{z})d\mu(x_{0})=O(n\phi^{(\mathfrak{z})}) by integrating both terms in (35) over 𝒟¯(𝔷)\overline{\mathcal{D}}^{(\mathfrak{z})}. For the first one we use that θ0θ^013θ0\theta_{0}-\widehat{\theta}_{0}\geq\frac{1}{3}\theta_{0} to obtain

𝒟¯(𝔷)(θ0θ^0)2αeαr0dμ(x0)=O(neαR)rRγ(r0)θ02αdθ0dr0=O(neαR)rR1γ(r0)2α1dr0.\int_{\overline{\mathcal{D}}^{(\mathfrak{z})}}(\theta_{0}-\widehat{\theta}_{0})^{-2\alpha}e^{-\alpha r_{0}}d\mu(x_{0})=O(ne^{-\alpha R})\int_{r^{\prime}}^{R}\int_{\gamma(r_{0})}^{\infty}\theta_{0}^{-2\alpha}d\theta_{0}dr_{0}=O(ne^{-\alpha R})\int_{r^{\prime}}^{R}\frac{1}{\gamma(r_{0})^{2\alpha-1}}dr_{0}.

Splitting the range of integration of the last integral into [r,r][r^{\prime},r^{\prime\prime}] and [r,R][r^{\prime\prime},R] we obtain

𝒟¯(𝔷)(θ0θ^0)2αeαr0dμ(x0)\displaystyle\int_{\overline{\mathcal{D}}^{(\mathfrak{z})}}(\theta_{0}-\widehat{\theta}_{0})^{-2\alpha}e^{-\alpha r_{0}}d\mu(x_{0}) =O(neαR(e(2α1)(β12)r+(Rr)(ϕ(𝔷))(2α1)))\displaystyle=O\big{(}ne^{-\alpha R}\big{(}e^{(2\alpha-1)(\beta\wedge\frac{1}{2})r^{\prime\prime}}+(R-r^{\prime\prime})(\phi^{(\mathfrak{z})})^{-(2\alpha-1)}\big{)}\big{)}
=O(neαR(ϕ(𝔷))(2α1)log(e(β12)Rϕ(𝔷)))\displaystyle=O\big{(}ne^{-\alpha R}(\phi^{(\mathfrak{z})})^{-(2\alpha-1)}\log(e^{(\beta\wedge\frac{1}{2})R}\phi^{(\mathfrak{z})})\big{)}

where the last equality is by definition of rr^{\prime\prime} and the fact that e(β12)Rϕ(𝔷)=Ω(1)e^{(\beta\wedge\frac{1}{2})R}\phi^{(\mathfrak{z})}=\Omega(1), which together imply that Rr=O(log(e(β12)Rϕ(𝔷)))R-r^{\prime\prime}=O(\log(e^{(\beta\wedge\frac{1}{2})R}\phi^{(\mathfrak{z})})). To address this last bound suppose first that β<12\beta<\frac{1}{2} and observe that since ϕ(𝔷)=Ω(eβR)O(1)\phi^{(\mathfrak{z})}=\Omega(e^{-\beta R})\cap O(1) we have eαR(ϕ(𝔷))2αlog(e(β12)Rϕ(𝔷))=O(Re2α(12β)R)=O(1)e^{-\alpha R}(\phi^{(\mathfrak{z})})^{-2\alpha}\log(e^{(\beta\wedge\frac{1}{2})R}\phi^{(\mathfrak{z})})=O\big{(}Re^{-2\alpha(\frac{1}{2}-\beta)R}\big{)}=O(1), while for β12\beta\geq\frac{1}{2} we have α<2β\alpha<2\beta, hence ϕ(𝔷)=eR2𝔷12α\phi^{(\mathfrak{z})}=e^{-\frac{R}{2}}\mathfrak{z}^{\frac{1}{2\alpha}} so eαR(ϕ(𝔷))2αlog(e(β12)Rϕ(𝔷))=O(𝔷1log𝔷)=O(1)e^{-\alpha R}(\phi^{(\mathfrak{z})})^{-2\alpha}\log(e^{(\beta\wedge\frac{1}{2})R}\phi^{(\mathfrak{z})})=O\left(\mathfrak{z}^{-1}\log\mathfrak{z}\right)=O(1) (since 𝔷=Ω(1)\mathfrak{z}=\Omega(1)) and we conclude that in either case

𝒟¯(𝔷)(θ0θ^0)2αeαr0dμ(x0)=O(nϕ(𝔷)).\int_{\overline{\mathcal{D}}^{(\mathfrak{z})}}(\theta_{0}-\widehat{\theta}_{0})^{-2\alpha}e^{-\alpha r_{0}}d\mu(x_{0})=O(n\phi^{(\mathfrak{z})}).

For the second term in (35) we use θ0θ^013θ0\theta_{0}-\widehat{\theta}_{0}\geq\frac{1}{3}\theta_{0} again, together with γ(r0)Cϕ(𝔷)\gamma(r_{0})\geq C\phi^{(\mathfrak{z})} to obtain

𝒟¯(𝔷)𝔷(θ0θ^0)2αeαRdμ(x0)\displaystyle\int_{\overline{\mathcal{D}}^{(\mathfrak{z})}}\mathfrak{z}(\theta_{0}-\widehat{\theta}_{0})^{-2\alpha}e^{-\alpha R}d\mu(x_{0}) =O(𝔷neαR)rRCϕ(𝔷)θ02αdθ0eα(Rr0)dr0\displaystyle=O(\mathfrak{z}ne^{-\alpha R})\int_{r^{\prime}}^{R}\int_{C\phi^{(\mathfrak{z})}}^{\infty}\theta_{0}^{-2\alpha}d\theta_{0}e^{-\alpha(R-r_{0})}dr_{0}
=O(𝔷neαR(ϕ(𝔷))(2α1)),\displaystyle=O\big{(}\mathfrak{z}ne^{-\alpha R}(\phi^{(\mathfrak{z})})^{-(2\alpha-1)}\big{)},

and it can be checked directly from the definition of ϕ(𝔷)\phi^{(\mathfrak{z})} and our assumption 𝔷=Ω(1)\mathfrak{z}=\Omega(1) that this last expression is also O(nϕ(𝔷))O(n\phi^{(\mathfrak{z})}). ∎

The previous proposition gave the uniform bound for the first term appearing in (33), as well as a bound for its integral over 𝒟¯(𝔷)\overline{\mathcal{D}}^{(\mathfrak{z})}. The following proposition allows us to bound from above the second term in (33) with the use of Proposition 39 to handle the function x0(I(𝔷)σ)\mathbb{P}_{x_{0}}(I(\mathfrak{z})\geq\sigma). Recall that in order to apply said proposition we require σ\sigma to be larger than σ0\sigma_{0} with

σ0:={𝔏e2βR(v0𝔷)12βα,if α2β,𝔏e2βRv0𝔷log(v0𝔷),if α=2β,\sigma_{0}:=\begin{cases}\mathfrak{L}^{\prime}e^{-2\beta R}(v_{0}\mathfrak{z})^{1\vee\frac{2\beta}{\alpha}},&\text{if $\alpha\neq 2\beta$,}\\[2.0pt] \mathfrak{L}^{\prime}e^{-2\beta R}v_{0}\mathfrak{z}\log(v_{0}\mathfrak{z}),&\text{if $\alpha=2\beta$,}\end{cases} (37)

where 𝔏\mathfrak{L}^{\prime} is a large constant that depends on α\alpha and β\beta, and where v0v_{0} is some function of x0x_{0} which we only require to satisfy v0>4cv_{0}>\frac{4}{c}, where 0<c<10<c<1 is a constant lower bound for 𝔷\mathfrak{z} (such a bound exists since 𝔷=Ω(1)\mathfrak{z}=\Omega(1)).

Proposition 44.

Let v0:=v0(x0)v_{0}:=v_{0}(x_{0}) be a function satisfying v0>4cv_{0}>\frac{4}{c}, and let σ0\sigma_{0} and θ^0\widehat{\theta}_{0} be as in (37) and (32), respectively. Assume that κAC\kappa_{A}\geq C, where C>0C>0 is as in the statement of Proposition 41, and that θ^02=Ω(σ0)\widehat{\theta}_{0}^{2}=\Omega(\sigma_{0}). Then

0θ^0σ3/2eθ^022σx0(I(𝔷)σ)dσ=O(eθ^022σ0+e116v02𝔷+θ^0αβeαr0+v0𝔷eαR+𝔱5),\int_{0}^{\infty}\tfrac{\widehat{\theta}_{0}}{\sigma^{3/2}}e^{-\frac{\widehat{\theta}_{0}^{2}}{2\sigma}}\mathbb{P}_{x_{0}}(I(\mathfrak{z})\geq\sigma)d\sigma=O\big{(}e^{-\frac{\widehat{\theta}_{0}^{2}}{2\sigma_{0}}}+e^{-\frac{1}{16}v_{0}^{2}\mathfrak{z}}+\widehat{\theta}_{0}^{-\frac{\alpha}{\beta}}e^{-\alpha r_{0}}+v_{0}\mathfrak{z}e^{-\alpha R}+\mathfrak{t}_{5}\big{)}, (38)

where

𝔱5={(v0𝔷)1α4βeαRθ^0αβ, if α2β,01wθ^02σ0ewθ^022σ0(𝔏log(v0𝔷))w1dw, if α=2β.\mathfrak{t}_{5}=\begin{cases}(v_{0}\mathfrak{z})^{1\vee\frac{\alpha}{4\beta}}e^{-\alpha R}\widehat{\theta}_{0}^{-\frac{\alpha}{\beta}},&\text{ if $\alpha\neq 2\beta$,}\\[2.0pt] \int_{0}^{1}\sqrt{\tfrac{w\widehat{\theta}_{0}^{2}}{\sigma_{0}}}e^{-\frac{w\widehat{\theta}_{0}^{2}}{2\sigma_{0}}}\left(\mathfrak{L}^{\prime}\log(v_{0}\mathfrak{z})\right)^{w-1}dw,&\text{ if $\alpha=2\beta$}.\end{cases}
Proof.

Fix any given x0𝒟¯(𝔷)x_{0}\in\overline{\mathcal{D}}^{(\mathfrak{z})} and split 0θ^0σ3/2eθ^022σx0(I(𝔷)σ)dσ\int_{0}^{\infty}\frac{\widehat{\theta}_{0}}{\sigma^{3/2}}e^{-\frac{\widehat{\theta}_{0}^{2}}{2\sigma}}\mathbb{P}_{x_{0}}(I(\mathfrak{z})\geq\sigma)d\sigma into

0σ0θ^0σ3/2eθ^022σx0(I(𝔷)σ)dσ+σ0θ^0σ3/2eθ^022σx0(I(𝔷)σ)dσ.\int_{0}^{\sigma_{0}}\frac{\widehat{\theta}_{0}}{\sigma^{3/2}}e^{-\frac{\widehat{\theta}_{0}^{2}}{2\sigma}}\mathbb{P}_{x_{0}}\big{(}I(\mathfrak{z})\geq\sigma\big{)}d\sigma+\int_{\sigma_{0}}^{\infty}\frac{\widehat{\theta}_{0}}{\sigma^{3/2}}e^{-\frac{\widehat{\theta}_{0}^{2}}{2\sigma}}\mathbb{P}_{x_{0}}(I(\mathfrak{z})\geq\sigma)d\sigma. (39)

For the first integral, we can bound the probability inside it by 11, which using that θ^02/σ0=Ω(1)\widehat{\theta}_{0}^{2}/\sigma_{0}=\Omega(1) and the change of variables w:=θ^02/σw:=\widehat{\theta}_{0}^{2}/\sigma gives a bound

0σ0θ^0σ3/2eθ^022σdσ=θ^02σ01wew2dw=O(eθ^022σ0).\int_{0}^{\sigma_{0}}\frac{\widehat{\theta}_{0}}{\sigma^{3/2}}e^{-\frac{\widehat{\theta}_{0}^{2}}{2\sigma}}d\sigma\,=\,\int_{\frac{\widehat{\theta}_{0}^{2}}{\sigma_{0}}}^{\infty}\frac{1}{\sqrt{w}}e^{-\frac{w}{2}}dw\,=\,O\big{(}e^{-\frac{\widehat{\theta}_{0}^{2}}{2\sigma_{0}}}\big{)}.
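Spelling out the change of variables $w:=\widehat{\theta}_0^{2}/\sigma$ used in the display above:

```latex
\sigma=\frac{\widehat{\theta}_0^{2}}{w},\qquad
d\sigma=-\frac{\widehat{\theta}_0^{2}}{w^{2}}\,dw,\qquad
\frac{\widehat{\theta}_0}{\sigma^{3/2}}\,d\sigma=-\frac{dw}{\sqrt{w}},
```

and, writing $a:=\widehat{\theta}_0^{2}/\sigma_0=\Omega(1)$, the tail integral satisfies $\int_a^\infty w^{-1/2}e^{-w/2}\,dw\le a^{-1/2}\int_a^\infty e^{-w/2}\,dw=2a^{-1/2}e^{-a/2}=O(e^{-a/2})$.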

For the second integral in (39), we use (28) to bound the probability within together with Proposition 39 (since σ>σ0\sigma>\sigma_{0}) and obtain

x0(I(𝔷)σ)=O(e116v02𝔷+eαr0+σα2βeαr0+v0𝔷eαR+((v0𝔷)1α4βσα2βeαR)1𝔢)\mathbb{P}_{x_{0}}(I(\mathfrak{z})\geq\sigma)\,=\,O\big{(}e^{-\frac{1}{16}v_{0}^{2}\mathfrak{z}}+e^{-\alpha r_{0}}+\sigma^{-\frac{\alpha}{2\beta}}e^{-\alpha r_{0}}+v_{0}\mathfrak{z}e^{-\alpha R}+\big{(}(v_{0}\mathfrak{z})^{1\vee\frac{\alpha}{4\beta}}\sigma^{-\frac{\alpha}{2\beta}}e^{-\alpha R}\big{)}^{1-\mathfrak{e}}\big{)} (40)

where 𝔢:=1σe2βR𝔏v0𝔷log(v0𝔷)=σ0σ\mathfrak{e}:=\frac{1}{\sigma e^{2\beta R}}\mathfrak{L}^{\prime}v_{0}\mathfrak{z}\log(v_{0}\mathfrak{z})=\frac{\sigma_{0}}{\sigma} if α=2β\alpha=2\beta, and 𝔢=0\mathfrak{e}=0 otherwise. Grouping the first, second and fourth terms in (40), the assumption θ^02/σ0=Ω(1)\widehat{\theta}_{0}^{2}/\sigma_{0}=\Omega(1) gives

σ0θ^0σ3/2eθ^022σ(e116v02𝔷+eαr0+v0𝔷eαR)dσ=O(e116v02𝔷+eαr0+v0𝔷eαR)\int_{\sigma_{0}}^{\infty}\frac{\widehat{\theta}_{0}}{\sigma^{3/2}}e^{-\frac{\widehat{\theta}_{0}^{2}}{2\sigma}}\big{(}e^{-\frac{1}{16}v_{0}^{2}\mathfrak{z}}+e^{-\alpha r_{0}}+v_{0}\mathfrak{z}e^{-\alpha R}\big{)}d\sigma=O\big{(}e^{-\frac{1}{16}v_{0}^{2}\mathfrak{z}}+e^{-\alpha r_{0}}+v_{0}\mathfrak{z}e^{-\alpha R}\big{)}

since the terms are independent of σ\sigma and σ0θ^0σ3/2eθ^022σdσ=0θ^02/σ01wew2dw=O(1)\int_{\sigma_{0}}^{\infty}\frac{\widehat{\theta}_{0}}{\sigma^{3/2}}e^{-\frac{\widehat{\theta}_{0}^{2}}{2\sigma}}d\sigma=\int_{0}^{\widehat{\theta}_{0}^{2}/\sigma_{0}}\frac{1}{\sqrt{w}}e^{-\frac{w}{2}}dw=O(1). For the term σα2βeαr0\sigma^{-\frac{\alpha}{2\beta}}e^{-\alpha r_{0}} analogous computations give

σ0θ^0σ3/2eθ^022σσα2βeαr0dσ=eαr0θ^0αβ0θ^02σ0wα2β12ew2dw=O(eαr0θ^0αβ).\int_{\sigma_{0}}^{\infty}\frac{\widehat{\theta}_{0}}{\sigma^{3/2}}e^{-\frac{\widehat{\theta}_{0}^{2}}{2\sigma}}\sigma^{-\frac{\alpha}{2\beta}}e^{-\alpha r_{0}}d\sigma\,=\,e^{-\alpha r_{0}}\widehat{\theta}_{0}^{-\frac{\alpha}{\beta}}\int_{0}^{\frac{\widehat{\theta}_{0}^{2}}{\sigma_{0}}}w^{\frac{\alpha}{2\beta}-\frac{1}{2}}e^{-\frac{w}{2}}dw\,=\,O\big{(}e^{-\alpha r_{0}}\widehat{\theta}_{0}^{-\frac{\alpha}{\beta}}\big{)}.
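The last integral above is bounded uniformly in $\widehat{\theta}_0$ and $\sigma_0$ by a Gamma integral:

```latex
\int_{0}^{\frac{\widehat{\theta}_0^{2}}{\sigma_0}} w^{\frac{\alpha}{2\beta}-\frac{1}{2}}e^{-\frac{w}{2}}\,dw
\;\le\;\int_{0}^{\infty} w^{\frac{\alpha}{2\beta}-\frac{1}{2}}e^{-\frac{w}{2}}\,dw
\;=\;2^{\frac{\alpha}{2\beta}+\frac{1}{2}}\,\Gamma\Big(\frac{\alpha}{2\beta}+\frac{1}{2}\Big)\;<\;\infty.
```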

The final term in (40) is a bit more delicate, and must be treated differently according to whether α=2β\alpha=2\beta or α2β\alpha\neq 2\beta. In the latter case we can repeat the computations used in the previous term, giving

σ0θ^0σ3/2eθ^022σ(v0𝔷)1α4βσα2βeαRdσ=O((v0𝔷)1α4βeαRθ^0αβ),\int_{\sigma_{0}}^{\infty}\frac{\widehat{\theta}_{0}}{\sigma^{3/2}}e^{-\frac{\widehat{\theta}_{0}^{2}}{2\sigma}}(v_{0}\mathfrak{z})^{1\vee\frac{\alpha}{4\beta}}\sigma^{-\frac{\alpha}{2\beta}}e^{-\alpha R}d\sigma=O\big{(}(v_{0}\mathfrak{z})^{1\vee\frac{\alpha}{4\beta}}e^{-\alpha R}\widehat{\theta}_{0}^{-\frac{\alpha}{\beta}}\big{)},

whereas if α=2β\alpha=2\beta, we use the change of variables w:=σ0σw:=\frac{\sigma_{0}}{\sigma} and the fact that v0𝔷σα2βeαR=σ0/(σ𝔏log(v0𝔷))=w/(𝔏log(v0𝔷))v_{0}\mathfrak{z}\sigma^{-\frac{\alpha}{2\beta}}e^{-\alpha R}=\sigma_{0}/(\sigma\mathfrak{L}^{\prime}\log(v_{0}\mathfrak{z}))=w/(\mathfrak{L}^{\prime}\log(v_{0}\mathfrak{z})) so that

σ0θ^0σ3/2eθ^022σ((v0𝔷)1α4βσα2βeαR)1σ0σdσ\displaystyle\int_{\sigma_{0}}^{\infty}\frac{\widehat{\theta}_{0}}{\sigma^{3/2}}e^{-\frac{\widehat{\theta}_{0}^{2}}{2\sigma}}\big{(}(v_{0}\mathfrak{z})^{1\vee\frac{\alpha}{4\beta}}\sigma^{-\frac{\alpha}{2\beta}}e^{-\alpha R}\big{)}^{1-\frac{\sigma_{0}}{\sigma}}d\sigma =01θ^0σ0wewθ^022σ0(w𝔏log(v0𝔷))1wdw\displaystyle=\int_{0}^{1}\frac{\widehat{\theta}_{0}}{\sqrt{\sigma_{0}w}}e^{-\frac{w\widehat{\theta}_{0}^{2}}{2\sigma_{0}}}\Big{(}\frac{w}{\mathfrak{L}^{\prime}\log(v_{0}\mathfrak{z})}\Big{)}^{1-w}dw
=O(01wθ^02σ0ewθ^022σ0(𝔏log(v0𝔷))w1dw).\displaystyle=O\Big{(}\int_{0}^{1}\sqrt{\tfrac{w\widehat{\theta}_{0}^{2}}{\sigma_{0}}}e^{-\frac{w\widehat{\theta}_{0}^{2}}{2\sigma_{0}}}\Big{(}\mathfrak{L}^{\prime}\log(v_{0}\mathfrak{z})\Big{)}^{w-1}dw\Big{)}.
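For completeness, the substitution $w:=\sigma_0/\sigma$ in the display above amounts to:

```latex
\sigma=\frac{\sigma_0}{w},\qquad
d\sigma=-\frac{\sigma_0}{w^{2}}\,dw,\qquad
\frac{\widehat{\theta}_0}{\sigma^{3/2}}\,d\sigma=-\frac{\widehat{\theta}_0}{\sqrt{\sigma_0\,w}}\,dw,
```

and the final $O(\cdot)$ bound uses $w^{1-w}=w\cdot w^{-w}\le e^{1/e}\,w\le e^{1/e}\sqrt{w}$ for $0<w\le 1$, since $w^{-w}=e^{-w\log w}\le e^{1/e}$ there.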

The result then follows by adding all the bounds and noticing that eαr0=O(θ^0αβeαr0)e^{-\alpha r_{0}}=O(\widehat{\theta}_{0}^{-\frac{\alpha}{\beta}}e^{-\alpha r_{0}}). ∎

We are now ready to prove the main result of this section.

Proof of Proposition 41.

Using (33) and Proposition 43 it will be enough to obtain a uniform bound for 0θ^0σ3/2eθ^022σx0(I(𝔷)σ)dσ\int_{0}^{\infty}\frac{\widehat{\theta}_{0}}{\sigma^{3/2}}e^{-\frac{\widehat{\theta}_{0}^{2}}{2\sigma}}\mathbb{P}_{x_{0}}(I(\mathfrak{z})\geq\sigma)d\sigma on 𝒟¯(𝔷)\overline{\mathcal{D}}^{(\mathfrak{z})} as well as an upper bound for the integral of this term over this set. We will address the uniform bounds first, since calculations are easier, and then turn to the upper bound for the integrals over 𝒟¯(𝔷)\overline{\mathcal{D}}^{(\mathfrak{z})}, whose analysis is similar.

Uniform upper bound: Our goal is to show that under the hypothesis 𝔷=ω(1)\mathfrak{z}=\omega(1),

supx0𝒟¯(𝔷)0θ^0σ3/2eθ^022σx0(I(𝔷)σ)dσ={eΩ(κA2), if α2β,O(κAα/(β12)), if α<2β.\sup_{x_{0}\in\overline{\mathcal{D}}^{(\mathfrak{z})}}\int_{0}^{\infty}\tfrac{\widehat{\theta}_{0}}{\sigma^{3/2}}e^{-\frac{\widehat{\theta}_{0}^{2}}{2\sigma}}\mathbb{P}_{x_{0}}(I(\mathfrak{z})\geq\sigma)d\sigma=\begin{cases}e^{-\Omega(\kappa_{A}^{2})},&\text{ if $\alpha\geq 2\beta$,}\\[2.0pt] O(\kappa_{A}^{-\alpha/(\beta\wedge\frac{1}{2})}),&\text{ if $\alpha<2\beta$}.\end{cases} (41)

Throughout this analysis, we take v0v_{0} constant and equal to 4c\frac{4}{c}. With this choice of v0v_{0} we deduce from Part (iv) in Fact 42 that σ0=Θ((ϕ(𝔷))24β)=O((γ(r0)ϕ(r0))2)=O(θ^02)\sigma_{0}=\Theta((\phi^{(\mathfrak{z})})^{2\vee 4\beta})=O((\gamma(r_{0})-\phi(r_{0}))^{2})=O(\widehat{\theta}_{0}^{2}) and hence we have (38) as in Proposition 44, so we address each term appearing in the bound as follows:

We assume throughout the proof of the uniform upper bound that x0=(r0,θ0)𝒟¯(𝔷)x_{0}=(r_{0},\theta_{0})\in\overline{\mathcal{D}}^{(\mathfrak{z})}; in particular this set is non-empty, so we must have κAϕ(𝔷)+ϕ(R)π\kappa_{A}\phi^{(\mathfrak{z})}+\phi(R)\leq\pi.

  • For the term eθ^022σ0e^{-\frac{\widehat{\theta}_{0}^{2}}{2\sigma_{0}}} it follows from the definition of θ^0\widehat{\theta}_{0} that it is equal to e29σ0(|θ0|ϕ(r0))2e^{-\frac{2}{9\sigma_{0}}(|\theta_{0}|-\phi(r_{0}))^{2}} and since |θ0|γ(r0)|\theta_{0}|\geq\gamma(r_{0}) by Part (iv) in Fact 42 we deduce (|θ0|ϕ(r0))2=Ω(κA2(ϕ(𝔷))2)(|\theta_{0}|-\phi(r_{0}))^{2}=\Omega(\kappa_{A}^{2}(\phi^{(\mathfrak{z})})^{2}). Recalling that σ0=Θ((ϕ(𝔷))24β)\sigma_{0}=\Theta((\phi^{(\mathfrak{z})})^{2\vee 4\beta}), the term eθ^022σ0e^{-\frac{\widehat{\theta}_{0}^{2}}{2\sigma_{0}}} equals eΩ(κA2)e^{-\Omega(\kappa_{A}^{2})}, which is of at most the same order as the bound claimed in (41).

  • The term e116v02𝔷e^{-\frac{1}{16}v_{0}^{2}\mathfrak{z}} appearing in (38) does not depend on x0x_{0} and since we are assuming 𝔷=ω(1)\mathfrak{z}=\omega(1), it becomes o(1)o(1), so it is negligible.

  • Now, we consider the term θ^0αβeαr0\widehat{\theta}_{0}^{-\frac{\alpha}{\beta}}e^{-\alpha r_{0}}. By definition θ^0=Θ(|θ0|ϕ(r0))\widehat{\theta}_{0}=\Theta(|\theta_{0}|-\phi(r_{0})). Since |θ0|γ(r0)ϕ(r0)|\theta_{0}|\geq\gamma(r_{0})\geq\phi(r_{0}) for x0𝒟¯(𝔷)x_{0}\in\overline{\mathcal{D}}^{(\mathfrak{z})}, by Part (iv) in Fact 42, we have

    (|θ0|ϕ(r0))αβeαr0=O(κRαβeα((112β)1)r0).(|\theta_{0}|-\phi(r_{0}))^{-\frac{\alpha}{\beta}}e^{-\alpha r_{0}}=O\big{(}\kappa_{R}^{-\frac{\alpha}{\beta}}e^{\alpha((1\wedge\frac{1}{2\beta})-1)r_{0}}\big{)}.

    If β12\beta\leq\frac{1}{2}, this bound is O(κRαβ)O(\kappa_{R}^{-\frac{\alpha}{\beta}}) independently of r0r_{0}, whilst if β>12\beta>\frac{1}{2}, using that r0rr_{0}\geq r^{\prime}, the bound is O(κRαβ/eα2β(2β1)r0)=O(κRαβ/eα2β(2β1)r)O(\kappa_{R}^{-\frac{\alpha}{\beta}}/e^{\frac{\alpha}{2\beta}(2\beta-1)r_{0}})=O(\kappa_{R}^{-\frac{\alpha}{\beta}}/e^{\frac{\alpha}{2\beta}(2\beta-1)r^{\prime}}). Recalling that, by Part (ii) in Fact 42, we know that r=2logκRO(1)r^{\prime}=2\log\kappa_{R}-O(1), replacing this value in the previous bound we obtain a term of order κR2α\kappa_{R}^{-2\alpha}. Summarizing, we have shown that θ^0αβeαr0=O(κRα(21β))\widehat{\theta}_{0}^{-\frac{\alpha}{\beta}}e^{-\alpha r_{0}}=O(\kappa_{R}^{-\alpha(2\vee\frac{1}{\beta})}) and the result then follows by our choice of κR\kappa_{R}.

  • For the term v0𝔷eαRv_{0}\mathfrak{z}e^{-\alpha R} we observe that it is independent of x0x_{0}, and under the assumptions 𝔷=O(eαR/R)\mathfrak{z}=O(e^{\alpha R}/R) if α=2β\alpha=2\beta and 𝔷=O(e2βR)\mathfrak{z}=O(e^{2\beta R}) if α>2β\alpha>2\beta in the statement of Proposition 37 we obtain that 𝔷eαR=o(1)\mathfrak{z}e^{-\alpha R}=o(1) and hence the term is negligible. In the case α<2β\alpha<2\beta, since κAϕ(𝔷)+ϕ(R)π\kappa_{A}\phi^{(\mathfrak{z})}+\phi(R)\leq\pi we obtain 𝔷eαR=O(κAα/(β12))\mathfrak{z}e^{-\alpha R}=O(\kappa_{A}^{-\alpha/(\beta\wedge\frac{1}{2})}) by definition of ϕ(𝔷)\phi^{(\mathfrak{z})}.

  • To address the term 𝔱5\mathfrak{t}_{5} we first treat the case α2β\alpha\neq 2\beta. Using the definition of θ^0\widehat{\theta}_{0} and the fact that θ0γ(r0)ϕ(r0)\theta_{0}\geq\gamma(r_{0})\geq\phi(r_{0}) this term is of order

    𝔷1α4βθ^0αβeαR=O(𝔷1α4β(γ(r0)ϕ(r0))αβeαR)\mathfrak{z}^{1\vee\frac{\alpha}{4\beta}}\widehat{\theta}_{0}^{-\frac{\alpha}{\beta}}e^{-\alpha R}=O\big{(}\mathfrak{z}^{1\vee\frac{\alpha}{4\beta}}(\gamma(r_{0})-\phi(r_{0}))^{-\frac{\alpha}{\beta}}e^{-\alpha R}\big{)}

    and using Part (iv) in Fact 42 we deduce

    𝔷1α4βθ^0αβeαR=O(𝔷1α4β(κAϕ(𝔷))αβeαR)=O((κA𝔷(βα14)eβRϕ(𝔷))αβ).\mathfrak{z}^{1\vee\frac{\alpha}{4\beta}}\widehat{\theta}_{0}^{-\frac{\alpha}{\beta}}e^{-\alpha R}=O\big{(}\mathfrak{z}^{1\vee\frac{\alpha}{4\beta}}(\kappa_{A}\phi^{(\mathfrak{z})})^{-\frac{\alpha}{\beta}}e^{-\alpha R}\big{)}=O((\kappa_{A}\mathfrak{z}^{-(\frac{\beta}{\alpha}\vee\frac{1}{4})}e^{\beta R}\phi^{(\mathfrak{z})})^{-\frac{\alpha}{\beta}}). (42)

    From the definition of ϕ(𝔷)\phi^{(\mathfrak{z})} we have

    𝔷(βα14)eβRϕ(𝔷)={(eR2𝔷12α)0(2β1), if α<2β,𝔷(12βα)14, if α>2β,\mathfrak{z}^{-(\frac{\beta}{\alpha}\vee\frac{1}{4})}e^{\beta R}\phi^{(\mathfrak{z})}=\begin{cases}(e^{\frac{R}{2}}\mathfrak{z}^{-\frac{1}{2\alpha}})^{0\vee(2\beta-1)},&\text{ if $\alpha<2\beta$,}\\[2.0pt] \mathfrak{z}^{(\frac{1}{2}-\frac{\beta}{\alpha})\vee\frac{1}{4}},&\text{ if $\alpha>2\beta$,}\end{cases}

    where we directly observe that if α>2β\alpha>2\beta this expression is ω(1)\omega(1) (since 𝔷=ω(1)\mathfrak{z}=\omega(1)), and hence the upper bound is o(1)o(1) so in particular it has the form eΩ(κA2)e^{-\Omega(\kappa_{A}^{2})} as desired. For the case α<2β\alpha<2\beta we distinguish between the cases β12\beta\leq\frac{1}{2} and β>12\beta>\frac{1}{2}: In the former we directly obtain that 𝔷1α4β(κAϕ(𝔷))αβeαR=O(κAαβ)\mathfrak{z}^{1\vee\frac{\alpha}{4\beta}}(\kappa_{A}\phi^{(\mathfrak{z})})^{-\frac{\alpha}{\beta}}e^{-\alpha R}=O(\kappa_{A}^{-\frac{\alpha}{\beta}}), whereas if β>12\beta>\frac{1}{2}, since ϕ(R)+κAϕ(𝔷)π\phi(R)+\kappa_{A}\phi^{(\mathfrak{z})}\leq\pi and by definition of ϕ(𝔷)\phi^{(\mathfrak{z})}, we have 𝔷(βα14)eβRϕ(𝔷)=(1/ϕ(𝔷))2β1=Ω(κA2β1)\mathfrak{z}^{-(\frac{\beta}{\alpha}\vee\frac{1}{4})}e^{\beta R}\phi^{(\mathfrak{z})}=(1/\phi^{(\mathfrak{z})})^{2\beta-1}=\Omega(\kappa_{A}^{2\beta-1}) and obtain an upper bound of O(κAαβκAαβ(2β1))=O(κA2α)O\big{(}\kappa_{A}^{-\frac{\alpha}{\beta}}\kappa_{A}^{-\frac{\alpha}{\beta}(2\beta-1)}\big{)}=O(\kappa_{A}^{-2\alpha}).

    Assume now that α=2β\alpha=2\beta so that 𝔱5=01wθ^02σ0ewθ^022σ0(𝔏log(v0𝔷))w1dw\mathfrak{t}_{5}=\int_{0}^{1}\sqrt{\tfrac{w\widehat{\theta}_{0}^{2}}{\sigma_{0}}}e^{-\frac{w\widehat{\theta}_{0}^{2}}{2\sigma_{0}}}\left(\mathfrak{L}^{\prime}\log(v_{0}\mathfrak{z})\right)^{w-1}dw, and notice that since the function xxex2/2x\to xe^{-x^{2}/2} is O(1)O(1) on +\mathbb{R}^{+} and 𝔷=ω(1)\mathfrak{z}=\omega(1), we obtain

    𝔱5=O(01(𝔏log(v0𝔷))w1dw)=o(1)\mathfrak{t}_{5}=O\Big{(}\int_{0}^{1}(\mathfrak{L}^{\prime}\log(v_{0}\mathfrak{z}))^{w-1}dw\Big{)}=o(1)

    and hence the term is negligible in this case.
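The last estimate is the elementary identity, with $a:=\mathfrak{L}'\log(v_0\mathfrak{z})=\omega(1)$ (as $\mathfrak{z}=\omega(1)$ and $v_0$ is constant here):

```latex
\int_{0}^{1} a^{w-1}\,dw
=\frac{1}{a}\int_{0}^{1} e^{w\log a}\,dw
=\frac{1}{a}\cdot\frac{a-1}{\log a}
=\frac{1-a^{-1}}{\log a}
=O\Big(\frac{1}{\log a}\Big)=o(1).
```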

Integral upper bound: Our goal is to show that under the hypothesis κA=C\kappa_{A}=C for CC large and 𝔷=Ω(1)\mathfrak{z}=\Omega(1),

𝒟¯(𝔷)0θ^0σ3/2eθ^022σx0(I(𝔷)σ)dσdx0=O(nϕ(𝔷)).\int_{\overline{\mathcal{D}}^{(\mathfrak{z})}}\int_{0}^{\infty}\tfrac{\widehat{\theta}_{0}}{\sigma^{3/2}}e^{-\frac{\widehat{\theta}_{0}^{2}}{2\sigma}}\mathbb{P}_{x_{0}}\big{(}I(\mathfrak{z})\geq\sigma\big{)}d\sigma dx_{0}=O(n\phi^{(\mathfrak{z})}).

In contrast to the choice in the uniform upper bound analysis, we will now choose

v0(x0):=4c(|θ0|ϕ(r0)γ(r0)ϕ(r0))εv_{0}(x_{0})\;:=\;\frac{4}{c}\Big{(}\frac{|\theta_{0}|-\phi(r_{0})}{\gamma(r_{0})-\phi(r_{0})}\Big{)}^{\varepsilon}

for some fixed ε>0\varepsilon>0 satisfying ε<(2α1)(1α2β)\varepsilon<(2\alpha-1)(1\wedge\frac{\alpha}{2\beta}) (this is possible since α>12\alpha>\frac{1}{2}) and where 0<c<10<c<1 is a lower bound for 𝔷\mathfrak{z}. It will be convenient to define ε=ε/(1α2β)<1\varepsilon^{\prime}=\varepsilon/(1\wedge\frac{\alpha}{2\beta})<1 since it will appear many times in subsequent computations. In order to apply Proposition 44 we need to show that θ^02=Ω(σ0)\widehat{\theta}_{0}^{2}=\Omega(\sigma_{0}), which is less straightforward than the uniform bound case since σ0\sigma_{0} is also increasing with θ0\theta_{0}. In the case α2β\alpha\neq 2\beta, observe that |θ0|ϕ(r0)=Ω(v01/ε(γ(r0)ϕ(r0)))|\theta_{0}|-\phi(r_{0})=\Omega(v_{0}^{1/\varepsilon}(\gamma(r_{0})-\phi(r_{0}))) while at the same time γ(r0)ϕ(r0)=Ω(ϕ(𝔷))=Ω(eβR𝔷12βα)=Ω(v0ε2εσ0)\gamma(r_{0})-\phi(r_{0})=\Omega(\phi^{(\mathfrak{z})})=\Omega(e^{-\beta R}\mathfrak{z}^{\frac{1}{2}\vee\frac{\beta}{\alpha}})=\Omega(v_{0}^{-\frac{\varepsilon^{\prime}}{2\varepsilon}}\sqrt{\sigma_{0}}). We conclude that

θ^02σ0=49σ0(|θ0|ϕ(r0))2=Ω(v02εε),\frac{\widehat{\theta}_{0}^{2}}{\sigma_{0}}=\frac{4}{9\sigma_{0}}(|\theta_{0}|-\phi(r_{0}))^{2}=\Omega(v_{0}^{\frac{2-\varepsilon^{\prime}}{\varepsilon}}), (43)

which is Ω(1)\Omega(1) since v0=Ω(1)v_{0}=\Omega(1) and ε<2\varepsilon^{\prime}<2. Analogously, if α=2β\alpha=2\beta we still have |θ0|ϕ(r0)=Ω(v01/ε(γ(r0)ϕ(r0)))|\theta_{0}|-\phi(r_{0})=\Omega(v_{0}^{1/\varepsilon}(\gamma(r_{0})-\phi(r_{0}))) whereas this time γ(r0)ϕ(r0)=Ω(σ0log𝔷v0log(v0𝔷))\gamma(r_{0})-\phi(r_{0})=\Omega(\sqrt{\frac{\sigma_{0}\log\mathfrak{z}}{v_{0}\log(v_{0}\mathfrak{z})}}) so that

θ^02σ0=49σ0(|θ0|ϕ(r0))2=Ω(v02εεlog𝔷log(v0𝔷)),\frac{\widehat{\theta}_{0}^{2}}{\sigma_{0}}=\frac{4}{9\sigma_{0}}(|\theta_{0}|-\phi(r_{0}))^{2}=\Omega(v_{0}^{\frac{2-\varepsilon}{\varepsilon}}\tfrac{\log\mathfrak{z}}{\log(v_{0}\mathfrak{z})}), (44)

which is again Ω(1)\Omega(1) since both 𝔷=Ω(1)\mathfrak{z}=\Omega(1) and v0=Ω(1)v_{0}=\Omega(1). We have thus proved that in either case θ^02=Ω(σ0)\widehat{\theta}_{0}^{2}=\Omega(\sigma_{0}). As a result, (38) holds, and we address each term appearing in its upper bound as follows:

  • For the term eθ^022σ0e^{-\frac{\widehat{\theta}_{0}^{2}}{2\sigma_{0}}} assume first that α2β\alpha\neq 2\beta and use (43) to obtain

    𝒟¯(𝔷)exp(θ^022σ0)dμ(x0)=nrRγ(r0)exp(Ω(v02εε(θ0)))dθ0eα(Rr0)dr0,\int_{\overline{\mathcal{D}}^{(\mathfrak{z})}}\exp\Big{(}{-}\frac{\widehat{\theta}_{0}^{2}}{2\sigma_{0}}\Big{)}d\mu(x_{0})=n\int_{r^{\prime}}^{R}\int_{\gamma(r_{0})}^{\infty}\exp\Big{(}{-}\Omega\big{(}v_{0}^{\frac{2-\varepsilon^{\prime}}{\varepsilon}}(\theta_{0})\big{)}\Big{)}d\theta_{0}e^{-\alpha(R-r_{0})}dr_{0},

    so using the change of variable w:=θ0ϕ(r0)γ(r0)ϕ(r0)w:=\frac{\theta_{0}-\phi(r_{0})}{\gamma(r_{0})-\phi(r_{0})} we obtain

    γ(r0)exp(Ω(v02εε(θ0)))dθ0=(γ(r0)ϕ(r0))1eΩ(w2ε)dw=O(γ(r0)ϕ(r0)),\int_{\gamma(r_{0})}^{\infty}\exp\Big{(}{-}\Omega\big{(}v_{0}^{\frac{2-\varepsilon^{\prime}}{\varepsilon}}(\theta_{0})\big{)}\Big{)}d\theta_{0}=(\gamma(r_{0})-\phi(r_{0}))\int_{1}^{\infty}e^{-\Omega(w^{2-\varepsilon^{\prime}})}dw=O(\gamma(r_{0})-\phi(r_{0})),

    and hence

    𝒟¯(𝔷)exp(θ^022σ0)dμ(x0)=O(nrR(γ(r0)ϕ(r0))eα(Rr0)dr0)=O(nϕ(𝔷)),\int_{\overline{\mathcal{D}}^{(\mathfrak{z})}}\exp\Big{(}-\frac{\widehat{\theta}_{0}^{2}}{2\sigma_{0}}\Big{)}d\mu(x_{0})=O\Big{(}n\int_{r^{\prime}}^{R}(\gamma(r_{0})-\phi(r_{0}))e^{-\alpha(R-r_{0})}dr_{0}\Big{)}=O(n\phi^{(\mathfrak{z})}),

    where the last equality follows from Part (v) in Fact 42.

    For the case α=2β\alpha=2\beta we follow the same reasoning using (44) to deduce

    𝒟¯(𝔷)exp(θ^022σ0)dμ(x0)\displaystyle\int_{\overline{\mathcal{D}}^{(\mathfrak{z})}}\exp\Big{(}{-}\frac{\widehat{\theta}_{0}^{2}}{2\sigma_{0}}\Big{)}d\mu(x_{0}) =nrRγ(r0)exp(Ω(v02εε(θ0)log(𝔷)log(v0(θ0)𝔷)))dθ0eα(Rr0)dr0\displaystyle=n\int_{r^{\prime}}^{R}\int_{\gamma(r_{0})}^{\infty}\exp\left(-\Omega\left(v_{0}^{\frac{2-\varepsilon}{\varepsilon}}(\theta_{0})\tfrac{\log(\mathfrak{z})}{\log(v_{0}(\theta_{0})\mathfrak{z})}\right)\right)d\theta_{0}e^{-\alpha(R-r_{0})}dr_{0}
    =nrR(γ(r0)ϕ(r0))1exp(Ω(w2εlog(𝔷)log(wε𝔷)))dweα(Rr0)dr0,\displaystyle=n\int_{r^{\prime}}^{R}(\gamma(r_{0})-\phi(r_{0}))\int_{1}^{\infty}\exp\left(-\Omega(w^{2-\varepsilon^{\prime}}\tfrac{\log(\mathfrak{z})}{\log(w^{\varepsilon}\mathfrak{z})})\right)dwe^{-\alpha(R-r_{0})}dr_{0},

    which is O(nϕ(𝔷))O(n\phi^{(\mathfrak{z})}) as before.

  • For the term e116v02𝔷e^{-\frac{1}{16}v_{0}^{2}\mathfrak{z}} appearing in (38) we use the change of variable w=(v0𝔷)1/εw=(v_{0}\sqrt{\mathfrak{z}})^{1/\varepsilon} so that

    γ(r0)e116v02𝔷dθ0=𝔷12ε(γ(r0)ϕ(r0))Ω(𝔷12ε)eΩ(w2ε)dw=O(γ(r0)ϕ(r0)),\int_{\gamma(r_{0})}^{\infty}e^{-\frac{1}{16}v_{0}^{2}\mathfrak{z}}d\theta_{0}=\mathfrak{z}^{-\frac{1}{2\varepsilon}}(\gamma(r_{0})-\phi(r_{0}))\int_{\Omega(\mathfrak{z}^{\frac{1}{2\varepsilon}})}^{\infty}e^{-\Omega(w^{2\varepsilon})}dw=O(\gamma(r_{0})-\phi(r_{0})),

    where the last equality follows from 𝔷=Ω(1)\mathfrak{z}=\Omega(1). Hence,

    𝒟¯(𝔷)e116v02𝔷dμ(x0)=O(nrR(γ(r0)ϕ(r0))eα(Rr0)dr0)=O(nϕ(𝔷)),\int_{\overline{\mathcal{D}}^{(\mathfrak{z})}}e^{-\frac{1}{16}v_{0}^{2}\mathfrak{z}}d\mu(x_{0})=O\Big{(}n\int_{r^{\prime}}^{R}(\gamma(r_{0})-\phi(r_{0}))e^{-\alpha(R-r_{0})}dr_{0}\Big{)}=O(n\phi^{(\mathfrak{z})}),

    where the last equality follows from Part (v) in Fact 42.

  • For the term θ^0αβeαr0\widehat{\theta}_{0}^{-\frac{\alpha}{\beta}}e^{-\alpha r_{0}}, by definition of θ^0\widehat{\theta}_{0} we obtain

    𝒟¯(𝔷)θ^0αβeαr0dμ(x0)=O(neαRrRγ(r0)π(θ0ϕ(r0))αβdθ0dr0).\int_{\overline{\mathcal{D}}^{(\mathfrak{z})}}\widehat{\theta}_{0}^{-\frac{\alpha}{\beta}}e^{-\alpha r_{0}}d\mu(x_{0})\,=\,O\Big{(}ne^{-\alpha R}\int_{r^{\prime}}^{R}\int_{\gamma(r_{0})}^{\pi}(\theta_{0}-\phi(r_{0}))^{-\frac{\alpha}{\beta}}d\theta_{0}dr_{0}\Big{)}.

    We treat first the case β12\beta\leq\frac{1}{2}, which implies in particular αβ>1\frac{\alpha}{\beta}>1 and hence the right-hand side is of order

    neαRrR(γ(r0)ϕ(r0))1αβdr0=O(neαR(e(β12)r)1αβ+neαR(Rr)(ϕ(𝔷))1αβ)ne^{-\alpha R}\int_{r^{\prime}}^{R}(\gamma(r_{0})-\phi(r_{0}))^{1-\frac{\alpha}{\beta}}dr_{0}\,=\,O\big{(}ne^{-\alpha R}(e^{-(\beta\wedge\frac{1}{2})r^{\prime\prime}})^{1-\frac{\alpha}{\beta}}+ne^{-\alpha R}(R-r^{\prime\prime})(\phi^{(\mathfrak{z})})^{1-\frac{\alpha}{\beta}}\big{)}

    where we first integrate from rr^{\prime} to rr^{\prime\prime}, then from rr^{\prime\prime} to RR, and use Part (iv) in Fact 42 to bound γ(r0)ϕ(r0)\gamma(r_{0})-\phi(r_{0}) from below. Using Part (iii) of Fact 42 in the first term on the right-hand side, we obtain a term of order neαR(ϕ(𝔷))1αβ=O(nϕ(𝔷))ne^{-\alpha R}(\phi^{(\mathfrak{z})})^{1-\frac{\alpha}{\beta}}=O(n\phi^{(\mathfrak{z})}), while for the second term, using that β12\beta\leq\frac{1}{2}, we have by definition that r=1βlogϕ(𝔷)+Θ(1)=RΘ(log𝔷)r^{\prime\prime}=-\frac{1}{\beta}\log\phi^{(\mathfrak{z})}+\Theta(1)=R-\Theta(\log\mathfrak{z}), so in particular neαR(Rr)(ϕ(𝔷))1αβ=O(nϕ(𝔷)𝔷δlog𝔷)=O(nϕ(𝔷))ne^{-\alpha R}(R-r^{\prime\prime})(\phi^{(\mathfrak{z})})^{1-\frac{\alpha}{\beta}}=O(n\phi^{(\mathfrak{z})}\mathfrak{z}^{-\delta}\log\mathfrak{z})=O(n\phi^{(\mathfrak{z})}) for some δ>0\delta>0 depending on α\alpha and β\beta, giving the result in this case. For the case α>β>12\alpha>\beta>\frac{1}{2} we can repeat the calculations of the previous case to again obtain a bound of order nϕ(𝔷)n\phi^{(\mathfrak{z})}.

    For the case β>12\beta>\frac{1}{2} we have ϕ(𝔷)=e12R𝔷12α\phi^{(\mathfrak{z})}=e^{-\frac{1}{2}R}\mathfrak{z}^{\frac{1}{2\alpha}} and we still need to handle the case αβ1\frac{\alpha}{\beta}\leq 1. If αβ<1\frac{\alpha}{\beta}<1, we have

    neαRrRγ(r0)π(θ0ϕ(r0))αβdθ0dr0=O(neαRR),ne^{-\alpha R}\int_{r^{\prime}}^{R}\int_{\gamma(r_{0})}^{\pi}(\theta_{0}-\phi(r_{0}))^{-\frac{\alpha}{\beta}}d\theta_{0}dr_{0}\,=\,O\big{(}ne^{-\alpha R}R\big{)},

    and it is easily checked that since α>12\alpha>\frac{1}{2} and by the definition of ϕ(𝔷)\phi^{(\mathfrak{z})} we have neαRR=O(nϕ(𝔷)e(12α)RR)=O(nϕ(𝔷))ne^{-\alpha R}R=O(n\phi^{(\mathfrak{z})}e^{(\frac{1}{2}-\alpha)R}R)=O(n\phi^{(\mathfrak{z})}). Finally, if αβ=1\frac{\alpha}{\beta}=1 we have

    neαRrRγ(r0)π(θ0ϕ(r0))αβdθ0dr0=O(neαRrRlog(πγ(r0)ϕ(r0))dr0)ne^{-\alpha R}\int_{r^{\prime}}^{R}\int_{\gamma(r_{0})}^{\pi}(\theta_{0}-\phi(r_{0}))^{-\frac{\alpha}{\beta}}d\theta_{0}dr_{0}\,=\,O\Big{(}ne^{-\alpha R}\int_{r^{\prime}}^{R}\log\big{(}\tfrac{\pi}{\gamma(r_{0}){-}\phi(r_{0})}\big{)}dr_{0}\Big{)}

    where using Part (i) in Fact 42 we conclude that the right-hand term is at most of order neαRRlog(1/ϕ(𝔷))=O(neαRR2)=O(nϕ(𝔷))ne^{-\alpha R}R\log(1/\phi^{(\mathfrak{z})})=O(ne^{-\alpha R}R^{2})=O(n\phi^{(\mathfrak{z})}).

  • For the term v0𝔷eαRv_{0}\mathfrak{z}e^{-\alpha R} we observe that, using the change of variable w:=v01/εw:=v_{0}^{1/\varepsilon} together with Part (iv) in Fact 42, we get

    γ(r0)πv0𝔷eαRdθ0=O(𝔷eαR(γ(r0)ϕ(r0))ε)0πwdw=O(𝔷eαR(ϕ(𝔷))ε).\int_{\gamma(r_{0})}^{\pi}v_{0}\mathfrak{z}e^{-\alpha R}d\theta_{0}=O(\mathfrak{z}e^{-\alpha R}(\gamma(r_{0})-\phi(r_{0}))^{-\varepsilon})\int_{0}^{\pi}wdw=O(\mathfrak{z}e^{-\alpha R}(\phi^{(\mathfrak{z})})^{-\varepsilon}).

    Now, suppose first that β>12\beta>\frac{1}{2} so that eαR=𝔷1(ϕ(𝔷))2αe^{-\alpha R}=\mathfrak{z}^{-1}(\phi^{(\mathfrak{z})})^{2\alpha} and hence 𝔷eαR(ϕ(𝔷))ε=(ϕ(𝔷))2αε=O(ϕ(𝔷))\mathfrak{z}e^{-\alpha R}(\phi^{(\mathfrak{z})})^{-\varepsilon}=(\phi^{(\mathfrak{z})})^{2\alpha-\varepsilon}=O(\phi^{(\mathfrak{z})}), where the last equality is by our assumption that ε<2α1\varepsilon<2\alpha-1. Suppose now that β12\beta\leq\frac{1}{2} and observe that in this case eαR=O((ϕ(𝔷))αβ𝔷(1α2β))e^{-\alpha R}=O((\phi^{(\mathfrak{z})})^{\frac{\alpha}{\beta}}\mathfrak{z}^{-(1\vee\frac{\alpha}{2\beta})}) and α2βα<1\frac{\alpha}{2\beta}\leq\alpha<1, so using that 𝔷=Ω(1)\mathfrak{z}=\Omega(1) we deduce again that 𝔷eαR(ϕ(𝔷))ε=O((ϕ(𝔷))2αε)=O(ϕ(𝔷))\mathfrak{z}e^{-\alpha R}(\phi^{(\mathfrak{z})})^{-\varepsilon}=O\left((\phi^{(\mathfrak{z})})^{2\alpha-\varepsilon}\right)=O(\phi^{(\mathfrak{z})}). Summarizing, independently of the value of β\beta, we have

    𝒟¯(𝔷)v0𝔷eαRdμ(x0)=O(nϕ(𝔷))rReα(Rr0)dr0=O(nϕ(𝔷)).\int_{\overline{\mathcal{D}}^{(\mathfrak{z})}}v_{0}\mathfrak{z}e^{-\alpha R}d\mu(x_{0})=O(n\phi^{(\mathfrak{z})})\int_{r^{\prime}}^{R}e^{-\alpha(R-r_{0})}dr_{0}=O(n\phi^{(\mathfrak{z})}).
  • For the term 𝔱5\mathfrak{t}_{5} it will be useful to notice from Part (iv) in Fact 42 that

    (γ(r0)ϕ(r0))δ=O((ϕ(𝔷))δ)\left(\gamma(r_{0})-\phi(r_{0})\right)^{-\delta}=O((\phi^{(\mathfrak{z})})^{-\delta}) (45)

    for any δ>0\delta>0. We assume first that α2β\alpha\neq 2\beta so that

    𝒟¯(𝔷)𝔱5(x0)dμ(x0)=O(neαRrRγ(r0)π(v0𝔷)1α4βθ^0αβdθ0eα(Rr0)dr0)\int_{\overline{\mathcal{D}}^{(\mathfrak{z})}}\mathfrak{t}_{5}(x_{0})d\mu(x_{0})\,=\,O\Big{(}ne^{-\alpha R}\int_{r^{\prime}}^{R}\int_{\gamma(r_{0})}^{\pi}(v_{0}\mathfrak{z})^{1\vee\frac{\alpha}{4\beta}}\widehat{\theta}_{0}^{-\frac{\alpha}{\beta}}d\theta_{0}e^{-\alpha(R-r_{0})}dr_{0}\Big{)} (46)

    We analyze this expression first when α4β\alpha\leq 4\beta; using (45), the term on the right-hand side is of order

    neαR𝔷(ϕ(𝔷))εrRγ(r0)π(θ0ϕ(r0))εαβdθ0eα(Rr0)dr0,ne^{-\alpha R}\mathfrak{z}(\phi^{(\mathfrak{z})})^{-\varepsilon}\int_{r^{\prime}}^{R}\int_{\gamma(r_{0})}^{\pi}(\theta_{0}-\phi(r_{0}))^{\varepsilon-\frac{\alpha}{\beta}}d\theta_{0}e^{-\alpha(R-r_{0})}dr_{0}, (47)

    where the integral with respect to θ0\theta_{0} behaves differently according to whether εαβ\varepsilon-\frac{\alpha}{\beta} is smaller than, equal to, or larger than 1{-}1. Suppose first that εαβ<1\varepsilon-\frac{\alpha}{\beta}<{-}1 so that the above term is of order

    neαR𝔷(ϕ(𝔷))εrR(γ(r0)ϕ(r0))1αβ+εeα(Rr0)dr0=O(neαR𝔷(ϕ(𝔷))1αβ),ne^{-\alpha R}\mathfrak{z}(\phi^{(\mathfrak{z})})^{-\varepsilon}\int_{r^{\prime}}^{R}(\gamma(r_{0})-\phi(r_{0}))^{1-\frac{\alpha}{\beta}+\varepsilon}e^{-\alpha(R-r_{0})}dr_{0}\,=\,O\big{(}ne^{-\alpha R}\mathfrak{z}(\phi^{(\mathfrak{z})})^{1-\frac{\alpha}{\beta}}\big{)},

    where the right hand side follows from (45). Proceeding as in (42) we deduce that the bound is O(nϕ(𝔷))O(n\phi^{(\mathfrak{z})}). Suppose now that εαβ1\varepsilon-\frac{\alpha}{\beta}\geq-1, which in particular implies βα1+εα1+(2α1)=12\beta\geq\frac{\alpha}{1+\varepsilon}\geq\frac{\alpha}{1+(2\alpha-1)}=\frac{1}{2} so in this case ϕ(𝔷)=e12R𝔷12α\phi^{(\mathfrak{z})}=e^{-\frac{1}{2}R}\mathfrak{z}^{\frac{1}{2\alpha}}. Now, if εαβ>1\varepsilon-\frac{\alpha}{\beta}>-1 then the integral with respect to θ0\theta_{0} in (47) is O(1)O(1), and thus the bound is of order

    neαR𝔷(ϕ(𝔷))εrReα(Rr0)dr0=O(neαR𝔷(ϕ(𝔷))ε)=O(n(ϕ(𝔷))2αε)ne^{-\alpha R}\mathfrak{z}(\phi^{(\mathfrak{z})})^{-\varepsilon}\int_{r^{\prime}}^{R}e^{-\alpha(R-r_{0})}dr_{0}=O\big{(}ne^{-\alpha R}\mathfrak{z}(\phi^{(\mathfrak{z})})^{-\varepsilon}\big{)}=O\big{(}n(\phi^{(\mathfrak{z})})^{2\alpha-\varepsilon}\big{)}

    where we have used the particular form of ϕ(𝔷)\phi^{(\mathfrak{z})}. Since ε\varepsilon satisfies ε<2α1\varepsilon<2\alpha-1 and also ϕ(𝔷)π=O(1)\phi^{(\mathfrak{z})}\leq\pi=O(1), we conclude that the bound is O(nϕ(𝔷))O(n\phi^{(\mathfrak{z})}). Similarly, if εαβ=1\varepsilon-\frac{\alpha}{\beta}=-1 then (47) is of order

    neαR𝔷(ϕ(𝔷))εrRlog(1γ(r0)ϕ(r0))eα(Rr0)dr0=O(neαR𝔷(ϕ(𝔷))εlog(1ϕ(𝔷))),ne^{-\alpha R}\mathfrak{z}(\phi^{(\mathfrak{z})})^{-\varepsilon}\int_{r^{\prime}}^{R}\log\big{(}\tfrac{1}{\gamma(r_{0})-\phi(r_{0})}\big{)}e^{-\alpha(R-r_{0})}dr_{0}=O\big{(}ne^{-\alpha R}\mathfrak{z}(\phi^{(\mathfrak{z})})^{-\varepsilon}\log\big{(}\tfrac{1}{\phi^{(\mathfrak{z})}}\big{)}\big{)},

    which is of order n(ϕ(𝔷))2αεlog(1ϕ(𝔷))n(\phi^{(\mathfrak{z})})^{2\alpha-\varepsilon}\log(\frac{1}{\phi^{(\mathfrak{z})}}) and again using that ε<2α1\varepsilon<2\alpha-1 and ϕ(𝔷)=O(1)\phi^{(\mathfrak{z})}=O(1) we conclude that the bound is O(nϕ(𝔷))O(n\phi^{(\mathfrak{z})}).

    We now turn to the analysis of the right-hand side of (46) when α>4β\alpha>4\beta, in which case we obtain a term of order

    neαR𝔷α4β(ϕ(𝔷))αε4βrRγ(r0)π(θ0ϕ(r0))αε4βαβdθ0eα(Rr0)dr0.ne^{-\alpha R}\mathfrak{z}^{\frac{\alpha}{4\beta}}(\phi^{(\mathfrak{z})})^{-\frac{\alpha\varepsilon}{4\beta}}\int_{r^{\prime}}^{R}\int_{\gamma(r_{0})}^{\pi}(\theta_{0}-\phi(r_{0}))^{\frac{\alpha\varepsilon}{4\beta}-\frac{\alpha}{\beta}}d\theta_{0}e^{-\alpha(R-r_{0})}dr_{0}. (48)

    Observe that the condition α>4β\alpha>4\beta implies that ϕ(𝔷)=eβR𝔷\phi^{(\mathfrak{z})}=e^{-\beta R}\sqrt{\mathfrak{z}}. Assume first that αε4βαβ<1\frac{\alpha\varepsilon}{4\beta}-\frac{\alpha}{\beta}<-1 in which case (48) is of order

    neαR𝔷α4β(ϕ(𝔷))αε4βrR(γ(r0)ϕ(r0))1αβ+αε4βeα(Rr0)dr0=O(neαR𝔷α4β(ϕ(𝔷))1αβ),ne^{-\alpha R}\mathfrak{z}^{\frac{\alpha}{4\beta}}(\phi^{(\mathfrak{z})})^{-\frac{\alpha\varepsilon}{4\beta}}\int_{r^{\prime}}^{R}(\gamma(r_{0})-\phi(r_{0}))^{1-\frac{\alpha}{\beta}+\frac{\alpha\varepsilon}{4\beta}}e^{-\alpha(R-r_{0})}dr_{0}=O\big{(}ne^{-\alpha R}\mathfrak{z}^{\frac{\alpha}{4\beta}}(\phi^{(\mathfrak{z})})^{1-\frac{\alpha}{\beta}}\big{)},

    and from the particular form of ϕ(𝔷)\phi^{(\mathfrak{z})} we have eαR𝔷α4β(ϕ(𝔷))αβ=𝔷α4β=O(1)e^{-\alpha R}\mathfrak{z}^{\frac{\alpha}{4\beta}}(\phi^{(\mathfrak{z})})^{-\frac{\alpha}{\beta}}=\mathfrak{z}^{-\frac{\alpha}{4\beta}}=O(1) and hence the bound is O(nϕ(𝔷))O(n\phi^{(\mathfrak{z})}). Suppose next that αε4βαβ>1\frac{\alpha\varepsilon}{4\beta}-\frac{\alpha}{\beta}>-1, in which case (48) is of order

    neαR𝔷α4β(ϕ(𝔷))αε4βrReα(Rr0)dr0=O(neαR𝔷α4β(ϕ(𝔷))αε4β),ne^{-\alpha R}\mathfrak{z}^{\frac{\alpha}{4\beta}}(\phi^{(\mathfrak{z})})^{-\frac{\alpha\varepsilon}{4\beta}}\int_{r^{\prime}}^{R}e^{-\alpha(R-r_{0})}dr_{0}=O\big{(}ne^{-\alpha R}\mathfrak{z}^{\frac{\alpha}{4\beta}}(\phi^{(\mathfrak{z})})^{-\frac{\alpha\varepsilon}{4\beta}}\big{)},

    and in order to show that the bound is O(nϕ(𝔷))O(n\phi^{(\mathfrak{z})}) it will be enough to prove that eαR𝔷α4β(ϕ(𝔷))1αε4β=O(1)e^{-\alpha R}\mathfrak{z}^{\frac{\alpha}{4\beta}}(\phi^{(\mathfrak{z})})^{-1-\frac{\alpha\varepsilon}{4\beta}}=O(1). Using the particular form of ϕ(𝔷)\phi^{(\mathfrak{z})}, it can be checked that eαR𝔷α4β(ϕ(𝔷))1αε4β=eαR2(ϕ(𝔷))α2β1αε4βe^{-\alpha R}\mathfrak{z}^{\frac{\alpha}{4\beta}}(\phi^{(\mathfrak{z})})^{-1-\frac{\alpha\varepsilon}{4\beta}}=e^{-\alpha\frac{R}{2}}(\phi^{(\mathfrak{z})})^{\frac{\alpha}{2\beta}-1-\frac{\alpha\varepsilon}{4\beta}}, and the result will follow as soon as the exponent of ϕ(𝔷)\phi^{(\mathfrak{z})} is positive. Indeed, this is equivalent to 2ε>4βα2-\varepsilon>\frac{4\beta}{\alpha}, which holds since by definition ε<1\varepsilon<1, and we are assuming 4βα<1\frac{4\beta}{\alpha}<1. Finally, suppose that αε4βαβ=1\frac{\alpha\varepsilon}{4\beta}-\frac{\alpha}{\beta}=-1, in which case (48) is of order neαR𝔷α4β(ϕ(𝔷))αε4βlog(1/ϕ(𝔷))ne^{-\alpha R}\mathfrak{z}^{\frac{\alpha}{4\beta}}(\phi^{(\mathfrak{z})})^{-\frac{\alpha\varepsilon}{4\beta}}\log(1/\phi^{(\mathfrak{z})}) and using the same analysis as in the case αε4βαβ>1\frac{\alpha\varepsilon}{4\beta}-\frac{\alpha}{\beta}>-1 (with strict inequalities) we deduce that it is O(nϕ(𝔷))O(n\phi^{(\mathfrak{z})}).

    We now bound the integral of the term 𝔱5\mathfrak{t}_{5} in the special case α=2β\alpha=2\beta where

    𝒟¯(𝔷)𝔱5(x0)dμ(x0)\displaystyle\int_{\overline{\mathcal{D}}^{(\mathfrak{z})}}\mathfrak{t}_{5}(x_{0})d\mu(x_{0}) =O(nrRγ(r0)01wθ^02σ0ewθ^022σ0(𝔏log(v0𝔷))w1dwdθ0eα(Rr0)dr0)\displaystyle=\,O\Big{(}n\int_{r^{\prime}}^{R}\int_{\gamma(r_{0})}^{\infty}\int_{0}^{1}\sqrt{\tfrac{w\widehat{\theta}_{0}^{2}}{\sigma_{0}}}e^{-\frac{w\widehat{\theta}_{0}^{2}}{2\sigma_{0}}}(\mathfrak{L}^{\prime}\log(v_{0}\mathfrak{z}))^{w-1}dwd\theta_{0}e^{-\alpha(R-r_{0})}dr_{0}\Big{)}
    =O(nrR01γ(r0)wθ^02σ0ewθ^022σ0dθ0dweα(Rr0)dr0).\displaystyle=O\Big{(}n\int_{r^{\prime}}^{R}\int_{0}^{1}\int_{\gamma(r_{0})}^{\infty}\sqrt{\tfrac{w\widehat{\theta}_{0}^{2}}{\sigma_{0}}}e^{-\frac{w\widehat{\theta}_{0}^{2}}{2\sigma_{0}}}d\theta_{0}dwe^{-\alpha(R-r_{0})}dr_{0}\Big{)}.

    Now, for any fixed w(0,1)w\in(0,1), we can use the change of variables u:=θ0ϕ(r0)γ(r0)ϕ(r0)u:=\frac{\theta_{0}-\phi(r_{0})}{\gamma(r_{0})-\phi(r_{0})} in the inner integral, which gives

    γ(r0)wθ^02σ0ewθ^022σ0dθ0=(γ(r0)ϕ(r0))1wuy0log𝔷log(u𝔷)exp(Ω(wuy0log(𝔷)2log(u𝔷)))du\int_{\gamma(r_{0})}^{\infty}\sqrt{\tfrac{w\widehat{\theta}_{0}^{2}}{\sigma_{0}}}e^{-\frac{w\widehat{\theta}_{0}^{2}}{2\sigma_{0}}}d\theta_{0}=(\gamma(r_{0})-\phi(r_{0}))\int_{1}^{\infty}\sqrt{\frac{wuy_{0}\log\mathfrak{z}}{\log(u\mathfrak{z})}}\exp\Big{(}{-}\Omega\left(\frac{wuy_{0}\log(\mathfrak{z})}{2\log(u\mathfrak{z})}\right)\Big{)}du

    for y0:=(γ(r0)ϕ(r0))2e2βR𝔷log(𝔷)y_{0}:=\frac{(\gamma(r_{0})-\phi(r_{0}))^{2}}{e^{-2\beta R}\mathfrak{z}\log(\mathfrak{z})} which is Ω(1)\Omega(1) since γ(r0)ϕ(r0)=Ω(ϕ(𝔷))\gamma(r_{0})-\phi(r_{0})=\Omega(\phi^{(\mathfrak{z})}). On the other hand log(𝔷)log(u𝔷)=Ω(1log(u))\frac{\log(\mathfrak{z})}{\log(u\mathfrak{z})}=\Omega(\frac{1}{\log(u)}) which allows us to conclude that

    011wuy0log𝔷log(u𝔷)exp(Ω(wuy0log(𝔷)2log(u𝔷)))dudw=O(1)\int_{0}^{1}\int_{1}^{\infty}\sqrt{\frac{wuy_{0}\log\mathfrak{z}}{\log(u\mathfrak{z})}}\exp\Big{(}{-}\Omega\left(\frac{wuy_{0}\log(\mathfrak{z})}{2\log(u\mathfrak{z})}\right)\Big{)}dudw=O(1).

    Therefore,

    𝒟¯(𝔷)𝔱5(x0)dμ(x0)=O(nrR(γ(r0)ϕ(r0))eα(Rr0)dr0)=O(nϕ(𝔷))\int_{\overline{\mathcal{D}}^{(\mathfrak{z})}}\mathfrak{t}_{5}(x_{0})d\mu(x_{0})=O\Big{(}n\int_{r^{\prime}}^{R}(\gamma(r_{0})-\phi(r_{0}))e^{-\alpha(R-r_{0})}dr_{0}\Big{)}=O(n\phi^{(\mathfrak{z})})

    where the last equality follows directly from Part (v) in Fact 42.
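Throughout the case analysis above, the bounds rely on the convergence of integrals of the form 1eΩ(w2ε)dw\int_{1}^{\infty}e^{-\Omega(w^{2-\varepsilon^{\prime}})}dw with ε<1\varepsilon^{\prime}<1, which are O(1)O(1) because the exponent 2ε2-\varepsilon^{\prime} exceeds 11. This can be sanity-checked numerically; the following is a minimal sketch where the truncation point and the test values of ε\varepsilon^{\prime} are ad hoc choices, not taken from the paper:

```python
import math

def midpoint_integral(f, a, b, n=20000):
    # simple midpoint rule for the truncated integral of f over [a, b]
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# int_1^inf exp(-w**(2 - eps_p)) dw is finite for every eps_p < 1, since
# w**(2 - eps_p) >= w on [1, inf).  Truncating at T = 50 is an ad hoc
# choice: the neglected tail is below exp(-50) for these exponents.
vals = {eps_p: midpoint_integral(lambda w: math.exp(-w ** (2 - eps_p)), 1.0, 50.0)
        for eps_p in (0.1, 0.5, 0.9)}

# each integral is dominated by int_1^inf exp(-w) dw = 1/e < 1
assert all(v < 1.0 for v in vals.values())
```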

6 Conclusion and outlook

We studied the movement of particles in the random hyperbolic graph model introduced in [28] and analyzed the tail of the detection times of a fixed target in this model. To the best of our knowledge, the current paper is the first to analyze dynamic hyperbolic graphs. It is natural to ask questions similar to the ones addressed in [37] for the model of random geometric graphs: how long does it take to detect a target that is mobile itself? How long does it take to detect all (fixed or mobile) targets in a certain sector? How long does it take for a given vertex, initially at some fixed location outside the giant component, to connect to some vertex of the giant component (percolation time)? How long does it take to broadcast a message in a mobile graph? We believe that the detection of a mobile target, using the same proof ideas as presented here, should be faster by a constant factor (presumably by a factor 22 in the angular case and perhaps by a different factor in general), and that the detection time of all targets in a certain sector can presumably be analyzed with techniques similar to the ones used in this paper. On the other hand, new proof ingredients might be needed to analyze the percolation time and the broadcast time. We also believe that the observation made by Moreau et al. [33], who show that for the continuous random walk on the square lattice the best strategy to avoid detection (that is, to avoid ending up at exactly the same location as a searcher) is to stay put, holds in our model as well.

Appendix A Appendix: on the sums of Pareto variables

Here we prove Lemma 38, which gives a large deviation result for sums of independent heavy-tailed random variables. The lemma is a more precise version of the results in [35] (although here we treat only positive random variables that are bounded away from zero), and the proof closely follows what was done there.

Lemma 45.

Let Sm=i=1mZiS_{m}=\sum_{i=1}^{m}Z_{i} where {Zi}i\{Z_{i}\}_{i\in\mathbb{N}} is a sequence of i.i.d. absolutely continuous random variables taking values in [1,)[1,\infty) such that there are V,γ>0V,\gamma>0 with

1FZ(x)=(Zix)Vxγ1-F_{Z}(x)=\mathbb{P}(Z_{i}\geq x)\leq Vx^{-\gamma}

for all x>0x>0. Then there are ,𝔏>0\mathfrak{C},\mathfrak{L}>0 depending only on VV, γ\gamma and 𝔼(Z1)\mathbb{E}(Z_{1}) (if it exists) such that:

  • If γ<1\gamma<1, then for all L>𝔏L>\mathfrak{L}, (SmLm1γ)Lγ\displaystyle\mathbb{P}(S_{m}\geq Lm^{\frac{1}{\gamma}})\leq\mathfrak{C}L^{-\gamma}.

  • If γ=1\gamma=1, then for all L>𝔏L>\mathfrak{L}, (SmLmlog(m))(Llog(m))1𝔏L\displaystyle\mathbb{P}(S_{m}\geq Lm\log(m))\leq\left(\frac{\mathfrak{C}}{L\log(m)}\right)^{1-\frac{\mathfrak{L}}{L}}.

  • If γ>1\gamma>1, then for all L>𝔏L>\mathfrak{L}, (SmLm)Lγm((γ1)γ2)\displaystyle\mathbb{P}(S_{m}\geq Lm)\,\leq\,\mathfrak{C}L^{-\gamma}m^{-((\gamma-1)\wedge\frac{\gamma}{2})}.

Remark 46.

We remark that tighter results can be obtained for γ1\gamma\geq 1, but the current result is sufficient for our purposes.
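To illustrate the regime γ<1\gamma<1 of the lemma, one can simulate sums of Pareto variables. Below is a minimal Monte Carlo sketch; the exact tail (Zx)=xγ\mathbb{P}(Z\geq x)=x^{-\gamma} (so V=1V=1), and all sample sizes, are illustrative choices not taken from the paper:

```python
import random

def pareto_sample(gamma, rng):
    # inverse-CDF sampling for P(Z >= x) = x**(-gamma), x >= 1 (so V = 1)
    return rng.random() ** (-1.0 / gamma)

def tail_prob(gamma, m, L, trials, seed=0):
    # empirical estimate of P(S_m >= L * m**(1/gamma))
    rng = random.Random(seed)
    thresh = L * m ** (1.0 / gamma)
    hits = sum(1 for _ in range(trials)
               if sum(pareto_sample(gamma, rng) for _ in range(m)) >= thresh)
    return hits / trials

gamma, m, trials = 0.5, 50, 20000
# The lemma predicts P(S_m >= L m**(1/gamma)) <= C * L**(-gamma), so for
# gamma = 1/2 quadrupling L should roughly halve the empirical tail.
p4 = tail_prob(gamma, m, 4.0, trials)
p16 = tail_prob(gamma, m, 16.0, trials)
assert p16 < p4 < 1.0
```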

Proof.

We proceed as in [35] by assuming first that γ1\gamma\leq 1. Fix x>0x>0 and define the event B={1im,Zix}B=\{\forall 1\leq i\leq m,\,Z_{i}\leq x\} so that

(Smx)(B¯)+(Smx|B)(B)mVxγ+(S(x)mx)(B)\mathbb{P}(S_{m}\geq x)\;\leq\;\mathbb{P}(\overline{B})+\mathbb{P}(S_{m}\geq x\,|\,B)\mathbb{P}(B)\;\leq\;mVx^{-\gamma}+\mathbb{P}(S^{(x)}_{m}\geq x)\mathbb{P}(B)

where S(x)S^{(x)} is the sum of mm i.i.d. random variables with c.d.f. FZ(y)FZ(x)\frac{F_{Z}(y)}{F_{Z}(x)} for any y[0,x]y\in[0,x]. We can thus use a Chernoff bound to deduce

(Smx)mVxγ+eλx𝔼(eλS(x)m)(B)=mVxγ+eλx(1xeλydFZ(y))m,\mathbb{P}(S_{m}\geq x)\;\leq\;mVx^{-\gamma}+e^{-\lambda x}\mathbb{E}\big{(}e^{\lambda S^{(x)}_{m}}\big{)}\mathbb{P}(B)\;=\;mVx^{-\gamma}+e^{-\lambda x}\left(\int_{1}^{x}e^{\lambda y}dF_{Z}(y)\right)^{m},

where we used independence of the random variables to conclude that (B)=(FZ(x))m\mathbb{P}(B)=(F_{Z}(x))^{m}. Define now M=2γλM=\frac{2\gamma}{\lambda} for some λ\lambda to be chosen below so that M<xM<x holds. Hence,

R(λ,x):=1xeλydFZ(y)=1MeλydFZ(y)+MxeλydFZ(y),R(\lambda,x)\,:=\,\int_{1}^{x}e^{\lambda y}dF_{Z}(y)\,=\,\int_{1}^{M}e^{\lambda y}dF_{Z}(y)\,+\,\int_{M}^{x}e^{\lambda y}dF_{Z}(y),

which we bound separately. First notice that for some constant C1=C1(γ,V)C_{1}=C_{1}(\gamma,V) we have

1MeλydFZ(y)\displaystyle\int_{1}^{M}e^{\lambda y}dF_{Z}(y) eλMFZ(M)λ1MeλyFZ(y)dy\displaystyle\leq\,e^{\lambda M}F_{Z}(M)-\lambda\int_{1}^{M}e^{\lambda y}F_{Z}(y)dy
eλMFZ(M)eλM+1+λ1Meλy(1FZ(y))dy\displaystyle\leq\,e^{\lambda M}F_{Z}(M)-e^{\lambda M}+1+\lambda\int_{1}^{M}e^{\lambda y}(1-F_{Z}(y))dy
eλMFZ(M)eλM+1+λVeλM1Myγdy 1+C1Q(λ),\displaystyle\leq\,e^{\lambda M}F_{Z}(M)-e^{\lambda M}+1+\lambda Ve^{\lambda M}\int_{1}^{M}y^{-\gamma}dy\;\leq\,1+C_{1}Q(\lambda),

where Q(λ)=λγQ(\lambda)=\lambda^{\gamma} if γ<1\gamma<1 and Q(λ)=λlog(λ2γ)Q(\lambda)=-\lambda\log(\frac{\lambda}{2\gamma}) if γ=1\gamma=1. For the integral between MM and xx observe that

MxeλydFZ(y)\displaystyle\int_{M}^{x}e^{\lambda y}dF_{Z}(y) eλM(1FZ(M))+λMxeλy(1FZ(y))dy\displaystyle\leq\,e^{\lambda M}(1-F_{Z}(M))+\lambda\int_{M}^{x}e^{\lambda y}(1-F_{Z}(y))dy
VeλMMγ+λVMxeλyyγdy\displaystyle\leq\,Ve^{\lambda M}M^{-\gamma}+\lambda V\int_{M}^{x}e^{\lambda y}y^{-\gamma}dy
=Ve2γ(λ2γ)γ+Veλxxγ0λ(xM)ew(1wλx)γdw,\displaystyle=\,Ve^{2\gamma}\left(\frac{\lambda}{2\gamma}\right)^{\gamma}+Ve^{\lambda x}x^{-\gamma}\int_{0}^{\lambda(x-M)}e^{-w}\left(1-\frac{w}{\lambda x}\right)^{-\gamma}dw,

where in the last line we used the change of variables w=λ(xy)w=\lambda(x-y). Now, since M=2γ/λM=2\gamma/\lambda, the function f(w)=ew2(1wλx)γf(w)=e^{\frac{w}{2}}(1-\frac{w}{\lambda x})^{\gamma} has a positive derivative for w[0,λ(xM)]w\in[0,\lambda(x-M)], and since f(0)=1f(0)=1 it follows that (1wλx)γew/2(1-\frac{w}{\lambda x})^{-\gamma}\leq e^{w/2} on this interval. The last integral is therefore at most 0ew/2dw=2\int_{0}^{\infty}e^{-w/2}dw=2, giving

MxeλydFZ(y)Ve2γ(λ2γ)γ+2Veλxxγ=C2λγ+C3eλxxγ\int_{M}^{x}e^{\lambda y}dF_{Z}(y)\,\leq\,Ve^{2\gamma}\left(\frac{\lambda}{2\gamma}\right)^{\gamma}+2Ve^{\lambda x}x^{-\gamma}\,=\,C_{2}\lambda^{\gamma}+C_{3}e^{\lambda x}x^{-\gamma}

for some constants C2,C3C_{2},C_{3} depending only on VV and γ\gamma. Putting together both bounds for R(λ,x)R(\lambda,x) we arrive at

(Smx)\displaystyle\mathbb{P}(S_{m}\geq x) mVxγ+eλx(1+C1Q(λ)+C2λγ+C3eλxxγ)m\displaystyle\leq\;mVx^{-\gamma}+e^{-\lambda x}\left(1+C_{1}Q(\lambda)+C_{2}\lambda^{\gamma}+C_{3}e^{\lambda x}x^{-\gamma}\right)^{m}
mVxγ+exp(λx+mC1Q(λ)+mC2λγ+mC3eλxxγ).\displaystyle\leq\;mVx^{-\gamma}+\exp\left(-\lambda x+mC_{1}Q(\lambda)+mC_{2}\lambda^{\gamma}+mC_{3}e^{\lambda x}x^{-\gamma}\right).
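The elementary inequality used above, (1wλx)γew/2(1-\frac{w}{\lambda x})^{-\gamma}\leq e^{w/2} for w[0,λ(xM)]w\in[0,\lambda(x-M)] with M=2γ/λM=2\gamma/\lambda, can be checked numerically; the following is a small sketch in which the test values of γ\gamma, λ\lambda and xx are arbitrary, not taken from the paper:

```python
import math

# Verify (1 - w/(lam*x))**(-gamma) <= exp(w/2) on [0, lam*(x - M)],
# where M = 2*gamma/lam; gamma, lam and x below are arbitrary test values.
for gamma in (0.3, 1.0):
    for lam, x in ((0.05, 500.0), (0.01, 2000.0)):
        M = 2.0 * gamma / lam        # note M < x for these choices
        upper = lam * (x - M)
        for i in range(1001):
            w = upper * i / 1000.0
            assert (1.0 - w / (lam * x)) ** (-gamma) <= math.exp(w / 2) + 1e-9
```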

Our aim at this point is to choose λ\lambda such that the term on the right is small, which is achieved by taking

λ=1xlog(xγm),\lambda=\frac{1}{x}\log\left(\frac{x^{\gamma}}{m}\right),

so that meλxxγ=1me^{\lambda x}x^{-\gamma}=1. Assume first that γ<1\gamma<1 so that x=Lm1γx=Lm^{\frac{1}{\gamma}} for LL large, for which λ=γlog(L)Lm1γ\lambda=\frac{\gamma\log(L)}{Lm^{\frac{1}{\gamma}}} is small, while λx=γlog(L)\lambda x=\gamma\log(L) is large, so the assumption M<xM<x is justified. Now, since γ<1\gamma<1, Q(λ)=λγQ(\lambda)=\lambda^{\gamma} and hence we have

λx+mC1Q(λ)+mC2λγ+mC3eλxxγ=γlog(L)+(C1+C2)(γlog(L)L)γ+C3,-\lambda x+mC_{1}Q(\lambda)+mC_{2}\lambda^{\gamma}+mC_{3}e^{\lambda x}x^{-\gamma}\,=\,-\gamma\log(L)+(C_{1}+C_{2})\left(\frac{\gamma\log(L)}{L}\right)^{\gamma}+C_{3},

and since we are assuming LL large, we have (γlog(L)L)γ1(\frac{\gamma\log(L)}{L})^{\gamma}\leq 1 which finally gives

(Smx)\displaystyle\mathbb{P}(S_{m}\geq x) mVxγ+exp(λx+mC1Q(λ)+mC2λγ+mC3eλxxγ)\displaystyle\leq\;mVx^{-\gamma}+\exp\left(-\lambda x+mC_{1}Q(\lambda)+mC_{2}\lambda^{\gamma}+mC_{3}e^{\lambda x}x^{-\gamma}\right)
VLγ+exp(γlog(L)+C1+C2+C3)=Lγ,\displaystyle\leq\;VL^{-\gamma}+\exp\left(-\gamma\log(L)+C_{1}+C_{2}+C_{3}\right)\;=\;\mathfrak{C}L^{-\gamma},

which proves the first point of the lemma. Suppose now that γ=1\gamma=1 so that x=Lmlog(m)x=Lm\log(m) for L𝔏L\geq\mathfrak{L} for some 𝔏\mathfrak{L} large, and hence λ=1Lmlog(m)log(Llog(m))\lambda=\frac{1}{Lm\log(m)}\log(L\log(m)) is small, while λx=log(Llog(m))\lambda x=\log(L\log(m)) is large, so again the assumption M<xM<x is justified. For this choice of γ\gamma we have mQ(λ)=mλlog(λ2)mQ(\lambda)=-m\lambda\log(\frac{\lambda}{2}), which we can bound as

mλlog(λ2)\displaystyle-m\lambda\log(\tfrac{\lambda}{2}) =log(Llog(m))Llog(m)log(2Lmlog(m)log(Llog(m)))\displaystyle=\frac{\log(L\log(m))}{L\log(m)}\log\left(\frac{2Lm\log(m)}{\log(L\log(m))}\right)
log2(2Llog(m))Llog(m)+log(Llog(m))LC4+log(Llog(m))L\displaystyle\leq\frac{\log^{2}(2L\log(m))}{L\log(m)}+\frac{\log(L\log(m))}{L}\leq C_{4}+\frac{\log(L\log(m))}{L}

for some constant C4C_{4}, and hence we arrive at

(Smx)\displaystyle\mathbb{P}(S_{m}\geq x) mVx1+exp(λx+mC1Q(λ)+mC2λ+mC3eλxx1)\displaystyle\leq\;mVx^{-1}+\exp\left(-\lambda x+mC_{1}Q(\lambda)+mC_{2}\lambda+mC_{3}e^{\lambda x}x^{-1}\right)
VLlog(m)+C5exp(log(Llog(m))+C1Llog(Llog(m)))(Llog(m))1𝔏L\displaystyle\leq\;\frac{V}{L\log(m)}+C_{5}\exp\Big{(}{-}\log(L\log(m))+\frac{C_{1}}{L}\log(L\log(m))\Big{)}\;\leq\;\Big{(}\frac{\mathfrak{C}}{L\log(m)}\Big{)}^{1-\frac{\mathfrak{L}}{L}}

for some constant C5C_{5}, and where the last inequality holds by choosing 𝔏\mathfrak{L} larger than C1C_{1} and also by choosing \mathfrak{C} large enough.

Suppose now that γ>1\gamma>1 so that E0:=𝔼(Z1)E_{0}:=\mathbb{E}(Z_{1}) exists. In this case we can perform a similar computation to the one before to deduce that

(SmmE0x)mVxγ+eλx(eλE01xeλydFZ(y))m,\mathbb{P}(S_{m}-mE_{0}\geq x)\;\leq\;mVx^{-\gamma}+e^{-\lambda x}\Big{(}e^{-\lambda E_{0}}\int_{1}^{x}e^{\lambda y}dF_{Z}(y)\Big{)}^{m},

and we can divide the integral 1xeλydFZ(y)\int_{1}^{x}e^{\lambda y}dF_{Z}(y) as before so that

1xeλydFZ(y)=1MeλydFZ(y)+MxeλydFZ(y)\int_{1}^{x}e^{\lambda y}dF_{Z}(y)\,=\,\int_{1}^{M}e^{\lambda y}dF_{Z}(y)+\int_{M}^{x}e^{\lambda y}dF_{Z}(y)

where again M=2γλM=\frac{2\gamma}{\lambda} (and for our choice of small λ\lambda below again we have M<xM<x). Now, the main difference in this case is the treatment of the first term, for which we have

1MeλydFZ(y)\displaystyle\int_{1}^{M}e^{\lambda y}dF_{Z}(y) =1MdFZ(y)+λ1MydFZ(y)+1M(eλy1λy)dFZ(y)\displaystyle=\,\int_{1}^{M}dF_{Z}(y)+\lambda\int_{1}^{M}ydF_{Z}(y)+\int_{1}^{M}\left(e^{\lambda y}-1-\lambda y\right)dF_{Z}(y)
 1+λE0(eλy1λy)(1FZ(y))|M1+λ1M(eλy1)(1FZ(y))dy\displaystyle\leq\,1+\lambda E_{0}-\left(e^{\lambda y}-1-\lambda y\right)(1-F_{Z}(y))\bigg{|}^{M}_{1}+\lambda\int_{1}^{M}\left(e^{\lambda y}-1\right)(1-F_{Z}(y))dy
1+λE0+(eλ1λ)+λV1M(eλy1)yγdy\displaystyle\leq 1+\lambda E_{0}+\left(e^{\lambda}-1-\lambda\right)+\lambda V\int_{1}^{M}\left(e^{\lambda y}-1\right)y^{-\gamma}dy
1+λE0+(eλ1λ)+λVγ1(eλ1)+λ2Vγ1eλM1My1γdy\displaystyle\leq 1+\lambda E_{0}+\left(e^{\lambda}-1-\lambda\right)+\frac{\lambda V}{\gamma-1}\left(e^{\lambda}-1\right)+\frac{\lambda^{2}V}{\gamma-1}e^{\lambda M}\int_{1}^{M}y^{1-\gamma}dy
1+λE0+λ2+2λ2Vγ1+C1W(λ),\displaystyle\leq 1+\lambda E_{0}+\lambda^{2}+\frac{2\lambda^{2}V}{\gamma-1}+C_{1}W(\lambda),

for some constant C1C_{1} depending on γ\gamma alone, where we used that γ>1\gamma>1, that λ\lambda is small, and that λM=2γ\lambda M=2\gamma, and where

W(λ)={λγ if γ<2λ2log(λ) if γ=2λ2 if γ>2W(\lambda)=\left\{\begin{array}[]{cl}\lambda^{\gamma}&\text{ if }\gamma<2\\[3.0pt] -\lambda^{2}\log(\lambda)&\text{ if }\gamma=2\\[3.0pt] \lambda^{2}&\text{ if }\gamma>2\end{array}\right.

Since λ\lambda is small, we conclude that W(λ)W(\lambda) is at least of the same order as the terms containing λ2\lambda^{2}, and hence

1MeλydFZ(y) 1+λE0+3W(λ).\int_{1}^{M}e^{\lambda y}dF_{Z}(y)\;\leq\;1+\lambda E_{0}+3W(\lambda).

Treating the integral MxeλydFZ(y)\int_{M}^{x}e^{\lambda y}dF_{Z}(y) as in the case γ1\gamma\leq 1 we finally obtain

(SmmE0x)\displaystyle\mathbb{P}(S_{m}-mE_{0}\geq x) mVxγ+eλxλmE0(1+λE0+3W(λ)+C2λγ+C3eλxxγ)m\displaystyle\leq\;mVx^{-\gamma}+e^{-\lambda x-\lambda mE_{0}}\left(1+\lambda E_{0}+3W(\lambda)+C_{2}\lambda^{\gamma}+C_{3}e^{\lambda x}x^{-\gamma}\right)^{m}
mVxγ+exp(λx+mC4W(λ)+mC3eλxxγ).\displaystyle\leq\;mVx^{-\gamma}+\exp\left(-\lambda x+mC_{4}W(\lambda)+mC_{3}e^{\lambda x}x^{-\gamma}\right).

Now, since we are interested in the probability (SmLm)\mathbb{P}(S_{m}\geq Lm) for LL larger than some 𝔏\mathfrak{L}, which we may take larger than 2E02E_{0}, we have

(SmLm)(SmmE0Lm/2),\mathbb{P}(S_{m}\geq Lm)\,\leq\,\mathbb{P}(S_{m}-mE_{0}\geq Lm/2),

and hence we can take x=Lm/2x=Lm/2. For γ<2\gamma<2 we choose λ=1xlog(xγm)\lambda=\frac{1}{x}\log(\frac{x^{\gamma}}{m}) as before (which is small) for which mC3eλxxγ=C3mC_{3}e^{\lambda x}x^{-\gamma}=C_{3} and hence

(SmmE0x) 2γVLγm1γ+exp(log(Lγmγ1/2γ)+3C5mλγ+C3),\mathbb{P}(S_{m}-mE_{0}\geq x)\;\leq\;2^{\gamma}VL^{-\gamma}m^{1-\gamma}+\exp\left(-\log(L^{\gamma}m^{\gamma-1}/2^{\gamma})+3C_{5}m\lambda^{\gamma}+C_{3}\right),

but mλγ=2γlogγ(Lγmγ12γ)Lγmγ11m\lambda^{\gamma}=\frac{2^{\gamma}\log^{\gamma}(L^{\gamma}m^{\gamma-1}2^{-\gamma})}{L^{\gamma}m^{\gamma-1}}\leq 1 for Lγmγ1L^{\gamma}m^{\gamma-1} large enough, and hence

(SmmE0x)Lγm1γ.\mathbb{P}(S_{m}-mE_{0}\geq x)\;\leq\;\mathfrak{C}L^{-\gamma}m^{1-\gamma}.

Suppose now that γ2\gamma\geq 2 and choose λ=γxlog(xm)\lambda=\frac{\gamma}{x}\log(\frac{x}{\sqrt{m}}) for which we have mC3eλxxγ=C3m1γ2C3mC_{3}e^{\lambda x}x^{-\gamma}=C_{3}m^{1-\frac{\gamma}{2}}\leq C_{3}, giving

(SmmE0x) 2γVLγm1γ+exp(log(Lγmγ2/2γ)+C6mW(λ)+C3).\mathbb{P}(S_{m}-mE_{0}\geq x)\;\leq\;2^{\gamma}VL^{-\gamma}m^{1-\gamma}+\exp\left(-\log(L^{\gamma}m^{\frac{\gamma}{2}}/2^{\gamma})+C_{6}mW(\lambda)+C_{3}\right).

Now, if γ=2\gamma=2, then W(λ)=λ2log(1/λ)W(\lambda)=\lambda^{2}\log(1/\lambda) so for some constant C7C_{7}

mW(λ)=16L2mlog2(Lm2)log(Lm4log(Lm/2))C7L2mlog3(L2m)1mW(\lambda)=\frac{16}{L^{2}m}\log^{2}(\tfrac{L\sqrt{m}}{2})\log\left(\frac{Lm}{4\log(L\sqrt{m}/2)}\right)\leq\frac{C_{7}}{L^{2}m}\log^{3}(L^{2}m)\leq 1

for large L2mL^{2}m, while if γ>2\gamma>2, W(λ)=λ2W(\lambda)=\lambda^{2}, and so

mW(λ)=4γ2L2mlog2(Lm2)1mW(\lambda)=\frac{4\gamma^{2}}{L^{2}m}\log^{2}(\tfrac{L\sqrt{m}}{2})\leq 1

for large L2mL^{2}m. In any case scenario, we obtain

\mathbb{P}(S_{m}-mE_{0}\geq x)\;\leq\;2^{\gamma}VL^{-\gamma}m^{1-\gamma}+\mathfrak{C}L^{-\gamma}m^{-\frac{\gamma}{2}},

but for γ2\gamma\geq 2 we have γ2γ1\frac{\gamma}{2}\leq\gamma-1 and hence the second term dominates the first, giving the result. ∎

References

  • [1] M.A. Abdullah, M. Bode and N. Fountoulakis “Typical distances in a geometric model for complex networks” In Internet Mathematics 1, 2017
  • [2] I. Akyildiz, W. Su, Y. Sankarasubramaniam and E. Cayirci “Wireless sensor networks: a survey” In Computer Networks 38, 2002, pp. 393–422
  • [3] N. Alon and J.H. Spencer “The probabilistic method, Third edition” John Wiley & Sons, 2008
  • [4] S. Athreya, A. Drewitz and R. Sun “Random Walk Among Mobile/Immobile Traps: A Short Review” In Sojourns in Probability Theory and Statistical Physics – III (V. Sidoravicius, ed.), Springer Proceedings in Mathematics & Statistics, vol. 300, Springer, Singapore, 2018
  • [5] A-L. Barabási “Linked: The New Science of Networks” Perseus Publishing, 2002
  • [6] A.M. Berezhovskii, Yu.A. Makhovskii and R.A. Suris “Wiener sausage volume moments” In Journal of Statistical Physics 57, 1989, pp. 333–346
  • [7] J. van den Berg, R. Meester and D.G. White “Dynamic Boolean models” In Stochastic Processes and their Applications 69, 1997, pp. 247–257
  • [8] M. Bode, N. Fountoulakis and T. Müller “On the Giant Component of random hyperbolic graphs” In Proceedings of the 7th European Conference on Combinatorics, Graph Theory and Applications, EUROCOMB’13, 2013, pp. 425–429
  • [9] M. Bode, N. Fountoulakis and T. Müller “The probability of connectivity in a hyperbolic model of complex networks” In Random Structures & Algorithms 49.1, 2016, pp. 65–94
  • [10] M. Boguñá, F. Papadopoulos and D. Krioukov “Sustaining the internet with hyperbolic mapping” In Nature Communications 1, 2010, 62
  • [11] A.N. Borodin and P. Salminen “Handbook of Brownian Motion - Facts and Formulae” Birkhäuser, Basel, 2002
  • [12] E. Candellero and N. Fountoulakis “Clustering and the hyperbolic geometry of complex networks” In Internet Mathematics 12.1–2, 2016, pp. 2–53
  • [13] J. Díaz, D. Mitsche and X. Pérez-Giménez “Large Connectivity for Dynamic Random Geometric Graphs” In IEEE Transactions on Mobile Computing 8.6, 2009, pp. 821–835
  • [14] R. Diel and D. Mitsche “On the largest component of subcritical random hyperbolic graphs” In Electronic Communications of Probability 26, 2021, pp. 1–14
  • [15] D. Dufresne “The Distribution of a Perpetuity, with Applications to Risk Theory and Pension Funding” In Scandinavian Actuarial Journal, 1990, pp. 39–79
  • [16] R. Feng, P. Jiang and H. Volkmer “Geometric Brownian motion with affine drift and its time-integral” In Applied Mathematics and Computation 395, 2020
  • [17] N. Fountoulakis and T. Müller “Law of large numbers for the largest component in a hyperbolic model of complex networks” In Annals of Applied Probability 28, 2018, pp. 607–650
  • [18] N. Fountoulakis, P. van der Hoorn, T. Müller and M. Schepers “Clustering in a hyperbolic model of complex networks” In Electronic Journal of Probability 26, 2021, pp. 1–132
  • [19] T. Friedrich and A. Krohmer “On the Diameter of Hyperbolic Random Graphs” In Automata, Languages, and Programming - 42nd International Colloquium – ICALP Part II 9135, LNCS Springer, 2015, pp. 614–625
  • [20] R.D. Gordon “Values of Mills’ ratio of area to bounding ordinate and of the normal probability integral for large values of the argument” In Ann. Math. Statist. 12, 1941, pp. 364–366
  • [21] L. Gugelmann, K. Panagiotou and U. Peter “Random Hyperbolic Graphs: Degree Sequence and Clustering” In Automata, Languages, and Programming - 39th International Colloquium – ICALP Part II 7392, LNCS Springer, 2012, pp. 573–585
  • [22] G. Kesidis, T. Konstantopoulos and S. Phoha “Surveillance coverage of sensor networks under a random mobility strategy” In Proceedings of the 2nd IEEE International Conference on Sensors, 2003
  • [23] M. Kiwi and D. Mitsche “A Bound for the Diameter of Random Hyperbolic Graphs” In Proceedings of the 12th Workshop on Analytic Algorithmics and Combinatorics – ANALCO SIAM, 2015, pp. 26–39
  • [24] M. Kiwi and D. Mitsche “On the second largest component of random hyperbolic graphs” In SIAM J. Discrete Math. 33.4, 2019, pp. 2200–2217
  • [25] M. Kiwi and D. Mitsche “Spectral Gap of Random Hyperbolic Graphs and Related Parameters” In Annals of Applied Probability 28.2, 2018, pp. 941–989
  • [26] C. Koch and J. Lengler “Bootstrap percolation on geometric inhomogeneous random graphs” In Automata, Languages, and Programming - 43rd International Colloquium – ICALP 55, LIPIcs, 2016, pp. 1–15
  • [27] T. Konstantopoulos “Response to Prof. Baccelli’s lecture on modelling of wireless communication networks by stochastic geometry” In Computer Journal Advance Access, 2009
  • [28] D. Krioukov et al. “Hyperbolic geometry of complex networks” In Physical Review E 82, 2010, pp. 036106
  • [29] A. Linker, D. Mitsche, B. Schapira and D. Valesin “The contact process on random hyperbolic graphs: metastability and critical exponents” In Annals of Probability 49.3, 2021, pp. 1480–1514
  • [30] B. Liu et al. “Mobility improves coverage of sensor networks” In Proceedings of the 6th ACM International Conference on Mobile Computing and Networking (MobiCom), 2005
  • [31] H. Matsumoto and M. Yor “Exponential functionals of Brownian motion, I: Probability laws at fixed time” In Probability Surveys 2, 2005, pp. 312–347
  • [32] M. Mitzenmacher and E. Upfal “Probability and Computing: Randomization and Probabilistic Techniques in Algorithms and Data Analysis” Cambridge University Press, 2017
  • [33] M. Moreau, G. Oshanin, O. Bénichou and M. Coppey “Lattice theory of trapping reactions with mobile species” In Phys. Rev. E 69, 2004
  • [34] T. Müller and M. Staps “The diameter of KPKVB random graphs” In Advances in Applied Probability 51.2, 2019, pp. 358–377
  • [35] O. Omelchenko and A. Bulatov “Concentration inequalities for sums of random variables, each having power bounded tails”, 2019 arXiv:1903.02529
  • [36] Y. Peres, P. Sousi and A. Stauffer “The isolation time of Poisson Brownian motions” In ALEA, Lat. Am. J. Probab. Math. Stat. 10.2, 2013, pp. 813–829
  • [37] Y. Peres, A. Sinclair, P. Sousi and A. Stauffer “Mobile Geometric Graphs: Detection, Coverage and Percolation” In Probability Theory and Related Fields 156, 2013, pp. 273–305
  • [38] F. Spitzer “Electrostatic capacity, heat flow, and Brownian motion” In Z. Wahrscheinlichkeitstheorie verw. Geb. 3, 1964, pp. 110–121
  • [39] A. Stauffer “Space–time percolation and detection by mobile nodes” In Ann. Appl. Probab. 25.5, 2015, pp. 2416–2461
  • [40] D. Stoyan, W.S. Kendall and J. Mecke “Stochastic Geometry and its Applications” John Wiley & Sons, 2nd edition, 1995
  • [41] M. Yor “On Some Exponential Functionals of Brownian Motion” In Advances in Applied Probability 24, 2011, pp. 23–48