Tail bounds for detection times in mobile hyperbolic graphs
Abstract
Motivated by Krioukov et al.’s model of random hyperbolic graphs [28] for real-world networks, and inspired by the analysis of a dynamic model of graphs in Euclidean space by Peres et al. [37], we introduce a dynamic model of hyperbolic graphs in which vertices are allowed to move according to a Brownian motion that keeps the distribution of vertices in hyperbolic space invariant. For different parameters of the speed of angular and radial motion, we analyze tail bounds for detection times of a fixed target and obtain a complete picture, for very different regimes, of how and when the target is detected: as a function of the time passed, we characterize the subset of the hyperbolic space where particles typically detecting the target are initially located. Our analysis shows that our dynamic model exhibits a phase transition as a function of the relation between angular and radial speed.
We overcome several substantial technical difficulties not present in Euclidean space, and provide a complete picture on tail bounds. On the way, moreover, we obtain results for a class of one-dimensional continuous processes with drift and reflecting barrier, concerning the time they spend within a certain interval. We also derive improved bounds for the tail of independent sums of Pareto random variables.
1 Introduction
Random Geometric Graphs (RGGs) are a family of spatial networks that have been intensively studied as models of communication networks, in particular sensor networks, see for example [2]. In this model, an almost surely finite number of vertices is distributed in a metric space according to some probability distribution, and two vertices are joined by an edge if the distance between them is at most a given parameter called the radius. Typically, the metric space is the -dimensional unit cube or torus (often with ) and points are chosen according to a Poisson point process of given intensity. While simple, RGGs do capture relevant characteristics of some real-world networks, for instance, a non-negligible clustering coefficient. However, RGGs fail to exhibit other important features such as scale-freeness and non-homogeneous vertex degree distribution, both of which are staple features of a large class of networks loosely referred to as “social networks”, meant to encompass networks such as the Internet, citation networks, friendship relations among individuals, etc.
A network model that naturally exhibits clustering and scale-freeness is the Random Hyperbolic Graph (RHG) model introduced by Krioukov et al. [28], where vertices of the network are points in a bounded region of the hyperbolic plane, and two vertices are connected if their hyperbolic distance is small. In [10], a surprisingly good maximum likelihood fit of the hyperbolic model was shown for the embedding of the network corresponding to the autonomous systems of the Internet, drawing much attention, interest, and follow-up work on the model (see the section on related work below).
It has been recognized that in many applications of geometric network models the entities represented by vertices are not fixed in space but mobile. One way in which this has been addressed is to assume that the vertices of the network move according to independent Brownian motions, giving rise to what Peres et al. [37] call the mobile geometric graph model. Mobility, and more generally dynamic models, are even more relevant in the context of social networks. Thus, it is natural to adapt the mobile geometric graph setting to the hyperbolic graph context and assume that the vertices of the latter graphs again move according to independent Brownian motions, but in hyperbolic space. This gives rise, paraphrasing Peres et al., to the mobile hyperbolic graph model. We initiate the study of this new model by focusing on the fundamental problem of detection, that is, the time until a fixed (non-mobile) added target vertex becomes non-isolated in the (evolving) hyperbolic graph. We will do this, but in fact do much more. In order to discuss our contributions in detail, we first need to precisely describe the model we introduce and formalize the main problem we address in our study.
1.1 The mobile hyperbolic graph model
We first introduce the model of Krioukov et al. [28] in its Poissonized version (see also [21] for the same description in the so called uniform model): for each , consider a Poisson point process on the hyperbolic disk of radius for some positive constant ( denotes here and throughout the paper the natural logarithm). The intensity function at polar coordinates for and is equal to , where is given by
In other words, the angle and radius are chosen independently; the former uniformly at random in and the latter with density proportional to .
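As an illustration of the sampling description above (not part of the formal development), the following sketch draws one vertex under the standard parametrization of the Krioukov et al. model, in which the radial density on the interval from 0 to the disk radius R is proportional to sinh(α r); the closed-form inverse of the radial CDF makes inverse-transform sampling immediate. The concrete values of α and R used below are placeholders.

```python
import math
import random

def sample_vertex(alpha, R, rng=random):
    """Sample one vertex in polar coordinates (theta, r): the angle is
    uniform on [0, 2*pi) and the radius has density proportional to
    sinh(alpha * r) on [0, R], sampled by inverting the CDF
    (cosh(alpha*r) - 1) / (cosh(alpha*R) - 1) in closed form."""
    theta = rng.uniform(0.0, 2.0 * math.pi)
    u = rng.random()
    r = math.acosh(1.0 + u * (math.cosh(alpha * R) - 1.0)) / alpha
    return theta, r
```

Note that, since sinh(α r) grows exponentially, the sampled radii concentrate within distance of order 1/α from the boundary, consistent with the radial drift discussed later.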
Next, identify the points of the Poisson process with vertices and define the following graph where . For , , with polar coordinates and respectively, there is an edge in with endpoints and provided the hyperbolic distance between and is such that , where is obtained by solving
(1) |
In particular, note that .
Henceforth, we denote the point whose radius is by and refer to it as the origin. For a point and we let denote the ball centered at of radius , that is, the set of all points at hyperbolic distance less than from . Also, we henceforth denote the boundary of by .
To define a dynamic version of Krioukov et al.’s model we consider an initial configuration of the Poissonized model in and then associate to each a particle that evolves independently of other particles following a trajectory given in radial coordinates by at time . At a microscopic level a natural choice for the movement of a particle is that of a random walk in with acting as a reflecting boundary, and where at each step the particle can move either in the radial or angular direction. Assuming that the movement does not depend on the angle and that there is no angular drift, we conclude that at a macroscopic level particles should move according to a generator of the form
where the drift term in the radial component is chosen so that remains the stationary distribution of the process (this can be checked using the Fokker-Planck equation). The function is unrestricted and relates to the displacement given by the angular movement at a given position . In this sense a natural choice is to take proportional to which follows by taking the displacement proportional to the hyperbolic perimeter at that radius. An alternative, however, is to take proportional to corresponding to the (re-scaled) Brownian motion in hyperbolic space. In order to capture both settings, we introduce an additional parameter and work throughout this paper with the following generalized generator:
(2) |
where is a new parameter related to the velocity of the angular movement which can alternatively be understood as a deformation of the underlying space. We shall see that thus enhancing our model yields a broader range of behavior and exhibits phase transition phenomena (for detection times this is explained in detail in our next section where we summarize our main results).
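To make the radial dynamics concrete, the following Euler–Maruyama sketch simulates a one-dimensional diffusion with a reflecting barrier, of the kind analyzed later. We stress that the specific drift (α/2)·coth(α r) is a working assumption introduced here for illustration: it is the drift that makes a density proportional to sinh(α r) stationary for a unit-variance diffusion, consistent with the stationarity requirement discussed above; the paper's actual generator also carries an angular component and additional parameters.

```python
import math
import random

def simulate_radial(alpha, R, r0, dt, steps, rng):
    """Euler-Maruyama sketch of a radial diffusion with a reflecting
    barrier at r = R.  The drift (alpha/2)*coth(alpha*r) is chosen so
    that the density proportional to sinh(alpha*r) is stationary for a
    unit-variance diffusion (our assumption, for illustration only)."""
    r = r0
    path = []
    for _ in range(steps):
        drift = 0.5 * alpha / math.tanh(alpha * r)
        r += drift * dt + math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if r > R:              # reflect at the boundary radius R
            r = 2.0 * R - r
        r = max(r, 1e-9)       # stay clear of the coordinate singularity at 0
        path.append(r)
    return path
```

In simulation, the trajectory indeed spends most of its time within distance of order 1/α of the reflecting barrier, matching the intuition that particles concentrate near the boundary.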
Fix now an initial configuration of particles located at points in . We denote by the law of a particle initially placed at a given point . We have one more fixed target , located at the boundary of (that is, ), and at angular coordinate . For any , let be the set of points that have detected the target by time : that is, for each there exists a time instant , so that . Note that if , then might be in the interior of , whereas if , the first instant at which detects is when is at the boundary of , that is . This instant is also called the hitting time of by particle . The detection time of , denoted by , is then defined as the minimum hitting time of among all initially placed particles. We are particularly interested in the tail probability for different values of (note that is a function on ). In words, we are interested in the tail probability that no particle initially located at position detects the target by time .
Observe that any given particle evolving according to the generator specified in (2) will eventually detect the target at some point, so that when as soon as there is at least one particle. Following what was done in [37] for a similar model on Euclidean space, our main result determines the speed at which tends to zero as a function of . We consider the same setting as [37], that is , but we have to deal with several new issues, both qualitatively and quantitatively different, that arise due to the dependency of on .
1.2 Main results
In this section, we present the main results we obtain regarding tail probabilities for detection time, both for the mobile hyperbolic graph model we introduced in the previous section and for two restricted instances: one where the radial coordinate of particles does not change over time and another where the angular coordinate does not change. We also discuss the relation between the main results and delve into the insights they provide concerning the dominant mechanism (either angular movement, radial movement or a combination of both) that explain the different asymptotic regimes, depending on the relation between and . We point out that we do not present in this section a complete list of other significant results, in particular the ones that provide a detailed idea of the initial location of particles that typically detect the target. These last results will be discussed at the start of Sections 3, 4, and 5 where, in fact, we prove slightly stronger results than the ones stated in this section. Neither do we delve here into those results concerning one-dimensional processes with non-constant drift and a reflecting barrier that might be useful in other settings and thus of independent interest (these results are found in Section 4).
We begin with the statement describing the (precise) behavior of the detection time tail probability depending on how the model parameters and relate to each other. (We use the standard Bachmann–Landau asymptotic (in ) notation of , , , , , with all terms inside asymptotic expressions being positive.)
Theorem 1.
Let , , , and assume that particles move according to the generator in (2). Then, the following hold:
-
(i)
For , if , then
-
(ii)
For and the tail exponent depends on the relation between and as follows:
-
(a)
For , if , then .
-
(b)
For , if , then .
-
(c)
For , if , then
-
(iii)
For , if , then .
Remark 2.
Since we are working with the Poissonized model, the probability of not having any particle to begin with is of order , and on this event the detection time is infinite. The reader may check that replacing the upper bound of in each of the cases of the previous theorem gives a probability of this order, which explains the asymptotic upper bounds on .
Observe that by Theorem 1, for in the case , and for in the case , we recover a tail exponent of order , showing that the expected detection time occurs at values of of said order. It follows that at there is a phase transition in the qualitative behavior of the model: for , since , detection becomes asymptotically “immediate”, whereas for the target can remain undetected for an amount of time of order . To explain this change in behavior, notice that even though for any the value of is minuscule near the boundary of (where particles spend most of the time due to the radial drift), decreasing does increase dramatically, allowing a number of particles tending to infinity to detect the target immediately. Since for small values of all but a few particles remain “radially still” near the boundary, we deduce that there must be some purely angular movement responsible for the detection of the target, to which we can associate the tail exponent seen in (i) of Theorem 1. The same tail exponent appears in Case (iic) of Theorem 1 for large when is again sufficiently small (smaller than ), where once more the purely angular movement is responsible for the detection of the target. To make the understanding of the observed distinct behaviors even more explicit, we study two simplified models where particles are restricted to evolve solely through their angular or radial movement, respectively.
Theorem 3 (angular movement).
Let , , , and assume that particles move according to the generator
If , then
Observe that the tail exponent appearing in the case of the previous theorem is the same as the one appearing in Case (iic) (where ) and Case (i) (where ) of Theorem 1: this is no coincidence. Our proof shows that a purely angular movement is responsible for detection in those cases. Note also that the other exponents contemplated in Theorem 1 are not present in Theorem 3.
We consider next the result of our analysis of the second simplified model where particles move only radially:
Theorem 4 (radial movement).
Let , , , and assume that particles move according to the generator
For every , there is a such that if and , then
Once more observe that the tail exponent in the case of Theorem 1 is the same as the one in Theorem 4, and once more this is not a coincidence: our proof shows that when the detection of the target is the result of the radial motion of particles alone. In contrast, Cases (iia) and (iib) in Theorem 1, when yield new tail exponents not observed in either Theorem 3 or Theorem 4, and which are larger than those given in said results. Since a larger tail exponent means a larger probability of an early detection, in this case neither the angular nor the radial component of the movement dominates the other; rather, they work together to detect the target more quickly: in Case (iia), the proof reveals that the radial movement is responsible for pulling particles sufficiently far away from the boundary to a radial value where becomes sufficiently large (although still small) so that, with constant probability, at least one particle has a chance of detecting at this radial value through its angular motion. Finally, for Case (iib) in Theorem 1 we see a tail exponent of the form (as expected when taking in Case (iia) or in Case (iic)) but accompanied by a logarithmic correction. This correction becomes clear upon inspecting the proof: it results from the balance between the effect of the radial drift and the contribution of the angular movement of particles at distinct radii, whose effect is that the contributions to the total angular movement coming from the time spent by a particle within a narrow band centered at a given radius are all about the same; these add up and give the additional logarithmic factor.
Our main theorem, Theorem 1, provides insight into how unlikely it is for the target to remain undetected until time , and as discussed before, the proof strategy as well as our remarks provide additional information about the dominant detection mechanisms (that is, either by angular movement only, by radial movement only, or by a combination of the two). Moreover, through our techniques we can obtain more precise information showing, in each case, where particles must be initially located in order to detect before time with probability bounded away from zero. Specifically, given a parameter (independent of ) we can construct a parametric family of sets depending on and the parameters of the model, such that for every and sufficiently large , the probability is at least a constant that depends on the parameter . Even further, we will show that is of the same order as the tail exponents, which implies that asymptotically the best chance for to remain undetected by time is to find these sets initially empty of points. To be precise, we show the following meta-result:
Theorem 5.
Let , and . For every case considered in Theorems 1, 3 and 4 with the corresponding hypotheses, and under the additional assumption of in the case of Theorem 1, there is an explicit parametric family of sets which is increasing in , and which satisfies:
-
(i)
(Uniform lower bound) For every sufficiently large and sufficiently large,
is bounded from below by a positive expression that depends only on and the parameters of the model.
-
(ii)
(Uniform upper bound) For every sufficiently large and sufficiently large
is bounded from above by a positive expression that depends only on and the parameters of the model, and that tends to as .
-
(iii)
(Integral bound) Fix for some sufficiently large constant . For sufficiently large, we have
The added value of this last theorem is that it reveals that the set can roughly be interpreted as the region where particles typically detecting the target are initially located. Moreover, each of Theorems 3, 4, and 1 follows from the corresponding instance of Theorem 5 (regarding angular, radial, and unrestricted movement, respectively), revealing the unified approach that we take throughout the paper, as explained in detail in Subsection 1.3.
Establishing the uniform lower bounds for the different cases of Theorem 5 requires, especially in Case (iib) of Theorem 1, a delicate coupling with a discretized auxiliary process. For the integral upper bounds a careful analysis has to be performed: we thereby need to calculate hitting times of Brownian motions in the hyperbolic plane with a reflecting barrier where we make use of results on tails of independent sums of Pareto random variables (that differ greatly according to the exponent).
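The role played by the exponent of the Pareto variables just mentioned can be illustrated by a quick Monte Carlo experiment (a toy illustration only, not the bounds proved in this paper): for tail exponent below one, a sum of independent Pareto variables is typically dominated by its single largest term, whereas for larger exponents no single term dominates. The parameters below are placeholders.

```python
import random

def pareto_sample(a, rng):
    """Pareto(a) on [1, inf): P(X > x) = x^(-a) for x >= 1, via inverse CDF.
    Uses 1 - rng.random() so the base is always strictly positive."""
    return (1.0 - rng.random()) ** (-1.0 / a)

def mean_max_share(a, n, trials, rng):
    """Average fraction of X_1 + ... + X_n contributed by the largest X_i:
    close to 1 for very heavy tails, small for lighter tails."""
    total = 0.0
    for _ in range(trials):
        xs = [pareto_sample(a, rng) for _ in range(n)]
        total += max(xs) / sum(xs)
    return total / trials
```

This single-big-jump versus many-small-contributions dichotomy is the qualitative reason why tail bounds for such sums "differ greatly according to the exponent".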
Finally, let us point out that, from a technical point of view, the additional difficulties we encounter compared to Euclidean space are substantial: in [37] the authors address the Euclidean analogue of the problem studied in this paper by relating the detection probability to the expected volume of the -dimensional Wiener sausage (that is, the neighborhood of the trace of Brownian motion up to a certain time – see also the related work section below). To the best of our knowledge, however, the volume of the Wiener sausage of Brownian motion with a reflecting boundary in hyperbolic space is not known. Moreover, our proof techniques give significantly more information: we are able to characterize the subregions of the hyperbolic space such that particles initially located therein are likely to detect the target up to a certain time, whereas particles located elsewhere are not – with the volume of the Wiener sausage alone we would not be able to do so. The radial drift towards the boundary of entails many additional technical difficulties that do not arise in Euclidean space (see also the related work section below for work done there). In addition, our setup contains a reflecting barrier at the boundary, adding further difficulty to the analysis.
1.3 Structure and strategy of proof
Recall that in our mobile hyperbolic graph model we consider an initial configuration of the Poissonized model in and then associate to each a particle that evolves independently of other particles according to the generator with a reflecting barrier at . Denote by the Poisson point process obtained from by retaining only particles having detected the target by time . Since points move independently of each other, each particle initially placed at belongs to independently with probability and hence is a thinned Poisson point process on with intensity measure
Noticing that the event is equivalent to , we have
(3) |
and hence, in order to obtain lower and upper bounds on , we will determine how this integral depends on . In fact, we can say a bit more about how we go about determining the value of said integral. Specifically, we partition the range of integration into and , and observe that the three parts of Theorem 5 together immediately imply that
thus reducing, via (3), the computation of tail bounds for detection times to fixing the parameter equal to a constant and determining the expected number of particles initially in , that is, computing . So, the structure of the proof of our main results is the following: we determine a suitable candidate set , compute , and establish an adequate version of Theorem 5 for the specific type of movement of particles considered (angular, radial or a combination of both).
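The thinning identity underlying (3) can be checked numerically in a toy one-dimensional setting (all functions and parameters below are illustrative placeholders, not the paper's actual intensities): if each point of a Poisson process with intensity λ(x) is retained independently with probability p(x), the number of retained points is Poisson with mean the integral of λ·p, so the probability that no point is retained is the exponential of minus that integral.

```python
import math
import random

def poisson_sample(mean, rng):
    """Knuth's method; adequate for the moderate means used here."""
    limit = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def prob_no_detection(intensity, p_detect, a, b, n_grid=10000):
    """Void probability exp(-integral of intensity(x)*p_detect(x) dx) of the
    thinned process on [a, b], via a midpoint Riemann sum."""
    h = (b - a) / n_grid
    total = sum(intensity(a + (i + 0.5) * h) * p_detect(a + (i + 0.5) * h)
                for i in range(n_grid)) * h
    return math.exp(-total)

def no_point_detects(intensity_max, intensity, p_detect, a, b, rng):
    """One sample: generate the Poisson process on [a, b] by thinning a
    homogeneous process of rate intensity_max, thin again by p_detect,
    and report whether no point survives both thinnings."""
    n = poisson_sample(intensity_max * (b - a), rng)
    for _ in range(n):
        x = rng.uniform(a, b)
        if rng.random() < intensity(x) / intensity_max and rng.random() < p_detect(x):
            return False
    return True
```

In the paper's setting the same identity is applied with the hyperbolic intensity measure and with p equal to the probability that a particle started at a given point detects the target by time t.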
1.4 Related work
After the introduction of random hyperbolic graphs by Krioukov et al. [28], the model was first analyzed in a mathematically rigorous way by Gugelmann et al. [21]: they proved that for the case the distribution of the degrees follows a power law, they showed that the clustering coefficient is bounded away from , and they also obtained the average degree and maximum degree in such networks. The restriction guarantees that the resulting graph has bounded average degree (depending on and only): if , then the degree sequence is so heavy-tailed that this is impossible (the graph is with high probability connected in this case, as shown in [9]). The power-law exponent of the degree distribution of random hyperbolic graphs equals (see [21, Theorem 2.2]) which, for , belongs to the interval , and this is the interval into which the best-fit power-law exponent of social networks typically falls (see [5, p. 69]). If , then as the number of vertices grows, the largest component of a random hyperbolic graph has sublinear size (more precisely, its order is , see [8, Theorem 1.4] and [14]). On the other hand, it is known that for , with high probability a hyperbolic random graph of expected order has a connected component whose order is also linear in [8, Theorem 1.4], and the second largest component has size [24], which justifies referring to the linear-size component as the giant component. More precise results, including a law of large numbers for the largest component in these networks, were established in [17]. Further results on the static version of this model include results on the diameter [23, 19, 34], on the spectral gap [25], on typical distances [1], on the clustering coefficient [12, 18], on bootstrap percolation [26] and on the contact process [29].
No dynamic model of random hyperbolic graphs has been proposed so far. In the Boolean model (which is similar to random geometric graphs, but where the underlying metric space is and there is almost surely an infinite number of vertices), the same question of the tail probability of the detection time, as well as the questions of coverage time (the time to detect a certain region of ), percolation time (the time a certain particle needs to join the infinite component) and broadcast time (the time it takes to broadcast a message to all particles), were studied by Peres et al. [37]: the authors therein show that for a Poisson process of intensity , and a target performing independent Brownian motion, as ,
where a target is detected if a particle is at Euclidean distance at most , for some arbitrary but fixed (in the three other mentioned contexts the interpretations are similar: we say that an interval is covered if each point of the interval has been at distance at most from some particle at some time instant before ; we say that a particle joins a component if it is at distance at most from another particle of the component, and we say that a message can be sent between two particles if they are at distance ). Note that the probability holds only as , and in this case the main-order term of the tail probability does not depend on , as shown in [37]. Stauffer [39] generalized this to the detection of a mobile target moving according to an arbitrary continuous function, and showed that for a Poisson process of sufficiently high intensity over (with the same detection setup) the target will eventually be detected almost surely, whereas for small enough , with positive probability the target can avoid detection forever. The somewhat inverse question of isolation of the target, that is, the time it takes for a (possibly) moving target (again for a Poisson process of intensity in ) until no other vertex is at distance anymore and the target becomes isolated, was then studied by Peres et al. [36]: it was shown therein that the best strategy for the target to stay isolated as long as possible is in fact to stay put (again with the same setup for being isolated).
Also for the Boolean model, the question of detection time was already addressed by Liu et al. [30] when each particle moves continuously in a fixed, randomly chosen direction: they showed that the time it takes for the set of particles to detect a target is exponentially distributed, with expectation depending on the intensity (where detection again means entering the sensing area of the target). For the case of a stationary target as discussed here, as observed by Kesidis et al. [22] and Konstantopoulos [27], the detection time can be deduced from classical results on continuum percolation: namely, in this case it follows from Stoyan et al. [40] that
where is the volume of the Wiener sausage of radius up to time (equivalent to being able to detect at distance at most ), which in the case of Euclidean space is known quite precisely [38, 6].
A study of dynamic random geometric graphs prior to the paper of Peres et al. was undertaken by Díaz et al. [13]: the authors therein consider a random geometric graph whose radius is close to the threshold of connectivity and analyze in this setup the lengths of time intervals of connectivity and disconnectivity. Even earlier, the model of Brownian motion in infinite random geometric graphs (the “dynamic Boolean model”) was studied by van den Berg et al. [7], who showed that for a Poisson intensity above the critical intensity for the existence of an infinite component, almost surely an infinite component exists at all times (here the radii of particles are not fixed to , but rather are i.i.d. random variables following a certain distribution). More generally, the question of detecting an immobile target (or escaping from a set of immobile traps) when performing random walks on a square lattice is well studied (here, in contrast to the previous papers, detecting means being at exactly the same position in the lattice); escaping from a set of mobile traps was recently analyzed as well. For both questions we refer to Athreya et al. [4] and the references therein.
1.5 Organization
Section 2 gathers facts and observations that will be used throughout the paper. Section 3 then analyzes the simplified model where the particles’ movement is restricted to angular movement only, at the same time illustrating in the simplest possible setting our general proof strategy. Section 4 then considers the model where the particles’ movement is restricted to radial movement only. In Section 5 we finally address the mixed case, that is, the combined case of both radial and angular movement of particles. Specifically, in Section 5.1, we look at quick detection mechanisms and, in Section 5.2, we show that the detection mechanisms found are asymptotically optimal in all cases, that is, no other strategy can detect the target asymptotically faster. In Section 6 we briefly comment on future work.
1.6 Global conventions
Throughout this paper, is a fixed constant such that (the reason to consider only this range is that only therein are hyperbolic random graphs deemed to be reasonable models of real-world networks – see the discussion in Section 1.4). Furthermore, in this article both and are strictly positive constants. In order to avoid tedious repetitions and lengthy statements, we will not reiterate these facts. Also, wherever appropriate, we will hide , , and within asymptotic notation. Moreover, throughout the paper, we let be a non-negative function depending on .
In order to avoid using ceilings and floors, we will always assume is an integer. Since all our claims hold asymptotically, doing so has no impact on the stated results. All asymptotic expressions are with respect to , and wherever it makes sense, inequalities should be understood as valid asymptotically.
2 Preliminaries
In this section we collect basic facts and a few useful observations together with a bit of additional notation. First, for a region of , that is, , let denote the expected number of points in . The first basic fact we state here gives the expected number of points at distance at most from the origin, as well as the expected number of points at distance at most from a given point :
Lemma 6 ([21, Lemma 3.2]).
If , then . Moreover, if is such that , then for ,
We also use the following Chernoff bounds for Poisson random variables:
Theorem 7 (Theorem A.1.15 of [3]).
Let have Poisson distribution with mean . Then, for every ,
and
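Numerically, one common form of such Poisson Chernoff bounds can be sanity-checked against the exact tail; we stress that the bound coded below, P(X ≥ x) ≤ e^{−μ}(eμ/x)^x for x > μ, is a standard form that may differ in its exact constants from the statement in [3, Theorem A.1.15].

```python
import math

def poisson_tail_upper(mu, x):
    """Chernoff-type bound P(X >= x) <= e^{-mu} * (e*mu/x)^x for X ~ Poisson(mu),
    valid for x > mu (one common form of the bound)."""
    return math.exp(-mu + x - x * math.log(x / mu))

def poisson_sf(mu, x):
    """Exact P(X >= x) for integer x >= 0, by summing the pmf up to x - 1."""
    pmf = math.exp(-mu)
    cdf = 0.0
    for k in range(x):
        cdf += pmf
        pmf *= mu / (k + 1)
    return 1.0 - cdf
```

For instance, with mean 10 the bound at x = 20 is roughly 0.02, while the exact upper tail there is below 0.01 — the bound is loose but of the right exponential order.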
Throughout this paper, we denote by the maximal angle between two points at (hyperbolic) distance at most and radial coordinates . By (1), for , we have
(4) |
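For concreteness, the maximal angle can be computed exactly from the hyperbolic law of cosines, cosh d = cosh r1 cosh r2 − sinh r1 sinh r2 cos θ (curvature −1), and compared against the well-known first-order approximation 2 e^{(R − r1 − r2)/2}; the numerical values in the check below are placeholders.

```python
import math

def max_angle(r1, r2, R):
    """Largest relative angle between two points at radii r1 and r2 whose
    hyperbolic distance (curvature -1) is at most R, by solving the
    hyperbolic law of cosines for the angle."""
    c = (math.cosh(r1) * math.cosh(r2) - math.cosh(R)) / (math.sinh(r1) * math.sinh(r2))
    if c <= -1.0:
        return math.pi  # the two points are within distance R at every angle
    return math.acos(min(c, 1.0))
```

When r1 + r2 exceeds R the exact value is within a few percent of 2 e^{(R − r1 − r2)/2}, and the maximal angle is decreasing in each radius, as Lemma 8 below records for the corresponding mapping.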
Henceforth, consider the mapping such that . For the sake of future reference, we next collect some simple as well as some known facts concerning .
Lemma 8.
The following hold:
-
(i)
.
-
(ii)
is differentiable (hence, also continuous) and decreasing.
-
(iii)
.
-
(iv)
is invertible and its inverse is differentiable (hence, continuous) and decreasing.
-
(v)
.
Proof.
The first part follows from (1) taking , , and by the hyperbolic tangent half angle formula. The second part follows because the derivative with respect to of exists and is negative except when (for details see [24, Remark 2.1]). The third part follows directly from the first part, and the fourth part follows immediately from the preceding two parts. The last part is a particular case of a more general result [21, Lemma 3.1]. ∎
To conclude this section, we introduce some notation that we will use throughout the article. For a subset of we let denote its complement with respect to , i.e., . Thus, denotes .
3 Angular movement
In this section we consider angular movement only. As will become evident, restricting the motion to angular movement alone simplifies the analysis considerably, compared to the general case, and in fact also compared to restricting movement radially. The simpler setting considered in this section is ideal for illustrating our general strategy for deriving tail bounds on detection times. It is also handy for introducing some notation and approximations we shall use throughout the rest of the paper. In later sections we will follow the same general strategy but encounter technically more challenging obstacles. In contrast, the calculations involved in the study of the angular-movement-only case can mostly be reduced to ones involving known facts about one-dimensional Brownian motion.
Following the proof strategy described in Section 1.3, we first define . Recall that, roughly speaking, is the set of starting points from which a particle has a “good” chance to detect the target by time , where “good” depends on a parameter (independent of ). Let be roughly proportional to the angular distance travelled by a particle at radial coordinate up to time ; more precisely, let (that is, corresponds to the standard deviation of a standard Brownian motion during an interval of time of length normalized by a term which is proportional to the perimeter of for bounded away from ). We can then define the following region (the shaded region in Figure 1):
In words, is the collection of points which are either contained in or form an angle at the origin of at most with a point that belongs to the boundary of and has radial coordinate exactly . Since if and only if , it is clear that is contained in . The value is a parameter that tells us at which scaling up to time we consider our region .
The main goal of this section is to prove the result stated next which is an instance of Theorem 5 for the case of angular movement only. Thus, Theorem 3 will immediately follow (by the proof strategy discussed in Section 1.3) once we show that is of the right order:
Theorem 9.
Denote by the standard normal distribution. If and , then
Furthermore,
Before proving the previous result, we make a few observations and introduce some definitions. First, note that the perimeter of that is outside has length (see Figure 1)
Fact 10.
The mapping is increasing and continuous for arguments in and takes values in .
In particular, for there is a unique value such that
(5) |
One may think of as chosen so that up to time a point at distance from the origin has a reasonable chance to detect the target due to its angular movement. Using Fact 10, we immediately obtain the following:
Fact 11.
For , the following holds: if and only if . In particular, is the largest ball centered at the origin contained in .
(See Figure 1 for an illustration of .)
Fact 12.
If , then . Moreover, if , then .
We will need the following claim, whose proof is simple although the calculations involved are a bit tedious, consisting mostly of computing integrals and a case analysis.
Lemma 13.
If and , then
Proof.
If , by (5) and Part (iii) of Lemma 8, we see that which always gives an expression of order on the right-hand side of the equality in the lemma’s statement. On the other hand, by Fact 11, we know that , so from Lemma 6 we get that , and hence the claim holds for said large values of .
Assume henceforth that and define . Clearly,
(6) |
Now, observe that is completely contained in half of the disk (see Figure 1), so , and thus by Lemma 6 and Fact 12, for ,
(7) |
Moreover, the identity also holds when , since and (by Lemma 6, definition of and the fact that ), and (by Fact 12, definition of and since ).
On the other hand, by Fact 11 and our choice of , we get
(8) |
The next claim together with (6), (7) and (8) yield the lemma:
To prove the claim, note that when , the last integral equals . If , by Fact 12 we have that , so by definition of we get that . If on the other hand , again by Fact 12, we have that , so analogously to the previous calculations we obtain . Plugging back establishes the claim when .
For , since , the claim follows because the integral therein equals .
We now have all the required ingredients to prove this section’s main result.
Proof.
(of Theorem 9) Throughout the ensuing discussion we let . We begin by showing the uniform upper and lower bounds on . To do so observe first that if , clearly for any and hence the uniform lower bound follows directly for said . Assume henceforth that and observe that since there is only angular movement, a particle initially located at detects if and only if it reaches or . Now, recall that the angular movement’s law is that of a variance Brownian motion with , so
(9) |
where we have used (with a slight abuse of notation) for the law of a standard Brownian motion, and where is its exit time from the interval where in this case and . This last probability is a well studied function of and (see [11], formula 3.0.2), which can be bounded using the reflection principle and the fact that , giving
(10) |
From our assumption we deduce that , and hence the argument within above is always negative. It follows that for the mapping is decreasing, and so both uniform bounds on follow from the definition of .
We next establish the integral bound. Let . From (9) and (10) we observe that
Applying the change of variables and bounding by in the upper limit of the integral we obtain
The last expression is the same as one encountered in the proof of Lemma 13. Substituting by the values obtained therein one gets a term which, by Lemma 13, is , thus completing the proof of the claimed integral upper bound. ∎
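The reflection-principle step used in the proof above can be checked numerically. The following sketch, which is purely illustrative and not part of the proof, estimates the one-sided crossing probability of a standard Brownian motion by Monte Carlo and compares it with the exact reflection-principle value, namely that the probability that the running maximum of a standard Brownian motion up to time t exceeds a equals twice the probability that its endpoint does. All parameter values below are our own choices.

```python
import math
import random

def mc_crossing_prob(a, t, n_paths=10000, n_steps=400, seed=0):
    """Monte Carlo estimate of P(max_{s<=t} B_s >= a) for standard
    Brownian motion, via a discretized (Euler) random walk; the
    discretization slightly underestimates the true crossing probability."""
    rng = random.Random(seed)
    sd = math.sqrt(t / n_steps)
    hits = 0
    for _ in range(n_paths):
        b = 0.0
        for _ in range(n_steps):
            b += rng.gauss(0.0, sd)
            if b >= a:
                hits += 1
                break
    return hits / n_paths

def reflection_value(a, t):
    """Exact value by the reflection principle:
    P(max_{s<=t} B_s >= a) = 2 * P(B_t >= a) = 2 * (1 - Phi(a / sqrt(t)))."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(a / math.sqrt(2.0 * t))))

est = mc_crossing_prob(1.0, 1.0)
exact = reflection_value(1.0, 1.0)  # about 0.317
```

The small systematic gap between `est` and `exact` comes from the time discretization missing some crossings between grid points, the same kind of approximation error the proof avoids by working with the continuous-time law directly.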
4 Radial movement
The basic structure of this section is similar to Section 3. However, we now consider radial movement only. We define the relevant set depending on a parameter independent of , then we compute and afterwards separately establish the stated upper and lower bounds. Since the radial movement contains a drift towards the boundary that makes the calculations more involved, we first need to prove basic results on such diffusion processes before actually defining . Let us thus start with the definition of the radial movement of a given particle, initially located at . Recall that a particle which at time is in position will stay at an angle while its radial distance from the origin evolves according to a diffusion process with a reflecting barrier at and generator
We are only concerned with the evolution of the process up until the detection time of the target, which occurs when the particle reaches , and since the particles can only move radially, for any we can also impose an absorbing barrier for at the radius corresponding to the point in with angle . Recall that by Part (iv) of Lemma 8, the function has an inverse which is also decreasing and continuous, so the absorbing barrier is given by . This means that for values of we choose as absorbing barrier the origin , that is, . Also recall that, whatever the value of , we have that always belongs to the boundary of . Since near the origin the drift towards the boundary grows to infinity, for a point such that we have (in other words, at these angles the only way to detect would be by reaching the origin, but since the drift there is this is impossible). For the case where the absorbing barrier is distinct from the origin we use the following result, which we state as a standalone lemma since it will be of use in later sections as well (by abuse of notation, since defined above depends only on the radial, one-dimensional, movement, in the following lemma we use the operator also to denote the corresponding operator acting on sufficiently smooth one-variable functions over the reals):
Lemma 14.
Let be a diffusion process on with generator , and with a reflecting barrier at , and let denote the law of with initial position . Define also , the hitting times of and respectively. We have:
-
(i)
For any and any ,
where and .
-
(ii)
If , then
In particular, if , then .
-
(iii)
If , then
-
(iv)
If , then .
Proof.
We begin with the proof of (i) by observing that for all positive values of and hence we can couple the trajectory of a particle with that of an auxiliary particle starting with the same initial position as , but whose radius evolves according to the diffusion with generator
in such a way that for all . It follows that the detection time of this auxiliary particle is smaller than that of , so in particular , and it suffices to prove the inequality for the auxiliary process. Now let be the solution of the following O.D.E.,
(11) |
on with boundary conditions and , which is equal to
where and are as in the statement of the lemma. It follows from Itô’s lemma that is a bounded martingale, and hence we can apply Doob’s optional stopping theorem to deduce , giving the result. To obtain the bound in (ii), we go back to the original process which evolves according to , and define as the solution of the O.D.E. on
(12) |
with boundary conditions and . We advance that the solution is smooth and bounded and deduce from Itô’s lemma that is a martingale, so applying Doob’s optional stopping theorem we deduce for every . Choosing any , a simple argument obtained by restarting the process at every units of time gives
and hence . We deduce then that and since is bounded, . Thus, , and it remains to solve the O.D.E. To do so, we multiply (12) by to obtain
Thus, integrating from to and using that we have
which in particular proves directly that is an increasing function, so that . Integrating from to , together with the condition gives
and hence the general bound appearing in (ii) follows by noticing that the first term is negative, and by bounding by . To obtain observe that if we assume then so the result follows from bounding .
To establish (iii) and (iv) it will be enough to work with a diffusion evolving on according to the original generator , but without any barriers. Abusing notation we still call the process . To deduce (iii), observe that the solution of the O.D.E.
with conditions and is given by the closed expression given in (iii). It follows from Itô’s lemma that is a bounded martingale, so applying Doob’s optional stopping theorem we deduce .
Finally, to prove (iv) define the function as the solution of the ordinary differential equation
with boundary conditions . It can be checked directly that the last equation is satisfied by
(13) |
which is smooth. It follows once again from Itô’s lemma that is a martingale. Since in the proof of (iii) we already showed that is a martingale, it follows that is also a martingale. We conclude that is a martingale, so applying Doob’s optional stopping theorem we deduce
for every . Reasoning as in the proof of (ii) we can take the limit as to obtain
where we used that and . Observing that , to obtain the inequality in (iv) we only need to bound . To do so notice that for any gives , which together with allows us to bound from above the first integral of (13) by , where is bounded from above by on . Using the same argument we can bound the term within the second integral of (13) by , so that
Using the fact that , the second integral becomes which we control by studying the function on . Notice first that any critical point of said function satisfies
so it will be sufficient to control the integral when either or . For the first case we have , and for the second one, by definition we have for any . Since is monotone increasing in , by the monotone convergence theorem, , so putting all these cases together we conclude . ∎
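The hitting-time quantities of Lemma 14 can be probed numerically. The paper's radial generator has a drift that depends on the radial coordinate; the sketch below is a simplified toy stand-in of our own, using a constant outward drift `mu` and unit variance, a reflecting barrier at 0 (implemented by taking absolute values), and absorption at level `R`, for which a closed-form expected hitting time is available by solving the analogous boundary-value O.D.E.

```python
import math
import random

def hit_time_sample(mu, R, dt=1e-3, rng=None):
    """One Euler-Maruyama sample of the hitting time of level R for
    dX = mu dt + dB with reflection at 0 (implemented as X -> |X|),
    started at X_0 = 0."""
    rng = rng or random.Random(0)
    x, t, sd = 0.0, 0.0, math.sqrt(dt)
    while x < R:
        x = abs(x + mu * dt + rng.gauss(0.0, sd))
        t += dt
    return t

def mean_hit_time(mu, R, n_samples=200, seed=1):
    """Average hitting time over independent sample paths."""
    rng = random.Random(seed)
    return sum(hit_time_sample(mu, R, rng=rng) for _ in range(n_samples)) / n_samples

def exact_mean(mu, R):
    """Closed form for the expected hitting time of R from 0 for reflected
    Brownian motion with constant drift mu, obtained by solving
    (1/2) u'' + mu u' = -1 with u'(0) = 0 (reflection) and u(R) = 0."""
    return R / mu - (1.0 - math.exp(-2.0 * mu * R)) / (2.0 * mu * mu)
```

For strong drift the mean is close to the ballistic value R/mu, which mirrors the regime discussed after Lemma 14 where the drift dominates the diffusive fluctuations.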
Before we define , let
Intuitively, one may think of as those points that are close enough in angle to the target that, even though possibly initially at the boundary of , they have a reasonable chance to detect the target by time through the radial movement. We define as the collection of points where a particle initially located can detect the target before time with a not too small probability (depending on ). From our discussion preceding Lemma 14, we will always assume that any point satisfies . Since , with as defined in Lemma 14 with , and , it follows that is decreasing as a function of , continuous, and takes values in . In particular, for there is a unique such that
(14) |
Note that (the latter inequality holds because we assume ). Also observe that and tends to as tends to infinity. Furthermore, if . Now, define (see Figure 2)
which contains since every satisfies . To better understand the motivation for defining and as above, we consider the most interesting regime, i.e., . Under this condition the effect of the drift has enough time to move the particle far away from its initial position, and towards the boundary, so that the event is mostly explained by the particles’ initial trajectory. In particular, by Part (iii) of Lemma 14, we have that , so the condition aims to include in all points whose probability of detecting the target before reaching the boundary of is not too small. To exhaust all possibilities, we must also include in all points which have a sufficiently large probability of detecting the target even after reaching the boundary of . Said points are considered through the condition , which gives a lower bound of order for the detection probability, thus explaining our choice of said function.
Before moving to the main theorem of this section, we spend some time building intuition about the geometric shape of . Since goes to when tends to , from (14), which is used to define , it is not hard to see that also goes to (and hence as well) when tends to . It requires a fair amount of additional work to show that , as a function of , first increases very slowly, then reaches a maximum value of roughly and finally decreases rapidly (we omit the details since we will not rely on this observation). In fact, for all practical purposes, one might think of as being essentially constant up to the point when is smaller than a constant (equivalently, is at least a constant).
The main goal of this section is to prove the following result from which Theorem 4 immediately follows (by the proof strategy discussed in Section 1.3) once we show that is of the right order:
Theorem 15.
The following hold:
-
(i)
If and , then and
-
(ii)
For every there is a such that if and satisfy , then .
Remark 16.
The extra hypotheses on and needed for the lower bounds of Part (ii) of Theorem 15 are key for our methods to work, since for small times the detection probability for a point always tends to unless it is already in or very close to it (but the latter set is of smaller measure than ). Similarly, if one is very close to the origin (that is, for angles very close to ), the explosion of the drift towards the boundary at the origin also implies an extra penalization for the probability of detection. Observe nevertheless that the extra hypotheses in Part (ii) are automatically satisfied if . Furthermore, the hypothesis is natural since in the case of radial movement only we will show that , and we are interested in tail behaviors of the detection time.
Among all facts regarding the intuition of , we will only need to prove rigorously (see the next result) that if , then .
Fact 17.
For and , if , then where .
Proof.
We first handle some simple cases. If , then and we are done. Similarly, if , since , we have that and we obtain again the claimed bound. Henceforth, assume that and that .
Let and note that since , and ,
where in the last inequality we have used that . Moreover, since , and using twice that for , we have
Since , by our assumption on , we conclude that . Observing that , it follows from the previously derived bounds on and that and thus as desired. ∎
For practical purposes one may view as the collection of points of which belong either to a sector of angle whose bisector contains , or to the ball , or to those points with angular coordinate which are within radial distance roughly of . This picture would be accurate except for the fact that it places into points with angular coordinate close to and fairly close to the origin , say at distance . However, particles initially located at such points are extremely unlikely to reach and detect , since the drift towards the boundary tends to infinity close to the origin. This partly justifies why is defined so that in the case where goes to the expression tends to , thus leaving out of the previously mentioned problematic points.
We next determine , which simply amounts to performing an integral.
Lemma 18.
If are such that , then
Proof.
For the lower bound, simply observe that contains a sector of angle on whose bisector lies the target , so . Now, we address the upper bound. It suffices to show that . Indeed, observe that
Thus, since , by Fact 17,
Recall that by definition , so that by Part (v) of Lemma 8, we have , and
By our choices of and we have . Thus, as claimed. ∎
4.1 Uniform upper bound
The goal of this subsection is to prove the following result from which the upper bounds of Theorem 15 immediately follow.
Proposition 19.
If , then
Furthermore, if , then
Proof.
By the discussion preceding Lemma 14, we have when is such that , as claimed. Henceforth, let be such that . As in Lemma 14, let be the law of with initial position and be the hitting time of .
Observe that a particle initially at reaches the boundary of either without ever visiting the boundary of (which happens with probability at most ) or after visiting said boundary for the first time (in which case the time to reach the boundary of is dominated by the time of a particle starting at the boundary of and having the same angular coordinate as the point ). Thus,
We will show that each of the right-hand-side terms above is where . This immediately implies the first part of the proposition's statement.
By our choice of and since , we have
By Part (i) of Lemma 14 with , and , and Markov’s inequality, for , and , we get that
(15) |
Observing that , that and , and taking (since by hypothesis ) it follows that
(16) |
By our choice of , we have . Moreover, by Part (v) of Lemma 8, we have . Since by hypothesis , we know that , it follows that . Thus,
which establishes the first part of our stated result.
Next, we consider the second stated claim (the integral bound). By (15), denoting by the measure induced by for the angle , recalling that , it follows that
Using once more that and , taking again , we obtain
Therefore, since and imply that is positive, it follows that
Using that , since , by definition of we get
Since , the claimed integral bound follows from Lemma 18. ∎
4.2 Uniform lower bound
The goal of this subsection is to prove the following lower bound matching the upper bound of Proposition 19 under the assumption that at least a constant amount of time has passed since the process started (as stated in Theorem 15).
Proposition 20.
For every fixed arbitrarily small constant there are large enough constants such that if and satisfy , then
Proof.
Let . To obtain the lower bound on the detection probability recall that any either satisfies or where by definition is such that . We deal first with the case (the case is trivial since it implies that and thus ) by noticing that
Using Part (iv) of Lemma 14 with , and , recalling that , since by case assumption and recalling again that by definition of it holds that , we get
From Fact 17 it follows that , and since we can bound from above by , we get that the term inside the brackets above is at most . By hypothesis, , so and hence, using again the hypothesis regarding , we see that for large enough, so taking even larger if needed we conclude that
We now handle the points for which . Observe that for any fixed the detection probability is minimized when , and for points such that the probability decreases with , so it will be enough to consider the case where and . To obtain lower bounds on the detection probability we will couple the radial movement of the particle starting at with a similar one, denoted by , evolving as follows:
-
•
and we let evolve through the generator on .
-
•
Every time hits it is immediately restarted at .
It follows that and can be coupled so that for every , and hence the detection time of the new process bounds the original from above. Thus, it will be enough to bound from below. Observe that the trajectories of are naturally divided into excursions from to , which we use to define a sequence of Bernoulli random variables where if reaches on the -th excursion. We also use this division to define a sequence of random variables, where is the time it takes to reach either or in the -th excursion. It follows from the strong Markov property that all excursions are independent of one another, and so are the ’s and ’s.
Fix which from our hypothesis on satisfies , and observe that, since the ’s are i.i.d., the event has probability
From Part (iii) of Lemma 14 with , and , using for and , we have
Since , we get and deduce that . Recalling that , by Part (v) of Lemma 8, we have that and . Since by hypothesis , it follows that . Thus, so setting sufficiently large we get
Clearly, if is the first index such that , then . Hence,
where the were defined above. Using Markov’s inequality we obtain
since there are at most excursions where the particle fails to reach , followed by a single excursion where it succeeds (here we are conditioning on these events through the value of ). For the first term, we have
as can be seen in [11] (formulas 3.0.4 and 3.0.6), by comparing the process to one with constant drift equal to . For the second term we use Part (iv) of Lemma 14 with , and to obtain
Recalling that , we have . Summarizing,
By our choice of it is immediate that . To bound the last term of the displayed equation recall that by hypothesis , so that we have and moreover we can choose large enough so that and finally conclude that
∎
Observe that the bounds established in the preceding proposition are exactly those claimed in the first part of Theorem 15.
4.3 Time spent within an interval containing the origin
In this section we consider a class of one-dimensional diffusion processes taking place in with a positive integer and with given stationary distribution. Our study is motivated by the process in with generator (see (4)) and reflecting barrier at . This section's main result will be applied later in the paper precisely to this latter process. However, the arguments of this section are applicable to less constrained diffusion processes, and might even be further generalized to a larger class of processes, so we have opted for an exposition which, instead of tailoring the proof arguments to the specific process with generator , gives the conditions that need to be satisfied so that our results apply, as a corollary, also to our specific process.
Our goal is to formalize a rather intuitive fact. Specifically, assume is not too small an amount of time and that is a diffusion process with stationary distribution on . Roughly speaking, we show that under some general hypotheses, during a time interval of length , when starting from the stationary distribution, with constant probability the process spends within ( larger than a constant) a period of time proportional to the expected time spent there, that is, proportional to . We remark that only needs to be larger than a sufficiently large constant , which does not depend on (otherwise the result would follow directly from Markov's inequality), and does not depend on either.
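The intuitive fact above can be illustrated by a toy simulation of our own making (not the paper's process): a Brownian motion with constant drift `mu` toward the right endpoint `n`, reflected at both 0 and `n`, whose stationary density is proportional to exp(2*mu*x). Started from stationarity, the fraction of a long time window spent in [0, a] should on average be close to the stationary mass of [0, a].

```python
import math
import random

def stationary_sample(mu, n, rng):
    """Inverse-CDF sample from the stationary density pi(x) ~ exp(2*mu*x)
    on [0, n] of Brownian motion with drift mu reflected at both endpoints."""
    u = rng.random()
    return math.log(1.0 + u * (math.exp(2.0 * mu * n) - 1.0)) / (2.0 * mu)

def pi_mass(mu, n, a):
    """Stationary mass of [0, a]: (e^{2 mu a} - 1) / (e^{2 mu n} - 1)."""
    return (math.exp(2.0 * mu * a) - 1.0) / (math.exp(2.0 * mu * n) - 1.0)

def occupation_fraction(mu, n, a, horizon=20.0, dt=1e-3, seed=3):
    """Fraction of [0, horizon] the doubly reflected diffusion spends in
    [0, a], started from the stationary distribution (Euler-Maruyama)."""
    rng = random.Random(seed)
    x, sd = stationary_sample(mu, n, rng), math.sqrt(dt)
    steps = int(horizon / dt)
    inside = 0
    for _ in range(steps):
        x += mu * dt + rng.gauss(0.0, sd)
        if x < 0.0:        # reflect at the left barrier
            x = -x
        elif x > n:        # reflect at the right barrier
            x = 2.0 * n - x
        if x <= a:
            inside += 1
    return inside / steps
```

Averaging `occupation_fraction` over independent runs recovers the stationary mass; individual runs fluctuate considerably, which is exactly why the section's result only claims a constant-probability lower bound rather than concentration.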
We start by establishing two lemmas. The first one states that a diffusion process in with stationary distribution , henceforth referred to as the original process, can be coupled with a process that is continuous in time but discretized in space with values in and defined next. First, let where . Let (this restriction could be relaxed, but we impose it for the sake of simplicity of presentation), and let be the unique solution to the following system of equations:
(17) |
Intuitively, one can think of the as the probability that the process hits before it hits when starting from the stationary distribution conditioned on starting in the interval . Denote next, for , by the probability that the original process starting at hits before hitting (observe that ). We then define
and let . We are exclusively interested in processes for which the following holds:
(H1) , , for constants that do not depend on .
In other words, we are interested in the case where the original process is subject to a drift towards the boundary .
Note however that condition (H1) does not preclude that the process , being at position (for some real ), in expectation stays within an interval an amount of time that increases with . This explains why we focus on processes that satisfy the next stated additional hypothesis concerning the maximal time the process stays within said interval:
(H2) There is a constant (independent of ) such that for all reals it holds that .
The preceding hypothesis says that the process, when starting at position , stays in expectation at most constant time within any given interval . A last assumption that we make in order to derive this section's main result is the following:
(H3) There is a constant , independent of , such that if , and at some time moment , then with constant probability for all .
The preceding hypothesis says that only in the vicinity of there might exist an infinite drift (towards ). Once we reach a point that is a constant away from , we might stay at roughly this value for at least one unit of time with constant probability (this is a technical assumption that allows us to focus only on the probability of reaching such a value when considering the expected time spent roughly there).
Now, we are ready to define the process : set the initial position of as . Suppose the original process starts in an interval for some integer . If the original process hits before hitting , then stays put at and the coupling phase defined below starts. If the original process hits before hitting , then jumps to and the coupling phase starts as well. In the coupling phase, the following now happens iteratively: suppose the original process is initially at integer , for some , and is at some integer (if , directly go to the move-independent phase defined below). Mark as the center of the current phase, and do the following: As long as the original process neither hits nor , stays put. If the original process hits , then jumps to (note that this happens with probability ). If the original process hits , then jumps to with probability and with probability jumps to . Note that the total probability that jumps to is exactly . Moreover, if , the move-independent phase starts. In either case, is unmarked as the center of the current phase, the new location of the original process (either or , respectively; note that could also happen, which is treated in the same way) is marked as the new center, and the next iteration of the coupling phase (move-independent phase, respectively) starts with this new location playing the role of .
In the move-independent phase only the instants of movement of the original process and are coupled, but not the movements themselves: suppose that at the beginning of this phase the original process is at position for some integer , whereas is at some integer : at the instant when the original process hits or , jumps to with probability and to with probability . Then, the next iteration of the move-independent phase starts with the new location of the original process (that is, either or ) playing the role of .
By construction, it is clear that from the beginning of the coupling phase on, jumps from to with probability and from to with probability . We also have the following observation, which follows immediately by construction:
Observation 21.
As long as the original process does not hit , at every time instant , .
We next show that, for being a sufficiently large constant that depends on only, conditioned on not yet having hit the boundary, is likely to have already performed quite a few steps by time :
Lemma 22.
Assume (H2) holds. Let be a large enough constant depending on only with as in (H2). For every constant , there exists a constant , depending only on and , such that for any integer , with probability at least , at least steps are performed by up to time .
Proof.
Fix any real , and recall that denotes the time the original process remains in the interval . By (H2) and Markov’s inequality, with probability at most , the original process exits before time . Formally, for all , we have . Since this bound is uniform over the starting position we can restart the process every units of time, thus obtaining
(18) |
for every integer , regardless of . Hence, if at the beginning of an iteration of the coupling phase the original process is at some integer , the time it waits until its next step from that moment on is bounded in the same way as , and the time for the first step to enter the coupling phase can be bounded in the same way as in (18). Now, for , let denote the time spent between the instants and that mark the beginning and end of the -th iteration of the coupling phase (the time before the start of the first iteration of the coupling phase, in case , respectively). By (18), the random variable is stochastically bounded from above by a geometric random variable with success probability and support . The random variables are independent but not identically distributed, as the time to remain in some intervals might be longer than in others, but since (18) holds for all , we may bound by the sum of i.i.d. geometric random variables with success probability . Hence, for any constant , there is a sufficiently small such that
where we used that the sum of i.i.d. geometric random variables has a negative binomial distribution, and where is chosen (depending on and only) so that the last inequality holds. Hence, with probability at least , at least steps are performed by during time , and the lemma follows. ∎
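The domination step in the proof above, bounding the inter-step waiting times by i.i.d. geometric random variables whose sum is negative binomial, can be sanity-checked numerically. The concrete parameters below (number of steps k and success probability p) are illustrative choices of ours.

```python
import random

def geometric(p, rng):
    """Sample a Geometric(p) variable with support {1, 2, ...}:
    the number of trials up to and including the first success."""
    g = 1
    while rng.random() >= p:
        g += 1
    return g

def geom_sum_tail(k, p, threshold, n_trials=20000, seed=2):
    """Estimate P(S > threshold), where S is the sum of k i.i.d.
    Geometric(p) variables (so S is negative binomial with mean k/p)."""
    rng = random.Random(seed)
    exceed = sum(
        sum(geometric(p, rng) for _ in range(k)) > threshold
        for _ in range(n_trials)
    )
    return exceed / n_trials
```

With k = 50 and p = 1/2 the sum has mean 100 and standard deviation 10, so the tail beyond 130 is already tiny; this concentration of negative binomial sums is exactly what the last inequality of the proof exploits.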
From the previous lemma we have the following immediate corollary:
Corollary 23.
Assume (H2) holds. Let be a large enough constant depending on only with as in (H2). Let be a sufficiently large constant depending on only and let denote the number of steps performed by . For every constant , there exists depending only on and on , such that with probability at least , the time elapsed for the steps is at most .
When , the process is subject to a drift towards the boundary , so intuitively the process will also hit in a number of steps that is proportional to (the initial distance of the process to the boundary ) and dependent on the intensity of the drift. Moreover, one should expect that the probability that said number of steps exceeds its expected value by much decreases rapidly. The next result is a quantitative version of this intuition.
Lemma 24.
Let be a diffusion process on and suppose that is such that for some . Let and assume (H1) holds. Then, for any and all , with probability at least , the process hits in at most steps.
Proof.
Denote by the random variable counting the number of time steps up to step where decreases its value (that is, jumps from some integer to ). If the process does not hit during steps, then it must have decreased for steps and increased during steps (so ), and moreover would have to be smaller than . Thus, it will suffice to bound from above the probability that . Since and by Chernoff bounds (see [32, Theorem 4.4(2)]), for any ,
the claim follows by observing that (by hypothesis on and ) and if and only if . ∎
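The content of Lemma 24, namely that a walk with downward bias q > 1/2 started at height n hits 0 within roughly n/(2q - 1) steps except with probability exponentially small in n, can be illustrated as follows; the concrete values of n, q and the slack factor are our own.

```python
import random

def hits_zero_within(n, q, budget, seed):
    """Biased random walk from height n: each step is -1 with probability q
    and +1 with probability 1 - q.  Returns True iff level 0 is reached
    within `budget` steps."""
    rng = random.Random(seed)
    x = n
    for _ in range(budget):
        x += -1 if rng.random() < q else 1
        if x == 0:
            return True
    return False

n, q = 60, 0.7
budget = int(1.5 * n / (2 * q - 1))  # 50% slack over the drift bound n/(2q-1)
trials = 2000
failure_rate = sum(not hits_zero_within(n, q, budget, s) for s in range(trials)) / trials
```

Here the drift bound n/(2q - 1) is 150 steps, and with a 50% slack the empirical failure rate is already very small, matching the Chernoff-type estimate in the proof.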
Next, we show that, with high probability over the choice of the starting position, the boundary is hit quickly:
Lemma 25.
Let be a diffusion process on with its stationary distribution. Assume (H1) and (H2) hold. Then, there is a such that if , then the original process has not hit by time with probability at most .
Proof.
Define the event and observe that
Next, fix , as before, and let be a constant which is at least as large as the constant of Lemma 24 (this last result is applicable because (H1) holds). Define and let be the event that the process does not hit during steps. Conditioned on (so since ), by our choice of and Lemma 24 with ,
Choosing sufficiently large, by Corollary 23, applied with , there exists large enough, so that with probability at least , the time elapsed for steps is at most .
Let be the event that the original process has not hit by time . Since , by definition of , we have , so it follows by definition of that , and thus our preceding discussion establishes that
The lemma follows by a union bound over events , , , and by setting . ∎
In order to establish our main result we next state the following fact:
Fact 26.
For all integers , .
Proof.
Let be defined as in this subsection’s introduction and let , and recall that by assumption. We claim that for . Indeed, by (17), we have that , so . Assume the claim holds for . By (17) and the inductive hypothesis, we have , which concludes the inductive proof of our claim.
By definition of , we have and for all . Hence, by the preceding paragraph’s claim, for we get , and the fact follows. ∎
We now use the previous lemmata to show this section’s main result:
Proposition 27.
Let be a diffusion process on with stationary distribution . Assume (H1), (H2) and (H3) hold. Then, there are constants , , all independent of such that for all and we have
Proof.
First, we establish that there is an integer depending on alone such that
(19) |
The claim is a consequence of (H1) and Fact 26: Indeed, let be as therein. For any , summing over with we get , and thus for any . By (H1) we know that is bounded away from by a constant independent of . Taking large enough so that , the claim follows by setting .
Next, we split the time period into time intervals of one unit (discarding any remaining non-integer part): for the -th time interval, let be the indicator random variable which is if at time instant (that is, at the beginning of the -th time interval) the process is within , and otherwise. Since is stationary, for any . Thus, setting , we have . Since (H3) holds, for the constant therein, if and , then with constant probability the process stays within throughout the entire -th time interval. So by our previous discussion, to establish the desired result, it suffices to show that for some the probability that is smaller than is at most a constant strictly less than . We rely on Chebyshev's inequality to do so, and thus need to bound the variance of from above. This is why we now derive an upper bound for . Since there are at most pairs of random variables and such that ,
In order to bound we construct two processes and as follows: Initially, is sampled with distribution . Also, with probability we let , and otherwise we sample with distribution . Then, both processes evolve independently according to the same law as the original process until they first meet. Afterward, they still move as the original process but coupled together. It follows directly that is a copy of the original process, while is a version of the original process conditioned to start within . Let be the first time that hits , and define and analogously to the variables with and playing the role of , respectively. Since , it follows that
For the first term on the right-hand side above observe that , hence on the event both processes have met before time , and so by definition , yielding
It then remains to bound the term . To do so, recall that is a version conditioned to start at ; calling the first time hits , we have
First, recall that by Lemma 22, if for some large enough constant depending on only, for every there exists , depending only on and on , so that the process performs at least steps during the time interval with probability at least . Note that and do not depend on . Let be the event that performed at least steps up to time . Now,
Next, recall that as long as the original process does not hit the boundary, by Observation 21, the auxiliary process satisfies for every instant . Formally,
and also
and our goal is thus to bound . Recalling that (by hypothesis) is the probability that makes a decreasing step, and , we have for some large enough constant
The same upper bound holds for . For and note that has to make one more decreasing step and one less increasing step, yielding an upper bound of . In the same way, , so that for large enough, . We can get the same upper bound also for . Moreover, we have
By the argument from before, we get , and iterating the argument, we also obtain . Hence, if is even,
where for the second-to-last inequality we used Fact 26. If is odd, then take as the range of the second summation index (the one over above); the remaining bounds are still valid, and we obtain the same bound as for even. Thus, recalling that denotes the number of pairs of random variables and for which , and recalling that holds with probability at most (in which case the joint probability is bounded simply by the probability of ), and adding also the joint probability in case , we get
where for the second inequality we used that and then that , and that for some depending on and only. Thus, since for some constant large by our hypothesis , we conclude that , so . Thus, by Chebyshev’s inequality,
and the statement follows taking and . ∎
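The second-moment strategy of the proof (unit-time occupation indicators, stationarity, Chebyshev's inequality) can be illustrated on a toy example. The chain below is a hypothetical stand-in for the diffusion, not the process studied here; it only demonstrates how stationarity together with decaying correlations forces the occupation count to be of linear order with high probability.

```python
import random

random.seed(1)

# Toy stationary process: lazy random walk on the cycle {0,...,9}, whose
# stationary law is uniform. A = {0,1,2} plays the role of the target interval.
def step(x):
    r = random.random()
    if r < 1 / 3:
        return (x - 1) % 10
    if r < 2 / 3:
        return (x + 1) % 10
    return x

T, runs, A = 2000, 200, {0, 1, 2}
counts = []
for _ in range(runs):
    x = random.randrange(10)          # start from the stationary (uniform) law
    s = 0
    for _ in range(T):
        s += x in A                   # unit-time occupation indicator
        x = step(x)
    counts.append(s)

mean_count = sum(counts) / runs       # close to T * pi(A) = 2000 * 0.3 = 600
frac_small = sum(c < mean_count / 2 for c in counts) / runs
print(mean_count, frac_small)
```

Because correlations between indicators decay with the time lag, the variance of the count grows only linearly in T, so runs where the count falls below half its mean are rare, exactly as Chebyshev's inequality predicts.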
To conclude this section, we show that the process we studied in Sections 4.1 and 4.2 satisfies (H1) through (H3), and hence Proposition 27 holds for said process. Reaching this conclusion was the motivation for this section, and it provides the result on which the analysis of the combined radial and angular process, treated in the next section, relies.
Corollary 28.
Let be the diffusion process on with generator and a reflecting barrier at . Then, there are constants , , such that for all and we have
Proof.
It will be enough to verify that (H1)-(H3) hold and to apply Proposition 27 with as therein and satisfying the condition just stated.
Let . By definition . Using , we get
(20) |
To show that is bounded away from as required by (H1), it suffices to observe that by Fact 26, . To establish that is also bounded from above by a constant strictly smaller than , recall the definition of from this subsection’s introduction and let . Note that by definition of , we have , so from the proof of Fact 26, . Hence, for . It is easy to see that the right-hand side is, as a function of , maximized at . Moreover, since for every it is increasing in , it is bounded from above by . By direct calculation, and , where we used that both expressions for and are decreasing as functions of . We conclude that for all . To conclude that , we also need to bound , for : in order to do so, observe that the diffusion process defined by (2) can be coupled with a process of constant drift towards so that the real process is always to the right of the process of constant drift; for a process with constant drift towards starting at it is well known that the probability to reach before is at most for some (see [11], formula 3.0.4). Hence for any , and thus , as desired.
Next, we establish (H2). Recall that is the random variable counting the maximal time the process spends in the interval starting at . By Part (ii) of Lemma 14 applied with , for any , we get , and since , condition (H2) is satisfied for . In order to bound for , note that the time to exit from such an interval can only increase when imposing a reflecting barrier at , and we can then apply Part (ii) of Lemma 14, with , to the process with this reflecting barrier at . Hence, for any starting position , the expected time remaining in the interval is at most a constant , and (H2) is satisfied.
Finally, to check that (H3) holds, simply observe that for all (where is at least a constant strictly greater than ), the process dominates a process of constant drift towards , which, wherever conditioned on , has a constant probability of staying within during the unit-length time interval , thus establishing the claim and concluding the proof of the corollary. ∎
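The qualitative content of (H1)-(H3) for a drift diffusion with a reflecting barrier can be pictured with a short simulation. The Euler-scheme caricature below (all parameters are illustrative, not those of the corollary) has constant drift towards a reflecting barrier at 0; its stationary law is exponential, so the process spends a constant fraction of its time in any fixed interval at the barrier.

```python
import random

random.seed(2)

# Euler-scheme caricature of a one-dimensional diffusion with constant drift
# towards a reflecting barrier at 0 (drift, sigma and the interval [0,1] are
# made-up illustrative choices). Stationary law: exponential with rate
# 2*|drift|/sigma^2 = 2, so P(X <= 1) = 1 - e^{-2} ≈ 0.86.
dt, drift, sigma = 0.01, -1.0, 1.0
x, near, n = 5.0, 0, 200_000
for _ in range(n):
    x = abs(x + drift * dt + sigma * random.gauss(0.0, dt ** 0.5))  # reflect at 0
    near += x <= 1.0
frac = near / n
print(frac)
```

The empirical occupation fraction of the unit interval at the barrier matches the stationary exponential mass there, which is the kind of "constant fraction of time near the barrier" statement that (H1)-(H3) encode.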
5 General case: strategies for detection
In this section we define the set similarly to the radial section, without the restriction that , since points that belong to the half disk of opposite to are also starting positions from which particles can detect (mainly thanks to the angular movement component). As in previous sections, on a high level, this set is chosen in such a way that at least a constant part of the measure of the overall detection probability comes from . The precise definition of (given in the two following theorems) depends on whether is small or large. For large, we can moreover provide uniform lower bounds of detection for every element in and matching uniform upper bounds of detection for every element in . To make this intuition precise, we have the following two theorems, depending on whether is small or large:
Theorem 29.
Assume , . Let
(21) |
Then
and
Theorem 30.
Let . Define then if and if . Let
where
Furthermore, define
Then, for we have
Furthermore, under the additional assumption we have
and
We refer to Figure 3(a) for an illustration of as defined in the last theorem.
The following subsection is dedicated to proving the lower bounds of the two theorems just stated (in fact, as can be seen from the proof, the assumptions about are slightly milder in the proofs). The subsequent subsection then deals with the corresponding upper bounds.
5.1 Lower bounds
5.1.1 The case small
We start with the lower bound of Theorem 29, which is established by the following proposition.
Proposition 31.
Fix and take as in (21). If , then
Proof.
Consider with and for some constant . Note that for such choice of we have and thus within time , with probability bounded away from zero, the radial coordinate changes by at most . Conditioning on this, we consider the angular movement variance Brownian motion where now . As in the proof of Theorem 9, define the exit time from the interval where and , and as in (10) in the proof of Theorem 9,
Since and we conclude that . Integrating over the elements of satisfying , we have
Performing the change of variables , since , and using for , we get for the latter
Note that the last integral is , and thus the stated result follows. ∎
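The core step above conditions on the radial coordinate barely moving and then asks whether the angular Brownian motion exits a given interval by a fixed time. A Monte Carlo sketch of that exit/no-exit dichotomy (the interval half-widths below are made up for illustration; this is an Euler discretization, not an exact computation):

```python
import random

random.seed(3)

# Estimate P(a standard Brownian motion stays inside [-a, a] up to time 1)
# by an Euler scheme. Wider intervals are exited with smaller probability,
# which is the mechanism driving the detection-probability integral above.
def stay_prob(a, trials=5000, steps=400):
    stay = 0
    for _ in range(trials):
        b, inside = 0.0, True
        for _ in range(steps):
            b += random.gauss(0.0, (1.0 / steps) ** 0.5)
            if abs(b) > a:
                inside = False
                break
        stay += inside
    return stay / trials

p_narrow, p_wide = stay_prob(1.0), stay_prob(2.0)
print(p_narrow, p_wide)
```

For half-width 1 the staying probability is a constant bounded away from zero and one (the exact continuous value is about 0.37), while for half-width 2 it is already close to one; the monotone dependence on the half-width is what the change of variables in the proof exploits.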
5.1.2 The case large
We now prove the lower bound of Theorem 30. In fact, we will only need to assume the milder condition with being a large enough constant. We start by showing that particles that start close to the origin detect quickly with at least constant probability:
Lemma 32.
Let , and let for some arbitrarily large constant . Then,
Proof.
Let be the smallest time such that and notice that, calling the law of the radial component of the process, for any point with , it holds that . Now, fix any realization of the radial component of the process such that and observe that for any such realization, detection is guaranteed if for some . Under the event the radial coordinate at any time is at most . Thus, the angular movement has a normal distribution centered at zero with variance at least
Thus, with constant probability, within time the angular movement covers an angle of . The lemma follows. ∎
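The conditioning device used in this proof, namely sampling the radial trajectory first and then treating the angular component as a Brownian motion run at an integrated, radius-dependent speed, can be sketched numerically. The radial path and the variance profile sigma(r) = exp(-r) below are hypothetical stand-ins; the point is only that, conditionally on the radial path, the angle is Gaussian with the integrated variance.

```python
import math
import random

random.seed(4)

# Given a (fixed, made-up) radial path r(s), the conditional angular component
# is a time-changed Brownian motion: angle(1) ~ N(0, V), where V is the
# integrated, radius-dependent variance. sigma(r) = exp(-r) is hypothetical.
steps = 200
dt = 1.0 / steps
radial = [1.0 + 0.5 * i * dt for i in range(steps)]       # fixed radial path
V = sum(math.exp(-r) ** 2 * dt for r in radial)           # integrated variance

samples = 5000
angles = []
for _ in range(samples):
    a = 0.0
    for r in radial:
        a += math.exp(-r) * random.gauss(0.0, dt ** 0.5)  # time-changed increments
    angles.append(a)

emp_var = sum(a * a for a in angles) / samples
print(V, emp_var)
```

The empirical variance of the terminal angle matches the integrated variance along the radial path, so a lower bound on the integrated variance (as in the lemma) directly yields a constant probability of covering a prescribed angle.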
We now deal with the remaining starting points . Before doing so we establish a simple fact concerning the random variables defined for nonnegative integer values of as follows:
(22) |
Recall that , , is the stationary distribution of the process .
Fact 33.
If , then .
Proof.
Suppose is distributed according to the stationary distribution . Since , by definition of , we have
The desired conclusion follows observing that the map is non-decreasing, and thus for , we have . ∎
We are now ready to state and prove the proposition dealing with particles whose initial position are points satisfying the first condition in the definition of :
Proposition 34.
Let . There is a sufficiently large constant such that if and , then for any with , we have the following:
-
(i)
If , then .
-
(ii)
If , then .
To prove the just stated proposition we use the following standard fact several times, so we state it explicitly.
Fact 35 ([20]).
Given the radial component of a particle’s trajectory, the angular component law is that of a Brownian motion indexed by . If and , then
Now we proceed with the pending proof.
Proof of Proposition 34.
Assume first (and thus necessarily ). In this case the radial movement dominates and the proof of (ii) is very similar to the one given for where : assume first that (and ) is such that for some . Define as in the proof of Proposition 20 (that is, corresponds to the absorption radius if there were radial movement only; since , we have ). By the same argument as given in the proof of Proposition 20, with playing the role of therein, with probability , there exists a time moment at which a radial value of is reached. In this case, by symmetry of the angular movement, with probability at least , either the angle at time satisfies or there exists where the angle and we already detected by time , and hence in this case is detected by time with probability . Assume then that (and ) is such that . In this case, let be equal (assuming there were radial movement only) to the absorption radius that would correspond to an angle of exactly . Note that in this case . By the proof of Proposition 20, with probability a radial value of is reached at a time moment . In this case, with probability at least , as before, either there existed a moment with , and we would have detected , or . In this case, by Lemma 32, since , with constant probability we detect in time , and hence also in this case is detected by time with probability .
Consider thus with under the assumption . Recall that we also may assume that for as in the statement of Lemma 32 since other values of were already dealt with in said lemma. First, note that for the lower bound trivially holds. So, henceforth let . Since by hypothesis , we may assume that for sufficiently large. Let and note that by the previous assumption on . Denote by the random variable corresponding to the first time the process reaches . Recall from Part (ii) of Lemma 14, with and , that for the radial component of the movement, and for any ,
By Markov’s inequality, for sufficiently large so that , it follows that the event corresponding to the process reaching before time happens with probability
Now, let Moreover, define as the event that starting from radius we hit the radial value before hitting the radial value . Define as in Fact 17 and observe that as argued therein, since we have and because we have . Recall from Part (iii) of Lemma 14 that is the probability that starting from radius we hit the radial value before hitting the radial value , and we have
From Part (iv) of Lemma 14 we obtain
where the last inequality follows (with room to spare) by our assumption of . Thus, by Markov’s inequality, conditionally on , with probability at least , we reach before time . Let be the corresponding event. Conditional on , with constant probability the particle stays one unit of time inside before time , and let this event be called . Conditional on , the angular component’s law during one unit of time inside is with
Note that for the absolute constant independent of from above, and recall also that by assumptions on and , with room to spare. Let be the event that when reaching the angle at the origin spanned by the particle and is (in absolute value) at most , or there was a moment before reaching with (and thus we detected at time ). Note that by symmetry of the angular movement, . Conditional on , by Fact 35, since , for some we have
and since holds with probability , we have , which finishes the proof of the case and (and thus the proof of (ii)).
We now deal with the remaining case (and therefore necessarily ). Assume first . This argument is analogous to the one for angular movement: we repeat it for convenience. Given the trajectory of , recall that the angular component’s law is that of a Brownian motion , where . Hence, using Fact 35, since and using that for all ,
showing the result in the case .
Finally, suppose (therefore ). First, suppose that the starting point of the radial component is distributed according to the stationary distribution of the process, that is, with . We will see that the contribution to of the time spent around different radial values is roughly the same, forcing a logarithmic correction. To bound from below, let , which by our hypothesis implies (in particular it implies also ). Note that
Hence, using that implies that ,
(23) |
So, recalling that , by Fact 33, for all , we have , which gives an estimate for the value of the variables. Moreover, by Corollary 28, since and by definition of we have . Since , using once more that , we obtain , so and the assumptions of Corollary 28 are satisfied. Hence, there exist , such that for each , we have that the expectation of the indicator of the event is at least . Let and note that . Define then the event . Since , it must be the case that . Thus, for a fixed realization of satisfying , since , we have
Note that by definition of , so where we used the assumption that is at least a sufficiently large constant. Thus, under the angular movement dominates stochastically a Brownian motion with standard deviation . By Fact 35,
Thus, for with , since and using that for all ,
(24) |
It thus remains to show that with positive probability the trajectory starting with a fixed initial radius for can be coupled in such a way that the probability of detection of the target by time can be bounded from below by the probability of detection when the radius is chosen according to the stationary distribution . Denote by the radial component at time when starting according to the stationary distribution. Consider the event that the initial radial value is at most one unit away from the boundary of , that is, . Clearly, . Let be the event that starting from hits by time . Conditional on , since the time to hit is clearly dominated by the time a standard Brownian motion (corresponding to a one-dimensional radial movement) hits starting from , by our lower bound on , we have that . Note that, conditional on , either the trajectories starting with a fixed value of on the one hand and with according to the stationary distribution on the other hand must cross by time (and they can be coupled naturally from then on for a time interval of ), or for any , and during an initial time period of length the detection probability starting from stochastically dominates the one when starting from . Thus, with probability , the process can be successfully coupled with , and since we aim for a lower bound, we assume that holds. Conditional on , we may thus apply the reasoning yielding (24) with replaced by , and the result follows. ∎
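The crossing-and-coupling step at the end of this proof can be sketched with a synchronous coupling: drive two reflected drifted walks with the same noise and wait for their paths to cross. All parameters below (starting points, drift, barrier at 0) are hypothetical; the sketch only illustrates why the crossing, and hence the coupling, happens quickly.

```python
import random

random.seed(5)

# Two reflected walks driven by the SAME noise. While both stay positive their
# gap is constant; reflections at the barrier 0 shrink and eventually reorder
# the gap, so the paths cross, after which the processes can be run together.
def crossing_time(x, y, dt=0.01, drift=-1.0, sigma=1.0, max_steps=500_000):
    below = x < y
    for n in range(1, max_steps + 1):
        z = drift * dt + sigma * random.gauss(0.0, dt ** 0.5)
        x, y = abs(x + z), abs(y + z)        # reflect both at the barrier 0
        if (x < y) != below or x == y:
            return n * dt                    # paths crossed: couple from here on
    return None

times = [crossing_time(0.5, 3.0) for _ in range(50)]
print(max(times))
```

In every run the two paths cross within a few time units, which is the discrete analogue of the statement that the fixed-start and stationary-start radial trajectories can be successfully coupled with constant probability.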
We still have to deal with particles whose initial location are points satisfying the second condition in the definition of . The next proposition does this.
Proposition 36.
Let and assume that for large enough. Then, for any with , we have the following:
-
(i)
If , then
-
(ii)
If , then
Proof.
Thanks to Lemma 32 we may, and will, assume throughout the proof that for an arbitrarily large . Under said condition, the proof argument formalizes the following intuitive fact: either the particle has a chance to move radially towards the origin until the boundary of is reached and detection happens, or it has a chance to move radially towards the origin far enough that it stays sufficiently long in a region sufficiently close to the origin for the relatively large angle it traverses to make detection probable.
We first assume . Observe that we may assume , since otherwise during one time unit there is constant probability that the radial value is at most , and conditional on this event, during this time unit with constant probability an angular movement of standard deviation at least is performed, thus covering with constant probability an angular distance of in the case of for . We may and will thus below replace the condition by . We consider the worst-case scenario, i.e., , or equivalently . We restrict our discussion to vertices with , as other vertices were already considered before: indeed, for values of , in case (i) we have , where the second inequality follows from our assumption of with large. Hence such values of satisfy the first condition of and were treated in Proposition 34. In case (ii), for values of , we have , where again the second inequality follows from the same assumption on (again with room to spare), and again this case was already dealt with in Proposition 34. Define as in Fact 17 and let . By arguments given in said fact, since we have restricted our discussion to the case where , we have and, since with large, we also have . Recall from Part (iii) of Lemma 14 that is the probability that starting from radius we hit the radial value before hitting the radial value , and we have
Thus, by Part (iv) of Lemma 14 with and , we have that by our lower bound hypothesis of with sufficiently large. By Markov’s inequality, conditional on , starting at radius , with probability at least , the value of is hit by time . In this case, if , then by Lemma 32 the target is detected with constant probability in constant time, and we are done. If , then with constant probability, during the ensuing unit time interval, the radial coordinate is always at most . Call this event . By symmetry of the angular movement, with probability at least , either the angle at the hitting time of is in absolute value at most or there was a time moment with before, and we had detected by time already. Call this event . Conditional on , the variance of the angular movement after reaching during the following unit time interval is at least , so, recalling our worst-case assumption , the standard deviation is at least . Hence, with constant probability, independently of (and also independently of ), in this case an angle of is covered during a unit time interval, and by our lower bound assumption of , the particle is detected by time , finishing this case as well and establishing the proposition when by plugging in the respective relations between and in order to obtain the last equality of the two parts of the statement.
Next, we consider Part (ii) which encompasses a case similar to the one analyzed in Section 4, so we give only a short sketch of how it is handled. By hypothesis and case assumption . Since , we thus have . Again, it suffices to show the statement for the worst-case scenario, that is, we may assume . We may assume that , since for vertices with , using our assumption of (with room to spare) we have , and such cases were already dealt with in the proof of Part (ii) of Proposition 34. By Lemma 32 we may also assume for arbitrarily large . Define . As before, recall from Part (iii) of Lemma 14 that is the probability that starting from radius we hit the radial value before hitting the radial value . Since and since , we have
and let be the event that this indeed happens. By Part (iv) of Lemma 14 with and , we have that by our lower bound hypothesis on . By Markov’s inequality, with probability at least , . Let be the event that the radius is reached by time . We have
Let be the event that one of the following events occurs: Either at the moment when reaching the angle made at the origin between the particle and is in absolute value at most , or there was a moment where (and we already detected by time ). Note that by symmetry of the angular movement, . Hence, . To conclude, observe first the following: If it is the case that , then conditional on , by Lemma 32, with constant probability, in time the particle is detected, and since , we are done. In particular, if we had for some sufficiently small , then, since , we must have , and therefore clearly also . If, on the other hand, we had for some arbitrarily small and also , then observe that since with large enough, we have , and is detected by time , since corresponds to the maximum angle at the origin between a particle at radius and that guarantees detection. ∎
The uniform lower bound of Theorem 30 now follows directly by combining Proposition 34 and Proposition 36. The integral lower bound of Theorem 30 follows directly from Proposition 34 applied with being any fixed constant and the fact that (by considering the condition only), and the proofs of the lower bounds of Theorem 29 and Theorem 30 are finished.
5.2 Upper bounds
In this section we will show the corresponding upper bounds of Theorems 29 and 30: that is, we show that particles initially placed outside , with as in the corresponding theorems, have only a small chance of detecting the target by time . For all values of , we will show that
thus establishing that the significant contribution to the tail probabilities comes from particles inside , and for large values of we will show uniform upper bounds for the detection probability of every point outside . In order to analyze the trajectory of a particle we will make use of the fact that the generator driving its motion can be separated into its radial and angular component, given respectively by
Since the generator driving the radial part of the motion does not depend on , our approach consists in sampling the radial component of the trajectories first and then, conditionally on a given radial trajectory, sampling the angular component . With this approach we can make use of the results obtained in Section 4 in order to study , while the angular component is distributed according to a time-changed Brownian motion , where
relates the radial trajectory to the angular variance of , as already seen in Section 3. To be able to apply the insight obtained by studying each component separately, we will need to replace the hitting time of the target (which translates into hitting whose boundary defines a curve relating and ) by the exit time of a simpler set: let be any fixed initial location with and define a box containing , of the form
(25) |
where and will be defined later on. Since , it follows that in order for a particle starting at to detect the target, it must first exit the box through either its upper boundary or its side boundaries. Denoting by and the respective hitting times of said boundaries, this implies that , and we can obtain the bound
(26) |
The advantage of addressing the exit time of instead of is straightforward: the first event is independent of the angular component of the trajectory, allowing us to bound its probability from above with the tools developed in Section 4, while the treatment of the second event will follow from standard results on Brownian motion and the control of . The following result allows us to bound the second term on the right-hand side of (26):
Proposition 37.
Define . Then,
where stands for the error function. Furthermore,
Proof.
Denote by the law of the angular process given a realization of its radial component. Given such a realization we already know that the trajectory is equal in law to that of the time-changed Brownian motion , for which is equal to the exit time of from . Since the exit time of a Brownian motion is well known, we proceed as in Section 3 using the reflection principle to obtain
where the factor 4 arises since the probability of exiting through one of the two angular boundaries is at most twice the probability of hitting a fixed one of them (by symmetry), which in turn is at most twice the corresponding Gaussian tail probability (by the reflection principle). Taking expectations with respect to the law of we obtain
where the expression within the expectation depends on the radial movement alone. To apply this last bound observe that for any realization of such that we have that , so , which gives the first part of the proposition’s statement. For the second part of the proposition we can make the same computation as before without taking into account the event , giving
and the result follows after using integration by parts, since . ∎
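The reflection-principle bound underlying Proposition 37, namely that the probability that a standard Brownian motion leaves [-a, a] by time t is at most 4 P(W_t >= a) = 2 erfc(a / sqrt(2t)), admits a quick Monte Carlo sanity check (Euler scheme; t and a below are illustrative choices):

```python
import math
import random

random.seed(6)

# Check P(sup_{s<=t} |W_s| >= a) <= 4*P(W_t >= a) = 2*erfc(a / sqrt(2*t))
# by simulating discretized Brownian paths and counting early exits.
t, a, steps, trials = 1.0, 1.0, 400, 10_000
exits = 0
for _ in range(trials):
    w = 0.0
    for _ in range(steps):
        w += random.gauss(0.0, (t / steps) ** 0.5)
        if abs(w) >= a:
            exits += 1
            break
p_exit = exits / trials
bound = 2 * math.erfc(a / math.sqrt(2 * t))
print(p_exit, bound)
```

The bound is quite tight: for these parameters the empirical exit probability sits just below 2 erfc(a / sqrt(2t)), which is why the factor-4 inequality loses only a constant in the regime where the proposition applies.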
On a high level, in the proposition above, the bound for will be useful as long as since in this case the inequality is not too loose. However, when we need to control this term using the second part of Proposition 37, which relies heavily on bounding probabilities of the form . To provide these bounds observe that when is not too close to zero the drift function is almost constant so the trajectories of resemble those of a Brownian motion with drift (with a reflecting boundary at ). Even further, on the event where is not too close to zero we also have
(27) |
where the integral appearing above on the right-hand side has been widely studied in the case when is an (unbounded) Brownian motion with drift (see [31, 41, 16, 15]), revealing that their distributions are heavy-tailed with the tail exponent given analytically in terms of and the drift of . In order to make use of known results we must first address the change in behavior arising from the reflecting boundary of our process. To do so, we consider an auxiliary process akin to the one introduced in the proof of Proposition 20:
-
•
begins at and evolves according to up until hitting .
-
•
Every time hits , it is immediately restarted at and continues to evolve as before.
It is not hard to see that is stochastically dominated by , in the sense that the radius of the auxiliary process is always smaller. Thus, in particular, , and hence it will be enough to bound probabilities of the form from above. Now, from the definition of , it is natural to use the successive hitting times of , say , to divide the trajectory of into excursions from to (or from to , in the case of the first one), giving
where is a random variable equal to the number of times has hit by time . Let be independent random diffusion processes (that will correspond to the different excursions), each evolving on according to the generator without the reflecting boundary at , and such that starts at while the rest of the start at . From the definition of the auxiliary process it follows that within each time interval of the form (or in the case of the first excursion) the trajectory of is equal to the one of for . Hence,
where for
Observe that the bound on involves a sum of random variables. Intuitively speaking, the number of times the process hits should be proportional to , so we introduce an auxiliary parameter depending on , to be fixed later, such that the event occurs with very small probability. Using this new parameter we obtain
(28) |
The advantage of the bound above is that it involves the sum of independent random variables which, aside from , are also identically distributed. The bound also gives valuable intuition regarding the detection of the target: roughly speaking, in order for detection to take place either must be sufficiently close to the origin so that the angular variance coming from becomes significant, or must be large enough so that many excursions starting near the boundary occur, allowing for the contribution to be large enough. As mentioned above (the discussion following (27)), the distributions of the variables are heavy-tailed. In order to control their sum we make use of the following lemma, whose proof mimics closely what was done in [35]: since our result is slightly more precise than the one in [35], we provide it in the appendix for the sake of completeness.
Lemma 38.
Let where is a sequence of i.i.d. absolutely continuous random variables taking values in such that there are for which for all
Then, there are depending on , and (if it exists) alone such that:
-
•
If and , then .
-
•
If and , then .
-
•
If and , then .
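The heavy-tailed behavior that Lemma 38 controls follows the "single big jump" principle: for i.i.d. Pareto-like summands, the tail of the sum is governed by its largest term, so P(S_n > x) behaves like n P(X_1 > x) for large x. A Monte Carlo illustration with hypothetical shape, sample size, and threshold (using the standard-library Pareto sampler, for which P(X > x) = x^(-alpha) for x >= 1):

```python
import random

random.seed(8)

# "Single big jump": for i.i.d. Pareto summands with shape alpha,
# P(S_n > x) ~ n * x^(-alpha) as x grows. alpha, n, x are illustrative.
alpha, n, x, trials = 1.5, 10, 1000.0, 200_000
hits = 0
for _ in range(trials):
    if sum(random.paretovariate(alpha) for _ in range(n)) > x:
        hits += 1
emp = hits / trials
approx = n * x ** (-alpha)          # one-term heavy-tail asymptotic
print(emp, approx)
```

The empirical tail probability of the sum agrees with the one-term asymptotic up to a small correction, which is the phenomenon the three regimes of the lemma quantify for different ranges of the shape parameter.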
We can now prove the following result, which will allow us to control :
Proposition 39.
For any there are large depending on and alone, such that for any , , and with , defining as
the following statements hold for all and all sufficiently large:
-
(i)
,
-
(ii)
, and
-
(iii)
where
Proof.
We begin with the upper bound for the term . By definition, the event is equal to . Writing , by Markov’s inequality, for any we deduce
where the differences are i.i.d. random variables which indicate the time it takes for excursion starting at to reach . Since each is stochastically bounded from below by a Brownian motion with constant drift , we can use Formula 2.0.1 in [11] to obtain
The exponent appearing in the last term is minimized as a function of when , which defines a positive since and , giving an expression for the exponent of the right-hand side of the form
where in the last inequality we used that and that . This proves the bound in (i). In order to control the probabilities appearing in (ii) and (iii) it is convenient from the point of view of computations to work under the assumption that the processes never get too close to . To do so observe that by our assumption of , all the start in , and denote by the hitting time of radius by . Observe that
Using Part (iii) from Lemma 14, with and , we obtain
which gives the first term of the bound for (and thus ) sufficiently large. For the second term, observe that on the event we have for all so there is some constant depending on alone such that for all , we have and hence
Now, notice that is stochastically bounded from below by , a Brownian motion with constant drift so we can bound the integral within the last term from above by . This particular functional of Brownian motion was studied in [15, Proposition 4.4.4], where it was shown that
that is, is distributed according to an inverse gamma distribution with parameters and . From the density of the variable above the only relevant feature for our purposes is its heavy tail: in order to ease calculations we will bound stochastically from above by a random variable following a Pareto distribution with a sufficiently large scale depending on and alone (in particular we will assume ), and shape . That is, with
Using this auxiliary variable we readily obtain
giving the last term of (ii). To obtain the bound in (iii), define the event where was defined previously as the hitting time of on the -th excursion. By the same analysis as in the previous paragraph, we get
where in this case we have used that each begins at and the ’s stand for i.i.d. Pareto random variables with scale and shape as in the previous case. For the first term we use the same treatment as with to obtain
where we have used that in the first excursion the starting radius is , and that . For the second term recall that the variables are bounded from below by , and that for any . Hence, we can apply Lemma 38 with and depending on the value of as follows:
- If , then we can take in the lemma so that
as soon as for some and depending on and alone. Observing that and that the condition is equivalent to , the result follows by defining and accordingly.
- If we can take in the lemma so that
as soon as for some and depending on and alone, which is satisfied since by hypothesis and we can choose . Since , we have for , giving the result.
- Finally, assume that and take in the lemma so that
as soon as for some and depending on and alone. The condition follows directly from the hypothesis for adequately chosen, and the bound in (iii) is obtained from the previous inequality by noticing that and defining accordingly.
∎
5.2.1 The case small
In this subsection we obtain an upper bound for under the assumptions of and , as required by Theorem 29. Since for we always have , and the next subsection deals with all such values, here we even assume . The main result in this section is the following:
Proposition 40.
Let . If is as defined in Theorem 29 and , then
Proof.
Notice first that for any , Lemma 6 gives so the contribution of points in to the upper bound is from our assumption , and hence such points can be neglected. Then take large (to be fixed later) so we only need to consider particles initially located at points with radial component in . For such points we follow the argument described in the previous section, where we bound the detection time of the target by the exit time of the box for and given in the next sentence, allowing us to bound as in (26). In more detail, given we construct the box by choosing
Notice first that , , and assuming we also have : indeed, this last statement is equivalent to which is satisfied for since and . The previous inequalities show that the box is well defined and contains the point , but we still need to prove that . To do so, assume, without loss of generality, that and observe that it is enough to show that the upper-left corner of the box, is in , or equivalently, that . Using that and Part (i) of Lemma 8 we obtain that
Moreover, for sufficiently large, since ,
Using Part (v) of Lemma 8 we have for so we conclude that
and since , taking sufficiently large the last term is bounded by which proves that for all such .
Now, we follow the proof strategy presented in the previous section by splitting the detection probability as in (26) and using the first bound in Proposition 37, which by setting and recalling that , gives
(29)
where is the hitting time of . For the first term on the right of (29) we can stochastically dominate from above the trajectory of by that of a simple Brownian motion with a reflecting barrier at (thus removing the drift towards the boundary) which decreases the hitting time of . Since hitting implies moving outside of an interval of length , by the reflection principle, we obtain
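In generic form (with $B$ a standard Brownian motion, $L$ the length of the interval, and $t$ the time horizon, all illustrative symbols for the quantities fixed in the text), the reflection principle together with the Gaussian tail bound gives

```latex
\Pr\Bigl(\sup_{0\le s\le t}|B_s|\ge L\Bigr)
\;\le\; 4\,\Pr(B_t\ge L)
\;\le\; \frac{4}{\sqrt{2\pi}}\,\frac{\sqrt{t}}{L}\,e^{-L^2/(2t)},
```

where the last inequality is the standard Mills-ratio estimate for the normal tail (see, e.g., [20]).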
For the term on the right we have
but since the function is decreasing and we deduce that the term on the right-hand side is , so
where in the last equality we have used that . Next, we must bound the second term on the right of (29), for which we observe first that since and assuming that is large we have so that
Evaluating at , integrating over and using the change of variables gives
but since we have (it is finite since ) and hence
where again we used that in the last equality. ∎
5.2.2 The case large
In this subsection we will prove the upper bounds in Theorem 30 both for as well as for , where for convenience we introduce the shorthand notation , which will be used throughout this section. Recall that we may assume for this theorem. We handle this case with the help of Propositions 37 and 39, as well as with the results obtained in Section 4. The main result in this section is the following, which directly implies the corresponding upper bounds of Theorem 30:
Proposition 41.
We begin by introducing some notation and deriving a few facts that will come in handy throughout this section. The points in with a positive angle define a curve in polar coordinates which can be parameterized by the radius as with , and where takes values between and , for being the solution of (see Figure 3(b) for a depiction of )
(30)
Observing that is the maximum of two functions, a major role will be played by the intersection point , which satisfies (see Figure 3(b) for a depiction of )
(31)
Next we derive several inequalities that will be useful in the proof of Proposition 41:
Fact 42.
If the constant appearing in the statement of Proposition 41 is sufficiently large, then under the assumption the following hold:
- (i) and for all , . In particular, for any by taking large we have and .
- (ii) .
- (iii) .
- (iv) is minimized at , and in particular for all .
- (v) For any fixed value of and ,
Proof.
The first part follows directly from Part (v) of Lemma 8, the second one from the definition of and the fact that , and the third part follows from (i) and the definition of . To prove part (iv) observe that in the function is decreasing while in it is increasing (since is constant and is decreasing) so the function is minimized at . The inequality follows from
To prove (v) we fix and equal to some constants and split the range of integration into two intervals, specifically and . For the first one we have and since , using (iii), we get
To conclude the proof of (v) observe that for the second interval (i) gives
∎
Following the strategy presented at the beginning of Section 5.2, for each with (the case is analogous) we define a box where this time we set (see Figure 3(c) for an illustration of )
(32)
Since by Part (ii) of Lemma 8 the function is decreasing, it can be easily checked that so in order to detect the target, a particle must first exit its respective box. In particular, by (26) and the second bound in Proposition 37, we obtain
(33)
It suffices thus to bound the right-hand side. We first address the term , which depends on the radial trajectory of the particle alone, and hence we can make use of the results obtained in Section 4.
Proposition 43.
Under the conditions of Proposition 41 we have:
- If , then .
- If and , then
Proof.
Throughout this proof, let be such that (the case can be dealt with analogously). As defined in Part (iii) of Lemma 14, with , and , let
Using the same arguments as in the proof of Proposition 19 in Section 4, bounding the term analogously to what was done in (16), we have
(34)
To bound from above the numerator of , note that the largest possible value of among points in is obtained at and so it follows from the definition of , , , and Part (i) in Fact 42, that
Also note that , so taking larger than a fixed constant gives , and hence
From Part (ii) in Fact 42 it follows that , and since (because ) we can argue as in the proof of Fact 17 that , and hence combining with the previous bound we get . Notice that both terms bounding in (34) involve a factor , which by definition of and Part (v) of Lemma 8 is , giving
(35)
To obtain a uniform upper bound for on , observe that for any fixed , since , we have , where was defined at the beginning of this section, giving
(36)
and hence . For the case this bound is , while for the case the exponential term is equal to which is at least since as already observed . By Part (ii) in Fact 42 we have . Thus,
so putting both cases together we obtain a bound of the form . Next we address the term appearing in (35). For , by (36), we have
which is an increasing function of and hence within the bound is maximized at . Thus, it is enough to deal with points with radial component in . Now, for said points a similar argument to the one in (36) gives and hence
The analysis of this term depends on and , and we begin by addressing the case and since it is the most delicate. Notice that in this case so in particular and hence the upper bound is an increasing function of . Now, in order for to be non-empty we need , from which we deduce that , which gives
The remaining cases are easier: If , then so the term is , while for the cases and it can be easily checked that since and respectively, we obtain . Since whenever , putting together the bounds for both terms in (35) we finally obtain the sought-after bound:
Next, we show that for we have by integrating both terms in (35) over . For the first one we use that to obtain
Splitting the range of integration of the last integral into and we obtain
where the last equality is by definition of and since implies that . To address this last bound suppose first that and observe that since we have , while for we have , hence so (since ) and we conclude that in any case
For the second term in (35) we use again, together with to obtain
and it can be checked directly from the definition of and our assumption that this last expression is also . ∎
The previous proposition gave the uniform bound for the first term appearing in (33), as well as a bound for its integral over . The following proposition allows us to bound from above the second term in (33) with the use of Proposition 39 to handle the function . Recall that in order to apply said proposition we require to be larger than with
(37)
where is a large constant that depends on and , and where is some function of which we only require to satisfy for being a constant lower bound for (which exists since ).
Proposition 44.
Proof.
Fix any given and split into
(39)
For the first integral, we can bound the probability inside it by , which using that and the change of variables gives a bound
For the second integral in (39), we use (28) to bound the probability within together with Proposition 39 (since ) and obtain
(40)
where if , and otherwise. Grouping the first, second and fourth terms in (40), the assumption gives
since the terms are independent of and . For the term analogous computations give
The final term in (40) is a bit more delicate, and must be treated differently according to whether or . In the latter case we can repeat the computations used in the previous term, giving
whereas if , we use the change of variables and the fact that so that
The result then follows by adding all the bounds and noticing that . ∎
We are now ready to prove the main result of this section.
Proof of Proposition 41.
Using (33) and Proposition 43 it will be enough to obtain a uniform bound for on as well as an upper bound for the integral of this term over this set. We will address the uniform bounds first, since calculations are easier, and then turn to the upper bound for the integrals over , whose analysis is similar.
Uniform upper bound: Our goal is to show that under the hypothesis ,
(41)
Throughout this analysis, we take constant and equal to . With this choice of we deduce from Part (iv) in Fact 42 that and hence we have (38) as in Proposition 44, so we address each term appearing in the bound as follows:
We assume throughout the uniform upper bound proof that , and hence this set is non-empty, so in particular we must have .
- The term appearing in (38) does not depend on and since we are assuming , it becomes , so it is negligible.
- Now, we consider the term . By definition . Since for , by Part (iv) in Fact 42, we have
If , this bound is independently of , whilst if , using that , the bound is . Recalling that, by Part (ii) in Fact 42, we know that , replacing this value in the previous bound we obtain a term of order . Summarizing, we have shown that and the result then follows by our choice of .
- For the term we observe that it is independent of , and under the assumptions if and if in the statement of Proposition 37 we obtain that and hence the term is negligible. In the case , since we obtain by definition of .
- To address the term we first treat the case . Using the definition of and the fact that this term is of order
and using Part (iv) in Fact 42 we deduce
(42)
From the definition of we have
where we directly observe that if this expression is (since ), and hence the upper bound is so in particular it has the form as desired. For the case we distinguish between the cases and : In the former we directly obtain that , whereas if , since and by definition of , we have and obtain an upper bound of .
Assume now that so that and notice that since the function is on , and using that we obtain
and hence the term is negligible in this case.
Integral upper bound: Our goal is to show that under the hypothesis for large and ,
In contrast to the choice in the uniform upper bound analysis, we will now choose
for some fixed satisfying (this is possible since ) and where is a lower bound for . It will be convenient to define since it will appear many times in subsequent computations. In order to apply Proposition 44 we need to show that , which is less straightforward than the uniform bound case since is also increasing with . In the case , observe that while at the same time . We conclude that
(43)
which is since and . Analogously, if we still have whereas this time so that
(44)
which is again since both and . We have proved that in either case . As a result we have (38) where we address each term appearing in the upper bound as follows:
- For the term assume first that and use (43) to obtain
so using the change of variable we obtain
and hence
- For the term , by definition of we obtain
We treat first the case , which implies in particular and hence the right-hand side is of order
where we first integrate from to , and then from to , and then use Part (iv) in Fact 42 to bound from below. Using Part (iii) of Fact 42 in the first term on the right-hand side, we obtain a term of order , while for the second term, using that , we have by definition that so in particular for some depending on and , giving the result in this case. For the case we can repeat the calculations in the previous case to again obtain a bound of order .
- For the term we observe that, using the change of variable together with Part (iv) in Fact 42, we get
Now, suppose first that so that and hence where the last equality is by our assumption of . Suppose now that and observe that in this case and so using that we deduce again that . Summarizing, independently of the value taken by , we have
- For the term it will be useful to notice from Part (iv) in Fact 42 that
(45)
for any . We assume first that so that
(46)
We analyze this expression first when so, using (45), the term on the right-hand side is of order
(47)
where the integral with respect to behaves differently according to whether is smaller than, equal to, or larger than . Suppose first that so that the above term is of order
where the right hand side follows from (45). Proceeding as in (42) we deduce that the bound is . Suppose now that , which in particular implies so in this case . Now, if then the integral with respect to in (47) is , and thus the bound is of order
where we have used the particular form of . Since satisfies and also , we conclude that the bound is . Similarly, if then (47) is of order
which is of order and again using that and we conclude that the bound is .
We now turn to the analysis of the right-hand side of (46) when , in which case we obtain a term of order
(48)
Observe that the condition implies that . Assume first that , in which case (48) is of order
and from the particular form of we have and hence the bound is . Suppose next that , in which case (48) is of order
and in order to show that the bound is it will be enough to prove that . It can be checked that using the particular form of , we have , and the result will follow as soon as the exponent of is positive. However, this is equivalent to which holds since by definition , and we are assuming . Finally, suppose that , in which case (48) is of order and using the same analysis as in the case (with strict inequalities) we deduce that it is .
We now bound the integral of the term in the special case where
Now, for any fixed , we can use the change of variables in the inner integral, which gives
for which is since . On the other hand which allows us to conclude that
Therefore,
where the last equality follows directly from Part (v) in Fact 42.
∎
6 Conclusion and outlook
We studied the movement of particles in the random hyperbolic graph model introduced in [28] and analyzed the tail of detection times of a fixed target in this model. To the best of our knowledge, the present paper is the first to analyze dynamic hyperbolic graphs. It is natural to ask questions similar to the ones addressed in [37] for the model of random geometric graphs: how long does it take to detect a target that is itself mobile? How long does it take to detect all (fixed or mobile) targets in a certain sector? How long does a given vertex, initially outside the giant component at some fixed location, need to connect to some vertex of the giant component (percolation time)? How long does it take to broadcast a message in a mobile graph? We believe that the detection of a mobile target, using the same proof ideas as presented here, should accelerate the detection time by a constant factor (presumably by a factor in the angular case and perhaps by a different factor in general), and the detection time of all targets in a certain sector can presumably also be analyzed with techniques similar to the ones used in this paper. On the other hand, new proof ingredients might be needed for analyzing the percolation time and the broadcast time. We also believe that the observation made by Moreau et al. [33], who showed that for the continuous random walk on the square lattice the best strategy for the target to avoid detection (that is, to avoid a particle ending up at exactly its location) is to stay put, holds in our model as well.
Appendix A Appendix: on the sums of Pareto variables
Here we prove Lemma 38, which gives a large deviation result for sums of independent heavy-tailed random variables. The lemma is a more precise version of the results in [35] (although here we deal only with positive random variables that are bounded away from zero), and the proof follows closely what was done there.
Lemma 45.
Let where is a sequence of i.i.d. absolutely continuous random variables taking values in such that there are with
for all . Then there are depending on , and (if it exists) alone such that:
- If , then for all , .
- If , then for all , .
- If , then for all , .
Remark 46.
We remark that tighter results can be obtained for below, but the current result is sufficient for our purposes.
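To make the regimes of the lemma concrete, the following is a small simulation sketch (not part of the proof; the shape and scale values, the sample sizes, and the `pareto_sample` helper are illustrative choices). It samples Pareto variables by inverse transform and illustrates the "single big jump" phenomenon underlying the heavy-tailed case: when the shape is below one, the largest summand carries a constant fraction of the whole sum, whereas for shape above one it is negligible.

```python
import random

def pareto_sample(alpha, beta, rng):
    # Inverse-transform sampling: if U ~ Uniform(0,1), then beta * U**(-1/alpha)
    # has the Pareto tail P(W > t) = (beta / t)**alpha for t >= beta.
    return beta * rng.random() ** (-1.0 / alpha)

rng = random.Random(42)
n, trials = 100, 500

def mean_max_over_sum(alpha, beta=1.0):
    # Average fraction of the sum carried by the single largest summand.
    ratios = []
    for _ in range(trials):
        xs = [pareto_sample(alpha, beta, rng) for _ in range(n)]
        ratios.append(max(xs) / sum(xs))
    return sum(ratios) / trials

# Shape > 1: the sum has finite mean and the maximum is negligible.
light_ratio = mean_max_over_sum(3.0)
# Shape < 1: infinite mean; a "single big jump" dominates the sum.
heavy_ratio = mean_max_over_sum(0.5)

print(light_ratio, heavy_ratio)
```

Under these illustrative parameters the heavy-tailed ratio is an order of magnitude larger than the light-tailed one, matching the intuition that large deviations of the sum are driven by a single large term when the shape parameter is below one.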
Proof.
We proceed as in [35] by assuming first that . Fix and define the event so that
where is the sum of i.i.d. random variables with c.d.f. for any . We can thus use a Chernoff bound to deduce
where we used independence of the random variables to conclude that . Define now for some to be chosen below so that holds. Hence,
which we bound separately. First notice that for some constant we have
where if and if . For the integral between and observe that
where in the last line we used the change of variables . Now, since , the function has a positive derivative for . Hence for we have and hence the last integral is therefore smaller than , giving
for some constants depending only on and . Putting together both bounds for we arrive at
Our aim at this point is to choose such that the term on the right is small, which is achieved by taking
so that . Assume first that so for large, for which is small, while is large so the assumption is justified. Now, since , and hence we have
and since we are assuming large, we have which finally gives
which proves the first point of the theorem. Suppose now that so that for for some large, and hence is small, while is large, so again the assumption is justified. For this choice of we have which we can bound as
for some constant , and hence we arrive at
for some constant , and where the last inequality holds by choosing larger than and also by choosing large enough.
Suppose now that so that exists. In this case we can perform a similar computation to the one before to deduce that
and we can divide the integral as before so that
where again (and for our choice of small below again we have ). Now, the main difference in this case is the treatment of the first term, for which we have
for some value depending on alone, where we used that , that is small, but also , and where
Since is small we conclude that the is at least of the same order as the terms containing and hence
Treating the integral as in the case we finally obtain
Now, since we are interested in the probability for larger than some which we can take larger than we have
and hence we can take . For we choose as before (which is small) for which and hence
but for large enough, and hence
Suppose now that and choose for which we have , giving
Now, if , then so for some constant
for large , while if , , and so
for large . In either case, we obtain
but for we have and hence the second term dominates the first, giving the result. ∎
References
- [1] M. Abdullah, M. Bode and N. Fountoulakis “Typical distances in a geometric model for complex networks” In Internet Mathematics 1, 2017
- [2] I. Akyildiz, W. Su, Y. Sankarasubramaniam and E. Cayirci “Wireless sensor networks: a survey” In Computer Networks 38, 2002, pp. 393–422
- [3] N. Alon and J.. Spencer “The probabilistic method, Third edition” John Wiley & Sons, 2008
- [4] S. Athreya, A. Drewitz and R. Sun “Random Walk Among Mobile/Immobile Traps: A Short Review” In: V. Sidoravicius (ed.), Sojourns in Probability Theory and Statistical Physics – III, Springer Proceedings in Mathematics & Statistics, vol. 300, Springer, Singapore, 2018
- [5] A-L. Barabási “Linked: The New Science of Networks” Perseus Publishing, 2002
- [6] A.M. Berezhkovskii, Yu.A. Makhovskii and R.A. Suris “Wiener sausage volume moments” In Journal of Statistical Physics 57, 1989, pp. 333–346
- [7] J. Berg, R. Meester and D.G. White “Dynamic Boolean models” In Stochastic Processes and their Applications 69, 1997, pp. 247–257
- [8] M. Bode, N. Fountoulakis and T. Müller “On the Giant Component of random hyperbolic graphs” In Proceedings of the 7th European Conference on Combinatorics, Graph Theory and Applications, EUROCOMB’13, 2013, pp. 425–429
- [9] M. Bode, N. Fountoulakis and T. Müller “The probability of connectivity in a hyperbolic model of complex networks” In Random Structures & Algorithms 49.1, 2016, pp. 65–94
- [10] M. Boguñá, F. Papadopoulos and D. Krioukov “Sustaining the internet with hyperbolic mapping” In Nature Communications, 2010, pp. 1:62
- [11] A.N. Borodin and P. Salminen “Handbook of Brownian Motion – Facts and Formulae” Birkhäuser, Basel, 2002
- [12] E. Candellero and N. Fountoulakis “Clustering and the hyperbolic geometry of complex networks” In Internet Mathematics 12.1–2, 2016, pp. 2–53
- [13] J. Díaz, D. Mitsche and X. Pérez-Giménez In IEEE Transactions on Mobile Computing 8.6, 2009, pp. 821–835
- [14] R. Diel and D. Mitsche “On the largest component of subcritical random hyperbolic graphs” In Electronic Communications of Probability 26, 2021, pp. 1–14
- [15] D. Dufresne “The Distribution of a Perpetuity, with Applications to Risk Theory and Pension Funding” In Scandinavian Actuarial Journal, 1990, pp. 39–79
- [16] R. Feng, P. Jiang and H. Volkmer “Geometric Brownian motion with affine drift and its time-integral” In Applied Mathematics and Computation 395, 2020
- [17] N. Fountoulakis and T. Müller “Law of large numbers for the largest component in a hyperbolic model of complex networks” In Annals of Applied Probability 28, 2018, pp. 607–650
- [18] N. Fountoulakis, P. Hoorn, T. Müller and M. Schepers “Clustering in a hyperbolic model of complex networks” In Electronic Journal of Probability 26, 2021, pp. 1–132
- [19] T. Friedrich and A. Krohmer “On the Diameter of Hyperbolic Random Graphs” In Automata, Languages, and Programming - 42nd International Colloquium – ICALP Part II 9135, LNCS Springer, 2015, pp. 614–625
- [20] R.D. Gordon “Values of Mills’ ratio of area to bounding ordinate and of the normal probability integral for large values of the argument” In Ann. Math. Statist. 12, 1941, pp. 364–366
- [21] L. Gugelmann, K. Panagiotou and U. Peter “Random Hyperbolic Graphs: Degree Sequence and Clustering” In Automata, Languages, and Programming - 39th International Colloquium – ICALP Part II 7392, LNCS Springer, 2012, pp. 573–585
- [22] G. Kesidis, T. Konstantopoulos and S. Phoha “Surveillance coverage of sensor networks under a random mobility strategy” In Proceedings of the 2nd IEEE International Conference on Sensors, 2003
- [23] M. Kiwi and D. Mitsche “A Bound for the Diameter of Random Hyperbolic Graphs” In Proceedings of the 12th Workshop on Analytic Algorithmics and Combinatorics – ANALCO SIAM, 2015, pp. 26–39
- [24] M. Kiwi and D. Mitsche “On the second largest component of random hyperbolic graphs” In SIAM J. Discrete Math. 33.4, 2019, pp. 2200–2217
- [25] M. Kiwi and D. Mitsche “Spectral Gap of Random Hyperbolic Graphs and Related Parameters” In Annals of Applied Probability 28.2, 2018, pp. 941–989
- [26] C. Koch and J. Lengler “Bootstrap percolation on geometric inhomogeneous random graphs” In Automata, Languages, and Programming - 43rd International Colloquium – ICALP 55, LIPIcs, 2016, pp. 1–15
- [27] T. Konstantopoulos “Response to Prof. Baccelli’s lecture on modelling of wireless communication networks by stochastic geometry” In Computer Journal Advance Access, 2009
- [28] D. Krioukov et al. “Hyperbolic geometry of complex networks” In Physical Review E 82, 2010, pp. 036106
- [29] A. Linker, D. Mitsche, B. Schapira and D. Valesin “The contact process on random hyperbolic graphs: metastability and critical exponents” In Annals of Probability 49.3, 2021, pp. 1480–1514
- [30] B. Liu et al. “Mobility improves coverage of sensor networks” In Proceedings of the 6th ACM International Conference on Mobile Computing and Networking (MobiCom), 2005
- [31] H. Matsumoto and M. Yor “Exponential functionals of Brownian motion, I: Probability laws at fixed time” In Probability Surveys 2, The Institute of Mathematical Statistics and the Bernoulli Society, 2005, pp. 312–347
- [32] M. Mitzenmacher and E. Upfal “Probability and Computing: Randomization and Probabilistic Techniques in Algorithms and Data Analysis” Cambridge University Press, 2017
- [33] M. Moreau, G. Oshanin, O. Bénichou and M. Coppey “Lattice theory of trapping reactions with mobile species” In Phys. Rev. E 69, 2004
- [34] T. Müller and M. Staps “The diameter of KPKVB random graphs” In Advances in Applied Probability 51.2, 2019, pp. 358–377
- [35] O. Omelchenko and A. Bulatov “Concentration inequalities for sums of random variables, each having power bounded tails”, 2019 arXiv:1903.02529
- [36] Y. Peres, P. Sousi and A. Stauffer “The isolation time of Poisson Brownian motions” In ALEA, Lat. Am. J. Probab. Math. Stat. 10.2, 2013, pp. 813–829
- [37] Y. Peres, A. Sinclair, P. Sousi and A. Stauffer “Mobile Geometric Graphs: Detection, Coverage and Percolation” In Probability Theory and Related Fields 156, 2013, pp. 273–305
- [38] F. Spitzer “Electrostatic capacity, heat flow, and Brownian motion” In Z. Wahrscheinlichkeitstheorie verw. Geb. 3, 1964, pp. 110–121
- [39] A. Stauffer “Space–time percolation and detection by mobile nodes” In Ann. Appl. Probab. 25.5, 2015, pp. 2416–2461
- [40] D. Stoyan, W.S. Kendall and J. Mecke “Stochastic Geometry and its Applications” John Wiley & Sons, 2nd edition, 1995
- [41] M. Yor “On Some Exponential Functionals of Brownian Motion” In Advances in Applied Probability 24, 1992, pp. 509–531