Interlacement limit of a stopped random walk trace on a torus
Abstract
We consider a simple random walk on started at the origin and stopped on its first exit time from . Write in the form with and an integer going to infinity in such a way that for some real constant . Our main result is that for , the projection of the stopped trajectory to the -torus locally converges, away from the origin, to an interlacement process at level , where is the exit time of a Brownian motion from the unit cube that is independent of the interlacement process. The above problem is a variation on results of Windisch (2008) and Sznitman (2009).
Keywords: interlacement, random walk, torus, hashing, loop-erased random walk
MSC 2020: 60K35
1 Introduction
A special case of a result of Windisch [15] — extended further in [1] — states that the trace of a simple random walk on the discrete -dimensional torus , for , started from stationarity and run for time converges, in a local sense, to an interlacement process at level , as . In this paper we will be concerned with a variation on this result, for which our motivation was a heuristic analysis of an algorithm we used to simulate high-dimensional loop-erased random walk and the sandpile height distribution [7]. Let us first describe our main result and then discuss the motivating problem.
Consider a discrete-time lazy simple random walk starting at the origin on . We write for the probability measure governing this walk. We stop the walk at the first time it exits the large box , where is an integer. We will take of the form , where and is an integer, such that for some , as . We consider the projection of the trajectory to the -torus . The projection is given by the map , where for any , is the unique point of such that , where congruence is understood coordinate-wise.
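To make the setup concrete, the following minimal sketch simulates this construction. It is an illustration only: the parameters, the choice of the box as [-N, N]^d centred at the origin, and the coordinate-wise reduction modulo n are assumptions matching our reading of the definitions above, not necessarily the paper's exact conventions.

```python
import random

def lazy_srw_stopped_trace(d, N, n, rng=None):
    """Lazy simple random walk on Z^d from the origin, stopped on first
    exit from the box [-N, N]^d; returns the projection of its trace to
    the discrete torus (Z/nZ)^d via coordinate-wise reduction mod n."""
    rng = rng or random.Random()
    x = [0] * d
    trace = {tuple(c % n for c in x)}
    while all(-N <= c <= N for c in x):
        if rng.random() < 0.5:       # lazy step: hold with probability 1/2
            continue
        i = rng.randrange(d)         # uniformly chosen coordinate ...
        x[i] += rng.choice((-1, 1))  # ... moved by +1 or -1
        trace.add(tuple(c % n for c in x))
    return trace
```

For instance, `lazy_srw_stopped_trace(3, 100, 10)` returns the set of torus points covered by one stopped trajectory in dimension 3.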
Let denote the exit time from of a standard Brownian motion started at . We write for the expectation associated to this Brownian motion. For any finite set , let denote the capacity of [9]. For any and , we denote , where is the Euclidean norm. Let denote the collection of all subsets of . Given let denote the translation of the torus by . Let be any function satisfying .
Theorem 1.1.
Let . For any , any , and any satisfying we have
(1) |
The error term depends on and , but is uniform in and .
Note that the trace of the lazy simple random walk stopped at time is the same as the trace of the simple random walk stopped at the analogous exit time. We use the lazy walk for technical convenience in the proof.
Our result is close in spirit to, but differs in its details from, a result of Sznitman [12] that is concerned with simple random walk on a discrete cylinder. The interlacement process was introduced by Sznitman in [13]. It consists of a one-parameter family of random subsets of (), where the distribution of can be characterized by the relation
(2) |
The precise construction of a process satisfying (2) represents as the trace of a Poisson cloud of bi-infinite random walk trajectories (up to time-shifts), where is an intensity parameter. We refer to [13] and the books [3, 14] for further details. Comparing (1) and (2) we now formulate precisely what we mean by saying that the stopped trajectory, locally, is described by an interlacement process at the random level .
Let be any function satisfying . Note this does not have to be the same function as . Let be an arbitrary sequence satisfying . Define the sequence of random configurations by
Define the process by requiring that for all finite we have
To see that this formula indeed defines a unique process, write the right-hand side as
where is the density of . Then via the inclusion-exclusion formula, we see that we necessarily have for all finite sets the equality
and the right-hand side can be used as the definition of the finite-dimensional marginals of . Note that lives in a compact space (the space can be identified with with the product topology). Hence the finite-dimensional marginals uniquely determine the distribution of , by Kolmogorov’s extension theorem.
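The inclusion-exclusion step can be spelled out computationally. In the sketch below, `toy_cap` is a placeholder capacity functional (not the true capacity of a lattice set): the point is only the combinatorics of recovering the probability that a finite set is covered from the vacancy probabilities of the form exp(-u * cap(K')).

```python
from itertools import combinations
from math import exp

def prob_contains(K, u, cap):
    """P(the interlacement set at level u contains all of K), obtained by
    inclusion-exclusion over subsets K' of K from the vacancy
    probabilities P(K' is avoided) = exp(-u * cap(K'))."""
    K = list(K)
    total = 0.0
    for r in range(len(K) + 1):
        for Kp in combinations(K, r):
            total += (-1) ** r * exp(-u * cap(frozenset(Kp)))
    return total

# placeholder capacity, for illustration only: cap(K') = |K'|
toy_cap = lambda S: float(len(S))
```

With this additive placeholder, points behave independently: for a single point the formula gives 1 - exp(-u), and for two points the square of that quantity.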
Theorem 1.2.
Let . Under the law of the configuration converges weakly to the law of , as .
Proof of Theorem 1.2 assuming Theorem 1.1.
For events of the form , Theorem 1.1 immediately implies that
For events of the form , the inclusion-exclusion formula represents as a linear combination of probabilities of the former kind, and hence convergence follows. ∎
Our motivation to study the question in Theorem 1.1 was a simulation problem that arose in our numerical study of high-dimensional sandpiles [7]. We refer the interested reader to [11, 2, 6] for background on sandpiles. In our simulations we needed to generate loop-erased random walks (LERW) from the origin to the boundary of where . The LERW is defined by running a simple random walk from until it hits the boundary, and erasing all loops from its trajectory chronologically, as they are created. We refer to the book [9] for further background on LERW (which is not needed to understand the results in this paper). It is known from results of Lawler [8] that in dimensions the LERW visits on the order of vertices, the same as the simple random walk generating it. As the number of vertices visited is a lot smaller than the volume of the box, an efficient way to store the path generating the LERW is provided by the well-known method of hashing. We refer to [7] for a discussion of this approach, and only provide a brief summary here. Assign to any an integer value that is used to label the information relevant to position , where can be a large constant or slowly growing to infinity. Thus is necessarily highly non-injective. However, we may be able to arrange that with high probability the restriction of to the simple random walk trajectory is not far from injective, and then memory use can be reduced from order to roughly .
A simple possible choice of the hash function is to compose the map with a linear enumeration of the vertices of , whose range has the required size. (This is slightly different from what was used in [7].) The method can be expected to be effective if the projection spreads roughly evenly over the torus with high probability. Our main theorem establishes a version of such a statement, as the right-hand side expression in (1) is independent of .
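A hedged sketch of this scheme (the hash below is a hypothetical stand-in; as noted, the function used in [7] differs slightly): compose reduction modulo n with the base-n linear enumeration of torus vertices, and store per-site data in a table keyed by the hash.

```python
def torus_index(x, n):
    """Hash a lattice point: reduce coordinates mod n and enumerate the
    n^d torus vertices linearly (base-n digits). Highly non-injective
    on the full lattice, but nearly injective on a typical walk trace
    if the projected trace spreads evenly over the torus."""
    idx = 0
    for c in x:
        idx = idx * n + (c % n)
    return idx

def store_trace(trace, n):
    """Store visited sites keyed by hash, counting collisions between
    distinct sites; memory is proportional to the number of visited
    sites rather than to the volume of the box."""
    table, collisions = {}, 0
    for x in trace:
        k = torus_index(x, n)
        if k in table and table[k] != x:
            collisions += 1
        table[k] = x
    return table, collisions
```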
We now make some comments on the proof of Theorem 1.1. We refer to [3, Theorem 3.1] for the strategy of the proof in the case when the walk is run for a fixed time . The argument presented there proceeds by decomposing the walk into stretches of length for some , and then estimating the (small) probability in each stretch that is hit by the projection. We follow the same outline for the stopped lazy random walk. However, the elegant time-reversal argument given in [3] is not convenient in our setting, and we need to prove a delicate estimate on the probability that is hit, conditional on the start and end-points of the stretch. For this, we only want to consider stretches with “well-behaved” starting and end-points. We also classify a stretch as a “good stretch” if the total displacement is not too large, and as a “bad stretch” otherwise. We do this in such a way that the expected number of “bad stretches” is small and summing over the “good stretches” gives us the required behaviour.
Possible generalisations.
(1) It is not essential that we restrict to the simple random walk: any random walk for which the results in Section 2 hold (such as finite-range symmetric walks) would work equally well.
(2) The paper [15] considers several distant sets , and we believe this would also be possible here, but would lead to further technicalities in the presentation.
(3) It is also not essential that the rescaled domain be , and we believe it could be replaced by any other domain with sufficient regularity of the boundary.
A note on constants. All constants will be positive and finite. Constants denoted or will only depend on dimension and may change from line to line. If we need to refer to a constant later, it will be given an index, such as .
We now describe the organization of this paper. In Section 2, we first introduce some basic notation, then we recall several useful known results on random walk and state the key propositions required for the proof of the main theorem, Theorem 1.1. Section 3 contains the proof of the main theorem, assuming the key propositions. Finally, in Section 4 we provide the proofs of the propositions stated in Section 2.
2 Preliminaries
2.1 Some notation
We first introduce some notation used in this paper. In Section 1, we denoted the discrete torus , and the canonical projection map . From here on, we will omit the -dependence and write and instead.
We write vertices and subsets of the torus in bold, i.e. and . In order to simplify notation, in the rest of the paper we abbreviate .
Let be a discrete-time lazy simple random walk on , that is,
We denote the corresponding lazy random walk on by . Let denote the distribution of the lazy random walk on started from , and write for the distribution of the lazy random walk on started from . We write for the -step transition probability. Further notation we will use:
• , where as for some constant
• , rescaled box, indicates which copy of the torus the walk is in
• for some , long enough for the mixing property on the torus, but short compared to
• is a fixed point of
• we write points in the original lattice with a prime, such as , and decompose a point as with in another lattice isomorphic to and
• , the first exit time from
• , so that the first multiple of when the rescaled point is not in equals
We omit the dependence on and from some notation above for simplicity.
2.2 Some auxiliary results on random walk
In this section, we collect some known results required for the proof of Theorem 1.1. We will rely heavily on the Local Central Limit Theorem (LCLT) [9, Chapter 2], with error term, and the Martingale maximal inequality [9, Eqn. (12.12) of Corollary 12.2.7]. We will also use Equation (6.31) in [9], that relates to the probability that a random walk started from the boundary of a large ball with radius hits the set before exiting the ball. In estimating some error terms in our arguments, sometimes we will use the Gaussian upper and lower bounds [5]. We also need to derive a lemma related to the mixing property on the torus [10, Theorem 5.6] to show that the starting positions of different stretches are not far from uniform on the torus; see Lemma 2.1.
We recall the LCLT from [9, Chapter 2]. The following is a specialisation of [9, Theorem 2.3.11] to lazy simple random walk. The covariance matrix and the square root of the associated quadratic form are given by
where is the -unit matrix.
Let denote the estimate of that one obtains by the LCLT, for lazy simple random walk. We have
The lazy simple random walk in is aperiodic, irreducible with mean zero, finite second moment, and finite exponential moments. All joint third moments of the components of vanish.
Theorem 2.1 ([9], Theorem 2.3.11).
For lazy simple random walk in , there exists such that for all and all with ,
The Martingale maximal inequality in [9, Eqn. (12.12) of Corollary 12.2.7] is stated as follows. Let denote the -th coordinate of (). The standard deviation of is given by . For all and all we have
(3) |
Now we state the result of [9, Eqn. (6.31)]. Recall that is the discrete ball centred at with radius . Let for any subset
Let . For a given finite set , let denote the hitting time
Then we have
(4) |
Here is the capacity of ; see [9, Section 6.5], which states the analogous statement for the simple random walk. Since we consider the lazy random walk, this introduces a factor of .
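As a numerical illustration of capacity (a Monte Carlo sketch under stated assumptions: non-lazy walk, d = 3, and a finite-time truncation of the "no return" event), recall that for a one-point set the capacity equals the escape probability of the walk started there:

```python
import random

def escape_probability(d=3, trials=1500, max_steps=500, seed=1):
    """Estimate the probability that simple random walk on Z^d started
    at the origin does not return to it within max_steps steps; for a
    one-point set this escape probability is the capacity."""
    rng = random.Random(seed)
    escapes = 0
    for _ in range(trials):
        x = [0] * d
        returned = False
        for _ in range(max_steps):
            i = rng.randrange(d)
            x[i] += rng.choice((-1, 1))
            if not any(x):           # back at the origin
                returned = True
                break
        if not returned:
            escapes += 1
    return escapes / trials
```

In d = 3 the estimate should land roughly near 0.66, consistent with the well-known return probability of about 0.34 for simple random walk on Z^3.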
In estimating some error terms in our arguments, sometimes we will use the Gaussian upper and lower bounds [5]: there exist constants and such that
(5) |
Recall that the norm refers to the Euclidean norm.
Regarding mixing times, recall that lazy simple random walk on the -torus mixes in time [10, Theorem 5.6]. With this in mind we derive the following simple lemma.
Recall that and .
Lemma 2.1.
There exists such that for any and any we have
Proof.
Using the Gaussian upper bound, the left-hand side can be bounded by
Here we bounded the number of in satisfying by , where .
∎
2.3 Key propositions
In this section we state some propositions to be used in Section 3 to prove Theorem 1.1. The propositions will be proved in Section 4.
The strategy of the proof is to consider stretches of length of the walk, and estimate the small probability in each stretch that is hit by the projection. What makes this strategy work is that we can estimate, conditionally on the starting and end-points of a stretch, the probability that is hit, and this event asymptotically decouples from the number of stretches. The number of stretches will be the random variable . Since , and is in probability, we have that is . In Lemma 2.2 below we show a somewhat weaker estimate for (which suffices for our needs).
The main part of the proof will be to show that during a fixed stretch, is not hit with probability
(6) |
Heuristically, conditionally on this results in the probability
and we will conclude by showing that converges in distribution to a constant multiple of the Brownian exit time .
The factor in (6) arises as the expected time spent by the projected walk at a fixed point of the torus during a given stretch. The capacity term arises as we pass from expected time to hitting probability.
For the above approach to work, we need to exclude a small-probability event on which the number of stretches or the end-points of stretches are not sufficiently well-behaved. First, we will need to restrict to realizations where , which occurs with high probability as (see Lemma 2.2 below). Second, suppose that the -th stretch starts at the point and ends at the point , that is, and are realizations of and . In order to have a good estimate of the probability that is hit during this stretch, we will need to impose a few conditions on and . One of these is that the displacement is not too large: we will require that for all stretches, it is at most , for a function to be chosen later that increases to infinity with . We will be able to choose of the form in such a way that this restriction holds for all stretches with high probability. A third condition we need to impose, which will also hold with high probability, is that is at least a certain distance from for a parameter (this will only be required for , and is not needed for the first stretch starting with ). The reason we need this is to be able to appeal to (4) to extract the contribution, when we know that is hit from a long distance (we will take in (4)). The larger the value of , the better the error bound we get on the approach to . On the other hand, should not be too close to , because we want the separation of from to occur with high enough probability.
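The classification just described can be sketched as follows (illustrative only: `L` is the stretch length and `max_disp` a stand-in for the displacement threshold, whose exact form is chosen later in the proof):

```python
def classify_stretches(path, L, max_disp):
    """Split a trajectory (list of lattice points) into stretches of L
    steps and label each by whether the end-to-end displacement is at
    most max_disp in Euclidean norm."""
    labels = []
    for k in range((len(path) - 1) // L):
        a, b = path[k * L], path[(k + 1) * L]
        disp_sq = sum((u - v) ** 2 for u, v in zip(a, b))
        labels.append('good' if disp_sq <= max_disp ** 2 else 'bad')
    return labels
```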
The set defined below represents realizations of and the sequence satisfying the above restrictions. Proposition 2.1 below implies that these restrictions hold with high probability. First, we will need to satisfy the inequality
(7) |
This can be satisfied if and is sufficiently close to , say . Since the left-hand side of the left-hand inequality in (7) equals , we can subsequently choose such that we also have
(8) |
With the parameter fixed satisfying the above, we now define:
(9) |
where and recall that we write and , and we define . The time corresponds to a particular value of the exit time , so for and . The parameter will be chosen in the course of the proof. See Figure 1 for a visual illustration of the sequence in the definition of .
The next lemma shows that the restriction made on the time-parameter in the definition of holds with high probability for .
Lemma 2.2.
We have as .
Proof.
By the definitions of and , we first notice that
where denotes the -th coordinate of the -dimensional lazy random walk.
We are going to use (3). Setting and , we can evaluate each term (similarly for the event with ) in the sum
(10) |
Recall that and . For the main term in the exponential in (10), we have the upper bound
The big term in the exponential in (10) produces an error term because
Coming to the second event , observe that the Central Limit Theorem applied to implies that
for some . Hence we have for all .
Applying this with ,
as required.
∎
The starting point for the proof of Theorem 1.1 is the following proposition that decomposes the probability we are interested in into terms involving single stretches of duration .
Proposition 2.1.
For a sufficiently large value of , we have that
(11) |
Furthermore,
(12) |
where as .
Central to the proof of Theorem 1.1 is the following proposition, that estimates the probability of hitting a copy of during a “good stretch” where the displacement is almost of order . This will not hold for all stretches with high probability, but the fraction of stretches for which it fails will vanish asymptotically.
Proposition 2.2.
There exists a sufficiently large value of such that the following holds. Let . Then for all such that we have
(13) |
In addition to the above proposition (that we prove in Section 4.2), we will need a weaker version for the remaining “bad stretches”, which are subject to a weaker restriction on the distance . This will be needed to estimate error terms arising from the “bad stretches”, and it will also be useful for demonstrating some of our proof ideas for other error terms arising later in the paper. It will be proved in Section 4.1.
Proposition 2.3.
Let . For all we have
(14) |
and for the first stretch we have
(15) |
Here the and error terms are negative.
Our final proposition is needed to estimate the number of stretches that are “bad”.
Proposition 2.4.
We have
(16) |
as .
3 Proof of the main theorem assuming the key propositions
This section contains the proof of Theorem 1.1.
Proof of Theorem 1.1 assuming Propositions 2.1–2.4.
First, given any , we define
(17) |
Thus we have
We further denote by
We have by Proposition 2.1 that
(18) |
By Proposition 2.4, we can replace the summation over elements of by summation over just elements of , at the cost of a term. That is,
(19) |
Applying Proposition 2.2 for the factors and Proposition 2.3 for the factors we get
(20) |
Note that since the summation is over elements of only, we have
(21) |
By (21), we can lower bound the last product in (20) by
Since the product is also at most , it equals .
Also, due to (21), we have
Since , we have . This implies that the penultimate product in (20) equals
(22) |
Recall that . By summing over and appealing to (11), we get that (20) equals
(23) |
where the primed summation denotes restriction to . Since due to Lemma 2.2, satisfies the bounds on with probability going to , the latter expression equals
(24) |
Let denote the covariance matrix for , so that . Let , with the covariance matrix . Let for .
Since , the event is the same as . Converting to events in terms of we have
Now we can write as
Let be the exit time of Brownian motion from . By Donsker’s Theorem [4, Theorem 8.1.5] we have
Then we have that converges in distribution to , with . This completes the proof. ∎
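The Brownian exit time appearing in the limit can be probed numerically. The following Monte Carlo sketch (parameters illustrative; it uses a non-lazy walk, whose coordinates have per-step variance 1/d, so that the walk's exit time from [-K, K]^d rescaled by d*K^2 approximates the Brownian exit time from the unit cube) estimates a Laplace-transform-type functional of the exit time:

```python
import random
from math import exp

def mc_laplace_exit(lam, d=3, K=20, trials=500, seed=7):
    """Estimate E[exp(-lam * sigma)] for the exit time sigma of standard
    Brownian motion from [-1,1]^d, by Donsker rescaling of the exit
    time of simple random walk on Z^d from the box [-K, K]^d."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(trials):
        x = [0] * d
        t = 0
        while all(-K <= c <= K for c in x):
            i = rng.randrange(d)
            x[i] += rng.choice((-1, 1))
            t += 1
        acc += exp(-lam * t / (d * K * K))   # sigma ~ t / (d K^2)
    return acc / trials
```

With a fixed seed the same trajectories are reused across calls, so the estimate is strictly decreasing in `lam`, as a Laplace transform must be.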
4 Proofs of the key propositions
4.1 Proof of Proposition 2.3
In the proof of the proposition we will need the following lemma that bounds the probability of hitting some copy of in terms of the Green’s function of the random walk. Recall that the Green’s function is defined by
and in all satisfies the bound [9]
for a constant . For part (ii) of the lemma recall that . We also define as the maximum Euclidean distance between two points in .
Lemma 4.1.
Let . Assume that is large enough so that .
(i) If satisfies , then
for all sufficiently small we have
(25) |
(ii) If , then for all sufficiently small we have
(26) |
(iii) If satisfies , then for all sufficiently small we have
(27) |
Proof.
(i) We split the sum according to whether or . In the first case we use (5) and write to get
For the remaining terms, we have the upper bound
Let be the cube with radius centred at and then are disjoint annuli for . We decompose the sum over according to which annulus falls into. For we have
where is the Green’s function constant. The contribution from any copy of in will be of order if its distance from is at least , say. Note that there is at most one copy of within distance of , which may have a distance as small as .
We have to sum over the following values of :
Since and for , the distance between and is at least . Therefore, we get the upper bound as follows:
Here the last inequality follows from the choice of , (8), for sufficiently small .
(ii) The proof is essentially the same, except for the contribution of the “nearest” copy of , which is now .
(iii) The proof is very similar to that in part (i). Recall that . This time we need to sum over , which results in the bound
Here, for small enough, the second term dominates due to the choice of ; see (8). ∎
Proof of Proposition 2.3.
Since
we need to show that
Define , so that
(28) |
We have
(29) |
We bound this by splitting up the sum into different contributions. Let be a parameter that will be chosen sufficiently small in the course of the proof.
Case 1. and , . By the LCLT we have that
(30) |
For this note that we have
where the first term tends to and the rest equals
Here we choose so that . A similar observation shows the estimate for .
The way we are going to use (30) is to replace the summation over by a summation over all , and at the same time insert a factor . Hence the contribution of the values of and in Case 1 to the right-hand side of (28) is at most
This completes the bound in Case 1. For future use, note that if is any sequence, and we add the restriction to the conditions in Case 1, we obtain the upper bound
(31) |
Case 2a. but . In this case we bound and have that the contribution of this case to the right-hand side of (28) is at most
where in the first step we used (3) and in the last step we used the Gaussian lower bound (5) for . Indeed, the requirement for the Gaussian lower bound is satisfied for sufficiently large because . Therefore, we have
(32) |
Then we have
Case 2b. but . This case can be handled very similarly to Case 2a.
Case 3a. and . By the LCLT we have
We claim that
(33) |
We first note that , then we deduce that and
Since we have in the exponent, we have, as ,
and
These imply that
Thus (33) follows from comparing the LCLT approximations of the two sides.
We now have that the contribution of this case to the right-hand side of (28) is at most
where in the first step we used Lemma 4.1(i) and the last step holds for the value of we chose; cf. (8).
Case 3b. but . Use the Gaussian upper bound (5) to bound and bound the sum over all of by 1 using symmetry of to get
In the last step we used a Gaussian lower bound for ; cf. (32).
Case 4a. and . This case can be handled very similarly to Case 3a.
Case 4b. and . This case can be handled very similarly to Case 3b.
This covers all possible cases and proves statement (14) of the proposition, as required.
The proof of (15) is similar to the first part with only a few modifications. In this part we have to show that
Define , so that
(34) |
We have
(35) |
We bound the term above by splitting up the sum into the same cases as in the proof of (14). The different cases can be handled very similarly to the first part. The difference is only in Case 3a while applying the Green’s function bound Lemma 4.1.
In Case 3a, by the LCLT, we can deduce that
If , the bound of Lemma 4.1(i) can be used as before. If , by Lemma 4.1(ii), we have that the contribution of this case to the right-hand side of (34) is at most
Here we used that .
Note that Case 4a can be handled in the same way as in the proof of (14), since the distance between and any copy of is at least .
This covers all possible cases and proves (15), as required. ∎
For future use, we extract a few corollaries of the proof of Proposition 2.3.
Corollary 4.1.
Assume that are points such that . Then for all we have
(36) |
Proof.
In the course of the proof of Proposition 2.3, we established the above with , where and , and with in place of in the upper bound on the displacement . Note that in this case the restriction in the summation holds for all , due to conditions imposed on and in the definition of .
The arguments when and with the upper bound increased by a factor of are essentially the same. The information that and are at least distance from was only used in Cases 3a and 4a to handle terms close to these points. Since in (36) we exclude such from the summation, the statement follows without restricting the location of . ∎
The following is merely a restatement of what was observed in (31) (with part (ii) below holding by symmetry).
Corollary 4.2.
(i) For and any sequence , we have
(37) |
(ii) The same right-hand side expression is valid if we replace the restriction by .
The following is a restatement of the bounds of Cases 2a and 2b.
Corollary 4.3.
For we have
(i)
(38) |
(ii)
(39) |
The following is a restatement of the bounds of Cases 3a and 3b combined.
Corollary 4.4.
For we have
(40) |
4.2 Proof of Proposition 2.2
In this section we need large enough so that we have
(41) |
We have
where
The strategy is to estimate the probability via the Bonferroni inequalities:
(42) |
We are going to use a parameter that we choose as so that in particular .
4.2.1 The main contribution
In this section, we consider only stretches with . We will show that the main contribution in (42) comes from in the set:
We first examine for . Putting , let be the time of the last visit to before hitting , let be the time of the first hit of , and let . See Figure 2 for an illustration of this decomposition.
Then we can write:
(43) |
where
We are going to use another parameter that will need to go to slowly. We choose it as . The main contribution to (43) will be when , and . Therefore, we split the sum over in (43) into a main contribution and an error term . In order to define these, let
(44) |
Then with
(45) |
we have
Lemma 4.2.
When and , we have
with the term uniform in and .
Proof.
By the LCLT, we have
We compare the exponents
as . ∎
Lemma 4.3.
When and , we have
with the term uniform in and .
Proof.
The statement will follow if we show the following claim:
For this, observe that by (5) we have
(46) |
On the other hand, using the Markov property, (5), and the fact that for we have and , we get
(47) |
We note here that the sum over and the sum over are symmetric. Bounding the sum over gives
(48) |
In the second sum we can bound the exponential by , and get the upper bound
In the first sum, we group terms on dyadic scales so that , . This gives the bound
which is also . ∎
In order to apply the previous two lemmas to analyze in (45), we first define a modification of the quantity defined below (43), where and are both replaced by a vertex . That is, we define
Then Lemmas 4.2 and 4.3 allow us to write, for , the main term in (45) as
(49) |
where the sum over is removed since
Lemma 4.4.
Assume that and .
(i) We have
(ii) We have
(50) |
Proof.
(i) When , we have
Hence the exponential term in the LCLT for is
where we used that , and hence .
(ii) If we summed over all , we would get exactly . Thus the claim amounts to showing that
(51) |
First, note that from the Local CLT we have
In order to estimate the left-hand side of (51), using (5), the contribution of can be estimated as follows. First, we have
Here we used and .
On the other hand, note that either or . Without loss of generality, assume that . Then the contribution to the left-hand side of (51), using (5), and by summing in dyadic shells with radii , we get the bound
(52) |
∎
The above lemma allows us to write
(53) |
The next lemma will help us extract the contribution from the right-hand side of (43).
Lemma 4.5.
We have
(54) |
Proof.
From the above lemma we get that the main contribution equals
(56) |
It is left to estimate all the error terms.
4.2.2 The error terms
Lemma 4.6.
We have
Proof.
We split the estimates according to which condition is violated in the sum. Recall that in the proof of Proposition 2.3 we chose such that . Here we make the further restriction that .
Case 1. . We claim that
(57) |
Since in every time interval of duration , the walk has a positive chance to exit the ball , we have
By (5) on and since and we have
Here we lower bounded by . The claim (57) is proved.
We also have the bound
We then get (summing over and ) that the contribution to from Case 1 is at most
Case 2. and . Note that since for large enough , if we put and , we can upper bound the contribution of this case by
Now we can make use of the corollaries stated after the proof of Proposition 2.3 as follows.
Case 2–(i). and and . Note that for large enough we have . Hence due to Corollary 4.2(i) (with there replaced by ) the contribution of this case is
Case 2–(ii). but either or . Again, we have . Hence, neglecting the requirement , Corollary 4.3 immediately implies that the contribution of this case is
Case 2–(iii). . It follows immediately from Corollary 4.4 that the contribution of this case is
Case 3. and . Due to symmetry, this case can be handled very similarly to Case 2. ∎
Lemma 4.7.
We have
Proof.
By the same arguments as in Lemma 4.4(ii), we have
For , let be the dyadic scale that satisfies
The same bounds hold up to constants for .
Then we have
Due to symmetry of the right-hand side, it is enough to consider the contribution of , which is bounded by
Now summing over , we have that the number of copies of the torus at dyadic scale is at most . Hence
∎
Lemma 4.8.
We have
Proof.
The summand on the left-hand side is bounded above by
Due to symmetry it is enough to consider the first term inside the summation. The estimates are again modelled on the proof of Proposition 2.3.
Case 1. and . In this case we can use Corollary 4.1 with and to perform the summation over and and get the upper bound:
(58) |
where we have written and . Using again Corollary 4.1, this time with and yields the upper bound
(59) |
Case 2. and . We are going to use that , which we can clearly assume. First sum over all to get the upper bound
(60) |
where the primed summation denotes the restriction . The choice of (recall (41)) implies that is . Due to the triangle inequality we also have . Using the LCLT for we get that
(61) |
Substituting this bound and into (60) we get
Case 3. and . Summing over all , we get the transition probability . This is stretched-exponentially small, and hence this case satisfies the required bound.
Case 4. . Due to symmetry, this case can be handled analogously to Cases 1–3.
∎
4.3 Proof of Proposition 2.1
Proof of Proposition 2.1.
We start with the proof of the second claim. We denote the error term in (12) by , which we claim satisfies , with
Since , we have
By the Markov property, for ,
We denote the probability on the right-hand side by . On the event of the right-hand side, since , we have . We claim that
(62) |
Let be the events in the definitions of respectively. Let
Then, we have
From above, the claim (62) follows.
We bound and in the following lemmas.
Lemma 4.9.
We have for some . Consequently, .
Proof.
Since the number of points in is , and since we are considering times after the first stretch, the random walk is well mixed, so by Lemma 2.1 the probability of visiting any given point of the torus is . Using a union bound we have
Since , we have
∎
Before we bound the error term , we first introduce the following lemma. Let , where recall that denotes the -th coordinate of the -dimensional lazy random walk. We will denote by an instance of .
Lemma 4.10.
For all , for any integer such that and we have
Proof.
Using the Markov property at time , we get
where the third step is due to the Cauchy-Schwarz inequality and for some , and the second-to-last step is due to .
We can similarly bound the conditional expectation of as follows:
∎
Lemma 4.11.
We have as .
Proof.
First we bound the time difference between and . We consider separately the cases where lies in each face of the cube . Assume that for some . (The arguments are very similar when for some , and these will not be given.)
Let us consider the lazy random walk in multiples of steps. Let
and similarly, let
We let and for . Then is a martingale. Let , and we will bound for some small to be chosen in the course of the proof. To this end we adapt an argument from [10, Proposition 17.19].
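The optional-stopping scheme being adapted can be illustrated by the classical exit problem; this is a standard computation with illustrative notation, not the precise martingale used in this proof. For simple random walk $(S_k)$ on $\{0,1,\dots,n\}$ started at $x$, both $S_k$ and $S_k^2-k$ are martingales, and stopping at the exit time $\tau$ of $(0,n)$ gives

```latex
\mathbb{E}_x[S_\tau] = x \;\Rightarrow\; \mathbb{P}_x(S_\tau = n) = \frac{x}{n},
\qquad
\mathbb{E}_x\big[S_\tau^2 - \tau\big] = x^2 \;\Rightarrow\;
\mathbb{E}_x[\tau] = n^2\cdot\frac{x}{n} - x^2 = x(n-x).
```

For a lazy walk with holding probability $1/2$ the per-step variance is halved, so $S_k^2 - k/2$ is the relevant martingale and $\mathbb{E}_x[\tau] = 2x(n-x)$.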
Define
where we set . Let denote the filtration generated by . We have
(63) |
where recall that is the variance of .
We first estimate . Since , by the same argument as in Lemma 4.10 we have that
Next we bound . Since is bounded, by the Optional Stopping Theorem, we have
where we denote by and the last step is due to . Hence, we have
We now estimate . Let . The sequence is a martingale by (63). We can bound both the ‘overshoot’ above and the ‘undershoot’ below by Lemma 4.10. For the ‘undershoot’ below we have
For the ‘overshoot’ above , write ; then we have
Hence
For , we have . Therefore, we have
Since is a martingale
We conclude that . Letting , by the Monotone Convergence Theorem,
where . This gives
Taking expectations of both sides, we have
Combining the above bounds, we get
We now bound the probability that a copy of is hit between times and .
We first show that the probability that the lazy random walk on the torus is in the ball at time goes to . Indeed, we have
where we have , so the last expression goes to . Here we used that , for example using a half-space Poisson kernel estimate [9, Theorem 8.1.2]. As for the number of terms in the sum, we have that there are copies of the torus within distance of the boundary . Considering the intersection of the union of balls and the boundary , the worst case is that within a single copy of the torus the intersection has size at most .
Condition on the location of the walk at the exit time . For we first bound the probability of hitting between the times and . After time , the random walk is well mixed, and we can apply a simpler union bound.
We thus have the upper bound
The first term is stretched-exponentially small due to the martingale maximal inequality (3). The Green’s function term is bounded by Lemma 4.1(iii).
After time , by Lemma 2.1, we have that
Therefore, combining the above upper bounds, we obtain the required result if and are sufficiently small. ∎
Lemma 4.12.
We have for some . Consequently, there exists such that if , then .
Proof.
By the martingale maximal inequality (3), we have that
Taking, say, implies that if , then we have
∎
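The maximal inequality (3) is of Azuma-Hoeffding type, and its content can be checked exactly on a toy martingale. In the sketch below the parameters n = 100 and lam = 25 are arbitrary, and the constant in (3) may differ from the bound 2 exp(-lam²/(2n)) used here; the exact maximal tail probability is obtained by killing a simple random walk on leaving (-lam, lam):

```python
import math

def prob_max_exceeds(n, lam):
    """P(max_{k<=n} |S_k| >= lam) for simple +-1 random walk S_k,
    computed exactly by killing the walk on leaving (-lam, lam)."""
    probs = {0: 1.0}
    killed = 0.0
    for _ in range(n):
        new = {}
        for x, p in probs.items():
            for nx in (x - 1, x + 1):
                if abs(nx) >= lam:
                    killed += 0.5 * p   # walk has reached level +-lam
                else:
                    new[nx] = new.get(nx, 0.0) + 0.5 * p
        probs = new
    return killed

n, lam = 100, 25
exact = prob_max_exceeds(n, lam)
azuma = 2 * math.exp(-lam ** 2 / (2 * n))
print(exact, azuma)
```

The exact probability sits well below the Azuma-Hoeffding bound, reflecting the stretched-exponential decay exploited in the lemma.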
4.4 Proof of Proposition 2.4
Proof.
By the martingale maximal inequality (3), used in the second step, we have
Hence we have
By Markov’s inequality, it follows that
as .
∎
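The concluding Markov-inequality step can likewise be checked exactly on a toy example; the horizon n = 50 and threshold a = 10 below are arbitrary choices, not quantities from the proposition:

```python
def srw_dist(n):
    """Exact distribution of simple +-1 random walk after n steps."""
    dist = {0: 1.0}
    for _ in range(n):
        new = {}
        for x, p in dist.items():
            new[x - 1] = new.get(x - 1, 0.0) + 0.5 * p
            new[x + 1] = new.get(x + 1, 0.0) + 0.5 * p
        dist = new
    return dist

n, a = 50, 10
dist = srw_dist(n)
mean_abs = sum(abs(x) * p for x, p in dist.items())    # E|S_n|
tail = sum(p for x, p in dist.items() if abs(x) >= a)  # P(|S_n| >= a)
print(tail, mean_abs / a)  # Markov: tail <= E|S_n| / a
```

Since both quantities are computed exactly, the printed tail probability is bounded by E|S_n|/a, as Markov's inequality guarantees.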
Acknowledgements. We thank two anonymous referees for their constructive criticism. The research of Minwei Sun was supported by an EPSRC doctoral training grant to the University of Bath with project reference EP/N509589/1/2377430.
References
- [1] Jiří Černý and Augusto Teixeira. Random walks on torus and random interlacements: Macroscopic coupling and phase transition. Annals of Applied Probability, 26(5):2883–2914, 2016.
- [2] Deepak Dhar. Theoretical studies of self-organized criticality. Phys. A, 369(1):29–70, 2006.
- [3] Alexander Drewitz, Balázs Ráth, and Artëm Sapozhnikov. An Introduction to Random Interlacements. SpringerBriefs in Mathematics. Springer, 2014.
- [4] Rick Durrett. Probability: Theory and Examples. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 5th edition, 2019.
- [5] W. Hebisch and L. Saloff-Coste. Gaussian estimates for Markov chains and random walks on groups. The Annals of Probability, 21(2):673–709, 1993.
- [6] Antal A. Járai. Sandpile models. Probab. Surv., 15:243–306, 2018.
- [7] Antal A. Járai and Minwei Sun. Toppling and height probabilities in sandpiles. Journal of Statistical Mechanics: Theory and Experiment, 2019(11):113204, November 2019.
- [8] Gregory F. Lawler. Intersections of Random Walks. Modern Birkhäuser Classics. Birkhäuser, New York, 2013.
- [9] Gregory F. Lawler and Vlada Limic. Random Walk: A Modern Introduction. Cambridge Studies in Advanced Mathematics 123. Cambridge University Press, Cambridge, 2010.
- [10] David A. Levin, Yuval Peres, and Elizabeth L. Wilmer. Markov Chains and Mixing Times. American Mathematical Society, Providence, RI, second edition, 2017.
- [11] Frank Redig. Mathematical aspects of the abelian sandpile model. In Mathematical statistical physics, pages 657–729. Elsevier B. V., Amsterdam, 2006.
- [12] Alain-Sol Sznitman. Random walks on discrete cylinders and random interlacements. Probability Theory and Related Fields, 145(1-2):143–175, 2009.
- [13] Alain-Sol Sznitman. Vacant set of random interlacements and percolation. Annals of Mathematics, 171(3):2039–2087, 2010.
- [14] Alain-Sol Sznitman. Topics in occupation times and Gaussian free fields. Zurich Lectures in Advanced Mathematics. European Mathematical Society (EMS), Zürich, 2012.
- [15] David Windisch. Random walk on a discrete torus and random interlacements. Electronic Communications in Probability, 13:140–150, 2008.
Antal A. Járai and Minwei Sun
Address: Department of Mathematical Sciences, University of Bath, Claverton Down, Bath, BA2 7AY, United Kingdom
Email: [email protected], [email protected]