Scaling limit for random walk on the range of random walk in four dimensions
Abstract
We establish scaling limits for the random walk whose state space is the range of a simple random walk on the four-dimensional integer lattice. These concern the asymptotic behaviour of the graph distance from the origin and the spatial location of the random walk in question. The limiting processes are the analogues of those for higher-dimensional versions of the model, but additional logarithmic terms in the scaling factors are needed to see these. The proof applies recently developed machinery relating the scaling of resistance metric spaces and stochastic processes, with key inputs being natural scaling statements for the random walk’s invariant measure, the associated effective resistance metric, the graph distance, and the cut times for the underlying simple random walk.
Keywords: random walk, scaling limit, range of random walk, random environment.
MSC: 60K37 (primary), 05C81, 60K35, 82B41.
1 Introduction
The object of interest in this article is the discrete-time simple random walk on the range of a four-dimensional random walk. To introduce this, we follow the presentation of [8], in which the same and higher-dimensional versions of the model were studied. Let $S=(S_n)_{n\geq 0}$ be the simple random walk on $\mathbb{Z}^4$ started from $0$, built on an underlying probability space with probability measure $\mathbf{P}$. Define the range of the random walk $S$ to be the graph $\mathcal{G}=(V(\mathcal{G}),E(\mathcal{G}))$ with vertex set
$$V(\mathcal{G}):=\{S_n:\,n\geq 0\} \qquad (1)$$
and edge set
$$E(\mathcal{G}):=\{\{S_n,S_{n+1}\}:\,n\geq 0\}. \qquad (2)$$
For -a.e. random walk path, the graph is infinite, connected and has bounded degree. Given a realisation of , we write
to denote the discrete-time Markov chain with transition probabilities
where is the usual graph degree of in . For , the law is the quenched law of the simple random walk on started from . Since is always an element of , we can also define an averaged (or annealed) law for the random walk on started from 0 as the semi-direct product of the environment law and the quenched law by setting
Our main results are the following averaged scaling limits for the simple random walk on . The processes and are assumed to be independent standard Brownian motions on and respectively, both started from the origin of the relevant space. The notation is used to represent the shortest path graph distance on . We note that it was previously shown in [8, Theorem 1.3] that is with high probability of an order in the interval , and part (b) refines this result substantially; a reparameterisation yields that the correct order of is . Similarly, a reparameterisation of part (a) yields that the correct order of is .
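Before stating the theorem, it may help to make the model concrete. The following minimal Python sketch (not taken from the paper; the function names are ours and purely illustrative) simulates a simple random walk on the four-dimensional lattice, records its range graph as at (1) and (2), and runs the nearest-neighbour random walk on that graph with the transition probabilities described above.

```python
import random
from collections import defaultdict

def simulate_range_graph(n_steps, d=4, seed=0):
    """Simple random walk on Z^d started at the origin; returns the range
    graph as an adjacency dict (vertices = visited sites, edges = traversed
    nearest-neighbour bonds) together with the path itself."""
    rng = random.Random(seed)
    moves = []
    for i in range(d):
        for s in (1, -1):
            e = [0] * d
            e[i] = s
            moves.append(tuple(e))
    pos = (0,) * d
    path = [pos]
    adj = defaultdict(set)
    for _ in range(n_steps):
        step = rng.choice(moves)
        nxt = tuple(p + q for p, q in zip(pos, step))
        adj[pos].add(nxt)
        adj[nxt].add(pos)
        path.append(nxt)
        pos = nxt
    return adj, path

def quenched_walk(adj, start, n_steps, seed=1):
    """Discrete-time random walk on the range graph: from x, jump to a
    uniformly chosen graph neighbour (probability 1/deg(x) each)."""
    rng = random.Random(seed)
    x, trace = start, [start]
    for _ in range(n_steps):
        x = rng.choice(sorted(adj[x]))
        trace.append(x)
    return trace

adj, path = simulate_range_graph(10_000)
X = quenched_walk(adj, start=path[0], n_steps=1_000)
```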
Theorem 1.1.
(a) There exist deterministic slowly-varying functions and satisfying
such that the law of
under converges as to the law of .
(b) For as in part (a), the law of
under converges as to the law of .
It was shown in [8] that the corresponding result is true in dimensions , but without the need for the and terms in the scaling factors. (Actually, [8] established the convergence of part (a) in a stronger way, namely under the quenched law , for -a.e. realisation of .) As will become clear in our argument, the appearance of and in the four-dimensional case stems from their appearance in the scaling of the graph distance and effective resistance between and (see Theorem 1.2 below). In the lower-dimensional cases, the situation is substantially different. Indeed, for , the underlying random walk is recurrent, and so is simply equipped with all nearest-neighbour edges. Thus, the random walk scales diffusively to Brownian motion on . As for the case , we do not have a conjecture for the scaling limit of the process. However, the random walk and its scaling limit are both transient, and therefore neither the graph , nor its scaling limit, will be space-filling. On the other hand, we know that the graph no longer has asymptotically linear volume growth (see [26, Proposition 4.3.1]), has spectral dimension strictly greater than one (see [26, Theorem 1.2.3]), and has loops that persist in the limit (see [15] for the original proof that Brownian motion has double points, and [23, 25] for a description of how the random walk ‘loop soup’ converges to that of Brownian motion). Therefore, in three dimensions, we do not expect scaling limits for as above.
As was noted in [8], one motivation for studying random walk on the range of a random walk is its application to the transport properties of sedimentary rocks of low porosity, where the commonly considered sub-critical percolation model does not reflect the pore connectivity properties seen in experiments (see [3]). (We note that, in [3], it was observed that the effective resistance in between 0 and is of order , as was later rigorously proved in [27].) Further motivation given in [8] was that the model provides a simple prototypical example of a graph with loops that nonetheless has a tree-like scaling limit, for which one could establish convergence of the associated random walks to the natural diffusion on the limiting tree-like space. Such a situation is also expected to be the case for more challenging models, such as critical percolation in high dimensions (see [5, 6, 7] for progress in this direction). In addition to works specifically related to high-dimensional percolation, recent years have seen the development of a general approach for deriving scaling limits of random walks on graphs with a suitably ‘low-dimensional’ asymptotic structure, which demonstrates that if the effective resistance and invariant measure of the random walk converge in an appropriate fashion, then so does the random walk itself (see [9, 12], and [2] for the particular case of tree-like spaces). We will appeal to this framework to prove Theorem 1.1, with relevant details being introduced in Section 3. We highlight that the present work is a first application of such ‘resistance form’ theory to a model at its critical dimension, where logarithmic terms appear in the scaling factors. (NB. Although the viewpoint of [8] was a little different, combining the basic estimates of [8] with the argument of [9] would also yield the scaling limits of random walk on the range of random walk in dimensions greater than or equal to 5 as given in [8].)
Providing the key inputs into the resistance form argument that we apply to deduce Theorem 1.1, the first part of the paper is devoted to establishing a number of basic properties of the underlying graph . These give appropriate notions of volume, resistance, graph distance, and cut-time scaling for . We highlight that much of the work needed to establish the following result was completed in earlier articles, see Remark 1.3(a) for details. To state the result, we need to define a number of quantities. Firstly, let be the measure on given by
this is the unique (up to scaling) invariant measure of the Markov chain . Secondly, let be the effective resistance metric on when we consider to be an electrical network with unit resistors placed along each edge (see [14, 24] for elementary background on the effective resistance, and [4, Chapter 2] for a careful treatment of effective resistance for infinite graphs). We have already introduced the shortest path graph distance . Finally, we write
$$\mathcal{T}:=\left\{n\geq 0:\, S_{[0,n]}\cap S_{[n+1,\infty)}=\emptyset\right\} \qquad (3)$$
for the set of cut times of $S$, where $S_{[0,n]}:=\{S_m:\,0\leq m\leq n\}$ and $S_{[n+1,\infty)}:=\{S_m:\,m\geq n+1\}$ for $n\geq 0$. It is known that $\mathcal{T}$ is -a.s. an infinite set (see [21]), and we denote the elements of this set as .
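To illustrate the definition at (3), here is a hedged Python sketch building on the simulation sketch above. Since only a finite path segment is available, it is a finite-horizon proxy for the true cut-time set, and indices near the end of the segment are reported too generously.

```python
def cut_times(path):
    """Indices n such that no site visited by time n is revisited strictly
    after time n -- a finite-horizon proxy for the cut-time set at (3)."""
    last = {}
    for i, site in enumerate(path):
        last[site] = i                  # last visit time of each site
    cuts, running_max = [], -1
    for n, site in enumerate(path):
        running_max = max(running_max, last[site])
        if running_max == n:            # past range and future range are disjoint
            cuts.append(n)
    return cuts

cuts = cut_times(path)   # 'path' from the earlier simulation sketch
```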
Theorem 1.2.
For any , the following limits hold in -probability as .
(a) For some deterministic constant ,
(b) For some deterministic slowly-varying function satisfying ,
(c) For some deterministic slowly-varying function satisfying ,
(d) For some deterministic constant ,
Remark 1.3.
(a) As is claimed in [21], the techniques of [19, Chapter 7] yield part (d) of the above result. For completeness, we provide the details below. As for part (a), this is new, though its proof is also based on ideas from [19]. Concerning part (b), the one-dimensional marginal of the process in question was dealt with in [27]. Here, we do the additional work to derive the functional convergence statement. (Since the resistance distance from 0 to is not monotone increasing, this is not a completely trivial exercise.) Part (c) is handled in the same way as part (b).
(b) The of Theorem 1.1 is given by , where and are given above.
(c) We conjecture that it is possible to take , in the above result (and therefore also in Theorem 1.1).
Beyond the scaling limits of Theorem 1.1, Theorem 1.2 also yields, via the general techniques of [11] and [18], various further consequences for the simple random walk. In particular, we have the following results describing the growth rate of the quenched exit times and the scaling limit of the heat kernel of the process. To state these, we introduce the notation
for the time that exits a -ball of radius centred at the origin, and
for the (smoothed) transition density of . We note that [8, Theorem 1.4] showed that deviated from by a logarithmic amount, but did not pin down the precise exponent of the logarithm.
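As a concrete illustration of the exit-time notation (though not of the precise logarithmic corrections at issue here), the following hedged Python sketch reuses the simulation helpers above to compute graph distances in the range graph by breadth-first search and to record the time at which the walk first leaves a graph-distance ball; the ball is taken in the graph distance for simplicity, and the function names are ours and illustrative only.

```python
from collections import deque

def graph_distances_from(adj, source, radius):
    """Breadth-first search in the range graph: graph distances from
    'source', computed out to the given radius."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        if dist[v] == radius:
            continue
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

def exit_time(adj, start, radius, seed=2):
    """Number of steps until the walk on the range graph first reaches
    graph distance >= 'radius' from 'start'."""
    ball = graph_distances_from(adj, start, radius)
    rng = random.Random(seed)
    x, t = start, 0
    while True:
        x = rng.choice(sorted(adj[x]))
        t += 1
        if x not in ball or ball[x] >= radius:
            return t

print(exit_time(adj, start=path[0], radius=20))   # keep the radius modest
```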
Corollary 1.4.
(a) It holds that
(b) For any compact interval and , it holds that
in -probability.
Remark 1.5.
In [27, Theorem 1.2.2], it was stated that there exist constants such that, for -a.e. realisation of ,
for all large . (Note that, in [27], was defined to be equal to , and this is consistent with the current article, since the conclusion of Theorem 1.2(b) holds for this particular choice of .) However, the proof in [27] incorporated a misapplication of [18, Proposition 3.2], in that it did not include a verification of the resistance lower bound of [18, Equation (3.3)]. This gap is filled in [13], which applies the estimates of [27] to obtain the missing resistance estimate. The latter can be considered a -a.s. version of Proposition 3.2 below (or a quantitative version of (32)).
The remainder of the paper is organised as follows. In Section 2, we derive the fundamental estimates on the underlying graph that are stated as Theorem 1.2. In Section 3, we then explain how these can be used to obtain the scaling limit for the random walk of Theorem 1.1, and also give a proof of Corollary 1.4. Throughout the paper, we write if for some absolute constant . Also, we write if . We let denote arbitrary constants in that may change from line to line. To simplify notation, we sometimes use a continuous variable, say, where a discrete argument is required, with the understanding that it should be treated as .
2 Estimates for the underlying graph
We establish the parts of Theorem 1.2 concerning the measure, the metrics and cut times separately.
2.1 Volume scaling
Towards proving the volume asymptotics of Theorem 1.2(a), similarly to (1) and (2), for each $n$, we define a subgraph $\mathcal{G}_n=(V(\mathcal{G}_n),E(\mathcal{G}_n))$ of $\mathcal{G}$ by setting
$$V(\mathcal{G}_n):=\{S_m:\,0\leq m\leq n\},\qquad E(\mathcal{G}_n):=\{\{S_m,S_{m+1}\}:\,0\leq m\leq n-1\}. \qquad (4)$$
We moreover let .
Proof of Theorem 1.2(a).
For each , we introduce a random variable by letting
where stands for the cardinality of a set . In particular, for to be strictly positive, we require to be the time of the last visit by to the vertex . It follows that
(5)
To see this, for each , we set to be the last time that the simple random walk hits . Since if and only if for some , it follows that
which gives (5).
Next we will compare the volume of with . To do this, we let be the indicator function of the event that is a cut time, i.e.,
(6)
where was defined at (3). Also, we set
for the event that there are no cut times in . It follows from [19, Lemma 7.7.4] that
(7)
where is a constant. Suppose that there exists a cut time in . Then we have for all . Since , it follows that on the event ,
Combining this with (5) and (7), we find that
(8)
and so, in order to show the desired convergence in -probability of , it will suffice to prove that, as ,
(9)
in -probability for some deterministic constant .
We now derive an upper bound on the variance of . To do this, take with . We want to control the dependence between and . Define an event by setting
We also let
The condition ensures that and are independent. Also, we note that on the event , it holds that and . The local central limit theorem (see [19, Theorem 1.2.1]) implies that
where the constants may change from line to line. Therefore we have
Since and are independent, we consequently obtain that: for with ,
and it readily follows from this that
In particular, this bound yields
(10)
The conclusion of the previous paragraph means that we have reduced the problem of establishing (9) to showing the asymptotic linearity of the deterministic sequence , and that is what we do now. Let and be independent simple random walks on started at the origin. Define a random variable by setting
where the graph is defined from as at (1) and (2). We will define the that appears in the statement of the result as
Now, take , and define two events and by setting
By again applying the local central limit theorem (see [19, Theorem 1.2.1]), we see that
With this in mind, we define
where the graph is defined from similarly to (4). For , on the event (respectively ), we have (respectively ). Thus the translation invariance and time-reversibility of the simple random walk yield that, for ,
This gives
Finally, since is clearly monotone in , extending the above convergence to the functional statement of Theorem 1.2(a) is elementary. ∎
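As a quick numerical sanity check on the linear volume growth established above (a Monte Carlo illustration only; the estimated ratio is not claimed to identify the constant in the theorem), one can count the distinct sites visited by time n for several values of n, reusing the earlier simulation helper.

```python
def average_range_size(n, trials=20):
    """Monte Carlo estimate of the expected number of distinct sites
    visited by a four-dimensional simple random walk up to time n."""
    total = 0
    for t in range(trials):
        _, p = simulate_range_graph(n, seed=100 + t)
        total += len(set(p))
    return total / trials

for n in (1_000, 2_000, 4_000, 8_000):
    print(n, average_range_size(n) / n)   # the ratio should be roughly constant
```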
2.2 Resistance and distance scaling
The essential work for the resistance and distance asymptotics of Theorem 1.2(b),(c) was completed in [27]. In particular, the following result concerning the effective resistance was established there. We simply highlight how this, and the techniques used to prove it, can be adapted to our current purposes.
Lemma 2.1 ([27, Theorems 1.2.1 and 1.2.3]).
For some deterministic slowly-varying function satisfying ,
in -probability.
The corresponding result for the graph distance is as follows.
Lemma 2.2.
For some deterministic slowly-varying function satisfying ,
in -probability.
Proof.
The properties of the effective resistance used to prove Lemma 2.1 were simply that:
• is a metric on ,
• if are such that , then
where is the effective resistance metric for the graph , as defined at (4) (with unit resistors placed along edges).
These two properties are clearly satisfied when we replace the effective resistance by the shortest path graph distance, and so the same results hold for the graph distance. The easy details are left to the reader. ∎
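For a finite piece of the range graph, both metrics compared in Lemmas 2.1 and 2.2 can be computed directly, which may help fix ideas. The hedged numpy sketch below (reusing simulate_range_graph and graph_distances_from from the earlier sketches; names are illustrative) obtains the effective resistance with unit resistors from the Moore-Penrose pseudoinverse of the graph Laplacian, and the graph distance from breadth-first search.

```python
import numpy as np

def effective_resistance(adj, a, b):
    """Effective resistance between a and b in a finite graph with unit
    resistors on each edge: R(a, b) = (1_a - 1_b)^T L^+ (1_a - 1_b)."""
    nodes = sorted(adj)
    idx = {v: i for i, v in enumerate(nodes)}
    L = np.zeros((len(nodes), len(nodes)))
    for v in nodes:
        L[idx[v], idx[v]] = len(adj[v])        # degree on the diagonal
        for w in adj[v]:
            L[idx[v], idx[w]] = -1.0           # -1 for each edge
    Lplus = np.linalg.pinv(L)
    e = np.zeros(len(nodes))
    e[idx[a]], e[idx[b]] = 1.0, -1.0
    return float(e @ Lplus @ e)

# a small instance keeps the pseudoinverse cheap
adj_small, path_small = simulate_range_graph(400, seed=3)
a, b = path_small[0], path_small[-1]
d_ab = graph_distances_from(adj_small, a, radius=10**9)[b]
print(d_ab, effective_resistance(adj_small, a, b))  # resistance <= graph distance
```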
As we hinted in the introduction, the functional convergence statements of Theorem 1.2(b), (c) do not immediately follow from Lemmas 2.1 and 2.2 due to the non-monotonicity of and . To deal with this issue, we will apply the following observation about the gaps between cut times.
Lemma 2.3.
Define and . It then holds that, as ,
in -probability.
Proof.
The above lemma readily allows us to compare the effective resistance process with its past maximum, , and similarly for the graph distance.
Lemma 2.4.
It holds that, as ,
in -probability. The same claim also holds when is replaced by .
Proof.
2.3 Cut-time scaling
Recall the definition of a cut time from (3) and the random variables from (6). Moreover, define
to be the number of cut times in . On [21, page 3], it is stated that there exists a deterministic constant such that, in -probability, as ,
(11)
From this and the monotonicity of the sequence , Theorem 1.2(d) readily follows with . As we noted in Remark 1.3, it is claimed in [21] that the limit at (11) can be proved by using the methods developed in [19, Chapter 7]. However, for the reader's convenience, we give a proof of (11) here.
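Before turning to the proof, a crude Monte Carlo illustration of the order claimed at (11) may be helpful; the hedged Python sketch below reuses simulate_range_graph and cut_times from the earlier sketches. The finite horizon inflates the count slightly and the convergence is only logarithmic, so the printed ratios are indicative at best.

```python
import math

def cut_time_ratio(n, trials=10):
    """Monte Carlo estimate of E[N_n] * log(n) / n, where N_n is the
    (finite-horizon proxy for the) number of cut times up to time n."""
    total = 0
    for t in range(trials):
        _, p = simulate_range_graph(n, seed=200 + t)
        total += len(cut_times(p))
    return (total / trials) * math.log(n) / n

for n in (2_000, 8_000, 32_000):
    print(n, cut_time_ratio(n))   # should settle down, slowly, as n grows
```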
Proof of Theorem 1.2(d).
As noted above, it will suffice to prove (11). Let and be two independent simple random walks on started at the origin. We write for the first time that exits , that is, a ball of radius with respect to the Euclidean distance centred at the origin. It is then proved in [20, Theorem 5.2] that there exists a constant such that
(12)
where we let for , and . Moreover, it is known that
(13)
for some constants , see [22, Proposition 2.4.5]. Combining (12) and (13), we see that
Reparameterising and letting , we find that
and thus that
(14)
Consequently, it follows that
(15)
Thus to complete the proof, it will be enough to establish a suitable bound on the variance of for large .
Take so that: for all ,
(16)
where . Throughout the proof here, we will assume . For , we set
Also, we write
We note that and are independent if and . In order to estimate the variance of , we consider with . Noting that , we have that
We will estimate the probabilities on the right-hand side. It is shown in [19, Lemma 7.7.3] that
Since , we have by (16). From this, it follows that . Thus we have
(17)
which implies in turn that
Moreover, by (14), it holds that , and so
A similar estimate applies to , and thus we conclude that
for with .
We are now in a position to complete the proof. It follows from (17) that, for ,
Also, we recall that and are independent for with . Consequently, writing for the sum over integers and satisfying with ,
where for the final line we recall (15). This implies that converges to in -probability as . Combining this with (15), we get the limit at (11), as required. ∎
3 Simple random walk scaling limit
As briefly outlined in the introduction, the way in which we will establish the random walk scaling limits of Theorem 1.1 is via the resistance form approach developed in [9, 12]. In particular, we will apply [9, Theorem 7.2], which requires us to check a certain convergence result for compact metric spaces equipped with measures, marked points and continuous maps (see Proposition 3.1 below), as well as a non-explosion condition (see Proposition 3.2). To this end, we start by introducing the framework that will enable us to check the conditions of [9, Theorem 7.2]. Once these have been verified, we recall the necessary resistance form background for deriving our main conclusions for the random walk on the range of random walk in four dimensions that appear as Theorem 1.1. To conclude the section, we prove Corollary 1.4.
Broadly following the notation of [9], we consider a space of elements of the form , where we assume: is a compact metric space, is a finite Borel measure on , is a marked point of , and is a continuous map from into some image space , which is a fixed complete, separable metric space. (When proving Theorem 1.1(a), we will take to be equipped with the Euclidean distance, and when proving Theorem 1.1(b), we will take to be , again equipped with the Euclidean distance.) For two elements of , we define the distance between them to be equal to
(18)
where the infimum is taken over all metric spaces , isometric embeddings , , and correspondences between and . Note that, by a correspondence between and , we mean a subset of such that for every there exists at least one such that , and conversely, for every there exists at least one such that . To make sense of the expression at (18), we further define to be the Prohorov distance between finite Borel measures on . We note that, up to trivial equivalences, it is possible to check that is a separable metric space.
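Although (18) is phrased via embeddings into a common space, correspondences also control the purely metric part of such distances: the Gromov-Hausdorff distance between two compact metric spaces equals half the infimum over correspondences of the distortion. The following hedged Python sketch computes the distortion of a given correspondence on a toy example; it ignores the measure, marked-point and map components of (18), and all names are illustrative.

```python
def distortion(dX, dY, correspondence):
    """Largest discrepancy |dX(x, x') - dY(y, y')| over pairs of paired
    points, for a correspondence supplied as a list of (x, y) pairs."""
    worst = 0.0
    for (x, y) in correspondence:
        for (x2, y2) in correspondence:
            worst = max(worst, abs(dX(x, x2) - dY(y, y2)))
    return worst

# toy example: pair the three-point space {0, 1, 2} with points of the real line
dX = lambda a, b: abs(a - b)
dY = lambda a, b: abs(a - b)
corr = [(0, 0.0), (1, 1.1), (2, 2.0)]
print(distortion(dX, dY, corr))   # approximately 0.1: the pairing is nearly an isometry
```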
For checking Theorem 1.1(a), the (random) elements of that will be of interest to us will be bounded restrictions of
where , and are given by Theorem 1.2, and
where is the Euclidean distance on , is Lebesgue measure on , and is the identity map on . In particular, for , let
where is the (open) resistance ball in with centre and radius , and
where , and are the natural restrictions of , and to . Similarly, for checking Theorem 1.2(b), we introduce
where is the identity map on restricted to , and
where is standard Brownian motion on . The bounded restrictions of these are then given by
where is the identity map on restricted to , and
where is the continuous map given by on the relevant interval. Note that, since and for all , the set is finite, and therefore -compact, -a.s. In particular, both and are elements of , -a.s. It is clear that and are as well. The first goal of this section is to prove the following. We note that the proof is similar to that of [10, Theorem 4.1], which demonstrated a corresponding convergence result for the resistance metric spaces associated with the one-dimensional Mott random walk.
Proposition 3.1.
(a) For every , as ,
in -probability in the space , where is equipped with the Euclidean distance.
(b) For every , as ,
in distribution in the space , where is equipped with the Euclidean distance.
Proof.
First, define
(19)
to be the number of cut points that fall into , and set
where and
Note that, by construction (with equality between and if and only if ). We will approximate by
where is a measure on given by
for , , and if . In particular, we claim that, for any ,
(20)
in -probability as .
To prove (20), for each , we consider the embedding of into given by for , and . Moreover, we define a correspondence by pairing with 0, with each other element of , with each element of for , and with each remaining element of . For this particular choice of embedding and correspondence, we will estimate each of the terms in the definition of given at (18), where is given by equipped with the rescaled resistance metric, i.e. . (Note that is indeed an isometry from into .) Firstly, observe that is the measure on obtained by assigning mass to , and, for each , assigning mass to (and assigning mass 0 to all other elements of ). Hence, applying the definition of the Prohorov metric, it readily follows that
Since is -a.s. a finite random variable, the first term converges to 0, -a.s. As for the second and third terms, for large , the sum of these is bounded above by
(21)
Now, by Theorem 1.2(b), it holds that, for any ,
in -probability. Combined with (11), it follows that
(22)
in -probability. Therefore, again applying (11), with probability going to one as ,
(23)
where the notation is defined as in the statement of Lemma 2.3. Applying the latter result, we see that the right-hand side above converges to zero in -probability, and thus we have established that, in -probability,
Secondly,
(24)
Again, the first term converges to zero -a.s. Moreover, bounding the sum of the second and third terms by the expression at (21) and arguing as at (23) shows that they converge to zero in -probability. Thirdly,
The second, third and fourth terms can be dealt with exactly as were the terms on the right-hand side of (24). As for the first term, by Theorem 1.2(d) and (22), we have that, with probability going to one as ,
Hence, from Theorem 1.2(b),(c), this also converges to zero in -probability. This completes the proof of (20).
As a consequence of (20), to establish part (a) of the proposition, it will suffice to check that
(25)
in -probability in the space , where is equipped with the Euclidean distance. For this, we start by noting that is isometrically embedded into by the identity map. Moreover, similarly to above, it is possible to define a correspondence between and by pairing with 0, with each element of for , and with each remaining element of . Now, if is such that , then
Hence, by applying Theorem 1.2(a),(d) and (22), we find that, for each fixed , in -probability as . It readily follows that, writing for the Prohorov distance on ,
in -probability as . Moreover,
(26)
Now, , -a.s., and the sum of the remaining terms can be bounded above by the expression at (21), and so converges to zero in -probability. In conclusion, since the maps we consider for both the approximating and limiting spaces are the identity map, we have proved that
in -probability as , which confirms (25).
The proof of part (b) is similar, though we need to deal with the embedding into . To do this, we first apply a Skorohod coupling to obtain a sequence of copies of built on the same probability space as such that, -a.s.
(27)
uniformly on compact intervals of . We then suppose is built from , and further define
where the first four components are defined as for above (but now from ), and is the map from into given by for , and . To show that
(28)
in -probability as , we proceed as in the first part of the proof. We bound the only term not already dealt with as follows:
(29)
where
Now, from (11) and (22), we have that, with -probability converging to one as , . It therefore follows from Lemma 2.3 that in -probability as . Moreover, from Theorem 1.2(d) and (22), we have that in -probability. Combining these observations with (27) shows that the bound at (29) converges to zero in -probability, and hence we arrive at (28). Thus it remains to show that
(30)
in -probability in the space , where is equipped with the Euclidean distance. The remaining term to deal with is now
By (26) and the continuity of , the first two terms converge to zero in -probability. For the third term, we have
(31)
Since converges to in -probability, the first term above converges to zero in -probability by (27). As for the second term above, we first note that
Again applying the fact that in -probability, Theorem 1.2(b) gives that the above upper bound converges to zero in -probability. In combination with (27), this establishes that the second term at (31) also converges to zero in -probability. Thus we have completed the proof of (30). ∎
The next result gives that the resistance across balls diverges at the same rate as the point-to-point resistance.
Proposition 3.2.
It holds that
Proof.
We are nearly ready to complete the proof of Theorem 1.1. It remains to introduce the resistance form framework in which [9, Theorem 7.2] is stated. (We highlight that the idea of a resistance form was pioneered by Kigami in the context of analysis on self-similar fractals, see [16].) To this end, we let be the collection of quintuplets of the form , where:
• is a non-empty set;
• is a resistance metric on (i.e. for each finite , there exist conductances between the elements of for which is the associated effective resistance metric, see [16, Definition 2.3.2]), and closed bounded sets in are compact;
• is a locally finite Borel regular measure of full support on ;
• is a distinguished point in ;
• is a continuous map from into some image space , which is a fixed complete, separable metric space.
We highlight that the resistance metric is associated with a certain quadratic form known as a resistance form , as characterised by
(see [16, Section 2.3]), and we will further assume that, for elements of , this form is regular in the sense of [17, Definition 6.2]. In particular, this ensures the existence of a related regular Dirichlet form on (see [17, Theorem 9.4]), which we suppose is recurrent. The following result shows that the spaces introduced above fall into this class.
Lemma 3.3.
-a.s., , , and are all elements of .
Proof.
By construction, is the resistance metric associated with the resistance form
where is the collection of functions on such that , see [17, Example 3.5]. Moreover, as was commented above Proposition 3.1, balls are finite, and so compact. Not only does this confirm that the properties in the first three bullet points above are satisfied, but it also implies that contains all compactly supported functions. The latter observation gives that is regular. As for the recurrence of the associated Dirichlet form on , this is equivalent to
(32)
see [9, Lemma 2.3], for instance. This divergence is given by Proposition 3.2. Hence we obtain that and are in , -a.s. The rescaled spaces and can be dealt with similarly.
By [17, Proposition 16.4], is a resistance metric space with associated regular resistance form
where is the collection of in such that there exists an -Cauchy sequence of functions in that converge uniformly on compacts to . By taking the trace onto the one-sided space as per [17, Theorem 8.4], we find that is also a resistance metric space associated with a regular resistance form . To check the recurrence of the associated Dirichlet form on , we again appeal to (32). In this case, we have that the resistance from to the boundary of the Euclidean ball of radius centred at 0 is simply given by , and hence (32) holds. Since the remaining conditions are straightforward to check, we have thus established that and lie in , -a.s. ∎
Now, since the Dirichlet forms associated with elements of are regular, they are naturally associated with Hunt processes. In particular, applying the spatial embeddings, the random processes corresponding to and are given by
respectively, where is the continuous-time random walk on with jump chain equal to and mean-one exponential holding times (see [4, Remark 5.7], for example). Moreover, the random processes corresponding to and are given by
respectively, where we use the fact that has the same law as reflected Brownian motion on (see [1, Example 8.1], as well as [1, Remarks 1.6 and 3.1] for the connection between the framework of that paper and that of Kigami). With these preparations in place we are now ready to proceed with the proof of the main result.
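The continuous-time walk referred to here is the constant-speed time change of the discrete walk; as a hedged illustration (reusing the adj and path objects and the random module from the earlier sketches, with illustrative names), it can be simulated by attaching independent mean-one exponential holding times to the jumps.

```python
def continuous_time_walk(adj, start, time_horizon, seed=4):
    """Continuous-time random walk whose jump chain is the discrete walk on
    the range graph, with i.i.d. mean-one exponential holding times; returns
    the (jump time, position) pairs until the horizon is passed."""
    rng = random.Random(seed)
    x, t = start, 0.0
    trajectory = [(t, x)]
    while t < time_horizon:
        t += rng.expovariate(1.0)          # mean-one exponential holding time
        x = rng.choice(sorted(adj[x]))     # one step of the jump chain
        trajectory.append((t, x))
    return trajectory

traj = continuous_time_walk(adj, start=path[0], time_horizon=50.0)
```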
Proof of Theorem 1.1.
Given Propositions 3.1 and 3.2, Lemma 3.3, and the descriptions of the processes associated to the resistance spaces that precede this proof, applying [9, Theorem 7.2] immediately yields that the conclusions of Theorem 1.1 hold when is replaced by its continuous-time counterpart . Since has jump rate one from every vertex and the limiting processes are continuous, it is then straightforward to obtain the same results for . ∎
We conclude the article by checking the further random walk properties that are stated as Corollary 1.4.
Proof of Corollary 1.4.
Defining as at (19), we have that . By Theorem 1.2(d) and (22), it holds that and both converge to one in -probability. Together with Theorem 1.2(a), we thus obtain that
(33)
in -probability. A reparameterisation applying [27, Remark 2.1.3] (which implies that ) then gives that
in -probability. Combined with Proposition 3.2, this establishes that [18, Assumption 1.2(1)] holds in our setting with , and . Therefore we can apply [18, Proposition 1.3] to deduce that
where . Now, by Theorem 1.2(b),(c), a -ball of radius is comparable with an -ball of radius . Equivalently, after reparameterisation, a -ball of radius is comparable with an -ball of radius . Hence we obtain part (a) of the result.
Towards proving part (b), we start by considering the continuity of the heat kernel. In particular, applying [11, Proposition 12], we have that
(34)
where and is a deterministic constant that only depends on . Now, from (33), we have that
(35)
in -probability, for some deterministic constant (depending only on ). Putting together (34), (35), and Theorem 1.2(b), it follows that: for any and ,
(36)
with -probability converging to one as , where is a deterministic constant that depends on , but not . The remainder of the proof is similar to that of [11, Theorem 1]. In particular, we decompose the expression we are trying to bound as follows: for any ,
where
is the transition density of the process , and
By Theorem 1.2(c), we know that
for all with -probability converging to one. Together with (36), if , this implies
with -probability converging to one as . Similarly, the continuity of allows us to deduce that, if is chosen sufficiently small, then . Finally, we note that Theorems 1.1(a) and 1.2(a),(c) imply that in -probability. The result follows. ∎
Acknowledgements
DC was supported by JSPS Grant-in-Aid for Scientific Research (A), 17H01093, JSPS Grant-in-Aid for Scientific Research (C), 19K03540, and the Research Institute for Mathematical Sciences, an International Joint Usage/Research Center located in Kyoto University. DS was supported by a JSPS Grant-in-Aid for Early-Career Scientists, 18K13425, JSPS Grant-in-Aid for Scientific Research (B), 17H02849, and JSPS Grant-in-Aid for Scientific Research (B), 18H01123.
References
- [1] S. Athreya, M. Eckhoff, and A. Winter, Brownian motion on $\mathbb{R}$-trees, Trans. Amer. Math. Soc. 365 (2013), no. 6, 3115–3150.
- [2] S. Athreya, W. Löhr, and A. Winter, Invariance principle for variable speed random walks on trees, Ann. Probab. 45 (2017), no. 2, 625–667.
- [3] J. R. Banavar, A. Brooks Harris, and J. Koplik, Resistance of random walks, Phys. Rev. Lett. 51 (1983), no. 13, 1115–1118.
- [4] M. T. Barlow, Random walks and heat kernels on graphs, London Mathematical Society Lecture Note Series, vol. 438, Cambridge University Press, Cambridge, 2017.
- [5] G. Ben Arous, M. Cabezas, and A. Fribergh, Scaling limit for the ant in a simple high-dimensional labyrinth, Probab. Theory Related Fields 174 (2019), no. 1-2, 553–646.
- [6] G. Ben Arous, M. Cabezas, and A. Fribergh, Scaling limit for the ant in high-dimensional labyrinths, Comm. Pure Appl. Math. 72 (2019), no. 4, 669–763.
- [7] D. A. Croydon, Hausdorff measure of arcs and Brownian motion on Brownian spatial trees, Ann. Probab. 37 (2009), no. 3, 946–978.
- [8] D. A. Croydon, Random walk on the range of random walk, J. Stat. Phys. 136 (2009), no. 2, 349–372.
- [9] D. A. Croydon, Scaling limits of stochastic processes associated with resistance forms, Ann. Inst. Henri Poincaré Probab. Stat. 54 (2018), no. 4, 1939–1968.
- [10] D. A. Croydon, R. Fukushima, and S. Junk, Anomalous scaling regime for one-dimensional Mott variable-range hopping, preprint appears at arXiv:2010.01779, 2020.
- [11] D. A. Croydon and B. M. Hambly, Local limit theorems for sequences of simple random walks on graphs, Potential Anal. 29 (2008), no. 4, 351–389.
- [12] D. A. Croydon, B. M. Hambly, and T. Kumagai, Time-changes of stochastic processes associated with resistance forms, Electron. J. Probab. 22 (2017), Paper No. 82, 41.
- [13] D. A. Croydon and D. Shiraishi, Erratum to “Exact value of the resistance exponent for four dimensional random walk trace”, preprint, 2021.
- [14] P. G. Doyle and J. L. Snell, Random walks and electric networks, Carus Mathematical Monographs, vol. 22, Mathematical Association of America, Washington, DC, 1984.
- [15] A. Dvoretzky, P. Erdös, and S. Kakutani, Double points of paths of Brownian motion in $n$-space, Acta Sci. Math. (Szeged) 12 (1950), 75–81.
- [16] J. Kigami, Analysis on fractals, Cambridge Tracts in Mathematics, vol. 143, Cambridge University Press, Cambridge, 2001.
- [17] J. Kigami, Resistance forms, quasisymmetric maps and heat kernel estimates, Mem. Amer. Math. Soc. 216 (2012), no. 1015, vi+132.
- [18] T. Kumagai and J. Misumi, Heat kernel estimates for strongly recurrent random walk on random media, J. Theoret. Probab. 21 (2008), no. 4, 910–935.
- [19] G. F. Lawler, Intersections of random walks, Probability and its Applications, Birkhäuser Boston, Inc., Boston, MA, 1991.
- [20] G. F. Lawler, Escape probabilities for slowly recurrent sets, Probab. Theory Related Fields 94 (1992), no. 1, 91–117.
- [21] G. F. Lawler, Cut times for simple random walk, Electron. J. Probab. 1 (1996), no. 13, approx. 24 pp.
- [22] G. F. Lawler and V. Limic, Random walk: a modern introduction, Cambridge Studies in Advanced Mathematics, vol. 123, Cambridge University Press, Cambridge, 2010.
- [23] G. F. Lawler and J. A. Trujillo Ferreras, Random walk loop soup, Trans. Amer. Math. Soc. 359 (2007), no. 2, 767–787.
- [24] D. A. Levin and Y. Peres, Markov chains and mixing times, American Mathematical Society, Providence, RI, 2017, Second edition, With contributions by Elizabeth L. Wilmer, With a chapter on “Coupling from the past” by James G. Propp and David B. Wilson.
- [25] A. Sapozhnikov and D. Shiraishi, On Brownian motion, simple paths, and loops, Probab. Theory Related Fields 172 (2018), no. 3-4, 615–662.
- [26] D. Shiraishi, Heat kernel for random walk trace on $\mathbb{Z}^3$ and $\mathbb{Z}^4$, Ann. Inst. Henri Poincaré Probab. Stat. 46 (2010), no. 4, 1001–1024.
- [27] D. Shiraishi, Exact value of the resistance exponent for four dimensional random walk trace, Probab. Theory Related Fields 153 (2012), no. 1-2, 191–232.