
Scaling limit for random walk
on the range of random walk
in four dimensions

D. A. Croydon and D. Shiraishi
Abstract

We establish scaling limits for the random walk whose state space is the range of a simple random walk on the four-dimensional integer lattice. These concern the asymptotic behaviour of the graph distance from the origin and the spatial location of the random walk in question. The limiting processes are the analogues of those for higher-dimensional versions of the model, but additional logarithmic terms in the scaling factors are needed to see these. The proof applies recently developed machinery relating the scaling of resistance metric spaces and stochastic processes, with key inputs being natural scaling statements for the random walk’s invariant measure, the associated effective resistance metric, the graph distance, and the cut times for the underlying simple random walk.
Keywords: random walk, scaling limit, range of random walk, random environment.
MSC: 60K37 (primary), 05C81, 60K35, 82B41.

1 Introduction

The object of interest in this article is the discrete-time simple random walk on the range of a four-dimensional random walk. To introduce this, we follow the presentation of [8], in which the same and higher-dimensional versions of the model were studied. Let $S=(S_{n})_{n\geq 0}$ be the simple random walk on $\mathbb{Z}^{4}$ started from 0, built on an underlying probability space $\Omega$ with probability measure $\mathbf{P}$. Define the range of the random walk $S$ to be the graph $\mathcal{G}:=(V(\mathcal{G}),E(\mathcal{G}))$ with vertex set

V(\mathcal{G}):=\left\{S_{n}:\>n\geq 0\right\},\qquad (1)

and edge set

E(\mathcal{G}):=\left\{\{S_{n},S_{n+1}\}:\>n\geq 0\right\}.\qquad (2)

For $\mathbf{P}$-a.e. random walk path, the graph $\mathcal{G}$ is infinite, connected and has bounded degree. Given a realisation of $\mathcal{G}$, we write

X=\left((X_{n})_{n\geq 0},(\mathbf{P}_{x}^{\mathcal{G}})_{x\in V(\mathcal{G})}\right)

to denote the discrete-time Markov chain with transition probabilities

P_{\mathcal{G}}(x,y):=\frac{1}{\mathrm{deg}_{\mathcal{G}}(x)}\mathbf{1}_{\{\{x,y\}\in E(\mathcal{G})\}},

where $\mathrm{deg}_{\mathcal{G}}(x)$ is the usual graph degree of $x$ in $\mathcal{G}$. For $x\in V(\mathcal{G})$, the law $\mathbf{P}_{x}^{\mathcal{G}}$ is the quenched law of the simple random walk on $\mathcal{G}$ started from $x$. Since 0 is always an element of $V(\mathcal{G})$, we can also define an averaged (or annealed) law $\mathbb{P}$ for the random walk on $\mathcal{G}$ started from 0 as the semi-direct product of the environment law $\mathbf{P}$ and the quenched law $\mathbf{P}_{0}^{\mathcal{G}}$ by setting

\mathbb{P}:=\int\mathbf{P}_{0}^{\mathcal{G}}(\cdot)\,\mathrm{d}\mathbf{P}.
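To make the construction concrete, here is a small illustrative Python sketch (ours, not from the paper; the function names `srw_path_z4` and `range_graph` are our own) that simulates a finite segment of $S$ and assembles the vertex and edge sets of (1) and (2). Note that every vertex of $\mathcal{G}$ has at most $2d=8$ incident edges, which is the bounded degree property mentioned above.

```python
import random

def srw_path_z4(n, seed=0):
    """Simulate n steps of simple random walk on Z^4 started at the origin."""
    rng = random.Random(seed)
    moves = [tuple((s if j == i else 0) for j in range(4))
             for i in range(4) for s in (1, -1)]  # the 8 unit moves of Z^4
    path = [(0, 0, 0, 0)]
    for _ in range(n):
        step = rng.choice(moves)
        path.append(tuple(a + b for a, b in zip(path[-1], step)))
    return path

def range_graph(path):
    """Vertex set V(G) and edge set E(G) of the range, as in (1) and (2)."""
    V = set(path)
    E = {frozenset((path[k], path[k + 1])) for k in range(len(path) - 1)}
    return V, E

path = srw_path_z4(2000, seed=1)
V, E = range_graph(path)
deg = {x: 0 for x in V}
for e in E:
    for x in e:
        deg[x] += 1

# G is connected by construction, and every degree lies in [1, 2d] = [1, 8]
assert all(1 <= deg[x] <= 8 for x in V)
```

The quenched walk $X$ would then move from $x$ to a uniformly chosen neighbour in this graph, with the degrees computed above giving the invariant measure $\mu_{\mathcal{G}}$ introduced later.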

Our main results are the following averaged scaling limits for the simple random walk $X$ on $\mathcal{G}$. The processes $(B_{t})_{t\geq 0}$ and $(W_{t})_{t\geq 0}$ are assumed to be independent standard Brownian motions on $\mathbb{R}$ and $\mathbb{R}^{4}$ respectively, both started from the origin of the relevant space. The notation $d_{\mathcal{G}}$ is used to represent the shortest path graph distance on $\mathcal{G}$. We note that it was previously shown in [8, Theorem 1.3] that $n^{-1/4}|X_{n}|$ is with high probability of an order in the interval $[(\log n)^{1/24},(\log n)^{7/12}]$, and part (b) refines this result substantially; a reparameterisation yields that the correct order of $n^{-1/4}|X_{n}|$ is $(\log n)^{1/8+o(1)}$. Similarly, a reparameterisation of part (a) yields that the correct order of $n^{-1/2}d_{\mathcal{G}}(0,X_{n})$ is $(\log n)^{-1/4+o(1)}$.
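To make the reparameterisation explicit (this computation is ours, unpacking the claim above): write $m=\lfloor tn^{2}\psi(n)\rfloor$ with $\psi(n)=(\log n)^{-1/2+o(1)}$, so that $\log m=(2+o(1))\log n$ and $n=m^{1/2}(\log m)^{1/4+o(1)}$. Then

```latex
m^{-1/4}\left|X_{m}\right| \approx m^{-1/4}\,n^{1/2} = (\log m)^{1/8+o(1)},
\qquad
m^{-1/2}\,d_{\mathcal{G}}(0,X_{m}) \approx m^{-1/2}\,n\phi(n)
  = (\log m)^{1/4+o(1)}\,(\log m)^{-1/2+o(1)} = (\log m)^{-1/4+o(1)}.
```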

Theorem 1.1.

(a) There exist deterministic slowly-varying functions $\phi:\mathbb{N}\rightarrow\mathbb{R}$ and $\psi:\mathbb{N}\rightarrow\mathbb{R}$ satisfying

\phi(n)=\left(\log n\right)^{-\frac{1}{2}+o(1)},\qquad\psi(n)=\left(\log n\right)^{-\frac{1}{2}+o(1)},

such that the law of

\left(\frac{1}{n\phi(n)}d_{\mathcal{G}}\left(0,X_{\lfloor tn^{2}\psi(n)\rfloor}\right)\right)_{t\geq 0}

under $\mathbb{P}$ converges as $n\rightarrow\infty$ to the law of $(|B_{t}|)_{t\geq 0}$.
(b) For $\psi:\mathbb{N}\rightarrow\mathbb{R}$ as in part (a), the law of

\left(\frac{1}{n^{1/2}}X_{\lfloor tn^{2}\psi(n)\rfloor}\right)_{t\geq 0}

under $\mathbb{P}$ converges as $n\rightarrow\infty$ to the law of $(W_{|B_{t}|})_{t\geq 0}$.

It was shown in [8] that the corresponding result is true in dimensions $d\geq 5$, but without the need for the $\phi$ and $\psi$ terms in the scaling factors. (Actually, [8] established the convergence of part (a) in a stronger way, namely under the quenched law $\mathbf{P}_{0}^{\mathcal{G}}$, for $\mathbf{P}$-a.e. realisation of $\mathcal{G}$.) As will become clear in our argument, the appearance of $\phi$ and $\psi$ in the four-dimensional case stems from their appearance in the scaling of the graph distance and effective resistance between 0 and $S_{n}$ (see Theorem 1.2 below). In the lower-dimensional cases, the situation is substantially different. Indeed, for $d=1,2$, the underlying random walk is recurrent, and so $\mathcal{G}$ is simply $\mathbb{Z}^{d}$ equipped with all nearest-neighbour edges. Thus, the random walk $X$ simply scales diffusively to Brownian motion on $\mathbb{R}^{d}$. As for the case $d=3$, we do not have a conjecture for the scaling limit of the process. However, the random walk $S$ and its scaling limit $W$ are both transient, and therefore neither the graph $\mathcal{G}$, nor its scaling limit, will be space filling. On the other hand, we know that the graph $\mathcal{G}$ no longer has asymptotically linear volume growth (see [26, Proposition 4.3.1]), has spectral dimension strictly greater than one (see [26, Theorem 1.2.3]), and has loops that persist in the limit (see [15] for the original proof that Brownian motion has double points, and [23, 25] for a description of how the random walk ‘loop soup’ converges to that of Brownian motion). Therefore, in three dimensions, we do not expect scaling limits for $X$ as above.

As was noted in [8], one motivation for studying random walk on the range of a random walk is its application to the transport properties of sedimentary rocks of low porosity, where the commonly considered sub-critical percolation model does not reflect the pore connectivity properties seen in experiments, see [3]. (We note that, in [3], it was observed that the effective resistance in $\mathcal{G}$ between 0 and $S_{n}$ is of order $n(\log n)^{-1/2}$, as was later rigorously proved in [27].) Further motivation given in [8] was that the model provides a simple prototypical example of a graph with loops that nonetheless has a tree-like scaling limit, for which one could establish convergence of the associated random walks to the natural diffusion on the limiting tree-like space. Such a situation is also expected to be the case for more challenging models, such as critical percolation in high dimensions (see [5, 6, 7] for progress in this direction). In addition to works specifically related to high-dimensional percolation, recent years have seen the development of a general approach for deriving scaling limits of random walks on graphs with a suitably ‘low-dimensional’ asymptotic structure, which demonstrates that if the effective resistance and invariant measure of the random walk converge in an appropriate fashion, then so does the random walk itself (see [9, 12], and [2] for the particular case of tree-like spaces). We will appeal to this framework to prove Theorem 1.1, with relevant details being introduced in Section 3. We highlight that the present work is a first application of such ‘resistance form’ theory to a model at its critical dimension, where logarithmic terms appear in the scaling factors. (NB. Although the viewpoint of [8] was a little different, combining the basic estimates of [8] with the argument of [9] would also yield the scaling limits of random walk on the range of random walk in dimensions greater than or equal to 5 as given in [8].)

Providing the key inputs into the resistance form argument that we apply to deduce Theorem 1.1, the first part of the paper is devoted to establishing a number of basic properties of the underlying graph $\mathcal{G}$. These give appropriate notions of volume, resistance, graph distance, and cut-time scaling for $S$. We highlight that much of the work needed to establish the following result was completed in earlier articles; see Remark 1.3(a) for details. To state the result, we need to define a number of quantities. Firstly, let $\mu_{\mathcal{G}}$ be the measure on $V(\mathcal{G})$ given by

\mu_{\mathcal{G}}(\{x\}):=\mathrm{deg}_{\mathcal{G}}(x),\qquad\forall x\in V(\mathcal{G});

this is the unique (up to scaling) invariant measure of the Markov chain $X$. Secondly, let $R_{\mathcal{G}}$ be the effective resistance metric on $V(\mathcal{G})$ when we consider $\mathcal{G}$ to be an electrical network with unit resistors placed along each edge (see [14, 24] for elementary background on the effective resistance, and [4, Chapter 2] for a careful treatment of effective resistance for infinite graphs). We have already introduced the shortest path graph distance $d_{\mathcal{G}}$. Finally, we write

\mathcal{T}:=\left\{n:\>S_{[0,n]}\cap S_{[n+1,\infty)}=\emptyset\right\}\qquad (3)

for the set of cut times of $S$, where $S_{[a,b]}:=\{S_{k}:\>a\leq k\leq b\}$ and $S_{[a,\infty)}:=\{S_{k}:\>k\geq a\}$ for $0\leq a\leq b<\infty$. It is known that $\mathcal{T}$ is $\mathbf{P}$-a.s. an infinite set (see [21]), and we denote the elements of this set as $0\leq T_{1}<T_{2}<\dots$.
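As an illustration (our own sketch, not part of the paper), the set $\mathcal{T}$ restricted to a finite path can be computed directly from definition (3). For a finite path the final index is trivially a cut time, an artefact of truncating the infinite walk; the vertices here are arbitrary hashable states, since the definition only uses the visited sets.

```python
def cut_times(path):
    """Indices n with S_[0,n] ∩ S_[n+1,N] = ∅ (finite-path analogue of (3))."""
    N = len(path) - 1
    future = [set() for _ in range(N + 1)]  # future[n] = {S_m : n+1 <= m <= N}
    seen = set()
    for n in range(N, -1, -1):
        future[n] = set(seen)
        seen.add(path[n])
    cuts, prefix = [], set()
    for n in range(N + 1):
        prefix.add(path[n])  # prefix = {S_m : 0 <= m <= n}
        if prefix.isdisjoint(future[n]):
            cuts.append(n)
    return cuts

# a toy path with self-intersections: 0 -> 1 -> 2 -> 1 -> 2 -> 3
assert cut_times([0, 1, 2, 1, 2, 3]) == [0, 4, 5]
```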

Theorem 1.2.

For any $T\in(0,\infty)$, the following limits hold in $\mathbf{P}$-probability as $n\rightarrow\infty$.
(a) For some deterministic constant $\lambda\in(0,\infty)$,

\left(\frac{\mu_{\mathcal{G}}\left(S_{[0,nt]}\right)}{n}\right)_{t\in[0,T]}\rightarrow\left(\lambda t\right)_{t\in[0,T]}.

(b) For some deterministic slowly-varying function $\tilde{\psi}:\mathbb{N}\rightarrow\mathbb{R}$ satisfying $\tilde{\psi}(n)=(\log n)^{-\frac{1}{2}+o(1)}$,

\left(\frac{R_{\mathcal{G}}(0,S_{nt})}{n\tilde{\psi}(n)}\right)_{t\in[0,T]}\rightarrow\left(t\right)_{t\in[0,T]}.

(c) For some deterministic slowly-varying function $\phi:\mathbb{N}\rightarrow\mathbb{R}$ satisfying $\phi(n)=(\log n)^{-\frac{1}{2}+o(1)}$,

\left(\frac{d_{\mathcal{G}}(0,S_{nt})}{n\phi(n)}\right)_{t\in[0,T]}\rightarrow\left(t\right)_{t\in[0,T]}.

(d) For some deterministic constant $\tau\in(0,\infty)$,

\left(\frac{T_{nt}}{n(\log n)^{\frac{1}{2}}}\right)_{t\in[0,T]}\rightarrow\left(\tau t\right)_{t\in[0,T]}.
Remark 1.3.

(a) As is claimed in [21], the techniques of [19, Chapter 7] yield part (d) of the above result. For completeness, we provide details below. As for part (a), this is new, though its proof is also based on ideas from [19]. Concerning part (b), the one-dimensional marginal of the process in question was dealt with in [27]. Here, we do the additional work to derive the functional convergence statement. (Since the resistance distance from 0 to $S_{n}$ is not monotone increasing, this is not a completely trivial exercise.) Part (c) is handled in the same way as part (b).
(b) The $\psi$ of Theorem 1.1 is given by $\lambda\times\tilde{\psi}$, where $\lambda$ and $\tilde{\psi}$ are given above.
(c) We conjecture that it is possible to take $\phi(n)=c_{1}(\log n)^{-1/2}$, $\tilde{\psi}(n)=c_{2}(\log n)^{-1/2}$ in the above result (and therefore also $\psi(n)=c_{3}(\log n)^{-1/2}$ in Theorem 1.1).

Apart from the scaling limits of Theorem 1.1, given Theorem 1.2, various consequences for the simple random walk $X$ stem from general techniques of [11] and [18]. In particular, we have the following results describing the growth rate of the quenched exit times and the scaling limit of the heat kernel of the process. To state these, we introduce the notation

\tau_{r}^{\mathcal{G}}:=\inf\left\{n\geq 0:\>d_{\mathcal{G}}(0,X_{n})\geq r\right\}

for the time that $X$ exits a $d_{\mathcal{G}}$-ball of radius $r$ centred at the origin, and

p_{n}^{\mathcal{G}}\left(x,y\right):=\frac{\mathbf{P}_{x}^{\mathcal{G}}\left(X_{n}=y\right)+\mathbf{P}_{x}^{\mathcal{G}}\left(X_{n+1}=y\right)}{2\mathrm{deg}_{\mathcal{G}}(y)}

for the (smoothed) transition density of $X$. We note that [8, Theorem 1.4] showed that $p_{n}^{\mathcal{G}}(0,0)$ deviated from $n^{-1/2}$ by a logarithmic amount, but did not pin down the precise exponent of the logarithm.

Corollary 1.4.

(a) It holds that

\lim_{\Lambda\rightarrow\infty}\inf_{r\geq 1}\mathbf{P}\left(\Lambda^{-1}\leq\frac{\mathbf{E}_{0}^{\mathcal{G}}\left(\tau^{\mathcal{G}}_{r}\right)}{r^{2}\tilde{\psi}(r)\phi(r)^{-2}}\leq\Lambda\right)=1.

(b) For any compact interval $I\subseteq(0,\infty)$ and $x_{0}\in[0,\infty)$, it holds that

\sup_{t\in I}\sup_{x\in[0,x_{0}]}\left|\lambda np_{\lfloor tn^{2}\psi(n)\rfloor}^{\mathcal{G}}\left(0,S_{\lfloor nx\rfloor}\right)-\sqrt{\frac{2}{\pi t}}e^{-\frac{x^{2}}{2t}}\right|\rightarrow 0,

in $\mathbf{P}$-probability.

Remark 1.5.

In [27, Theorem 1.2.2], it was stated that there exist constants $c_{1},c_{2}\in(0,\infty)$ such that, for $\mathbf{P}$-a.e. realisation of $\mathcal{G}$,

c_{1}n^{-1/2}\tilde{\psi}(n)^{1/2}\leq p_{n}^{\mathcal{G}}\left(0,0\right)\leq c_{2}n^{-1/2}\tilde{\psi}(n)^{1/2}

for all large $n$. (Note that, in [27], $\tilde{\psi}(n)$ was defined to be equal to $\mathbf{E}(R_{\mathcal{G}}(0,S_{n}))/n$, and this is consistent with the current article, since the conclusion of Theorem 1.2(b) holds for this particular choice of $\tilde{\psi}$.) However, the proof in [27] incorporated a misapplication of [18, Proposition 3.2], in that it did not include a verification of the resistance lower bound of [18, Equation (3.3)]. This gap is filled in [13], which applies the estimates of [27] to obtain the missing resistance estimate. The latter can be considered a $\mathbf{P}$-a.s. version of Proposition 3.2 below (or a quantitative version of (32)).

The remainder of the paper is organised as follows. In Section 2, we derive the fundamental estimates on the underlying graph $\mathcal{G}$ that are stated as Theorem 1.2. In Section 3, we then explain how these can be used to obtain the scaling limit for the random walk $X$ of Theorem 1.1, and also give a proof of Corollary 1.4. Throughout the paper, we write $a_{n}=O(b_{n})$ if $|a_{n}|\leq Cb_{n}$ for some absolute constant $C>0$. Also, we write $a_{n}=o(b_{n})$ if $a_{n}/b_{n}\to 0$. We let $c,C,\dots$ denote arbitrary constants in $(0,\infty)$ that may change from line to line. To simplify notation, we sometimes use a continuous variable, $x$ say, where a discrete argument is required, with the understanding that it should be treated as $\lfloor x\rfloor$.

2 Estimates for the underlying graph

We establish the parts of Theorem 1.2 concerning the measure, the metrics and cut times separately.

2.1 Volume scaling

Towards proving the volume asymptotics of Theorem 1.2(a), similarly to (1) and (2), for each $0\leq m<n$, we define a subgraph ${\cal G}_{m,n}=(V({\cal G}_{m,n}),E({\cal G}_{m,n}))$ of $\mathcal{G}$ by setting

V({\cal G}_{m,n}):=\left\{S_{k}:\>m\leq k\leq n\right\},\qquad E({\cal G}_{m,n}):=\left\{\{S_{k},S_{k+1}\}:\>m\leq k\leq n-1\right\}.\qquad (4)

We moreover let ${\cal G}_{n}:={\cal G}_{0,n}$.

Proof of Theorem 1.2(a).

For each $k,n\geq 0$, we introduce a random variable $Y_{k}^{(n)}$ by letting

Y_{k}^{(n)}:=\left|\left\{e\in E({\cal G}_{k+1}):\>S_{k}\in e\right\}\right|\times{\bf 1}_{\{S_{k}\in S_{[0,n]}\text{ and }S_{m}\neq S_{k}\text{ for all }m\geq k+1\}},

where $|A|$ stands for the cardinality of a set $A$. In particular, for $Y_{k}^{(n)}$ to be strictly positive, we require $k$ to be the time of the last visit by $S$ to the vertex $S_{k}\in S_{[0,n]}$. It follows that

\mu_{{\cal G}}\left(S_{[0,n]}\right)=\sum_{k=0}^{\infty}Y_{k}^{(n)}.\qquad (5)

To see this, for each $x\in S_{[0,n]}$, we set $\sigma_{x}=\max\{k\geq 0:\>S_{k}=x\}$ to be the last time that the simple random walk $S$ hits $x$. Since $Y_{k}^{(n)}>0$ if and only if $k=\sigma_{x}$ for some $x\in S_{[0,n]}$, it follows that

\sum_{k=0}^{\infty}Y_{k}^{(n)}=\sum_{x\in S_{[0,n]}}Y_{\sigma_{x}}^{(n)}=\sum_{x\in S_{[0,n]}}\left|\left\{e\in E({\cal G}_{\sigma_{x}+1}):\>x\in e\right\}\right|=\mu_{{\cal G}}\left(S_{[0,n]}\right),

which gives (5).
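The bookkeeping behind (5) is purely combinatorial and can be checked on any finite nearest-neighbour path. The following sketch (our own; a one-dimensional walk stands in for $S$ purely as a stress test with many revisits, and the path is truncated at a finite horizon $N$) verifies that summing edge counts at last-visit times recovers $\mu_{\mathcal{G}}(S_{[0,n]})$.

```python
import random

# a 1-d nearest-neighbour walk stands in for S; identity (5) is path-combinatorial
rng = random.Random(7)
N, n = 500, 300
path = [0]
for _ in range(N):
    path.append(path[-1] + rng.choice((-1, 1)))

def edges(lo, hi):
    """Edge set of the graph spanned by steps lo, ..., hi-1."""
    return {frozenset((path[k], path[k + 1])) for k in range(lo, hi)}

# mu_G(S_[0,n]): total degree, in the full graph G, of the sites visited by time n
deg = {}
for e in edges(0, N):
    for x in e:
        deg[x] = deg.get(x, 0) + 1
range_n = set(path[: n + 1])
vol = sum(deg[x] for x in range_n)

# sum of Y_k^{(n)}: edge counts in G_{k+1} at S_k, collected at last-visit times only
total = 0
for k in range(N + 1):
    x = path[k]
    if x in range_n and x not in path[k + 1:]:
        total += sum(1 for e in edges(0, min(k + 1, N)) if x in e)

assert total == vol  # this is identity (5) for the truncated path
```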

Next we will compare the volume of $S_{[0,n]}$ with $\sum_{k=0}^{n}Y_{k}^{(n)}$. To do this, we let $I_{k}$ be the indicator function of the event that $k$ is a cut time, i.e.,

I_{k}:={\bf 1}_{\{k\in\mathcal{T}\}},\qquad (6)

where $\mathcal{T}$ was defined at (3). Also, we set

F_{n}:=\left\{I_{k}=0\text{ for all }k\in[n,n+n(\log n)^{-6}]\right\},

for the event that there are no cut times in $[n,n+n(\log n)^{-6}]$. It follows from [19, Lemma 7.7.4] that

\mathbf{P}(F_{n})\leq\frac{C\log\log n}{\log n},\qquad (7)

where $C\in(0,\infty)$ is a constant. Suppose that there exists a cut time in $[n,n+n(\log n)^{-6}]$. Then we have $Y_{k}^{(n)}=0$ for all $k\geq n+n(\log n)^{-6}$. Since $Y_{k}^{(n)}\leq 8$, it follows that on the event $F_{n}^{c}$,

\left|\sum_{k=0}^{\infty}Y_{k}^{(n)}-\sum_{k=0}^{n}Y_{k}^{(n)}\right|=\sum_{k=n+1}^{n+n(\log n)^{-6}}Y_{k}^{(n)}\leq 8n(\log n)^{-6}.

Combining this with (5) and (7), we find that

\mathbf{P}\left(\left|\mu_{{\cal G}}\left(S_{[0,n]}\right)-\sum_{k=0}^{n}Y_{k}^{(n)}\right|>8n(\log n)^{-6}\right)\leq\frac{C\log\log n}{\log n},\qquad (8)

and so, in order to show the desired convergence in $\mathbf{P}$-probability of $n^{-1}\mu_{{\cal G}}(S_{[0,n]})$, it will suffice to prove that, as $n\to\infty$,

\frac{\sum_{k=0}^{n}Y_{k}^{(n)}}{n}\to\lambda\qquad (9)

in $\mathbf{P}$-probability for some deterministic constant $\lambda\in(0,\infty)$.

We now derive an upper bound on the variance of $\sum_{k=0}^{n}Y_{k}^{(n)}$. To do this, take $0\leq k<l\leq n$ with $l-k\geq 3\sqrt{n}$. We want to control the dependence between $Y_{k}^{(n)}$ and $Y_{l}^{(n)}$. Define an event $G$ by setting

G:=\left\{S_{k}\notin S_{[k+\sqrt{n},\infty)},\>S_{l}\notin S_{[0,l-\sqrt{n}]}\right\}.

We also let

\begin{align*}
Z_{k}&:=\left|\left\{e\in E({\cal G}_{k+1}):\>S_{k}\in e\right\}\right|\times{\bf 1}_{\{S_{m}\neq S_{k}\text{ for all }k+1\leq m\leq k+\sqrt{n}\}},\\
W_{l}&:=\left|\left\{e\in E({\cal G}_{l-\sqrt{n},l+1}):\>S_{l}\in e\right\}\right|\times{\bf 1}_{\{S_{m}\neq S_{l}\text{ for all }m\geq l+1\}}.
\end{align*}

The condition $l-k\geq 3\sqrt{n}$ ensures that $Z_{k}$ and $W_{l}$ are independent. Also, we note that on the event $G$, it holds that $Y_{k}^{(n)}=Z_{k}$ and $Y_{l}^{(n)}=W_{l}$. The local central limit theorem (see [19, Theorem 1.2.1]) implies that

\begin{align*}
\mathbf{P}(G^{c})&\leq\mathbf{P}\left(S_{k}\in S_{[k+\sqrt{n},\infty)}\right)+\mathbf{P}\left(S_{l}\in S_{[0,l-\sqrt{n}]}\right)\\
&\leq 2\mathbf{P}\left(S_{0}\in S_{[\sqrt{n},\infty)}\right)\\
&\leq 2\sum_{m=\sqrt{n}}^{\infty}\mathbf{P}\left(S_{m}=S_{0}\right)\\
&\leq C\sum_{m=\sqrt{n}}^{\infty}m^{-2}\\
&\leq\frac{C}{\sqrt{n}},
\end{align*}

where constants change line to line. Therefore we have

\left|\mathbf{E}\left(Y_{k}^{(n)}\right)-\mathbf{E}\left(Z_{k}\right)\right|\leq\frac{C}{\sqrt{n}},\qquad\left|\mathbf{E}\left(Y_{l}^{(n)}\right)-\mathbf{E}\left(W_{l}\right)\right|\leq\frac{C}{\sqrt{n}}.

Since $Z_{k}$ and $W_{l}$ are independent, we consequently obtain that, for $0\leq k<l\leq n$ with $l-k\geq 3\sqrt{n}$,

\begin{align*}
&\mathbf{E}\left(\left(Y_{k}^{(n)}-\mathbf{E}(Y_{k}^{(n)})\right)\left(Y_{l}^{(n)}-\mathbf{E}(Y_{l}^{(n)})\right)\right)\\
&\qquad\leq\mathbf{E}\left(\left(Y_{k}^{(n)}-\mathbf{E}(Y_{k}^{(n)})\right)\left(Y_{l}^{(n)}-\mathbf{E}(Y_{l}^{(n)})\right)\mathbf{1}_{G}\right)+C\mathbf{P}(G^{c})\\
&\qquad\leq\mathbf{E}\left(\left(Z_{k}-\mathbf{E}(Y_{k}^{(n)})\right)\left(W_{l}-\mathbf{E}(Y_{l}^{(n)})\right)\right)+C\mathbf{P}(G^{c})\\
&\qquad=\mathbf{E}\left(Z_{k}-\mathbf{E}(Y_{k}^{(n)})\right)\mathbf{E}\left(W_{l}-\mathbf{E}(Y_{l}^{(n)})\right)+C\mathbf{P}(G^{c})\\
&\qquad\leq\frac{C}{\sqrt{n}},
\end{align*}

and it readily follows from this that

\mathrm{Var}\left(\sum_{k=0}^{n}Y_{k}^{(n)}\right)\leq Cn^{\frac{3}{2}}.

In particular, this bound yields

\mathbf{P}\left(\left|\sum_{k=0}^{n}Y_{k}^{(n)}-\mathbf{E}\left(\sum_{k=0}^{n}Y_{k}^{(n)}\right)\right|\geq n^{7/8}\right)\leq\frac{C}{n^{1/4}}.\qquad (10)

The conclusion of the previous paragraph means that we have reduced the problem of establishing (9) to showing the asymptotic linearity of the deterministic sequence $\mathbf{E}(\sum_{k=0}^{n}Y_{k}^{(n)})$, and that is what we do now. Let $S^{1}$ and $S^{2}$ be independent simple random walks on $\mathbb{Z}^{4}$ started at the origin. Define a random variable $Y$ by setting

Y=\left|\left\{\{S^{1}_{0},S^{1}_{1}\}\right\}\cup\left\{e\in E({\cal G}^{2}):\>0\in e\right\}\right|\times{\bf 1}_{\{S^{1}_{k}\neq 0\text{ for all }k\geq 1\}},

where the graph ${\cal G}^{2}=(V({\cal G}^{2}),E({\cal G}^{2}))$ is defined from $S^{2}$ as at (1) and (2). We will define the $\lambda$ that appears in the statement of the result as

\lambda:=\mathbf{E}(Y).

Now, take $\sqrt{n}\leq k\leq n$, and define two events $H$ and $H^{\prime}$ by setting

\begin{align*}
H&=\left\{S_{m}\neq S_{k}\text{ for all }m\in[0,k-\sqrt{n}]\cup[k+\sqrt{n},\infty)\right\},\\
H^{\prime}&=\left\{S^{1}_{m}\neq 0\text{ and }S^{2}_{m}\neq 0\text{ for all }m\geq\sqrt{n}\right\}.
\end{align*}

By again applying the local central limit theorem (see [19, Theorem 1.2.1]), we see that

\begin{align*}
\mathbf{P}(H^{c})&\leq 2\mathbf{P}\left(S_{0}\in S_{[\sqrt{n},\infty)}\right)\leq 2\sum_{m=\sqrt{n}}^{\infty}\mathbf{P}\left(S_{m}=S_{0}\right)\leq C\sum_{m=\sqrt{n}}^{\infty}m^{-2}\leq\frac{C}{\sqrt{n}},\\
\mathbf{P}\left((H^{\prime})^{c}\right)&\leq 2\mathbf{P}\left(0\in S^{1}_{[\sqrt{n},\infty)}\right)\leq\frac{C}{\sqrt{n}}.
\end{align*}

With this in mind, we define

\begin{align*}
U_{k}&=\left|\left\{e\in E({\cal G}_{k-\sqrt{n},k+1}):\>S_{k}\in e\right\}\right|\times{\bf 1}_{\{S_{m}\neq S_{k}\text{ for all }m\in[k+1,k+\sqrt{n}]\}},\\
U&=\left|\left\{\{S^{1}_{0},S^{1}_{1}\}\right\}\cup\left\{e\in E({\cal G}^{2}_{\sqrt{n}}):\>0\in e\right\}\right|\times{\bf 1}_{\{S^{1}_{m}\neq 0\text{ for all }m\in[1,\sqrt{n}]\}},
\end{align*}

where the graph ${\cal G}^{2}_{m}=(V({\cal G}^{2}_{m}),E({\cal G}^{2}_{m}))$ is defined from $S^{2}$ similarly to (4). For $\sqrt{n}\leq k\leq n$, on the event $H$ (respectively $H^{\prime}$), we have $Y^{(n)}_{k}=U_{k}$ (respectively $Y=U$). Thus the translation invariance and time-reversibility of the simple random walk yield that, for $\sqrt{n}\leq k\leq n$,

\begin{align*}
\mathbf{E}\left(Y_{k}^{(n)}\right)&=\mathbf{E}\left(Y_{k}^{(n)}\mathbf{1}_{H}\right)+O(n^{-1/2})\\
&=\mathbf{E}\left(U_{k}\mathbf{1}_{H}\right)+O(n^{-1/2})\\
&=\mathbf{E}\left(U_{k}\right)+O(n^{-1/2})\\
&=\mathbf{E}(U)+O(n^{-1/2})\\
&=\mathbf{E}\left(U\mathbf{1}_{H^{\prime}}\right)+O(n^{-1/2})\\
&=\mathbf{E}\left(Y\mathbf{1}_{H^{\prime}}\right)+O(n^{-1/2})\\
&=\lambda+O(n^{-1/2}).
\end{align*}

This gives

\mathbf{E}\left(\sum_{k=0}^{n}Y_{k}^{(n)}\right)=\lambda n+O(\sqrt{n}).

Recalling (8) and (10), the result at (9) follows.

Finally, since $\mu_{\mathcal{G}}(S_{[0,n]})$ is clearly monotone in $n$, extending from the above convergence to the functional statement of Theorem 1.2(a) is elementary. ∎

2.2 Resistance and distance scaling

The essential work for the resistance and distance asymptotics of Theorem 1.2(b),(c) was completed in [27]; in particular, the following result concerning the effective resistance was established there. We simply highlight how this, and the techniques used to prove it, can be adapted to our current purposes.

Lemma 2.1 ([27, Theorems 1.2.1 and 1.2.3]).

For some deterministic slowly-varying function $\tilde{\psi}:\mathbb{N}\rightarrow\mathbb{R}$ satisfying $\tilde{\psi}(n)=(\log n)^{-\frac{1}{2}+o(1)}$,

\frac{R_{\mathcal{G}}(0,S_{n})}{n\tilde{\psi}(n)}\rightarrow 1

in $\mathbf{P}$-probability.

The corresponding result for the graph distance is as follows.

Lemma 2.2.

For some deterministic slowly-varying function $\phi:\mathbb{N}\rightarrow\mathbb{R}$ satisfying $\phi(n)=(\log n)^{-\frac{1}{2}+o(1)}$,

\frac{d_{\mathcal{G}}(0,S_{n})}{n\phi(n)}\rightarrow 1

in $\mathbf{P}$-probability.

Proof.

The properties of the effective resistance used to prove Lemma 2.1 were simply that:

  • $R_{\mathcal{G}}(\cdot,\cdot)$ is a metric on $V(\mathcal{G})$,

  • if $0\leq m<k<n$ are such that $S_{[m,k]}\cap S_{[k+1,n]}=\emptyset$, then

    R_{{\cal G}_{m,n}}\left(S_{m},S_{n}\right)=R_{{\cal G}_{m,k}}\left(S_{m},S_{k}\right)+R_{{\cal G}_{k,n}}\left(S_{k},S_{n}\right),

    where $R_{{\cal G}_{m,n}}$ is the effective resistance metric for the graph ${\cal G}_{m,n}$, as defined at (4) (with unit resistors placed along edges).

These two properties are clearly satisfied when we replace the effective resistance by the shortest path graph distance, and so the same results hold for the graph distance. The easy details are left to the reader. ∎
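The second property, additivity of the metric across a cut index, is a dimension-free combinatorial fact about nearest-neighbour paths. The following sketch (ours, not from the paper) checks it for the graph distance on a toy path with repeated vertices, in which indices 3 and 7 satisfy the disjointness condition.

```python
from collections import deque

# toy nearest-neighbour path with repeated vertices; indices 3 and 7 are cut times
path = [0, 1, 0, 1, 2, 3, 2, 3, 4]

def seg_edges(m, n):
    """Edge set of the subgraph G_{m,n}, as at (4)."""
    return {frozenset((path[k], path[k + 1])) for k in range(m, n)}

def dist(edges, src, dst):
    """Shortest-path graph distance from src to dst using only the given edges."""
    adj = {}
    for e in edges:
        a, b = tuple(e)
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    d, q = {src: 0}, deque([src])
    while q:
        v = q.popleft()
        if v == dst:
            return d[v]
        for w in adj.get(v, ()):
            if w not in d:
                d[w] = d[v] + 1
                q.append(w)

m, n = 0, len(path) - 1
checked = 0
for k in range(m + 1, n):
    if set(path[m:k + 1]).isdisjoint(path[k + 1:n + 1]):  # S_[m,k] ∩ S_[k+1,n] = ∅
        assert dist(seg_edges(m, n), path[m], path[n]) == (
            dist(seg_edges(m, k), path[m], path[k])
            + dist(seg_edges(k, n), path[k], path[n]))
        checked += 1
assert checked == 2
```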

As we hinted in the introduction, the functional convergence statements of Theorem 1.2(b),(c) do not immediately follow from Lemmas 2.1 and 2.2, due to the non-monotonicity of $R_{\mathcal{G}}(0,S_{n})$ and $d_{\mathcal{G}}(0,S_{n})$. To deal with this issue, we will apply the following observation about the gaps between cut times.

Lemma 2.3.

Define ${\cal I}_{n}=\{i\geq 2:\>T_{i}\leq n\}$ and $T_{(n)}:=T_{\sup{\mathcal{I}_{n}}}$. It then holds that, as $n\to\infty$,

\frac{\max_{i\in{\cal I}_{n}}\left(T_{i}-T_{i-1}\right)+(n-T_{(n)})}{n(\log n)^{-7/12}}\to 0

in $\mathbf{P}$-probability.

Proof.

We set $u_{n}=n(\log n)^{-\frac{2}{3}}$, $r_{n}=n(\log n)^{-6}$ and $N=\lfloor\frac{n}{u_{n}}\rfloor$. For each $1\leq i\leq N+1$, we define the event $Z_{i}$ by

Z_{i}=\left\{{\cal T}\cap[(i-1)u_{n},(i-1)u_{n}+r_{n}]\neq\emptyset,\>{\cal T}\cap[iu_{n}-r_{n},iu_{n}]\neq\emptyset\right\},

where ${\cal T}$ is the set of cut times, as defined at (3). It then follows from [19, Lemma 7.7.4] that

\mathbf{P}(Z_{i})\geq 1-\frac{C\log\log n}{\log n}

for all $1\leq i\leq N+1$. Thus, since $N\leq C(\log n)^{\frac{2}{3}}$, we have that

\mathbf{P}\left(\bigcap_{i=1}^{N+1}Z_{i}\right)\geq 1-\frac{C\log\log n}{(\log n)^{\frac{1}{3}}}.

Clearly, on the event $\bigcap_{i=1}^{N+1}Z_{i}$, it holds that $\max_{i\in{\cal I}_{n}}(T_{i}-T_{i-1})\leq u_{n}$ and also $n-T_{(n)}\leq u_{n}$, and so the result follows. ∎

The above lemma readily allows us to compare the effective resistance process $R_{\mathcal{G}}(0,S_{n})$ with its past maximum, $\max_{m\leq n}R_{\mathcal{G}}(0,S_{m})$, and similarly for the graph distance.

Lemma 2.4.

It holds that, as $n\to\infty$,

\frac{\max_{m\leq n}\left|\max_{k\leq m}R_{\mathcal{G}}(0,S_{k})-R_{\mathcal{G}}(0,S_{m})\right|}{n(\log n)^{-7/12}}\rightarrow 0

in $\mathbf{P}$-probability. The same claim also holds when $R_{\mathcal{G}}$ is replaced by $d_{\mathcal{G}}$.

Proof.

If $m\leq T_{1}$, it holds that

\left|\max_{k\leq m}R_{\mathcal{G}}(0,S_{k})-R_{\mathcal{G}}(0,S_{m})\right|\leq T_{1}.

If $m\in[T_{i-1},T_{i}]$ for some $i\in\mathcal{I}_{n}$, where $\mathcal{I}_{n}$ is defined as in the statement of Lemma 2.3, then

\left|\max_{k\leq m}R_{\mathcal{G}}(0,S_{k})-R_{\mathcal{G}}(0,S_{m})\right|=\left|\max_{k\in[T_{i-1},m]}R_{\mathcal{G}}(S_{T_{i-1}},S_{k})-R_{\mathcal{G}}(S_{T_{i-1}},S_{m})\right|\leq T_{i}-T_{i-1}.

If $m\in[T_{(n)},n]$, then

\left|\max_{k\leq m}R_{\mathcal{G}}(0,S_{k})-R_{\mathcal{G}}(0,S_{m})\right|=\left|\max_{k\in[T_{(n)},m]}R_{\mathcal{G}}(S_{T_{(n)}},S_{k})-R_{\mathcal{G}}(S_{T_{(n)}},S_{m})\right|\leq n-T_{(n)}.

We thus obtain the first claim of the lemma by applying Lemma 2.3 (and the fact that $T_{1}$ is $\mathbf{P}$-a.s. finite). The second assertion can be proved similarly. ∎

Proof of Theorem 1.2(b),(c).

From Lemmas 2.1 and 2.4, we find that

\frac{\max_{m\leq n}R_{\mathcal{G}}(0,S_{m})}{n\tilde{\psi}(n)}\rightarrow 1

in $\mathbf{P}$-probability. Since $\max_{m\leq n}R_{\mathcal{G}}(0,S_{m})$ is monotone in $n$, it follows by an elementary argument that

\left(\frac{\max_{m\leq nt}R_{\mathcal{G}}(0,S_{m})}{n\tilde{\psi}(n)}\right)_{t\in[0,T]}\rightarrow\left(t\right)_{t\in[0,T]}

in $\mathbf{P}$-probability. Combining this with Lemma 2.4, we obtain Theorem 1.2(b). The proof of Theorem 1.2(c) is essentially the same. ∎

2.3 Cut-time scaling

Recall the definition of a cut time from (3) and the random variables $(I_{k})_{k\geq 0}$ from (6). Moreover, define

N_{n}=\sum_{k=0}^{n}I_{k}

to be the number of cut times in $[0,n]$. On [21, page 3], it is stated that there exists a deterministic constant $\alpha\in(0,\infty)$ such that, in $\mathbf{P}$-probability, as $n\to\infty$,

\frac{N_{n}}{n(\log n)^{-\frac{1}{2}}}\to\alpha.\qquad (11)

From this and the monotonicity of the sequence $(T_{n})_{n\geq 1}$, Theorem 1.2(d) readily follows with $\tau=\alpha^{-1}$. As we noted in Remark 1.3, it is claimed in [21] that the limit at (11) can be proved by using the methods developed in [19, Chapter 7]. However, for the sake of the reader, we will give a proof of (11) here.
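Although the convergence in (11) is far too slow (logarithmic) to pin down $\alpha$ numerically, the count $N_{n}$ itself is easy to simulate. The sketch below is our own; it uses the observation that $n$ is a cut time of a finite path if and only if no site has a visit at or before time $n$ and another visit after time $n$, which reduces the count to an interval-covering computation.

```python
import random

def srw4(n, seed=0):
    """n-step simple random walk on Z^4 from the origin."""
    rng = random.Random(seed)
    pos, path = (0, 0, 0, 0), [(0, 0, 0, 0)]
    for _ in range(n):
        i, s = rng.randrange(4), rng.choice((-1, 1))
        pos = tuple(p + (s if j == i else 0) for j, p in enumerate(pos))
        path.append(pos)
    return path

def count_cuts(path):
    """N_N for the finite path: n is a cut time iff no site x has
    first[x] <= n <= last[x] - 1 (a visit by n and a revisit after n)."""
    N = len(path) - 1
    first, last = {}, {}
    for k, x in enumerate(path):
        first.setdefault(x, k)
        last[x] = k
    cover = [0] * (N + 2)
    for x in first:
        if last[x] > first[x]:
            cover[first[x]] += 1  # blocks all n in [first[x], last[x] - 1]
            cover[last[x]] -= 1
    cuts, c = 0, 0
    for n in range(N + 1):
        c += cover[n]
        if c == 0:
            cuts += 1
    return cuts

# sanity check on a path whose cut times can be read off by hand (they are 3, 7, 8)
assert count_cuts([0, 1, 0, 1, 2, 3, 2, 3, 4]) == 3

path = srw4(20000, seed=2)
N_n = count_cuts(path)
assert 0 < N_n <= 20001
```

The ratio `N_n / (20000 * (log 20000) ** -0.5)` then gives a (very rough) finite-$n$ stand-in for $\alpha$.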

Proof of Theorem 1.2(d).

As noted above, it will suffice to prove (11). Let S1=(Sk1)k0S^{1}=(S^{1}_{k})_{k\geq 0} and S2=(Sk2)k0S^{2}=(S^{2}_{k})_{k\geq 0} be two independent simple random walks on 4\mathbb{Z}^{4} started at the origin. We write ξn1\xi_{n}^{1} for the first time that S1S^{1} exits {x4:|x|n}\{x\in\mathbb{Z}^{4}:\>|x|\leq n\}, that is, a ball of radius nn with respect to the Euclidean distance centered at the origin. It is then proved in [20, Theorem 5.2] that there exists a constant β(0,)\beta\in(0,\infty) such that

limn(logn)12𝐏(S[1,ξn1]1S[0,)2=)=β,\lim_{n\to\infty}(\log n)^{\frac{1}{2}}\mathbf{P}\left(S^{1}_{[1,\xi_{n}^{1}]}\cap S^{2}_{[0,\infty)}=\emptyset\right)=\beta, (12)

where we let S[a,b]1={Sk1:akb}S^{1}_{[a,b]}=\{S^{1}_{k}:\>a\leq k\leq b\} for 0ab<0\leq a\leq b<\infty, and S[0,)2={Sk2:k0}S^{2}_{[0,\infty)}=\{S^{2}_{k}:\>k\geq 0\}. Moreover, it is known that

𝐏(n2(logn)1ξn1n2logn)1Cnc,\mathbf{P}\left(n^{2}(\log n)^{-1}\leq\xi_{n}^{1}\leq n^{2}\log n\right)\geq 1-Cn^{-c}, (13)

for some constants c,C(0,)c,C\in(0,\infty), see [22, Proposition 2.4.5]. Combining (12) and (13), we see that

𝐏(S[1,n2logn]1S[0,)2=)\displaystyle\mathbf{P}\left(S^{1}_{[1,n^{2}\log n]}\cap S^{2}_{[0,\infty)}=\emptyset\right) 𝐏(S[1,ξn1]1S[0,)2=)+Cnc=(β+o(1))(logn)12,\displaystyle\leq\mathbf{P}\left(S^{1}_{[1,\xi_{n}^{1}]}\cap S^{2}_{[0,\infty)}=\emptyset\right)+Cn^{-c}=\left(\beta+o(1)\right)(\log n)^{-\frac{1}{2}},
𝐏(S[1,n2(logn)1]1S[0,)2=)\displaystyle\mathbf{P}\left(S^{1}_{[1,n^{2}(\log n)^{-1}]}\cap S^{2}_{[0,\infty)}=\emptyset\right) 𝐏(S[1,ξn1]1S[0,)2=)Cnc=(β+o(1))(logn)12.\displaystyle\geq\mathbf{P}\left(S^{1}_{[1,\xi_{n}^{1}]}\cap S^{2}_{[0,\infty)}=\emptyset\right)-Cn^{-c}=\left(\beta+o(1)\right)(\log n)^{-\frac{1}{2}}.

Reparameterising and letting α=2β\alpha=\sqrt{2}\beta, we find that

limn(logn)12𝐏(S[1,n]1S[0,)2=)=α,\lim_{n\to\infty}(\log n)^{\frac{1}{2}}\mathbf{P}\left(S^{1}_{[1,n]}\cap S^{2}_{[0,\infty)}=\emptyset\right)=\alpha,

and thus that

limn(logn)12𝐄(In)=α.\lim_{n\to\infty}(\log n)^{\frac{1}{2}}\mathbf{E}(I_{n})=\alpha. (14)

Consequently, it follows that

limn𝐄(Nn)n(logn)12=α.\lim_{n\to\infty}\frac{\mathbf{E}(N_{n})}{n(\log n)^{-\frac{1}{2}}}=\alpha. (15)

Thus to complete the proof, it will be enough to establish a suitable bound on the variance of NnN_{n} for large nn.

Set bn=n(logn)9b_{n}=n(\log n)^{-9}, and take n01n_{0}\geq 1 so that

bmbn,b_{m}\leq b_{n}, (16)

whenever n0mnn_{0}\leq m\leq n. Throughout the remainder of the proof, we will assume that nn is large enough that bnn0b_{n}\geq n_{0}. For bnknb_{n}\leq k\leq n, we set

Ik,n=𝟏{S[kbn,k]S[k+1,k+bn]=}.I_{k,n}={\bf 1}_{\{S_{[k-b_{n},k]}\cap S_{[k+1,k+b_{n}]}=\emptyset\}}.

Also, we write

Fk\displaystyle F_{k} :={k is a cut time}{Ik=1},\displaystyle:=\{k\text{ is a cut time}\}\equiv\{I_{k}=1\},\qquad
Fk,n\displaystyle F_{k,n} :={S[kbn,k]S[k+1,k+bn]=}{Ik,n=1}.\displaystyle:=\left\{S_{[k-b_{n},k]}\cap S_{[k+1,k+b_{n}]}=\emptyset\right\}\equiv\{I_{k,n}=1\}.
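Concretely, the event Fk,nF_{k,n} depends only on the steps of the walk in the window [kbn,k+bn][k-b_n,k+b_n], which is the source of the independence of distant indicators exploited in this proof. A minimal Python sketch of the windowed indicator (illustrative only, with the window width passed as a parameter):

```python
def finite_window_cut(path, k, b):
    """The indicator I_{k,n} with window width b: equal to 1 on the event
    F_{k,n} that S_[k-b,k] and S_[k+1,k+b] are disjoint.
    Requires b <= k <= len(path) - 1 - b."""
    past = {path[j] for j in range(k - b, k + 1)}
    future = {path[j] for j in range(k + 1, k + b + 1)}
    return int(not (past & future))
```

Because the two vertex sets are read from disjoint blocks of steps, indicators whose windows do not overlap are functions of independent increments.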

We note that Fk,nF_{k,n} and Fl,nF_{l,n} are independent if bnk<lb_{n}\leq k<l and lk>2bnl-k>2b_{n}. In order to estimate the variance of NnN_{n}, we consider bnk<lnb_{n}\leq k<l\leq n with lk>2bnl-k>2b_{n}. Noting that FkFlFk,nFl,nF_{k}\cap F_{l}\subseteq F_{k,n}\cap F_{l,n}, we have that

|𝐄(IkIl)𝐄(Ik,nIl,n)|\displaystyle\left|\mathbf{E}(I_{k}I_{l})-\mathbf{E}(I_{k,n}I_{l,n})\right| =𝐏(Fk,nFl,n)𝐏(FkFl)\displaystyle=\mathbf{P}(F_{k,n}\cap F_{l,n})-\mathbf{P}(F_{k}\cap F_{l})
𝐏(Fk,nFl,nFkc)+𝐏(Fk,nFl,nFlc)\displaystyle\leq\mathbf{P}(F_{k,n}\cap F_{l,n}\cap F_{k}^{c})+\mathbf{P}(F_{k,n}\cap F_{l,n}\cap F_{l}^{c})
𝐏(Fk,nFkc)+𝐏(Fl,nFlc).\displaystyle\leq\mathbf{P}(F_{k,n}\cap F_{k}^{c})+\mathbf{P}(F_{l,n}\cap F_{l}^{c}).

We will estimate the probabilities on the right-hand side. It is shown in [19, Lemma 7.7.3] that

𝐏(Fn,n)=𝐏(Fn)(1+O(loglognlogn)).\mathbf{P}(F_{n,n})=\mathbf{P}(F_{n})\left(1+O\left(\frac{\log\log n}{\log n}\right)\right).

Since n0bnknn_{0}\leq b_{n}\leq k\leq n, we have bkbnb_{k}\leq b_{n} by (16). From this, it follows that FkFk,nFk,kF_{k}\subseteq F_{k,n}\subseteq F_{k,k}. Thus we have

𝐏(Fk)𝐏(Fk,n)𝐏(Fk,k)=𝐏(Fk)(1+O(loglogklogk))=𝐏(Fk)(1+O(loglognlogn)),\mathbf{P}(F_{k})\leq\mathbf{P}(F_{k,n})\leq\mathbf{P}(F_{k,k})=\mathbf{P}(F_{k})\left(1+O\left(\frac{\log\log k}{\log k}\right)\right)=\mathbf{P}(F_{k})\left(1+O\left(\frac{\log\log n}{\log n}\right)\right), (17)

which implies in turn that

𝐏(Fk,nFkc)=𝐏(Fk,n)𝐏(Fk)=𝐏(Fk)O(loglognlogn).\mathbf{P}(F_{k,n}\cap F_{k}^{c})=\mathbf{P}(F_{k,n})-\mathbf{P}(F_{k})=\mathbf{P}(F_{k})O\left(\frac{\log\log n}{\log n}\right).

Moreover, by (14), it holds that 𝐏(Fk)C(logn)12\mathbf{P}(F_{k})\leq C(\log n)^{-\frac{1}{2}}, and so

𝐏(Fk,nFkc)=O(loglogn/(logn)32).\mathbf{P}(F_{k,n}\cap F_{k}^{c})=O\left(\log\log n/(\log n)^{\frac{3}{2}}\right).

A similar estimate applies to 𝐏(Fl,nFlc)\mathbf{P}(F_{l,n}\cap F_{l}^{c}), and thus we conclude that

|𝐄(IkIl)𝐄(Ik,nIl,n)|C(logn)32(loglogn),\left|\mathbf{E}(I_{k}I_{l})-\mathbf{E}(I_{k,n}I_{l,n})\right|\leq C(\log n)^{-\frac{3}{2}}(\log\log n),

for bnk<lnb_{n}\leq k<l\leq n with lk>2bnl-k>2b_{n}.

We are now in a position to complete the proof. It follows from (17) that, for bnknb_{n}\leq k\leq n,

𝐄(Ik,n)=𝐄(Ik)(1+O(loglognlogn)).\mathbf{E}(I_{k,n})=\mathbf{E}(I_{k})\left(1+O\left(\frac{\log\log n}{\log n}\right)\right).

Also, we recall that Ik,nI_{k,n} and Il,nI_{l,n} are independent for bnk,lnb_{n}\leq k,l\leq n with |lk|>2bn|l-k|>2b_{n}. Consequently, writing \sum_{\ast} for the sum over integers kk and ll satisfying bnk,lnb_{n}\leq k,l\leq n with |lk|>2bn|l-k|>2b_{n},

Var(Nn)\displaystyle\text{Var}(N_{n}) =0k,ln(𝐄(IkIl)𝐄(Ik)𝐄(Il))\displaystyle=\sum_{0\leq k,l\leq n}\left(\mathbf{E}(I_{k}I_{l})-\mathbf{E}(I_{k})\mathbf{E}(I_{l})\right)
=(𝐄(IkIl)𝐄(Ik)𝐄(Il))+O(nbn)\displaystyle=\sum\nolimits_{\ast}\left(\mathbf{E}(I_{k}I_{l})-\mathbf{E}(I_{k})\mathbf{E}(I_{l})\right)+O(nb_{n})
=(𝐄(Ik,nIl,n)𝐄(Ik)𝐄(Il))+O(n2(logn)32(loglogn))\displaystyle=\sum\nolimits_{\ast}\left(\mathbf{E}(I_{k,n}I_{l,n})-\mathbf{E}(I_{k})\mathbf{E}(I_{l})\right)+O\left(n^{2}(\log n)^{-\frac{3}{2}}(\log\log n)\right)
=𝐄(Ik)𝐄(Il)O(loglognlogn)+O(n2(logn)32(loglogn))\displaystyle=\sum\nolimits_{\ast}\mathbf{E}(I_{k})\mathbf{E}(I_{l})O\left(\frac{\log\log n}{\log n}\right)+O\left(n^{2}(\log n)^{-\frac{3}{2}}(\log\log n)\right)
=𝐄(Nn)2O((logn)12(loglogn)),\displaystyle=\mathbf{E}(N_{n})^{2}O\left((\log n)^{-\frac{1}{2}}(\log\log n)\right),

where for the final line we recall (15). This implies that Nn/𝐄(Nn)N_{n}/\mathbf{E}(N_{n}) converges to 11 in 𝐏\mathbf{P}-probability as nn\to\infty. Combining this with (15), we get the limit at (11), as required. ∎

3 Simple random walk scaling limit

As briefly outlined in the introduction, the way in which we will establish the random walk scaling limits of Theorem 1.1 is via the resistance form approach developed in [9, 12]. In particular, we will apply [9, Theorem 7.2], which requires us to check a certain convergence result for compact metric spaces equipped with measures, marked points and continuous maps (see Proposition 3.1 below), as well as a non-explosion condition (see Proposition 3.2). To this end, we start by introducing the framework that will enable us to check the conditions of [9, Theorem 7.2]. Once these have been verified, we recall the necessary resistance form background for deriving our main conclusions for the random walk on the range of random walk in four dimensions that appear as Theorem 1.1. To conclude the section, we prove Corollary 1.4.

Broadly following the notation of [9], we consider a space 𝕂c\mathbb{K}^{*}_{c} of elements of the form (K,dK,μ,ρ,f)(K,d_{K},\mu,\rho,f), where we assume: (K,dK)(K,d_{K}) is a compact metric space, μ\mu is a finite Borel measure on KK, ρ\rho is a marked point of KK, and ff is a continuous map from KK into some image space (M,dM)(M,d_{M}), which is a fixed complete, separable metric space. (When proving Theorem 1.1(a), we will take (M,dM)(M,d_{M}) to be \mathbb{R} equipped with the Euclidean distance, and when proving Theorem 1.1(b), we will take (M,dM)(M,d_{M}) to be 4\mathbb{R}^{4}, again equipped with the Euclidean distance.) For two elements of 𝕂c\mathbb{K}^{*}_{c}, we define the distance between them Δc((K,dK,μ,ρ,f),(K,dK,μ,ρ,f))\Delta^{*}_{c}((K,d_{K},\mu,\rho,f),(K^{\prime},d_{K^{\prime}},\mu^{\prime},\rho^{\prime},f^{\prime})) to be equal to

inf(Z,dZ),π,π,𝒞:(ρ,ρ)𝒞{dZP(μπ1,μπ1)+sup(x,x)𝒞(dZ(π(x),π(x))+dM(f(x),f(x)))},\inf_{\begin{subarray}{c}(Z,d_{Z}),\pi,\pi^{\prime},\mathcal{C}:\\ (\rho,\rho^{\prime})\in\mathcal{C}\end{subarray}}\left\{d_{Z}^{P}\left(\mu\circ\pi^{-1},\mu^{\prime}\circ{\pi^{\prime}}^{-1}\right)+\sup_{(x,x^{\prime})\in\mathcal{C}}\left(d_{Z}(\pi(x),\pi^{\prime}(x^{\prime}))+d_{M}(f(x),f^{\prime}(x^{\prime}))\right)\right\}, (18)

where the infimum is taken over all metric spaces (Z,dZ)(Z,d_{Z}), isometric embeddings π:(K,dK)(Z,dZ)\pi:(K,d_{K})\rightarrow(Z,d_{Z}), π:(K,dK)(Z,dZ)\pi^{\prime}:(K^{\prime},d_{K^{\prime}})\rightarrow(Z,d_{Z}), and correspondences 𝒞\mathcal{C} between KK and KK^{\prime}. Note that, by a correspondence 𝒞\mathcal{C} between KK and KK^{\prime}, we mean a subset of K×KK\times K^{\prime} such that for every xKx\in K there exists at least one xKx^{\prime}\in K^{\prime} such that (x,x)𝒞(x,x^{\prime})\in\mathcal{C}, and conversely, for every xKx^{\prime}\in K^{\prime} there exists at least one xKx\in K such that (x,x)𝒞(x,x^{\prime})\in\mathcal{C}. To make sense of the expression at (18), we further define dZPd_{Z}^{P} to be the Prohorov distance between finite Borel measures on (Z,dZ)(Z,d_{Z}). We note that, up to trivial equivalences, it is possible to check that (𝕂c,Δc)(\mathbb{K}^{*}_{c},\Delta^{*}_{c}) is a separable metric space.
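For intuition, the supremum term in (18) is governed by the distortion of the correspondence 𝒞\mathcal{C}: for a fixed correspondence, the best achievable value of the embedding supremum equals half the distortion, as in the usual correspondence formulation of the Gromov–Hausdorff distance. The following Python sketch computes the distortion of a given correspondence between two finite metric spaces presented as distance matrices; it is an illustration of that one ingredient only, not a computation of Δc\Delta^{*}_{c} itself.

```python
from itertools import product

def distortion(dK, dKp, C):
    """Distortion of a correspondence C between finite metric spaces K, K'
    (given as distance matrices): the maximum over pairs (x, x'), (y, y')
    in C of |d_K(x, y) - d_K'(x', y')|.  Half of this value is the best
    achievable sup of d_Z(pi(x), pi'(x')) over isometric embeddings into
    a common space Z (a standard fact, stated here without proof)."""
    return max(abs(dK[x][y] - dKp[xp][yp]) for (x, xp), (y, yp) in product(C, C))
```

For example, matching the endpoints of a three-point path of unit spacing to a two-point space at distance 2, with the middle point sent to either end, distorts distances by at most 1.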

For checking Theorem 1.1(a), the (random) elements of 𝕂c\mathbb{K}^{*}_{c} that will be of interest to us will be bounded restrictions of

𝒳n:=(V(𝒢),R𝒢nψ~(n),μ𝒢λn,0,d𝒢(0,)nϕ(n)),\mathcal{X}_{n}:=\left(V(\mathcal{G}),\frac{R_{\mathcal{G}}}{n\tilde{\psi}(n)},\frac{\mu_{\mathcal{G}}}{\lambda n},0,\frac{d_{\mathcal{G}}(0,\cdot)}{n{\phi}(n)}\right),

where λ\lambda, ψ~\tilde{\psi} and ϕ\phi are given by Theorem 1.2, and

𝒳:=([0,),dE,,0,I),\mathcal{X}:=\left([0,\infty),d_{E},\mathcal{L},0,{I}\right),

where dEd_{E} is the Euclidean distance on [0,)[0,\infty), \mathcal{L} is Lebesgue measure on [0,)[0,\infty), and II is the identity map on [0,)[0,\infty). In particular, for r(0,)r\in(0,\infty), let

𝒳n(r):=(B𝒢(0,rnψ~(n)),R𝒢nψ~(n),μ𝒢λn,0,d𝒢(0,)nϕ(n)),\mathcal{X}^{(r)}_{n}:=\left(B_{\mathcal{G}}(0,rn\tilde{\psi}(n)),\frac{R_{\mathcal{G}}}{n\tilde{\psi}(n)},\frac{\mu_{\mathcal{G}}}{\lambda n},0,\frac{d_{\mathcal{G}}(0,\cdot)}{n{\phi}(n)}\right),

where B𝒢(x,r):={yV(𝒢):R𝒢(x,y)<r}B_{\mathcal{G}}(x,r):=\{y\in V(\mathcal{G}):\>R_{\mathcal{G}}(x,y)<r\} is the (open) resistance ball in 𝒢\mathcal{G} with centre xx and radius rr, and

𝒳(r):=([0,r],dE(r),(r),0,I(r)),\mathcal{X}^{(r)}:=\left([0,r],d_{E}^{(r)},\mathcal{L}^{(r)},0,{I}^{(r)}\right),

where dE(r)d_{E}^{(r)}, (r)\mathcal{L}^{(r)} and I(r){I}^{(r)} are the natural restrictions of dEd_{E}, \mathcal{L} and II to [0,r][0,r]. Similarly, for checking Theorem 1.1(b), we introduce

𝒳~n:=(V(𝒢),R𝒢nψ~(n),μ𝒢λn,0,I𝒢n1/2),\tilde{\mathcal{X}}_{n}:=\left(V(\mathcal{G}),\frac{R_{\mathcal{G}}}{n\tilde{\psi}(n)},\frac{\mu_{\mathcal{G}}}{\lambda n},0,\frac{I_{\mathcal{G}}}{n^{1/2}}\right),

where I𝒢I_{\mathcal{G}} is the identity map on 4\mathbb{R}^{4} restricted to V(𝒢)V(\mathcal{G}), and

𝒳~:=([0,),dE,,0,W),\tilde{\mathcal{X}}:=\left([0,\infty),d_{E},\mathcal{L},0,W_{\cdot}\right),

where WW is standard Brownian motion on 4\mathbb{R}^{4}. The bounded restrictions of these are then given by

𝒳~n(r):=(B𝒢(0,rnψ~(n)),R𝒢nψ~(n),μ𝒢λn,0,In(r)n1/2),\tilde{\mathcal{X}}^{(r)}_{n}:=\left(B_{\mathcal{G}}(0,rn\tilde{\psi}(n)),\frac{R_{\mathcal{G}}}{n\tilde{\psi}(n)},\frac{\mu_{\mathcal{G}}}{\lambda n},0,\frac{I_{n}^{(r)}}{n^{1/2}}\right),

where In(r)I_{n}^{(r)} is the identity map on 4\mathbb{R}^{4} restricted to B𝒢(0,rnψ~(n))B_{\mathcal{G}}(0,rn\tilde{\psi}(n)), and

𝒳~(r):=([0,r],dE(r),(r),0,W(r)),\tilde{\mathcal{X}}^{(r)}:=\left([0,r],d_{E}^{(r)},\mathcal{L}^{(r)},0,W_{\cdot}^{(r)}\right),

where W(r):[0,r]4W^{(r)}:[0,r]\rightarrow\mathbb{R}^{4} is the continuous map given by xWxx\mapsto W_{x} on the relevant interval. Note that, since R𝒢(0,STm)m1R_{\mathcal{G}}(0,S_{T_{m}})\geq m-1 and Tm<T_{m}<\infty for all mm, the set B𝒢(0,rnψ~(n))B_{\mathcal{G}}(0,rn\tilde{\psi}(n)) is finite, and therefore R𝒢R_{\mathcal{G}}-compact, 𝐏\mathbf{P}-a.s. In particular, both 𝒳n(r)\mathcal{X}^{(r)}_{n} and 𝒳~n(r)\tilde{\mathcal{X}}^{(r)}_{n} are elements of 𝕂c\mathbb{K}_{c}^{*}, 𝐏\mathbf{P}-a.s. It is clear that 𝒳(r)\mathcal{X}^{(r)} and 𝒳~(r)\tilde{\mathcal{X}}^{(r)} are as well. The first goal of this section is to prove the following. We note that the proof is similar to that of [10, Theorem 4.1], which demonstrated a corresponding convergence result for the resistance metric spaces associated with the one-dimensional Mott random walk.

Proposition 3.1.

(a) For every r(0,)r\in(0,\infty), as nn\rightarrow\infty,

𝒳n(r)𝒳(r)\mathcal{X}_{n}^{(r)}\rightarrow\mathcal{X}^{(r)}

in 𝐏\mathbf{P}-probability in the space (𝕂c,Δc)(\mathbb{K}^{*}_{c},\Delta^{*}_{c}), where (M,dM)(M,d_{M}) is \mathbb{R} equipped with the Euclidean distance.
(b) For every r(0,)r\in(0,\infty), as nn\rightarrow\infty,

𝒳~n(r)𝒳~(r)\tilde{\mathcal{X}}_{n}^{(r)}\rightarrow\tilde{\mathcal{X}}^{(r)}

in distribution in the space (𝕂c,Δc)(\mathbb{K}^{*}_{c},\Delta^{*}_{c}), where (M,dM)(M,d_{M}) is 4\mathbb{R}^{4} equipped with the Euclidean distance.

Proof.

First, define

R(n,r):=i=1𝟏{R𝒢(0,STi)<rnψ~(n)}R(n,r):=\sum_{i=1}^{\infty}\mathbf{1}_{\{R_{\mathcal{G}}(0,S_{T_{i}})<rn\tilde{\psi}(n)\}} (19)

to be the number of cut points that fall into B𝒢(0,rnψ~(n))B_{\mathcal{G}}(0,rn\tilde{\psi}(n)), and set

Vn(r):={x0,x1,,xR(n,r)},V_{n}^{(r)}:=\{x_{0},x_{1},\dots,x_{R(n,r)}\},

where x0:=0x_{0}:=0 and

xi:=R𝒢(0,STi)nψ~(n).x_{i}:=\frac{R_{\mathcal{G}}\left(0,S_{T_{i}}\right)}{n\tilde{\psi}(n)}.

Note that, by construction x0x1<x2<<xR(n,r)x_{0}\leq x_{1}<x_{2}<\dots<x_{R(n,r)} (with equality between x0x_{0} and x1x_{1} if and only if T1=0T_{1}=0). We will approximate 𝒳n(r)\mathcal{X}_{n}^{(r)} by

𝒴n(r):=(Vn(r),dE|Vn(r),μn(r),0,I|Vn(r)),\mathcal{Y}_{n}^{(r)}:=\left(V_{n}^{(r)},d_{E}|_{V_{n}^{(r)}},\mu_{n}^{(r)},0,I|_{V_{n}^{(r)}}\right),

where μn(r)\mu_{n}^{(r)} is a measure on Vn(r)V_{n}^{(r)} given by

μn(r)({xi})=μ𝒢({STi1+1,,STi})λn\mu_{n}^{(r)}(\{x_{i}\})=\frac{\mu_{\mathcal{G}}\left(\{S_{T_{i-1}+1},\dots,S_{T_{i}}\}\right)}{\lambda n}

for i2i\geq 2, μn(r)({x1})=μ𝒢({S0,,ST1})/λn\mu_{n}^{(r)}(\{x_{1}\})={\mu_{\mathcal{G}}(\{S_{0},\dots,S_{T_{1}}\})}/{\lambda n}, and μn(r)({x0})=0\mu_{n}^{(r)}(\{x_{0}\})=0 if x0x1x_{0}\neq x_{1}. In particular, we claim that, for any r(0,)r\in(0,\infty),

Δc(𝒳n(r),𝒴n(r))0\Delta_{c}^{*}\left(\mathcal{X}_{n}^{(r)},\mathcal{Y}_{n}^{(r)}\right)\rightarrow 0 (20)

in 𝐏\mathbf{P}-probability as nn\rightarrow\infty.
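The bookkeeping behind μn(r)\mu_{n}^{(r)} — one atom per cut point, carrying the mass of the block of the path since the previous cut point — can be sketched as follows. This is an illustration only: for simplicity it counts distinct vertices per block, which stands in for μ𝒢\mu_{\mathcal{G}} only up to bounded-degree factors (an assumption made for the sketch), and the normalisation by λn\lambda n is omitted.

```python
def block_masses(path, cut_times):
    """Unnormalised block masses: the i-th entry counts the vertices of the
    range first reached in the block (T_{i-1}, T_i] of the path (the first
    block is [0, T_1]), mimicking the atoms of mu_n^{(r)}.  The paper's
    measure mu_G is comparable up to bounded-degree factors."""
    seen = set()
    masses = []
    lo = 0
    for t in cut_times:
        new = {path[j] for j in range(lo, t + 1)} - seen
        seen |= new
        masses.append(len(new))
        lo = t + 1
    return masses
```

Since consecutive blocks meet only at cut points, the masses sum to the size of the range up to the last cut time, matching the additive structure used in the proof.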

To prove (20), for each n1n\geq 1, we consider the embedding πn\pi_{n} of Vn(r)V_{n}^{(r)} into B𝒢(0,rnψ~(n))B_{\mathcal{G}}(0,rn\tilde{\psi}(n)) given by xiSTix_{i}\mapsto S_{T_{i}} for i1i\geq 1, and x00x_{0}\mapsto 0. Moreover, we define a correspondence 𝒞n\mathcal{C}_{n} by pairing x0x_{0} with 0, x1x_{1} with each other element of {S0,,ST1}\{S_{0},\dots,S_{T_{1}}\}, xix_{i} with each element of {STi1+1,,STi}\{S_{T_{i-1}+1},\dots,S_{T_{i}}\} for i=2,,R(n,r)i=2,\dots,R(n,r), and xR(n,r)x_{R(n,r)} with each remaining element of B𝒢(0,rnψ~(n))B_{\mathcal{G}}(0,rn\tilde{\psi}(n)). For this particular choice of embedding and correspondence, we will estimate each of the terms in the definition of Δc\Delta_{c}^{*} given at (18), where (Z,dZ)(Z,d_{Z}) is given by B𝒢(0,rnψ~(n))B_{\mathcal{G}}(0,rn\tilde{\psi}(n)) equipped with the rescaled resistance metric, i.e. R𝒢(,)/nψ~(n)R_{\mathcal{G}}(\cdot,\cdot)/n\tilde{\psi}(n). (Note that πn\pi_{n} is indeed an isometry from (Vn(r),dE|Vn(r))(V_{n}^{(r)},d_{E}|_{V_{n}^{(r)}}) into (Z,dZ)(Z,d_{Z}).) Firstly, observe that μn(r)πn1\mu_{n}^{(r)}\circ\pi_{n}^{-1} is the measure on B𝒢(0,rnψ~(n))B_{\mathcal{G}}(0,rn\tilde{\psi}(n)) obtained by assigning mass μ𝒢({S0,,ST1})/λn{\mu_{\mathcal{G}}(\{S_{0},\dots,S_{T_{1}}\})}/{\lambda n} to ST1S_{T_{1}}, and, for each i=2,,R(n,r)i=2,\dots,R(n,r), assigning mass μ𝒢({STi1+1,,STi})/λn\mu_{\mathcal{G}}(\{S_{T_{i-1}+1},\dots,S_{T_{i}}\})/\lambda n to STiS_{T_{i}} (and assigning mass 0 to all other elements of B𝒢(0,rnψ~(n))B_{\mathcal{G}}(0,rn\tilde{\psi}(n))). Hence, applying the definition of the Prohorov metric, it readily follows that

dZP(μn(r)πn1,μ𝒢(B𝒢(0,rnψ~(n)))λn)\displaystyle d_{Z}^{P}\left(\mu_{n}^{(r)}\circ\pi_{n}^{-1},\frac{\mu_{\mathcal{G}}(\cdot\cap B_{\mathcal{G}}(0,rn\tilde{\psi}(n)))}{\lambda n}\right)
\displaystyle\leq dZP(μn(r)πn1,μ𝒢({S0,S1,,STR(n,r)})λn)+μ𝒢({STR(n,r)+1,,STR(n,r)+1})λnd_{Z}^{P}\left(\mu_{n}^{(r)}\circ\pi_{n}^{-1},\frac{\mu_{\mathcal{G}}(\cdot\cap\{S_{0},S_{1},\dots,S_{T_{R(n,r)}}\})}{\lambda n}\right)+\frac{\mu_{\mathcal{G}}(\{S_{T_{R(n,r)}+1},\dots,S_{T_{R(n,r)+1}}\})}{\lambda n}
\displaystyle\leq maxmT1R𝒢(Sm,ST1)nψ~(n)+maxi=2,,R(n,r)maxm{Ti1+1,,Ti}R𝒢(Sm,STi)nψ~(n)\displaystyle\max_{m\leq{T_{1}}}\frac{R_{\mathcal{G}}(S_{m},S_{T_{1}})}{n\tilde{\psi}(n)}+\max_{i=2,\dots,R(n,r)}\max_{m\in\{T_{i-1}+1,\dots,{T_{i}}\}}\frac{R_{\mathcal{G}}(S_{m},S_{T_{i}})}{n\tilde{\psi}(n)}
+μ𝒢({STR(n,r)+1,,STR(n,r)+1})λn.\displaystyle\hskip 60.0pt+\frac{\mu_{\mathcal{G}}(\{S_{T_{R(n,r)}+1},\dots,S_{T_{R(n,r)+1}}\})}{\lambda n}.

Since maxmT1R𝒢(Sm,ST1)T1\max_{m\leq{T_{1}}}R_{\mathcal{G}}(S_{m},S_{T_{1}})\leq T_{1} is 𝐏\mathbf{P}-a.s. a finite random variable, the first term converges to 0, 𝐏\mathbf{P}-a.s. As for the second and third terms, for large nn, the sum of these is bounded above by

maxi=2,,R(n,r)+12(TiTi1)nψ~(n).\max_{i=2,\dots,R(n,r)+1}\frac{2(T_{i}-T_{i-1})}{n\tilde{\psi}(n)}. (21)

Now, by Theorem 1.2(b), it holds that, for any T(0,)T\in(0,\infty),

(n1sup{m:R𝒢(0,Sm)tnψ~(n)})t[0,T](t)t[0,T]\left(n^{-1}\sup\left\{m:\>R_{\mathcal{G}}(0,S_{m})\leq tn\tilde{\psi}(n)\right\}\right)_{t\in[0,T]}\rightarrow(t)_{t\in[0,T]}

in 𝐏\mathbf{P}-probability. Combined with (11), it follows that

R(n,r)n(logn)1/2rτ\frac{R(n,r)}{n(\log n)^{-1/2}}\rightarrow\frac{r}{\tau} (22)

in 𝐏\mathbf{P}-probability. Therefore, again applying (11), with probability going to one as nn\rightarrow\infty,

maxi=2,,R(n,r)+12(TiTi1)nψ~(n)maxi2nr2(TiTi1)nψ~(n),\max_{i=2,\dots,R(n,r)+1}\frac{2(T_{i}-T_{i-1})}{n\tilde{\psi}(n)}\leq\max_{i\in\mathcal{I}_{2nr}}\frac{2(T_{i}-T_{i-1})}{n\tilde{\psi}(n)}, (23)

where the notation 2nr\mathcal{I}_{2nr} is defined as in the statement of Lemma 2.3. Applying the latter result, we see that the right-hand side above converges to zero in 𝐏\mathbf{P}-probability, and thus we have established that, in 𝐏\mathbf{P}-probability,

dZP(μn(r)πn1,μ𝒢(B𝒢(0,rnψ~(n)))λn)0.d_{Z}^{P}\left(\mu_{n}^{(r)}\circ\pi_{n}^{-1},\frac{\mu_{\mathcal{G}}(\cdot\cap B_{\mathcal{G}}(0,rn\tilde{\psi}(n)))}{\lambda n}\right)\rightarrow 0.

Secondly,

max(x,x)𝒞nR𝒢(πn(x),x)nψ~(n)\displaystyle\max_{(x,x^{\prime})\in\mathcal{C}_{n}}\frac{R_{\mathcal{G}}\left(\pi_{n}(x),x^{\prime}\right)}{n\tilde{\psi}(n)} \displaystyle\leq maxmT1R𝒢(Sm,ST1)nψ~(n)+maxi=2,,R(n,r)maxm{Ti1+1,,Ti}R𝒢(Sm,STi)nψ~(n)\displaystyle\max_{m\leq{T_{1}}}\frac{R_{\mathcal{G}}(S_{m},S_{T_{1}})}{n\tilde{\psi}(n)}+\max_{i=2,\dots,R(n,r)}\max_{m\in\{T_{i-1}+1,\dots,{T_{i}}\}}\frac{R_{\mathcal{G}}(S_{m},S_{T_{i}})}{n\tilde{\psi}(n)} (24)
+maxm{TR(n,r)+1,,TR(n,r)+1}R𝒢(Sm,STR(n,r))nψ~(n).\displaystyle\hskip 60.0pt+\max_{m\in\{T_{R(n,r)}+1,\dots,{T_{R(n,r)+1}}\}}\frac{R_{\mathcal{G}}(S_{m},S_{T_{R(n,r)}})}{n\tilde{\psi}(n)}.

Again, the first term converges to zero 𝐏\mathbf{P}-a.s. Moreover, bounding the sum of the second and third terms by the expression at (21) and arguing as at (23) shows that they converge to zero in 𝐏\mathbf{P}-probability. Thirdly,

max(x,x)𝒞n|xd𝒢(0,x)nϕ(n)|\displaystyle\max_{(x,x^{\prime})\in\mathcal{C}_{n}}\left|x-\frac{d_{\mathcal{G}}(0,x^{\prime})}{n\phi(n)}\right| \displaystyle\leq maxi=1,,R(n,r)|R𝒢(0,STi)nψ~(n)d𝒢(0,STi)nϕ(n)|\displaystyle\max_{i=1,\dots,R(n,r)}\left|\frac{R_{\mathcal{G}}\left(0,S_{T_{i}}\right)}{n\tilde{\psi}(n)}-\frac{d_{\mathcal{G}}\left(0,S_{T_{i}}\right)}{n{\phi}(n)}\right|
+maxmT1d𝒢(Sm,ST1)nϕ(n)+maxi=2,,R(n,r)maxm{Ti1+1,,Ti}d𝒢(Sm,STi)nϕ(n)\displaystyle+\max_{m\leq{T_{1}}}\frac{d_{\mathcal{G}}(S_{m},S_{T_{1}})}{n{\phi}(n)}+\max_{i=2,\dots,R(n,r)}\max_{m\in\{T_{i-1}+1,\dots,{T_{i}}\}}\frac{d_{\mathcal{G}}(S_{m},S_{T_{i}})}{n{\phi}(n)}
+maxm{TR(n,r)+1,,TR(n,r)+1}d𝒢(Sm,STR(n,r))nϕ(n).\displaystyle\hskip 60.0pt+\max_{m\in\{T_{R(n,r)}+1,\dots,{T_{R(n,r)+1}}\}}\frac{d_{\mathcal{G}}(S_{m},S_{T_{R(n,r)}})}{n{\phi}(n)}.

The second, third and fourth terms can be dealt with exactly as were the terms on the right-hand side of (24). As for the first term, by Theorem 1.2(d) and (22), we have that, with probability going to one as nn\rightarrow\infty,

maxi=1,,R(n,r)|R𝒢(0,STi)nψ~(n)d𝒢(0,STi)nϕ(n)|maxm=1,,2nr|R𝒢(0,Sm)nψ~(n)d𝒢(0,Sm)nϕ(n)|.\max_{i=1,\dots,R(n,r)}\left|\frac{R_{\mathcal{G}}\left(0,S_{T_{i}}\right)}{n\tilde{\psi}(n)}-\frac{d_{\mathcal{G}}\left(0,S_{T_{i}}\right)}{n{\phi}(n)}\right|\leq\max_{m=1,\dots,2nr}\left|\frac{R_{\mathcal{G}}\left(0,S_{m}\right)}{n\tilde{\psi}(n)}-\frac{d_{\mathcal{G}}\left(0,S_{m}\right)}{n{\phi}(n)}\right|.

Hence, from Theorem 1.2(b),(c), this also converges to zero in 𝐏\mathbf{P}-probability. This completes the proof of (20).

As a consequence of (20), to establish part (a) of the proposition, it will suffice to check that

𝒴n(r)𝒳(r)\mathcal{Y}_{n}^{(r)}\rightarrow\mathcal{X}^{(r)} (25)

in 𝐏\mathbf{P}-probability in the space (𝕂c,Δc)(\mathbb{K}^{*}_{c},\Delta^{*}_{c}), where (M,dM)(M,d_{M}) is \mathbb{R} equipped with the Euclidean distance. For this, we start by noting that (Vn(r),dE|Vn(r))(V_{n}^{(r)},d_{E}|_{V_{n}^{(r)}}) is isometrically embedded into ([0,r],dE(r))([0,r],d_{E}^{(r)}) by the identity map. Moreover, similarly to above, it is possible to define a correspondence 𝒞\mathcal{C} between Vn(r)V_{n}^{(r)} and [0,r][0,r] by pairing x0x_{0} with 0, xix_{i} with each element of (xi1,xi](x_{i-1},x_{i}] for i=1,,R(n,r)i=1,\dots,R(n,r), and xR(n,r)x_{R(n,r)} with each remaining element of [0,r][0,r]. Now, if a(0,r]a\in(0,r] is such that R𝒢(0,ST1)anψ~(n)R_{\mathcal{G}}(0,S_{T_{1}})\leq an\tilde{\psi}(n), then

μn(r)([0,a])=μ𝒢(S[0,TR(n,a)])λn.\mu_{n}^{(r)}\left([0,a]\right)=\frac{\mu_{\mathcal{G}}\left(S_{[0,T_{R(n,a)}]}\right)}{\lambda n}.

Hence, by applying Theorem 1.2(a),(d) and (22), we find that, for each fixed a(0,r]a\in(0,r], μn(r)([0,a])a\mu_{n}^{(r)}\left([0,a]\right)\rightarrow a in 𝐏\mathbf{P}-probability as nn\to\infty. It readily follows that, writing dPd^{P} for the Prohorov distance on \mathbb{R},

dP(μn(r),(r))0d^{P}\left(\mu_{n}^{(r)},\mathcal{L}^{(r)}\right)\to 0

in 𝐏\mathbf{P}-probability as nn\to\infty. Moreover,

sup(x,x)𝒞dE(x,x)x1+supi=2,,R(n,r)(xixi1)+rxR(n,r).\sup_{(x,x^{\prime})\in\mathcal{C}}d_{E}(x,x^{\prime})\leq x_{1}+\sup_{i=2,\dots,R(n,r)}\left(x_{i}-x_{i-1}\right)+r-x_{R(n,r)}. (26)

Now, x1=R𝒢(0,ST1)/nψ~(n)0x_{1}=R_{\mathcal{G}}(0,S_{T_{1}})/n\tilde{\psi}(n)\rightarrow 0, 𝐏\mathbf{P}-a.s., and the sum of the remaining terms can be bounded above by the expression at (21), and so converges to zero in 𝐏\mathbf{P}-probability. In conclusion, since the maps we consider for both the approximating and limiting spaces are the identity map, we have proved that

Δc(𝒴n(r),𝒳(r))0\Delta_{c}^{*}\left(\mathcal{Y}_{n}^{(r)},\mathcal{X}^{(r)}\right)\rightarrow 0

in 𝐏\mathbf{P}-probability as nn\rightarrow\infty, which confirms (25).

The proof of part (b) is similar, though we need to deal with the embedding into 4\mathbb{R}^{4}. To do this, we will first apply Skorohod coupling to give us a sequence (Sn)n1(S^{n})_{n\geq 1} of copies of SS built on the same probability space as WW such that, 𝐏\mathbf{P}-a.s.,

(n1/2Sntn)t0(Wt)t0,\left(n^{-1/2}S^{n}_{nt}\right)_{t\geq 0}\rightarrow\left(W_{t}\right)_{t\geq 0}, (27)

uniformly on compact intervals of tt. We then suppose 𝒳~n(r)\tilde{\mathcal{X}}_{n}^{(r)} is built from SnS^{n}, and further define

𝒴~n(r):=(Vn(r),dE|Vn(r),μn(r),0,fn(r)),\tilde{\mathcal{Y}}_{n}^{(r)}:=\left(V_{n}^{(r)},d_{E}|_{V_{n}^{(r)}},\mu_{n}^{(r)},0,f_{n}^{(r)}\right),

where the first four components are defined as for 𝒴n(r)\mathcal{Y}_{n}^{(r)} above (but now from SnS^{n}), and fn(r)f_{n}^{(r)} is the map from Vn(r)V_{n}^{(r)} into 4\mathbb{R}^{4} given by fn(r)(xi)=n1/2STinf_{n}^{(r)}(x_{i})=n^{-1/2}S^{n}_{T_{i}} for i=1,,R(n,r)i=1,\dots,R(n,r), and fn(r)(x0)=0f_{n}^{(r)}(x_{0})=0. To show that

Δc(𝒳~n(r),𝒴~n(r))0\Delta_{c}^{*}\left(\tilde{\mathcal{X}}_{n}^{(r)},\tilde{\mathcal{Y}}_{n}^{(r)}\right)\rightarrow 0 (28)

in 𝐏\mathbf{P}-probability as nn\rightarrow\infty, we proceed as in the first part of the proof. We bound the only term not already dealt with as follows:

max(x,x)𝒞n|fn(r)(x)xn1/2|\displaystyle\max_{(x,x^{\prime})\in\mathcal{C}_{n}}\left|f_{n}^{(r)}(x)-\frac{x^{\prime}}{n^{1/2}}\right| \displaystyle\leq maxmT1|SmnST1n|n1/2+maxi=2,,R(n,r)maxm{Ti1+1,,Ti}|SmnSTin|n1/2\displaystyle\max_{m\leq{T_{1}}}\frac{|S_{m}^{n}-S_{T_{1}}^{n}|}{n^{1/2}}+\max_{i=2,\dots,R(n,r)}\max_{m\in\{T_{i-1}+1,\dots,{T_{i}}\}}\frac{|S_{m}^{n}-S_{T_{i}}^{n}|}{n^{1/2}} (29)
+maxm{TR(n,r)+1,,TR(n,r)+1}|SmnSTR(n,r)n|n1/2\displaystyle\hskip 60.0pt+\max_{m\in\{T_{R(n,r)}+1,\dots,T_{R(n,r)+1}\}}\frac{|S_{m}^{n}-S_{T_{R(n,r)}}^{n}|}{n^{1/2}}
\displaystyle\leq 3maxm,kTR(n,r)+1:|mk|δ(n,r)|SmnSkn|n1/2,\displaystyle 3\max_{\begin{subarray}{c}m,k\leq T_{R(n,r)+1}:\\ |m-k|\leq\delta(n,r)\end{subarray}}\frac{|S_{m}^{n}-S_{k}^{n}|}{n^{1/2}},

where

δ(n,r):=T1+maxi=2,,R(n,r)+1(TiTi1).\delta(n,r):=T_{1}+\max_{i=2,\dots,R(n,r)+1}(T_{i}-T_{i-1}).

Now, from (11) and (22), we have that, with 𝐏\mathbf{P}-probability converging to one as nn\to\infty, R(n,r)+1N2nrR(n,r)+1\leq N_{2nr}. It therefore follows from Lemma 2.3 that n1δ(n,r)0n^{-1}\delta(n,r)\rightarrow 0 in 𝐏\mathbf{P}-probability as nn\rightarrow\infty. Moreover, from Theorem 1.2(d) and (22), we have that n1(TR(n,r)+1)rn^{-1}(T_{R(n,r)+1})\rightarrow r in 𝐏\mathbf{P}-probability. Combining these observations with (27) shows that the bound at (29) converges to zero in 𝐏\mathbf{P}-probability, and hence we arrive at (28). Thus it remains to show that

𝒴~n(r)𝒳~(r)\tilde{\mathcal{Y}}_{n}^{(r)}\rightarrow\tilde{\mathcal{X}}^{(r)} (30)

in 𝐏\mathbf{P}-probability in the space (𝕂c,Δc)(\mathbb{K}^{*}_{c},\Delta^{*}_{c}), where (M,dM)(M,d_{M}) is 4\mathbb{R}^{4} equipped with the Euclidean distance. The remaining term to deal with is now

sup(x,x)𝒞|fn(r)(x)Wx|\displaystyle\sup_{(x,x^{\prime})\in\mathcal{C}}\left|f_{n}^{(r)}(x)-W_{x^{\prime}}\right| \displaystyle\leq maxi=1,,R(n,r)supx[xi1,xi]|WxWxi|+supx[xR(n,r),r]|WxWxR(n,r)|\displaystyle\max_{i=1,\dots,R(n,r)}\sup_{x^{\prime}\in[x_{i-1},x_{i}]}\left|W_{x^{\prime}}-W_{x_{i}}\right|+\sup_{x^{\prime}\in[x_{R(n,r)},r]}\left|W_{x^{\prime}}-W_{x_{R(n,r)}}\right|
+maxi=1,,R(n,r)|WxiSTinn1/2|.\displaystyle\hskip 60.0pt+\max_{i=1,\dots,R(n,r)}\left|W_{x_{i}}-\frac{S^{n}_{T_{i}}}{n^{1/2}}\right|.

By (26) and the continuity of WW, the first two terms converge to zero in 𝐏\mathbf{P}-probability. For the third term, we have

maxi=1,,R(n,r)|WxiSTinn1/2|maxi=1,,R(n,r)|WxiSnxinn1/2|+maxi=1,,R(n,r)|SnxinSTinn1/2|.\max_{i=1,\dots,R(n,r)}\left|W_{x_{i}}-\frac{S^{n}_{T_{i}}}{n^{1/2}}\right|\leq\max_{i=1,\dots,R(n,r)}\left|W_{x_{i}}-\frac{S^{n}_{nx_{i}}}{n^{1/2}}\right|+\max_{i=1,\dots,R(n,r)}\left|\frac{S_{nx_{i}}^{n}-S^{n}_{T_{i}}}{n^{1/2}}\right|. (31)

Since xR(n,r)x_{R(n,r)} converges to rr in 𝐏\mathbf{P}-probability, the first term above converges to zero in 𝐏\mathbf{P}-probability by (27). As for the second term above, we first note that

n1maxi=1,,R(n,r)|nxiTi|suptn1TR(n,r)|R𝒢(0,Snt)nψ~(n)t|.n^{-1}\max_{i=1,\dots,R(n,r)}\left|nx_{i}-T_{i}\right|\leq\sup_{t\leq n^{-1}T_{R(n,r)}}\left|\frac{R_{\mathcal{G}}(0,S_{nt})}{n\tilde{\psi}(n)}-t\right|.

Again applying the fact that n1TR(n,r)rn^{-1}T_{R(n,r)}\rightarrow r in 𝐏\mathbf{P}-probability, Theorem 1.2(b) gives that the above upper bound converges to zero in 𝐏\mathbf{P}-probability. In combination with (27), this establishes that the second term at (31) also converges to zero in 𝐏\mathbf{P}-probability. Thus we have completed the proof of (30). ∎

The next result gives that the resistance across balls diverges at the same rate as the point-to-point resistance.

Proposition 3.2.

It holds that

limrlim infn𝐏(R𝒢(0,B𝒢(0,rnψ~(n))c)Λnψ~(n))=1,Λ0.\lim_{r\rightarrow\infty}\liminf_{n\rightarrow\infty}\mathbf{P}\left(R_{\mathcal{G}}\left(0,B_{\mathcal{G}}(0,rn\tilde{\psi}(n))^{c}\right)\geq\Lambda n\tilde{\psi}(n)\right)=1,\qquad\forall\Lambda\geq 0.
Proof.

Using the notation from the proof of Proposition 3.1, we clearly have that

R𝒢(0,B𝒢(0,rnψ~(n))c)nψ~(n)xR(n,r).R_{\mathcal{G}}\left(0,B_{\mathcal{G}}(0,rn\tilde{\psi}(n))^{c}\right)\geq n\tilde{\psi}(n)x_{R(n,r)}.

Hence, since xR(n,r)rx_{R(n,r)}\rightarrow r in 𝐏\mathbf{P}-probability as nn\rightarrow\infty (as follows from the convergence to zero of the expression at (26)), it holds that

lim infn𝐏(R𝒢(0,B𝒢(0,rnψ~(n))c)Λnψ~(n))𝐏(rΛ+1).\liminf_{n\rightarrow\infty}\mathbf{P}\left(R_{\mathcal{G}}\left(0,B_{\mathcal{G}}(0,rn\tilde{\psi}(n))^{c}\right)\geq\Lambda n\tilde{\psi}(n)\right)\geq\mathbf{P}\left(r\geq\Lambda+1\right).

Taking rΛ+1r\geq\Lambda+1 results in the right-hand side being equal to one, and so we obtain the desired result. ∎

We are nearly ready to complete the proof of Theorem 1.1. It remains to introduce the resistance form framework in which [9, Theorem 7.2] is stated. (We highlight that the idea of a resistance form was originally pioneered by Kigami in the context of analysis on self-similar fractals, see [16].) To this end, we let 𝔽\mathbb{F}^{*} be the collection of quintuplets of the form (F,R,μ,ρ,f)(F,R,\mu,\rho,f), where:

  • FF is a non-empty set;

  • RR is a resistance metric on FF (i.e. for each finite VFV\subseteq F, there exist conductances between the elements of VV for which R|V×VR|_{V\times V} is the associated effective resistance metric, see [16, Definition 2.3.2]), and closed bounded sets in (F,R)(F,R) are compact;

  • μ\mu is a locally finite Borel regular measure of full support on (F,R)(F,R);

  • ρ\rho is a distinguished point in FF;

  • ff is a continuous map from (F,R)(F,R) into some image space (M,dM)(M,d_{M}), which is a fixed complete, separable metric space.

We highlight that the resistance metric is associated with a certain quadratic form known as a resistance form (,)(\mathcal{E},\mathcal{F}), as characterised by

R(x,y)1=inf{(u,u):u,u(x)=0,u(y)=1},x,yF,xy,R(x,y)^{-1}=\inf\left\{\mathcal{E}(u,u):\>u\in\mathcal{F},\>u(x)=0,\>u(y)=1\right\},\qquad\forall x,y\in F,\>x\neq y,

(see [16, Section 2.3]), and we will further assume that for elements of 𝔽\mathbb{F}^{*} this form is regular in the sense of [17, Definition 6.2]. In particular, this ensures the existence of a related regular Dirichlet form (,𝒟)(\mathcal{E},\mathcal{D}) on L2(F,μ)L^{2}(F,\mu) (see [17, Theorem 9.4]), which we suppose is recurrent. The following result shows that the spaces introduced above fall into this class.
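On a finite network with unit conductances, the variational formula above can be evaluated in closed form: the minimiser is harmonic off {x,y}\{x,y\}, and R(x,y)=(exey)TL+(exey)R(x,y)=(e_x-e_y)^{T}L^{+}(e_x-e_y), where LL is the graph Laplacian and L+L^{+} its Moore–Penrose pseudoinverse. The following Python sketch (illustrative only, not part of the framework) evaluates this on a finite graph.

```python
import numpy as np

def effective_resistance(edges, n, x, y):
    """R(x, y) for the graph on {0, ..., n-1} with unit conductances on
    the given edges, via the pseudoinverse of the graph Laplacian.  This
    evaluates the infimum in R(x,y)^{-1} = inf{E(u,u): u(x)=0, u(y)=1}."""
    L = np.zeros((n, n))
    for a, b in edges:
        L[a, a] += 1.0
        L[b, b] += 1.0
        L[a, b] -= 1.0
        L[b, a] -= 1.0
    Lp = np.linalg.pinv(L)      # pseudoinverse: L is singular (zero row sums)
    e = np.zeros(n)
    e[x], e[y] = 1.0, -1.0
    return float(e @ Lp @ e)
```

For the path graph 0–1–2 this gives R(0,2)=2R(0,2)=2, the series law, consistent with the additivity of R𝒢R_{\mathcal{G}} across cut points used throughout Section 2.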

Lemma 3.3.

𝐏\mathbf{P}-a.s., 𝒳n\mathcal{X}_{n}, 𝒳~n\tilde{\mathcal{X}}_{n}, 𝒳\mathcal{X} and 𝒳~\tilde{\mathcal{X}} are all elements of 𝔽\mathbb{F}^{*}.

Proof.

By construction, R𝒢R_{\mathcal{G}} is the resistance metric associated with the resistance form

𝒢(u,v):=12x,yV(𝒢):{x,y}E(𝒢)(u(x)u(y))(v(x)v(y)),u,v𝒢,\mathcal{E}_{\mathcal{G}}(u,v):=\frac{1}{2}\sum_{\begin{subarray}{c}x,y\in V(\mathcal{G}):\\ \{x,y\}\in E(\mathcal{G})\end{subarray}}(u(x)-u(y))(v(x)-v(y)),\qquad\forall u,v\in\mathcal{F}_{\mathcal{G}},

where 𝒢\mathcal{F}_{\mathcal{G}} is the collection of functions uu on V(𝒢)V(\mathcal{G}) such that 𝒢(u,u)<\mathcal{E}_{\mathcal{G}}(u,u)<\infty, see [17, Example 3.5]. Moreover, as was commented above Proposition 3.1, R𝒢R_{\mathcal{G}} balls are finite, and so compact. Not only does this confirm that (F,R)(F,R) and μ\mu satisfy the properties in the first three bullet points above, but it also implies that 𝒢\mathcal{F}_{\mathcal{G}} contains all compactly supported functions. The latter observation gives that (𝒢,𝒢)(\mathcal{E}_{\mathcal{G}},\mathcal{F}_{\mathcal{G}}) is regular. As for the recurrence of the associated Dirichlet form (𝒢,𝒟𝒢)(\mathcal{E}_{\mathcal{G}},\mathcal{D}_{\mathcal{G}}) on L2(V(𝒢),μ𝒢)L^{2}(V(\mathcal{G}),\mu_{\mathcal{G}}), this is equivalent to

limrR𝒢(0,B𝒢(0,r)c)=,\lim_{r\rightarrow\infty}R_{\mathcal{G}}\left(0,B_{\mathcal{G}}(0,r)^{c}\right)=\infty, (32)

see [9, Lemma 2.3], for instance. This divergence is given by Proposition 3.2. Hence we obtain that 𝒳1\mathcal{X}_{1} and 𝒳~1\tilde{\mathcal{X}}_{1} are in 𝔽\mathbb{F}^{*}, 𝐏\mathbf{P}-a.s. The rescaled spaces 𝒳n\mathcal{X}_{n} and 𝒳~n\tilde{\mathcal{X}}_{n} can be dealt with similarly.

By [17, Proposition 16.4], (,dE)(\mathbb{R},d_{E}) is a resistance metric space with associated regular resistance form

(2)(u,v):=u(x)v(x)𝑑x,u,v(2),\mathcal{E}^{(2)}(u,v):=\int_{-\infty}^{\infty}u^{\prime}(x)v^{\prime}(x)dx,\qquad\forall u,v\in\mathcal{F}^{(2)},

where (2)\mathcal{F}^{(2)} is the collection of uu in C()C(\mathbb{R}) such that there exists an (2)\mathcal{E}^{(2)}-Cauchy sequence of functions in 0(2):={v:vC1(),v(x)2𝑑x<}\mathcal{F}_{0}^{(2)}:=\{v:\>v\in C^{1}(\mathbb{R}),\>\int_{-\infty}^{\infty}v^{\prime}(x)^{2}dx<\infty\} that converges uniformly on compacts to uu. By taking the trace onto the one-sided space [0,)[0,\infty) as per [17, Theorem 8.4], we find that ([0,),dE)([0,\infty),d_{E}) is also a resistance metric space associated with a regular resistance form (~(2),~(2))(\tilde{\mathcal{E}}^{(2)},\tilde{\mathcal{F}}^{(2)}). To check the recurrence of the associated Dirichlet form (~(2),𝒟~(2))(\tilde{\mathcal{E}}^{(2)},\tilde{\mathcal{D}}^{(2)}) on L2([0,),)L^{2}([0,\infty),\mathcal{L}), we again appeal to (32). In this case, we have that the resistance from 0 to the boundary of the Euclidean ball of radius rr centred at 0 is simply given by rr, and hence (32) holds. Since the remaining conditions are straightforward to check, we have thus established that 𝒳\mathcal{X} and 𝒳~\tilde{\mathcal{X}} lie in 𝔽\mathbb{F}^{*}, 𝐏\mathbf{P}-a.s. ∎
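The half-line computation invoked in the proof, namely that the resistance from 0 to the complement of a ball of radius r is exactly r, can also be checked numerically. The sketch below (purely illustrative) solves the discrete Dirichlet problem on a finite truncation of the half-line graph with unit resistors and evaluates R(0, B(0,r)ᶜ)⁻¹ = inf{ℰ(u,u) : u(0)=1, u = 0 off the ball}; by the series law the answer is r.

```python
import numpy as np

def resistance_to_ball_complement(r, N=200):
    """R(0, {r, r+1, ..., N}) on the half-line graph with unit resistors,
    via R^{-1} = inf{ E(u, u) : u(0) = 1, u = 0 on {r, ..., N} }.
    The minimiser is the harmonic (here: linear) interpolation."""
    # Discrete Dirichlet problem on the interior vertices 1, ..., r-1:
    # the path Laplacian restricted to the interior, boundary data u(0)=1.
    n = r - 1
    L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.zeros(n)
    b[0] = 1.0                              # contribution of u(0) = 1
    u_int = np.linalg.solve(L, b)
    u = np.concatenate(([1.0], u_int, np.zeros(N - r + 1)))
    energy = np.sum(np.diff(u) ** 2)        # E(u, u) for unit conductances
    return 1.0 / energy

for r in (5, 10, 50):
    print(r, resistance_to_ball_complement(r))   # approximately r
```

Since the resistance grows linearly in r, the divergence required by (32) is immediate in this one-dimensional setting.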

Now, since the Dirichlet forms associated with elements of 𝔽\mathbb{F}^{*} are regular, they are naturally associated with Hunt processes. In particular, applying the spatial embeddings, the random processes corresponding to 𝒳n\mathcal{X}_{n} and 𝒳~n\tilde{\mathcal{X}}_{n} are given by

(1nϕ(n)d𝒢(0,Xtn2ψ(n)cont))t0,(1n1/2Xtn2ψ(n)cont)t0,\left(\frac{1}{n\phi(n)}d_{\mathcal{G}}\left(0,X^{cont}_{\lfloor tn^{2}\psi(n)\rfloor}\right)\right)_{t\geq 0},\qquad\left(\frac{1}{n^{1/2}}X^{cont}_{\lfloor tn^{2}\psi(n)\rfloor}\right)_{t\geq 0},

respectively, where (Xtcont)t0(X^{cont}_{t})_{t\geq 0} is the continuous-time random walk on V(𝒢)V(\mathcal{G}) with jump chain equal to (Xn)n0(X_{n})_{n\geq 0} and mean one exponential holding times (see [4, Remark 5.7], for example). Moreover, the random processes corresponding to 𝒳\mathcal{X} and 𝒳~\tilde{\mathcal{X}} are given by

(|Bt|)t0,(W|Bt|)t0,\left(|B_{t}|\right)_{t\geq 0},\qquad\left(W_{|B_{t}|}\right)_{t\geq 0},

respectively, where we use the fact that |B||B| has the same law as reflected Brownian motion on [0,)[0,\infty) (see [1, Example 8.1], as well as [1, Remarks 1.6 and 3.1] for the connection between the framework of that paper and that of Kigami). With these preparations in place we are now ready to proceed with the proof of the main result.

Proof of Theorem 1.1.

Given Propositions 3.1 and 3.2, Lemma 3.3, and the descriptions of the processes associated to the resistance spaces that precede this proof, applying [9, Theorem 7.2] immediately yields that the conclusions of Theorem 1.1 hold when XX is replaced by its continuous-time counterpart XcontX^{cont}. Since XcontX^{cont} has jump rate one from every vertex and the limiting processes are continuous, it is then straightforward to obtain the same results for XX. ∎

We conclude the article by checking the further random walk properties that are stated as Corollary 1.4.

Proof of Corollary 1.4.

Defining R(n,1)R(n,1) as at (19), we have that S[0,TR(n,1)]B𝒢(0,nψ~(n))S[0,TR(n,1)+1]S_{[0,T_{R(n,1)}]}\subseteq B_{\mathcal{G}}(0,n\tilde{\psi}(n))\subseteq S_{[0,T_{R(n,1)+1}]}. By Theorem 1.2(d) and (22), it holds that n1TR(n,1)n^{-1}T_{R(n,1)} and n1TR(n,1)+1n^{-1}T_{R(n,1)+1} both converge to one in 𝐏\mathbf{P}-probability. Together with Theorem 1.2(a), we thus obtain that

μ𝒢(B𝒢(0,nψ~(n)))nλ\frac{\mu_{\mathcal{G}}\left(B_{\mathcal{G}}\left(0,n\tilde{\psi}(n)\right)\right)}{n}\to\lambda (33)

in 𝐏\mathbf{P}-probability. A reparameterisation applying [27, Remark 2.1.3] (which implies that ψ~(n)ψ~(nψ~(n))\tilde{\psi}(n)\sim\tilde{\psi}(n\tilde{\psi}(n))) then gives that

μ𝒢(B𝒢(0,n))nψ~(n)1λ\frac{\mu_{\mathcal{G}}\left(B_{\mathcal{G}}\left(0,n\right)\right)}{n\tilde{\psi}(n)^{-1}}\to\lambda

in 𝐏\mathbf{P}-probability. Combined with Proposition 3.2, this establishes that [18, Assumption 1.2(1)] holds in our setting with d=R𝒢d=R_{\mathcal{G}}, v(x)=xψ~(x)1v(x)=x\tilde{\psi}(x)^{-1} and r(x)=xr(x)=x. Therefore we can apply [18, Proposition 1.3] to deduce that

limΛinfr1𝐏(Λ1𝐄0𝒢(τ~r𝒢)r2ψ~(r)1Λ)=1,\lim_{\Lambda\rightarrow\infty}\inf_{r\geq 1}\mathbf{P}\left(\Lambda^{-1}\leq\frac{\mathbf{E}_{0}^{\mathcal{G}}\left(\tilde{\tau}^{\mathcal{G}}_{r}\right)}{r^{2}\tilde{\psi}(r)^{-1}}\leq\Lambda\right)=1,

where τ~r𝒢:=inf{n0:R𝒢(0,Xn)r}\tilde{\tau}_{r}^{\mathcal{G}}:=\inf\{n\geq 0:\>R_{\mathcal{G}}(0,X_{n})\geq r\}. Now, by Theorem 1.2(b),(c), a d𝒢d_{\mathcal{G}}-ball of radius nϕ(n)n\phi(n) is comparable with an R𝒢R_{\mathcal{G}}-ball of radius nψ~(n)n\tilde{\psi}(n). Equivalently, after reparameterisation, a d𝒢d_{\mathcal{G}}-ball of radius rr is comparable with an R𝒢R_{\mathcal{G}}-ball of radius rψ~(r)ϕ(r)1r\tilde{\psi}(r)\phi(r)^{-1}. Hence we obtain part (a) of the result.
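For orientation, in the simplest one-dimensional setting (simple random walk on ℤ, where the correction ψ̃ is absent) the r² scaling of the mean exit time in the display above is exact: the expected time to exit (−r, r) started from x is r² − x². The following sketch, an illustrative check rather than anything used in the proof, solves the corresponding linear system.

```python
import numpy as np

def mean_exit_time(r):
    """E_0[time for SRW on Z to exit (-r, r)], solved exactly via
    f(x) = 1 + (f(x-1) + f(x+1)) / 2 with f(-r) = f(r) = 0.
    The classical solution is f(x) = r^2 - x^2, so f(0) = r^2."""
    n = 2 * r - 1                  # interior states -r+1, ..., r-1
    A = np.eye(n) - 0.5 * (np.eye(n, k=1) + np.eye(n, k=-1))
    f = np.linalg.solve(A, np.ones(n))
    return f[r - 1]                # index of the state 0

for r in (2, 8, 32):
    print(r, mean_exit_time(r))    # approximately r**2
```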

Towards proving part (b), we start by considering the continuity of the heat kernel. In particular, applying [11, Proposition 12], we have that

suptI|λnptn2ψ(n)𝒢(0,Snx)λnptn2ψ(n)𝒢(0,Sny)|2CR𝒢(Snx,Sny)h𝒢1(Cn2ψ(n))n2ψ(n)2,x,y0,\sup_{t\in I}\left|\lambda np_{tn^{2}\psi(n)}^{\mathcal{G}}\left(0,S_{nx}\right)-\lambda np_{tn^{2}\psi(n)}^{\mathcal{G}}\left(0,S_{ny}\right)\right|^{2}\leq\frac{CR_{\mathcal{G}}\left(S_{nx},S_{ny}\right)h_{\mathcal{G}}^{-1}\left(Cn^{2}\psi(n)\right)}{n^{2}\psi(n)^{2}},\>\forall x,y\geq 0, (34)

where h𝒢1(t):=sup{r:rμ𝒢(B𝒢(0,r))t}h_{\mathcal{G}}^{-1}(t):=\sup\{r:\>r\mu_{\mathcal{G}}(B_{\mathcal{G}}(0,r))\leq t\} and CC is a deterministic constant that only depends on II. Now, from (33), we have that

h𝒢1(Cn2ψ(n))nψ(n)c\frac{h_{\mathcal{G}}^{-1}\left(Cn^{2}\psi(n)\right)}{n\psi(n)}\rightarrow c (35)

in 𝐏\mathbf{P}-probability, for some deterministic constant cc (depending only on CC). Putting together (34), (35), and Theorem 1.2(b), it follows that: for any x00x_{0}\geq 0 and ε(0,1)\varepsilon\in(0,1),

suptIsupx,y[0,x0+1]:|xy|ε|λnptn2ψ(n)𝒢(0,Snx)λnptn2ψ(n)𝒢(0,Sny)|2Cε,\sup_{t\in I}\sup_{\begin{subarray}{c}x,y\in[0,x_{0}+1]:\\ |x-y|\leq\varepsilon\end{subarray}}\left|\lambda np_{tn^{2}\psi(n)}^{\mathcal{G}}\left(0,S_{nx}\right)-\lambda np_{tn^{2}\psi(n)}^{\mathcal{G}}\left(0,S_{ny}\right)\right|^{2}\leq C\varepsilon, (36)

with 𝐏\mathbf{P}-probability converging to one as nn\to\infty, where CC is a deterministic constant that depends on II, but not ε\varepsilon. The remainder of the proof is similar to that of [11, Theorem 1]. In particular, we decompose the expression we are trying to bound as follows: for any δ>0\delta>0,

|λnptn2ψ(n)𝒢(0,Snx)pt|B|(x)||λnptn2ψ(n)𝒢(0,Snx)T1|+|T1T2|+|T2pt|B|(x)|,\left|\lambda np_{tn^{2}\psi(n)}^{\mathcal{G}}\left(0,S_{nx}\right)-p^{|B|}_{t}(x)\right|\leq\left|\lambda np_{tn^{2}\psi(n)}^{\mathcal{G}}\left(0,S_{nx}\right)-T_{1}\right|+\left|T_{1}-T_{2}\right|+\left|T_{2}-p^{|B|}_{t}(x)\right|,

where

pt|B|(x):=2πtex22tp^{|B|}_{t}(x):=\sqrt{\frac{2}{\pi t}}e^{-\frac{x^{2}}{2t}}

is the transition density of the process |B||B|, and

T1:=λn(𝐏0𝒢(dE(x,d𝒢(0,Xtn2ψ(n))nϕ(n))δ)+𝐏0𝒢(dE(x,d𝒢(0,Xtn2ψ(n)+1)nϕ(n))δ))2μ𝒢({yV(𝒢):dE(x,d𝒢(0,y)nϕ(n))δ}),T_{1}:=\frac{\lambda n\left(\mathbf{P}_{0}^{\mathcal{G}}\left(d_{E}\left(x,\frac{d_{\mathcal{{G}}}\left(0,X_{tn^{2}\psi(n)}\right)}{n\phi(n)}\right)\leq\delta\right)+\mathbf{P}_{0}^{\mathcal{G}}\left(d_{E}\left(x,\frac{d_{\mathcal{{G}}}\left(0,X_{tn^{2}\psi(n)+1}\right)}{n\phi(n)}\right)\leq\delta\right)\right)}{2\mu_{\mathcal{G}}\left(\left\{y\in V(\mathcal{G}):\>d_{E}\left(x,\frac{d_{\mathcal{G}}(0,y)}{n\phi(n)}\right)\leq\delta\right\}\right)},
T2:=𝐏(dE(x,|Bt|)δ)0𝟏{dE(x,y)δ}𝑑y.T_{2}:=\frac{\mathbf{P}\left(d_{E}\left(x,|B_{t}|\right)\leq\delta\right)}{\int_{0}^{\infty}\mathbf{1}_{\{d_{E}(x,y)\leq\delta\}}dy}.

By Theorem 1.2(c), we know that

{yV(𝒢):dE(x,d𝒢(0,y)nϕ(n))δ}{Sm:max{n(x2δ),0}mn(x+2δ)}\left\{y\in V(\mathcal{G}):\>d_{E}\left(x,\frac{d_{\mathcal{G}}(0,y)}{n\phi(n)}\right)\leq\delta\right\}\subseteq\left\{S_{m}:\>\max\{n(x-2\delta),0\}\leq m\leq n(x+2\delta)\right\}

for all x[0,x0]x\in[0,x_{0}] with 𝐏\mathbf{P}-probability converging to one. Together with (36), if δε/2\delta\leq\varepsilon/2, this implies

suptIsupx[0,x0]|λnptn2ψ(n)𝒢(0,Snx)T1|Cε,\sup_{t\in I}\sup_{x\in[0,x_{0}]}\left|\lambda np_{tn^{2}\psi(n)}^{\mathcal{G}}\left(0,S_{nx}\right)-T_{1}\right|\leq C\varepsilon,

with 𝐏\mathbf{P}-probability converging to one as nn\to\infty. Similarly, the continuity of pt|B|(x)p^{|B|}_{t}(x) allows us to deduce that, if δ\delta is chosen sufficiently small, then |T2pt|B|(x)|Cε|T_{2}-p^{|B|}_{t}(x)|\leq C\varepsilon. Finally, we note that Theorems 1.1(a) and 1.2(a),(c) imply that T1T2T_{1}\rightarrow T_{2} in 𝐏\mathbf{P}-probability. The result follows. ∎

Acknowledgements

DC was supported by JSPS Grant-in-Aid for Scientific Research (A), 17H01093, JSPS Grant-in-Aid for Scientific Research (C), 19K03540, and the Research Institute for Mathematical Sciences, an International Joint Usage/Research Center located in Kyoto University. DS was supported by a JSPS Grant-in-Aid for Early-Career Scientists, 18K13425, JSPS Grant-in-Aid for Scientific Research (B), 17H02849, and JSPS Grant-in-Aid for Scientific Research (B), 18H01123.

References

  • [1] S. Athreya, M. Eckhoff, and A. Winter, Brownian motion on \mathbb{R}-trees, Trans. Amer. Math. Soc. 365 (2013), no. 6, 3115–3150.
  • [2] S. Athreya, W. Löhr, and A. Winter, Invariance principle for variable speed random walks on trees, Ann. Probab. 45 (2017), no. 2, 625–667.
  • [3] J. R. Banavar, A. Brooks Harris, and J. Koplik, Resistance of random walks, Phys. Rev. Lett. 51 (1983), no. 13, 1115–1118.
  • [4] M. T. Barlow, Random walks and heat kernels on graphs, London Mathematical Society Lecture Note Series, vol. 438, Cambridge University Press, Cambridge, 2017.
  • [5] G. Ben Arous, M. Cabezas, and A. Fribergh, Scaling limit for the ant in a simple high-dimensional labyrinth, Probab. Theory Related Fields 174 (2019), no. 1-2, 553–646.
  • [6]  , Scaling limit for the ant in high-dimensional labyrinths, Comm. Pure Appl. Math. 72 (2019), no. 4, 669–763.
  • [7] D. A. Croydon, Hausdorff measure of arcs and Brownian motion on Brownian spatial trees, Ann. Probab. 37 (2009), no. 3, 946–978.
  • [8]  , Random walk on the range of random walk, J. Stat. Phys. 136 (2009), no. 2, 349–372.
  • [9]  , Scaling limits of stochastic processes associated with resistance forms, Ann. Inst. Henri Poincaré Probab. Stat. 54 (2018), no. 4, 1939–1968.
  • [10] D. A. Croydon, R. Fukushima, and S. Junk, Anomalous scaling regime for one-dimensional Mott variable-range hopping, preprint appears at arXiv:2010.01779, 2020.
  • [11] D. A. Croydon and B. M. Hambly, Local limit theorems for sequences of simple random walks on graphs, Potential Anal. 29 (2008), no. 4, 351–389.
  • [12] D. A. Croydon, B. M. Hambly, and T. Kumagai, Time-changes of stochastic processes associated with resistance forms, Electron. J. Probab. 22 (2017), Paper No. 82, 41.
  • [13] D. A. Croydon and D. Shiraishi, Erratum to “Exact value of the resistance exponent for four dimensional random walk trace”, preprint, 2021.
  • [14] P. G. Doyle and J. L. Snell, Random walks and electric networks, Carus Mathematical Monographs, vol. 22, Mathematical Association of America, Washington, DC, 1984.
  • [15] A. Dvoretzky, P. Erdös, and S. Kakutani, Double points of paths of Brownian motion in nn-space, Acta Sci. Math. (Szeged) 12 (1950), 75–81.
  • [16] J. Kigami, Analysis on fractals, Cambridge Tracts in Mathematics, vol. 143, Cambridge University Press, Cambridge, 2001.
  • [17]  , Resistance forms, quasisymmetric maps and heat kernel estimates, Mem. Amer. Math. Soc. 216 (2012), no. 1015, vi+132.
  • [18] T. Kumagai and J. Misumi, Heat kernel estimates for strongly recurrent random walk on random media, J. Theoret. Probab. 21 (2008), no. 4, 910–935.
  • [19] G. F. Lawler, Intersections of random walks, Probability and its Applications, Birkhäuser Boston, Inc., Boston, MA, 1991.
  • [20]  , Escape probabilities for slowly recurrent sets, Probab. Theory Related Fields 94 (1992), no. 1, 91–117.
  • [21]  , Cut times for simple random walk, Electron. J. Probab. 1 (1996), no. 13, approx. 24 pp.
  • [22] G. F. Lawler and V. Limic, Random walk: a modern introduction, Cambridge Studies in Advanced Mathematics, vol. 123, Cambridge University Press, Cambridge, 2010.
  • [23] G. F. Lawler and J. A. Trujillo Ferreras, Random walk loop soup, Trans. Amer. Math. Soc. 359 (2007), no. 2, 767–787.
  • [24] D. A. Levin and Y. Peres, Markov chains and mixing times, American Mathematical Society, Providence, RI, 2017, Second edition, With contributions by Elizabeth L. Wilmer, With a chapter on “Coupling from the past” by James G. Propp and David B. Wilson.
  • [25] A. Sapozhnikov and D. Shiraishi, On Brownian motion, simple paths, and loops, Probab. Theory Related Fields 172 (2018), no. 3-4, 615–662.
  • [26] D. Shiraishi, Heat kernel for random walk trace on 3\mathbb{Z}^{3} and 4\mathbb{Z}^{4}, Ann. Inst. Henri Poincaré Probab. Stat. 46 (2010), no. 4, 1001–1024.
  • [27]  , Exact value of the resistance exponent for four dimensional random walk trace, Probab. Theory Related Fields 153 (2012), no. 1-2, 191–232.