
Interlacement limit of a stopped random walk trace on a torus

Antal A. Járai    Minwei Sun
Abstract

We consider a simple random walk on $\mathbb{Z}^{d}$ started at the origin and stopped on its first exit time from $(-L,L)^{d}\cap\mathbb{Z}^{d}$. Write $L$ in the form $L=mN$ with $m=m(N)$ and $N$ an integer going to infinity in such a way that $L^{2}\sim AN^{d}$ for some real constant $A>0$. Our main result is that for $d\geq 3$, the projection of the stopped trajectory to the $N$-torus locally converges, away from the origin, to an interlacement process at level $Ad\sigma_{1}$, where $\sigma_{1}$ is the exit time of a Brownian motion from the unit cube $(-1,1)^{d}$ that is independent of the interlacement process. The above problem is a variation on results of Windisch (2008) and Sznitman (2009).

Keywords: interlacement, random walk, torus, hashing, loop-erased random walk

MSC 2020: 60K35

1 Introduction

A special case of a result of Windisch [15] — extended further in [1] — states that the trace of a simple random walk on the discrete $d$-dimensional torus $(\mathbb{Z}/N\mathbb{Z})^{d}$, for $d\geq 3$, started from stationarity and run for time $uN^{d}$ converges, in a local sense, to an interlacement process at level $u$, as $N\to\infty$. In this paper we will be concerned with a variation on this result, for which our motivation was a heuristic analysis of an algorithm we used to simulate high-dimensional loop-erased random walk and the sandpile height distribution [7]. Let us first describe our main result and then discuss the motivating problem.

Consider a discrete-time lazy simple random walk $(Y_{t})_{t\geq 0}$ starting at the origin $o$ on $\mathbb{Z}^{d}$. We write $\mathbf{P}_{o}$ for the probability measure governing this walk. We stop the walk at the first time $T_{L}$ it exits the large box $(-L,L)^{d}$, where $L$ is an integer. We will take $L=L(N)$ of the form $L=mN$, where $m=m(N)$ and $N$ is an integer, such that $L^{2}\sim AN^{d}$ for some $A\in(0,\infty)$, as $N\to\infty$. We consider the projection of the trajectory $\{Y_{t}:0\leq t<T_{L}\}$ to the $N$-torus $\mathbb{T}_{N}=[-N/2,N/2)^{d}\cap\mathbb{Z}^{d}$. The projection is given by the map $\varphi_{N}:\mathbb{Z}^{d}\to\mathbb{T}_{N}$, where for any $x\in\mathbb{Z}^{d}$, $\varphi_{N}(x)$ is the unique point of $\mathbb{T}_{N}$ such that $\varphi_{N}(x)\equiv x\pmod{N}$, where congruence $\pmod{N}$ is understood coordinate-wise.
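For concreteness, the following minimal Python sketch (ours, not part of the paper) computes the coordinate-wise projection $\varphi_{N}$; the function name phi_N is ours.

```python
import numpy as np

def phi_N(x, N):
    """Project a point x in Z^d to the torus T_N = [-N/2, N/2)^d ∩ Z^d,
    applying the congruence mod N coordinate-wise."""
    x = np.asarray(x, dtype=int)
    r = np.mod(x, N)                              # representative in [0, N)
    return np.where(r >= (N + 1) // 2, r - N, r)  # shift to [-N/2, N/2)

# Example: with N = 10, the point (7, -13, 4) projects to (-3, -3, 4).
print(phi_N([7, -13, 4], 10))
```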

Let $\sigma_{1}$ denote the exit time from $(-1,1)^{d}$ of a standard Brownian motion started at $o$. We write $\mathbb{E}$ for the expectation associated to this Brownian motion. For any finite set $K\subset\mathbb{Z}^{d}$, let $\mathrm{Cap}(K)$ denote the capacity of $K$ [9]. For any $0<R<\infty$ and $x\in\mathbb{Z}^{d}$, we denote $B_{R}(x)=\{y\in\mathbb{Z}^{d}:|y-x|<R\}$, where $|\cdot|$ is the Euclidean norm. Let $\mathcal{K}_{R}$ denote the collection of all subsets of $B_{R}(o)$. Given $\mathbf{x}\in\mathbb{T}_{N}$ let $\tau_{\mathbf{x},N}:\mathbb{T}_{N}\to\mathbb{T}_{N}$ denote the translation of the torus by $\mathbf{x}$. Let $g:\mathbb{N}\to(0,\infty)$ be any function satisfying $g(N)\to\infty$.

Theorem 1.1.

Let $d\geq 3$. For any $0<R<\infty$, any $K\in\mathcal{K}_{R}$, and any $\mathbf{x}$ satisfying $\tau_{\mathbf{x},N}\varphi_{N}(B_{R}(o))\cap\varphi_{N}(B_{g(N)}(o))=\emptyset$ we have

\mathbf{P}_{o}\left[\varphi_{N}(Y_{t})\not\in\tau_{\mathbf{x},N}\varphi_{N}(K),\,0\leq t<T_{L}\right]=\mathbb{E}\left[e^{-dA\sigma_{1}\mathrm{Cap}(K)}\right]+o(1)\quad\text{as $N\to\infty$.}   (1)

The error term depends on $R$ and $g$, but is uniform in $K$ and $\mathbf{x}$.

Note that the trace of the lazy simple random walk stopped at time $T_{L}$ is the same as the trace of the simple random walk stopped at the analogous exit time. We use the lazy walk for convenience in the proof.

Our result is close in spirit — but different in its details — to a result of Sznitman [12] that is concerned with simple random walk on a discrete cylinder. The interlacement process was introduced by Sznitman in [13]. It consists of a one-parameter family $(\mathcal{I}^{u})_{u>0}$ of random subsets of $\mathbb{Z}^{d}$ ($d\geq 3$), where the distribution of $\mathcal{I}^{u}$ can be characterized by the relation

\mathbf{P}[\mathcal{I}^{u}\cap K=\emptyset]=\exp(-u\mathrm{Cap}(K))\quad\text{for any finite $\emptyset\not=K\subset\mathbb{Z}^{d}$.}   (2)

The precise construction of a process satisfying (2) represents $\mathcal{I}^{u}$ as the trace of a Poisson cloud of bi-infinite random walk trajectories (up to time-shifts), where $u$ is an intensity parameter. We refer to [13] and the books [3, 14] for further details. Comparing (1) and (2) we now formulate precisely what we mean by saying that the stopped trajectory, locally, is described by an interlacement process at the random level $u=Ad\sigma_{1}$.

Let $g^{\prime}:\mathbb{N}\to(0,\infty)$ be any function satisfying $g^{\prime}(N)\to\infty$. Note this does not have to be the same function as $g(N)$. Let $\mathbf{x}_{N}$ be an arbitrary sequence satisfying $\tau_{\mathbf{x}_{N},N}\varphi_{N}(B_{g^{\prime}(N)}(o))\cap\varphi_{N}(B_{g(N)}(o))=\emptyset$. Define the sequence of random configurations $\omega_{N}\subset\mathbb{Z}^{d}$ by

\omega_{N}=\{x\in\mathbb{Z}^{d}:\tau_{\mathbf{x}_{N},N}\varphi_{N}(x)\in\{\varphi_{N}(Y_{t}):0\leq t<T_{L}\}\}.

Define the process $\tilde{\mathcal{I}}$ by requiring that for all finite $K\subset\mathbb{Z}^{d}$ we have

\mathbf{P}[\tilde{\mathcal{I}}\cap K=\emptyset]=\mathbb{E}[e^{-dA\sigma_{1}\mathrm{Cap}(K)}].

To see that this formula indeed defines a unique process, write the right-hand side as

\int_{0}^{\infty}e^{-u\mathrm{Cap}(K)}\,f_{\sigma_{1}}(u)\,du=\int_{0}^{\infty}\mathbf{P}[\mathcal{I}^{u}\cap K=\emptyset]\,f_{\sigma_{1}}(u)\,du,

where $f_{\sigma_{1}}$ is the density of $Ad\sigma_{1}$. Then via the inclusion-exclusion formula, we see that we necessarily have for all finite sets $B\subset K$ the equality

\mathbf{P}[\tilde{\mathcal{I}}\cap K=B]=\int_{0}^{\infty}\mathbf{P}[\mathcal{I}^{u}\cap K=B]\,f_{\sigma_{1}}(u)\,du,

and the right-hand side can be used as the definition of the finite-dimensional marginals of $\tilde{\mathcal{I}}$. Note that $\tilde{\mathcal{I}}$ lives in a compact space (the space can be identified with $\{0,1\}^{\mathbb{Z}^{d}}$ with the product topology). Hence the finite-dimensional marginals uniquely determine the distribution of $\tilde{\mathcal{I}}$, by Kolmogorov's extension theorem.
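To spell out the inclusion-exclusion step (this display is ours, added for clarity): for finite $B\subseteq K$,

\mathbf{P}[\tilde{\mathcal{I}}\cap K=B]=\sum_{A\subseteq B}(-1)^{|A|}\,\mathbf{P}\big[\tilde{\mathcal{I}}\cap\big((K\setminus B)\cup A\big)=\emptyset\big],

and the same identity holds with $\tilde{\mathcal{I}}$ replaced by $\mathcal{I}^{u}$ or by $\omega_{N}$; integrating it against $f_{\sigma_{1}}$ term by term gives the displayed formula for the marginals.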

Theorem 1.2.

Let $d\geq 3$. Under $\mathbf{P}_{o}$ the law of the configuration $\omega_{N}$ converges weakly to the law of $\tilde{\mathcal{I}}$, as $N\to\infty$.

Proof of Theorem 1.2 assuming Theorem 1.1.

For events of the form $\{\omega_{N}\cap K=\emptyset\}$, Theorem 1.1 immediately implies that

\mathbf{P}_{o}[\omega_{N}\cap K=\emptyset]\stackrel{N\to\infty}{\longrightarrow}\mathbf{P}[\tilde{\mathcal{I}}\cap K=\emptyset].

For events of the form $\{\omega_{N}\cap K=B\}$, the inclusion-exclusion formula represents $\mathbf{P}_{o}[\omega_{N}\cap K=B]$ as a linear combination of probabilities of the former kind, and hence convergence follows. ∎

Our motivation to study the question in Theorem 1.1 was a simulation problem that arose in our numerical study of high-dimensional sandpiles [7]. We refer the interested reader to [11, 2, 6] for background on sandpiles. In our simulations we needed to generate loop-erased random walks (LERW) from the origin $o$ to the boundary of $[-L,L]^{d}$ where $d\geq 5$. The LERW is defined by running a simple random walk from $o$ until it hits the boundary, and erasing all loops from its trajectory chronologically, as they are created. We refer to the book [9] for further background on LERW (which is not needed to understand the results in this paper). It is known from results of Lawler [8] that in dimensions $d\geq 5$ the LERW visits on the order of $L^{2}$ vertices, the same as the simple random walk generating it. As the number of vertices visited is a lot smaller than the volume $cL^{d}$ of the box, an efficient way to store the path generating the LERW is provided by the well-known method of hashing. We refer to [7] for a discussion of this approach, and only provide a brief summary here. Assign to any $x\in[-L,L]^{d}\cap\mathbb{Z}^{d}$ an integer value $f(x)\in\{0,1,\dots,CL^{2}\}$ that is used to label the information relevant to position $x$, where $C$ can be a large constant or slowly growing to infinity. Thus $f$ is necessarily highly non-injective. However, we may be able to arrange that with high probability the restriction of $f$ to the simple random walk trajectory is not far from injective, and then memory use can be reduced from order $L^{d}$ to roughly $O(L^{2})$.

A simple possible choice of the hash function $f$ can be to compose the map $\varphi_{N}:[-L,L]^{d}\cap\mathbb{Z}^{d}\to\mathbb{T}_{N}$ with a linear enumeration of the vertices of $\mathbb{T}_{N}$, whose range has the required size. (This is slightly different from what was used in [7].) The method can be expected to be effective if the projection $\varphi_{N}(Y[0,T))$ spreads roughly evenly over the torus $\mathbb{T}_{N}$ with high probability. Our main theorem establishes a version of such a statement, as the right-hand side expression in (1) is independent of $\mathbf{x}$.
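As an illustration (ours, not the implementation used in [7]), a minimal Python sketch of such a hash function: project to the torus, then enumerate the torus vertices linearly; the table size and collision handling are left schematic.

```python
import numpy as np

def torus_hash(x, N):
    """Hash a point x in [-L,L]^d ∩ Z^d by projecting it to the torus T_N
    and linearly enumerating the torus vertices (range of size N^d)."""
    r = np.mod(np.asarray(x, dtype=int), N)   # coordinate-wise representative in [0, N)
    idx = 0
    for c in r:                               # base-N expansion = linear enumeration of T_N
        idx = idx * N + int(c)
    return idx                                # value in {0, 1, ..., N^d - 1}

# Points differing by a multiple of N in each coordinate collide, as expected:
print(torus_hash([3, -2, 7], 5), torus_hash([3 + 5, -2 - 10, 7], 5))
```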

We now make some comments on the proof of Theorem 1.1. We refer to [3, Theorem 3.1] for the strategy of the proof in the case when the walk is run for a fixed time $uN^{d}$. The argument presented there goes by decomposing the walk into stretches of length $\lfloor N^{\delta}\rfloor$ for some $2<\delta<d$, and then estimating the (small) probability in each stretch that $\tau_{\mathbf{x},N}\varphi_{N}(K)$ is hit by the projection. We follow the same outline for the stopped lazy random walk. However, the elegant time-reversal argument given in [3] is not convenient in our setting, and we need to prove a delicate estimate on the probability that $\tau_{\mathbf{x},N}\varphi_{N}(K)$ is hit, conditional on the start and end-points of the stretch. For this, we only want to consider stretches with "well-behaved" starting and end-points. We also classify a stretch as "good" if its total displacement is not too large, and as "bad" otherwise. We do this in such a way that the expected number of "bad" stretches is small and summing over the "good" stretches gives us the required behaviour.

Possible generalisations.
(1) It is not essential that we restrict to the simple random walk: any random walk for which the results in Section 2 hold (such as finite-range symmetric walks) would work equally well.
(2) The paper [15] considers several distant sets $K^{1},\dots,K^{r}$, and we believe this would also be possible here, but would lead to further technicalities in the presentation.
(3) It is also not essential that the rescaled domain be $(-1,1)^{d}$, and we believe it could be replaced by any other domain with sufficient regularity of the boundary.

A note on constants. All constants will be positive and finite. Constants denoted $C$ or $c$ will only depend on the dimension $d$ and may change from line to line. If we need to refer to a constant later, it will be given an index, such as $C_{1}$.

We now describe the organization of this paper. In Section 2, we first introduce some basic notation, then we recall several useful known results on random walk and state the key propositions required for the proof of the main theorem, Theorem 1.1. Section 3 contains the proof of the main theorem, assuming the key propositions. Finally, in Section 4 we provide the proofs of the propositions stated in Section 2.

2 Preliminaries

2.1 Some notation

We first introduce some notation used in this paper. In Section 1, we denoted the discrete torus $\mathbb{T}_{N}=[-N/2,N/2)^{d}\cap\mathbb{Z}^{d}$, $d\geq 3$, and the canonical projection map $\varphi_{N}:\mathbb{Z}^{d}\to\mathbb{T}_{N}$. From here on, we will omit the $N$-dependence and write $\varphi$ and $\tau_{\mathbf{x}}$ instead.

We write vertices and subsets of the torus in bold, i.e. $\mathbf{x}\in\mathbb{T}_{N}$ and $\mathbf{K}\subset\mathbb{T}_{N}$. In order to simplify notation, in the rest of the paper we abbreviate $\mathbf{K}=\tau_{\mathbf{x}}\varphi(K)$.

Let $(Y_{t})_{t\geq 0}$ be a discrete-time lazy simple random walk on $\mathbb{Z}^{d}$, that is,

\mathbf{P}[Y_{t+1}=y^{\prime}\,|\,Y_{t}=x^{\prime}]=\begin{cases}\frac{1}{2}&\text{when $y^{\prime}=x^{\prime}$;}\\ \frac{1}{4d}&\text{when $|y^{\prime}-x^{\prime}|=1$.}\end{cases}
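A minimal simulation sketch (ours, for illustration only) of one step of this lazy walk:

```python
import numpy as np

rng = np.random.default_rng(0)

def lazy_step(x, d):
    """One step of the lazy simple random walk on Z^d: stay put with
    probability 1/2, otherwise move to a uniformly chosen nearest
    neighbour (each neighbour has probability 1/(4d))."""
    x = np.array(x, dtype=int)
    if rng.random() < 0.5:
        return x                      # lazy step
    i = rng.integers(d)               # coordinate to move in
    x[i] += rng.choice([-1, 1])       # direction of the move
    return x

# Example: a few steps of the walk started at the origin in d = 3.
x = np.zeros(3, dtype=int)
for _ in range(5):
    x = lazy_step(x, 3)
print(x)
```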

We denote the corresponding lazy random walk on $\mathbb{T}_{N}$ by $(\mathbf{Y}_{t})_{t\geq 0}=(\varphi(Y_{t}))_{t\geq 0}$. Let $\mathbf{P}_{x^{\prime}}$ denote the distribution of the lazy random walk on $\mathbb{Z}^{d}$ started from $x^{\prime}\in\mathbb{Z}^{d}$, and write $\mathbf{P}_{\mathbf{x}}$ for the distribution of the lazy random walk on $\mathbb{T}_{N}$ started from $\mathbf{x}=\varphi(x^{\prime})\in\mathbb{T}_{N}$. We write $p_{t}(x^{\prime},y^{\prime})=\mathbf{P}_{x^{\prime}}[Y_{t}=y^{\prime}]$ for the $t$-step transition probability. Further notation we will use:

  • $L=mN$, where $L^{2}\sim AN^{d}$ as $N\to\infty$ for some constant $A\in(0,\infty)$

  • $D=(-m,m)^{d}$, the rescaled box, which indicates which copy of the torus the walk is in

  • $n=\lfloor N^{\delta}\rfloor$ for some $2<\delta<d$, long enough for the mixing property on the torus, but short compared to $L^{2}$

  • $\mathbf{x_{0}}\in\mathbf{K}$ is a fixed point of $\mathbf{K}$

  • we write points in the original lattice $\mathbb{Z}^{d}$ with a prime, such as $y^{\prime}$, and decompose a point $y^{\prime}$ as $yN+\mathbf{y}$ with $y$ in another lattice isomorphic to $\mathbb{Z}^{d}$ and $\mathbf{y}=\varphi(y^{\prime})\in\mathbb{T}_{N}$

  • $T=\inf\{t\geq 0:Y_{t}\not\in(-L,L)^{d}\}$, the first exit time from $(-L,L)^{d}$

  • $S=\inf\{\ell\geq 0:Y_{n\ell}\not\in(-L,L)^{d}\}$, so that the first multiple of $n$ at which the rescaled point $Y_{n\ell}/N$ is not in $(-m,m)^{d}$ equals $S\cdot n$

We omit the dependence on $d$ and $N$ from some notation above for simplicity.

2.2 Some auxiliary results on random walk

In this section, we collect some known results required for the proof of Theorem 1.1. We will rely heavily on the Local Central Limit Theorem (LCLT) [9, Chapter 2], with error term, and the Martingale maximal inequality [9, Eqn. (12.12) of Corollary 12.2.7]. We will also use Equation (6.31) in [9], that relates $\mathrm{Cap}(K)$ to the probability that a random walk started from the boundary of a large ball with radius $n$ hits the set $K$ before exiting the ball. In estimating some error terms in our arguments, sometimes we will use the Gaussian upper and lower bounds [5]. We also need to derive a lemma related to the mixing property on the torus [10, Theorem 5.6] to show that the starting positions of different stretches are not far from uniform on the torus; see Lemma 2.1.

We recall the LCLT from [9, Chapter 2]. The following is a specialisation of [9, Theorem 2.3.11] to lazy simple random walk. The covariance matrix $\Gamma$ and the square root $J^{*}(x)$ of the associated quadratic form are given by

\Gamma=(2d)^{-1}I,\qquad J^{*}(x)=(2d)^{\frac{1}{2}}|x|,

where $I$ is the $(d\times d)$ unit matrix.

Let $\bar{p}_{t}(x^{\prime})$ denote the estimate of $p_{t}(x^{\prime})$ that one obtains by the LCLT, for lazy simple random walk. We have

\bar{p}_{t}(x^{\prime})=\frac{1}{(2\pi t)^{d/2}\sqrt{\det\Gamma}}\exp\left(-\frac{J^{*}(x^{\prime})^{2}}{2t}\right)=\frac{1}{(2\pi t)^{d/2}(2d)^{-d/2}}\exp\left(-\frac{2d\,|x^{\prime}|^{2}}{2t}\right)=\frac{\bar{C}}{t^{d/2}}\exp\left(-\frac{d\,|x^{\prime}|^{2}}{t}\right),

where $\bar{C}=(d/\pi)^{d/2}$.
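As a quick numerical sanity check (ours, not part of the paper), one can compare $\bar{p}_{t}$ with a Monte Carlo estimate of $p_{t}(x^{\prime})$ for the lazy walk; already for moderate $t$ the two agree reasonably well.

```python
import numpy as np

rng = np.random.default_rng(1)
d, t, trials = 3, 100, 100_000
target = np.array([2, -1, 0])          # the point x' at which we estimate p_t

hits = 0
for _ in range(trials):
    real = rng.random(t) < 0.5         # which of the t steps are non-lazy
    k = int(real.sum())
    axes = rng.integers(0, d, size=k)  # coordinate moved at each real step
    signs = rng.choice([-1, 1], size=k)
    pos = np.zeros(d, dtype=int)
    np.add.at(pos, axes, signs)        # accumulate the displacements
    hits += np.array_equal(pos, target)

p_mc = hits / trials                   # Monte Carlo estimate of p_t(x')
p_bar = (d / (np.pi * t)) ** (d / 2) * np.exp(-d * np.dot(target, target) / t)
print(p_mc, p_bar)
```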

The lazy simple random walk $(Y_{t})_{t\geq 0}$ in $\mathbb{Z}^{d}$ is aperiodic and irreducible, with mean zero, finite second moment, and finite exponential moments. All joint third moments of the components of $Y_{1}$ vanish.

Theorem 2.1 ([9], Theorem 2.3.11).

For lazy simple random walk $(Y_{t})_{t\geq 0}$ in $\mathbb{Z}^{d}$, there exists $\rho>0$ such that for all $t\geq 1$ and all $x^{\prime}\in\mathbb{Z}^{d}$ with $|x^{\prime}|<\rho t$,

p_{t}(x^{\prime})=\bar{p}_{t}(x^{\prime})\exp\left\{O\left(\frac{1}{t}+\frac{|x^{\prime}|^{4}}{t^{3}}\right)\right\}.

The Martingale maximal inequality in [9, Eqn. (12.12) of Corollary 12.2.7] is stated as follows. Let $(Y^{(i)}_{t})_{t\geq 0}$ denote the $i$-th coordinate of $(Y_{t})_{t\geq 0}$ ($1\leq i\leq d$). The standard deviation $\sigma$ of $Y^{(i)}_{1}$ is given by $\sigma^{2}=(2d)^{-1}$. For all $t\geq 1$ and all $r>0$ we have

\mathbf{P}_{o}\left[\max_{0\leq j\leq t}Y^{(i)}_{j}\geq r\sigma\sqrt{t}\right]\leq e^{-r^{2}/2}\exp\left\{O\left(\frac{r^{3}}{\sqrt{t}}\right)\right\}.   (3)

Now we state the result of [9, Eqn. (6.31)]. Recall that $B_{r}(o)$ is the discrete ball centred at $o$ with radius $r$. For any subset $B\subset\mathbb{Z}^{d}$, let

\xi_{B}=\inf\{t\geq 1:Y_{t}\not\in B\}.

Let $\partial B_{r}(o)=\{y^{\prime}\in\mathbb{Z}^{d}\setminus B_{r}(o):\text{$\exists x^{\prime}\in B_{r}(o)$ such that $|x^{\prime}-y^{\prime}|=1$}\}$. For a given finite set $K\subseteq\mathbb{Z}^{d}$, let $H_{K}$ denote the hitting time

H_{K}=\inf\{t\geq 1:Y_{t}\in K\}.

Then we have

\frac{1}{2}\mathrm{Cap}(K)=\lim_{r\to\infty}\sum_{y^{\prime}\in\partial B_{r}(o)}\mathbf{P}_{y^{\prime}}[H_{K}<\xi_{B_{r}(o)}].   (4)

Here $\mathrm{Cap}(K)$ is the capacity of $K$; see [9, Section 6.5], which states the analogous statement for the simple random walk. Since we consider the lazy random walk, this introduces a factor of $1/2$.

In estimating some error terms in our arguments, sometimes we will use the Gaussian upper and lower bounds [5]: there exist constants $C=C(d)$ and $c=c(d)$ such that

\begin{split}p_{t}(x^{\prime},y^{\prime})&\leq\frac{C}{t^{d/2}}\exp\left(-c\frac{|y^{\prime}-x^{\prime}|^{2}}{t}\right),\quad\text{for $x^{\prime},y^{\prime}\in\mathbb{Z}^{d}$ and $t\geq 1$;}\\ p_{t}(x^{\prime},y^{\prime})&\geq\frac{c}{t^{d/2}}\exp\left(-C\frac{|y^{\prime}-x^{\prime}|^{2}}{t}\right),\quad\text{for $|y^{\prime}-x^{\prime}|\leq ct$.}\end{split}   (5)

Recall that the norm $|\cdot|$ refers to the Euclidean norm.

Regarding mixing times, recall that lazy simple random walk on the $N$-torus mixes in time $N^{2}$ [10, Theorem 5.6]. With this in mind we derive the following simple lemma.

Recall that $2<\delta<d$ and $n=\lfloor N^{\delta}\rfloor$.

Lemma 2.1.

There exists $C=C(d)$ such that for any $N\geq 1$ and any $t\geq n$ we have

\mathbf{P}_{\mathbf{o}}[\mathbf{Y}_{t}=\mathbf{x}]\leq\frac{C}{N^{d}},\quad\mathbf{x}\in\mathbb{T}_{N}.
Proof.

Using the Gaussian upper bound, the left-hand side can be bounded by

\begin{split}\sum_{x\in\mathbb{Z}^{d}}p_{t}(o,xN+\mathbf{x})&\leq\frac{C}{t^{d/2}}\sum_{x\in\mathbb{Z}^{d}}\exp\left(-c\frac{|xN+\mathbf{x}|^{2}}{t}\right)\leq\frac{C}{t^{d/2}}\sum_{x\in\mathbb{Z}^{d}}\exp\left(-c\frac{|xN|^{2}}{t}\right)\\ &\leq\frac{C}{t^{d/2}}\sum_{k=0}^{\infty}(k+1)^{d-1}\frac{t^{d/2}}{N^{d}}\exp(-ck^{2})\\ &\leq\frac{C}{N^{d}}\sum_{k=0}^{\infty}(k+1)^{d-1}\exp(-ck^{2})\leq\frac{C}{N^{d}}.\end{split}

Here we bounded the number of $x$ in $\mathbb{Z}^{d}$ satisfying $k\sqrt{t}/N\leq|x|<(k+1)\sqrt{t}/N$ by $C(k+1)^{d-1}t^{d/2}/N^{d}$, where $k=0,1,2,\dots$. ∎

2.3 Key propositions

In this section we state some propositions to be used in Section 3 to prove Theorem 1.1. The propositions will be proved in Section 4.

The strategy of the proof is to consider stretches of length $n$ of the walk, and estimate the small probability in each stretch that $\mathbf{K}$ is hit by the projection. What makes this strategy work is that we can estimate, conditionally on the starting and end-points of a stretch, the probability that $\mathbf{K}$ is hit, and this event asymptotically decouples from the number of stretches. The number of stretches will be the random variable $S$. Since $nS\approx T$, and $T$ is $\Theta(N^{d})$ in probability, we have that $S$ is $\Theta(N^{d}/n)$. In Lemma 2.2 below we show a somewhat weaker estimate for $S$ (which suffices for our needs).

The main part of the proof will be to show that during a fixed stretch, $\mathbf{K}$ is not hit with probability

1-\frac{1}{2}\mathrm{Cap}(\mathbf{K})\frac{n}{N^{d}}(1+o(1)).   (6)

Heuristically, conditionally on $S$ this results in the probability

\left(1-\frac{1}{2}\mathrm{Cap}(\mathbf{K})\frac{n}{N^{d}}(1+o(1))\right)^{S}\approx\exp\left(-\frac{1}{2}\mathrm{Cap}(\mathbf{K})\frac{n}{N^{d}}S\right),

and we will conclude by showing that $(n/N^{d})S$ converges in distribution to a constant multiple of the Brownian exit time $\sigma_{1}$.

The factor $n/N^{d}$ in (6) arises as the expected time spent by the projected walk at a fixed point of the torus during a given stretch. The capacity term arises as we pass from expected time to hitting probability.

For the above approach to work, we need to exclude an event of small probability on which the number of stretches or the end-points of the stretches are not sufficiently well-behaved. First, we will need to restrict to realizations where $(\sqrt{\log\log n})^{-1}(N^{d}/n)\leq S\leq(\log N)(N^{d}/n)$, which occurs with high probability as $N\to\infty$ (see Lemma 2.2 below). Second, suppose that the $\ell$-th stretch starts at the point $y^{\prime}_{\ell-1}$ and ends at the point $y^{\prime}_{\ell}$, that is, $y^{\prime}_{\ell-1}$ and $y^{\prime}_{\ell}$ are realizations of $Y_{(\ell-1)n}$ and $Y_{\ell n}$. In order to have a good estimate of the probability that $\mathbf{K}$ is hit during this stretch, we will need to impose a few conditions on $y^{\prime}_{\ell-1}$ and $y^{\prime}_{\ell}$. One of these is that the displacement $|y^{\prime}_{\ell}-y^{\prime}_{\ell-1}|$ is not too large: we will require that for all stretches, it is at most $f(n)\sqrt{n}$, for a function to be chosen later that increases to infinity with $N$. We will be able to choose $f(n)$ of the form $f(n)=C_{1}\sqrt{\log N}$ in such a way that this restriction holds for all stretches with high probability. A third condition we need to impose, that will also hold with high probability, is that $y^{\prime}_{\ell-1}$ is at least a certain distance $N^{\zeta}$ from $\varphi^{-1}(\mathbf{K})$ for a parameter $0<\zeta<1$ (this will only be required for $\ell\geq 1$, and is not needed for the first stretch starting with $y^{\prime}_{0}=o$). The reason we need this is to be able to appeal to (4) to extract the $\mathrm{Cap}(\mathbf{K})$ contribution, when we know that $\mathbf{K}$ is hit from a long distance (we will take $r=N^{\zeta}$ in (4)). The larger the value of $\zeta$, the better error bound we get on the approach to $\mathrm{Cap}(\mathbf{K})$. On the other hand, $\zeta$ should not be too close to $1$, because we want the separation of $y^{\prime}_{\ell-1}$ from $\mathbf{K}$ to occur with high enough probability.

The set $\mathcal{G}_{\zeta,C_{1}}$ defined below represents realizations of $S$ and the sequence $Y_{n},Y_{2n},\dots,Y_{Sn}$ satisfying the above restrictions. Proposition 2.1 below implies that these restrictions hold with high probability. First, we will need $2<\delta<d$ to satisfy the inequality

\delta-\frac{2\delta}{d}>d-\delta\qquad\Leftrightarrow\qquad 2\delta>\frac{d^{2}}{d-1}.   (7)

This can be satisfied if $d\geq 3$ and $\delta$ is sufficiently close to $d$, say $\delta=\frac{7}{8}d$. Since the left-hand side of the left-hand inequality in (7) equals $(\delta/d)(d-2)$, we can subsequently choose $\zeta$ such that we also have

0<\zeta<\frac{\delta}{d},\qquad\qquad\zeta(d-2)>d-\delta.   (8)

With the parameter $\zeta$ fixed satisfying the above, we now define:

\mathcal{G}_{\zeta,C_{1}}=\left\{\left(\tau,(y_{\ell},\mathbf{y}_{\ell})_{\ell=1}^{\tau}\right):\ \begin{array}{l}(\sqrt{\log\log n})^{-1}\frac{N^{d}}{n}\leq\tau\leq(\log N)\frac{N^{d}}{n};\\ y_{\ell}\in D,\ \mathbf{y}_{\ell}\in\mathbb{T}_{N}\setminus B(\mathbf{x_{0}},N^{\zeta})\ \text{for $1\leq\ell<\tau$};\\ y_{\tau}\in D^{c}\ \text{and}\ \mathbf{y}_{\tau}\in\mathbb{T}_{N}\setminus B(\mathbf{x_{0}},N^{\zeta});\\ |y^{\prime}_{\ell}-y^{\prime}_{\ell-1}|\leq f(n)\,n^{\frac{1}{2}}\ \text{for $1\leq\ell\leq\tau$}\end{array}\right\},   (9)

where $f(n)=C_{1}\,\sqrt{\log N}$ and recall that we write $y^{\prime}_{\ell}=y_{\ell}N+\mathbf{y}_{\ell}$ and $y^{\prime}_{\ell-1}=y_{\ell-1}N+\mathbf{y}_{\ell-1}$, and we define $y^{\prime}_{0}=o$. The time $\tau$ corresponds to a particular value of the exit time $S$, so $y_{\ell}\in D$ for $1\leq\ell<\tau$ and $y_{\tau}\notin D$. The parameter $C_{1}$ will be chosen in the course of the proof. See Figure 1 for a visual illustration of the sequence $y^{\prime}_{0},y^{\prime}_{1},\dots$ in the definition of $\mathcal{G}_{\zeta,C_{1}}$.

[Figure 1 here: the box $(-L,L)^{d}$ tiled by copies of the torus of side $N$, with the points $o,y^{\prime}_{1},y^{\prime}_{2},y^{\prime}_{3},y^{\prime}_{4}=y^{\prime}_{\tau}$ marked.]
Figure 1: This figure explains the properties of the set $\mathcal{G}_{\zeta,C_{1}}$ (not to scale). The shaded regions represent the balls of radius $N^{\zeta}$ in each copy of the torus. None of the $y^{\prime}_{\ell}$'s, for $\ell\geq 1$, is in a shaded region.

The next lemma shows that the restriction made on the time-parameter $\tau$ in the definition of $\mathcal{G}_{\zeta,C_{1}}$ holds with high probability for $S$.

Lemma 2.2.

We have $\mathbf{P}_{o}\left[\left\{\frac{Sn}{N^{d}}<(\sqrt{\log\log n})^{-1}\right\}\cup\left\{\frac{Sn}{N^{d}}>\log N\right\}\right]\to 0$ as $N\to\infty$.

Proof.

By the definitions of $S$ and $T$, we first notice that

\begin{split}\mathbf{P}_{o}\left[S<(\sqrt{\log\log n})^{-1}\frac{N^{d}}{n}\right]&\leq\mathbf{P}_{o}\left[T<(\sqrt{\log\log n})^{-1}N^{d}\right]\\ &\leq\sum_{1\leq i\leq d}\left(\mathbf{P}_{o}\left[\max_{0\leq j\leq(\sqrt{\log\log n})^{-1}N^{d}}Y^{(i)}_{j}\geq L\right]+\mathbf{P}_{o}\left[\max_{0\leq j\leq(\sqrt{\log\log n})^{-1}N^{d}}-Y^{(i)}_{j}\geq L\right]\right),\end{split}

where $Y^{(i)}$ denotes the $i$-th coordinate of the $d$-dimensional lazy random walk.

We are going to use (3). Setting $t=(\sqrt{\log\log n})^{-1}N^{d}$ and $r\sigma\sqrt{t}=L$, we can evaluate each term (similarly for the event with $-Y^{(i)}_{j}$) in the sum

\begin{split}&\mathbf{P}_{o}\left[\max_{0\leq j\leq(\sqrt{\log\log n})^{-1}N^{d}}Y^{(i)}_{j}\geq L\right]\\ &\quad\leq\exp\left\{-\frac{1}{2}\frac{L^{2}}{\sigma^{2}(\sqrt{\log\log n})^{-1}N^{d}}+O\left(\frac{L^{3}}{\sigma^{3}(\sqrt{\log\log n})^{-2}N^{2d}}\right)\right\}.\end{split}   (10)

Recall that $L^{2}\sim AN^{d}$ and $\sigma^{2}=1/(2d)$. For the main term in the exponential in (10), we have the upper bound

\begin{split}\exp\left(-(1+o(1))\frac{1}{2}\frac{2d\cdot AN^{d}}{(\sqrt{\log\log n})^{-1}N^{d}}\right)&=\exp\left(-(1+o(1))Ad\sqrt{\log\log n}\right)\\ &\to 0,\quad\text{as $N\to\infty$}.\end{split}

The big-$O$ term in the exponential in (10) produces an error term because

\begin{split}\exp\left\{O\left(\frac{(AN^{d})^{3/2}}{\sigma^{3}(\sqrt{\log\log n})^{-2}N^{2d}}\right)\right\}&=\exp\left\{O\left(N^{-d/2}(\log\log n)\right)\right\}\\ &=1+o(1),\quad\text{as $N\to\infty$.}\end{split}

Coming to the second event $\left\{\frac{Sn}{N^{d}}>\log N\right\}$, observe that the Central Limit Theorem applied to $(Y_{kn\lfloor N^{d}/n\rfloor+t}-Y_{kn\lfloor N^{d}/n\rfloor})_{t\geq 0}$ implies that

\mathbf{P}_{o}\Big[S>(k+1)\Big\lfloor\frac{N^{d}}{n}\Big\rfloor\,\Big|\,S>k\Big\lfloor\frac{N^{d}}{n}\Big\rfloor\Big]\leq\max_{z\in(-L,L)^{d}}\left(1-\mathbf{P}_{z}\left[Y_{n\lfloor N^{d}/n\rfloor}\not\in(-2L,2L)^{d}\right]\right)\leq 1-c

for some $c>0$. Hence we have $\mathbf{P}_{o}\left[S>k\frac{N^{d}}{n}\right]\leq e^{-ck}$ for all $k\geq 0$.

Applying this with $k=\log N$,

\mathbf{P}_{o}\left[S>\log N\frac{N^{d}}{n}\right]\leq e^{-c\log N}\to 0

as required. ∎

The starting point for the proof of Theorem 1.1 is the following proposition that decomposes the probability we are interested in into terms involving single stretches of duration $n$.

Proposition 2.1.

For a sufficiently large value of $C_{1}$, we have that

\mathbf{P}_{o}\left[\left(S,(Y_{\ell n},\varphi(Y_{\ell n}))_{\ell=1}^{S}\right)\not\in\mathcal{G}_{\zeta,C_{1}}\right]=o(1)\quad\text{as $N\to\infty$}.   (11)

Furthermore,

\begin{split}&\mathbf{P}_{o}\left[Y_{t}\not\in\varphi^{-1}(\mathbf{K}),\,0\leq t<T\right]\\ &\quad=\sum_{(\tau,(y_{\ell},\mathbf{y}_{\ell})_{\ell=1}^{\tau})\in\mathcal{G}_{\zeta,C_{1}}}\prod_{\ell=1}^{\tau}\mathbf{P}_{y^{\prime}_{\ell-1}}\left[Y_{n}=y^{\prime}_{\ell},\,\text{$Y_{t}\not\in\varphi^{-1}(\mathbf{K})$ for $0\leq t<n$}\right]+o(1),\end{split}   (12)

where $o(1)\to 0$ as $N\to\infty$.

Central to the proof of Theorem 1.1 is the following proposition, which estimates the probability of hitting a copy of $\mathbf{K}$ during a "good stretch", where the displacement $|y^{\prime}_{\ell}-y^{\prime}_{\ell-1}|$ is almost of order $\sqrt{n}$. This will not hold for all stretches with high probability, but the fraction of stretches for which it fails will vanish asymptotically.

Proposition 2.2.

There exists a sufficiently large value of $C_{1}$ such that the following holds. Let $(\tau,(y_{\ell},\mathbf{y}_{\ell})_{\ell=1}^{\tau})\in\mathcal{G}_{\zeta,C_{1}}$. Then for all $2\leq\ell\leq\tau$ such that $|y^{\prime}_{\ell}-y^{\prime}_{\ell-1}|\leq 10\sqrt{n}\log\log n$ we have

\begin{split}&\mathbf{P}_{y^{\prime}_{\ell-1}}\left[Y_{n}=y^{\prime}_{\ell},\,\text{$Y_{t}\not\in\varphi^{-1}(\mathbf{K})$ for $0\leq t<n$}\right]\\ &\qquad=\mathbf{P}_{y^{\prime}_{\ell-1}}\left[Y_{n}=y^{\prime}_{\ell}\right]\,\left(1-\frac{1}{2}\,\frac{\mathrm{Cap}(\mathbf{K})\,n}{N^{d}}\,(1+o(1))\right).\end{split}   (13)

In addition to the above proposition (which we prove in Section 4.2), we will need a weaker version for the remaining "bad stretches", which have a weaker restriction on the distance $|y^{\prime}_{\ell}-y^{\prime}_{\ell-1}|$. This will be needed to estimate error terms arising from the "bad stretches", and it will also be useful to demonstrate some of our proof ideas for other error terms arising later in the paper. It will be proved in Section 4.1.

Proposition 2.3.

Let $(\tau,(y_{\ell},\mathbf{y}_{\ell})_{\ell=1}^{\tau})\in\mathcal{G}_{\zeta,C_{1}}$. For all $2\leq\ell\leq\tau$ we have

\mathbf{P}_{y^{\prime}_{\ell-1}}\left[Y_{n}=y^{\prime}_{\ell},\,\text{$Y_{t}\not\in\varphi^{-1}(\mathbf{K})$ for all $0\leq t<n$}\right]=\mathbf{P}_{y^{\prime}_{\ell-1}}\left[Y_{n}=y^{\prime}_{\ell}\right]\,\left(1-O\left(\frac{n}{N^{d}}\right)\right),   (14)

and for the first stretch we have

\mathbf{P}_{o}\left[Y_{n}=y^{\prime}_{1},\,\text{$Y_{t}\not\in\varphi^{-1}(\mathbf{K})$ for all $0\leq t<n$}\right]=\mathbf{P}_{o}\left[Y_{n}=y^{\prime}_{1}\right]\,\left(1-o(1)\right).   (15)

Here the $-O(n/N^{d})$ and $-o(1)$ error terms are negative.

Our final proposition is needed to estimate the number of stretches that are “bad”.

Proposition 2.4.

We have

\mathbf{P}_{o}\left[\#\left\{1\leq\ell\leq\frac{N^{d}}{n}C_{1}\log N:|Y_{n\ell}-Y_{n(\ell-1)}|>10\sqrt{n}\log\log n\right\}\geq\frac{N^{d}}{n}\frac{1}{\log\log n}\right]\to 0,   (16)

as $N\to\infty$.

3 Proof of the main theorem assuming the key propositions

This section contains the proof of Theorem 1.1.

Proof of Theorem 1.1 assuming Propositions 2.1–2.4.

First, given any $(\tau,(y_{\ell},\mathbf{y}_{\ell})_{\ell=1}^{\tau})\in\mathcal{G}_{\zeta,C_{1}}$, we define

\begin{split}\mathcal{L}&=\{2\leq\ell\leq\tau:|y^{\prime}_{\ell}-y^{\prime}_{\ell-1}|\leq 10\sqrt{n}\log\log n\},\\ \mathcal{L}^{\prime}&=\{2\leq\ell\leq\tau:|y^{\prime}_{\ell}-y^{\prime}_{\ell-1}|>10\sqrt{n}\log\log n\}.\end{split}   (17)

Thus we have

\{1,\dots,\tau\}=\{1\}\cup\mathcal{L}\cup\mathcal{L}^{\prime}.

We further define

\mathcal{G}^{\prime}_{\zeta,C_{1}}=\left\{(\tau,(y_{\ell},\mathbf{y}_{\ell})_{\ell=1}^{\tau})\in\mathcal{G}_{\zeta,C_{1}}:|\mathcal{L}^{\prime}|\leq\frac{N^{d}}{n}\frac{1}{\log\log n}\right\}.

We have by Proposition 2.1 that

\begin{split}&\mathbf{P}_{o}\left[Y_{t}\not\in\varphi^{-1}(\mathbf{K}),\,0\leq t<T\right]\\ &=o(1)+\sum_{(\tau,(y_{\ell},\mathbf{y}_{\ell})_{\ell=1}^{\tau})\in\mathcal{G}_{\zeta,C_{1}}}\prod_{\ell=1}^{\tau}\mathbf{P}_{y^{\prime}_{\ell-1}}[Y_{n}=y^{\prime}_{\ell},\,\text{$Y_{t}\not\in\varphi^{-1}(\mathbf{K})$ for $0\leq t<n$}].\end{split}   (18)

By Proposition 2.4, we can replace the summation over elements of $\mathcal{G}_{\zeta,C_{1}}$ by summation over just the elements of $\mathcal{G}^{\prime}_{\zeta,C_{1}}$, at the cost of an $o(1)$ term. That is,

\begin{split}&\mathbf{P}_{o}\left[Y_{t}\not\in\varphi^{-1}(\mathbf{K}),\,0\leq t<T\right]\\ &=o(1)+\sum_{(\tau,(y_{\ell},\mathbf{y}_{\ell})_{\ell=1}^{\tau})\in\mathcal{G}^{\prime}_{\zeta,C_{1}}}\prod_{\ell=1}^{\tau}\mathbf{P}_{y^{\prime}_{\ell-1}}[Y_{n}=y^{\prime}_{\ell},\,\text{$Y_{t}\not\in\varphi^{-1}(\mathbf{K})$ for $0\leq t<n$}].\end{split}   (19)

Applying Proposition 2.2 for the factors $\ell\in\mathcal{L}$ and Proposition 2.3 for the factors $\ell\in\{1\}\cup\mathcal{L}^{\prime}$ we get

\begin{split}&\mathbf{P}_{o}\left[Y_{t}\not\in\varphi^{-1}(\mathbf{K}),\,0\leq t<T\right]\\ &=o(1)+\sum_{(\tau,(y_{\ell},\mathbf{y}_{\ell})_{\ell=1}^{\tau})\in\mathcal{G}^{\prime}_{\zeta,C_{1}}}\prod_{\ell=1}^{\tau}\mathbf{P}_{y^{\prime}_{\ell-1}}[Y_{n}=y^{\prime}_{\ell}]\\ &\qquad\times(1-o(1))\,\prod_{\ell\in\mathcal{L}}\left(1-\frac{1}{2}\mathrm{Cap}(\mathbf{K})\frac{n}{N^{d}}(1+o(1))\right)\,\prod_{\ell\in\mathcal{L}^{\prime}}\left(1-O\left(\frac{n}{N^{d}}\right)\right).\end{split}   (20)

Note that since the summation is over elements of $\mathcal{G}^{\prime}_{\zeta,C_{1}}$ only, we have

|\mathcal{L}^{\prime}|\leq\frac{N^{d}}{n}\frac{1}{\log\log n}.   (21)

By (21), we can lower bound the last product in (20) by

\exp\left(-O\left(\frac{n}{N^{d}}\right)\frac{N^{d}}{n}\frac{1}{\log\log n}\right)=e^{o(1)}=(1+o(1)).

Since the product is also at most $1$, it equals $1+o(1)$.

Also, due to (21), we have

\tau-1-\frac{N^{d}}{n}\frac{1}{\log\log n}\leq|\mathcal{L}|\leq\tau.

Since $\tau\geq\frac{N^{d}}{n}(\sqrt{\log\log n})^{-1}$, we have $|\mathcal{L}|=(1+o(1))\tau$. This implies that the penultimate product in (20) equals

\left(1-\frac{1}{2}\mathrm{Cap}(\mathbf{K})\frac{n}{N^{d}}(1+o(1))\right)^{(1+o(1))\tau}=\exp\left(-\frac{1}{2}\mathrm{Cap}(\mathbf{K})\frac{n}{N^{d}}\tau(1+o(1))\right).   (22)

Recall that $S=\inf\{\ell\geq 0:Y_{n\ell}\notin(-L,L)^{d}\}$. By summing over $(y_{\ell},\mathbf{y}_{\ell})_{\ell=1}^{\tau}$ and appealing to (11), we get that (20) equals

o(1)+\sideset{}{{}^{\prime}}{\sum}_{\tau}\mathbf{E}\left[\mathbf{1}_{S=\tau}\exp\left(-\frac{1}{2}\mathrm{Cap}(\mathbf{K})\frac{n}{N^{d}}\tau(1+o(1))\right)\right],   (23)

where the primed summation denotes restriction to $\frac{N^{d}}{n}(\sqrt{\log\log n})^{-1}\leq\tau\leq(\log N)\frac{N^{d}}{n}$. Since, due to Lemma 2.2, $S$ satisfies the bounds on $\tau$ with probability going to $1$, the latter expression equals

o(1)+\mathbf{E}\left[e^{-\frac{1}{2}\mathrm{Cap}(\mathbf{K})\frac{n}{N^{d}}S}\right].   (24)

Let $\Gamma_{n}$ denote the covariance matrix of $Y_{n}$, so that $\Gamma_{n}=\frac{n}{2d}I$. Let $Z_{1}=\sqrt{\frac{2d}{n}}\,Y_{n}$, with covariance matrix $\Gamma_{Z}=I$. Let $Z_{\ell}=\sqrt{\frac{2d}{n}}\,Y_{n\ell}$ for $\ell\geq 0$.

Since $L^{2}\sim A\,N^{d}$, the event $\{Y_{n\ell}\notin(-L,L)^{d}\}$ is the same as $\{Y_{n\ell}\notin(-(1+o(1))\sqrt{A}N^{d/2},(1+o(1))\sqrt{A}N^{d/2})^{d}\}$. Converting to events in terms of $Z_{\ell}$ we have

Z_{\ell}\notin\left(-\sqrt{2dA(1+o(1))}\,(N^{d}/n)^{1/2},\sqrt{2dA(1+o(1))}\,(N^{d}/n)^{1/2}\right)^{d}.

Now we can write $S$ as

S=\inf\{\ell\geq 0:Z_{\ell}\notin(-\sqrt{2dA(1+o(1))}\,(N^{d}/n)^{1/2},\sqrt{2dA(1+o(1))}\,(N^{d}/n)^{1/2})^{d}\}.

Let $\sigma_{1}=\inf\{t>0:B_{t}\notin(-1,1)^{d}\}$ be the exit time of Brownian motion from $(-1,1)^{d}$. By Donsker's Theorem [4, Theorem 8.1.5] we have

\mathbf{P}\left[S\leq 2dA(1+o(1))\frac{N^{d}}{n}t\right]\to\mathbf{P}[\sigma_{1}\leq t].

Then we have that $\frac{n}{N^{d}}S$ converges in distribution to $c\sigma_{1}$, with $c=2dA$. Since $x\mapsto e^{-\frac{1}{2}\mathrm{Cap}(\mathbf{K})\,x}$ is bounded and continuous, the expectation in (24) therefore converges to $\mathbb{E}\left[e^{-dA\sigma_{1}\mathrm{Cap}(\mathbf{K})}\right]$. This completes the proof. ∎

4 Proofs of the key propositions

4.1 Proof of Proposition 2.3

In the proof of the proposition we will need the following lemma that bounds the probability of hitting some copy of $\mathbf{K}$ in terms of the Green's function of the random walk. Recall that the Green's function is defined by

G(x^{\prime},y^{\prime})=\sum_{t=0}^{\infty}p_{t}(x^{\prime},y^{\prime}),

and in all dimensions $d\geq 3$ satisfies the bound [9]

G(x^{\prime},y^{\prime})\leq\frac{C_{G}}{|y^{\prime}-x^{\prime}|^{d-2}}

for a constant $C_{G}=C_{G}(d)$. For part (ii) of the lemma recall that $\mathbf{K}\cap\varphi(B_{g(N)}(o))=\emptyset$. We also define $\mathrm{diam}(\mathbf{K})$ as the maximum Euclidean distance between two points in $\mathbf{K}$.

Lemma 4.1.

Let $d\geq 3$. Assume that $N$ is large enough so that $N^{\zeta}\geq\mathrm{diam}(\mathbf{K})$.
(i) If $y^{\prime}\in\mathbb{Z}^{d}$ satisfies $\varphi(y^{\prime})\not\in B(\mathbf{x}_{0},N^{\zeta})$, then for all sufficiently small $\varepsilon>0$ we have

\sum_{t=0}^{N^{2+6\varepsilon}}\sum_{x\in\mathbb{Z}^{d}}\sum_{x^{\prime}\in\mathbf{K}+xN}p_{t}(y^{\prime},x^{\prime})\leq\frac{C}{N^{\zeta(d-2)}}.   (25)

(ii) If $g(N)\leq N^{\zeta}$, then for all sufficiently small $\varepsilon>0$ we have

\sum_{t=0}^{N^{2+6\varepsilon}}\sum_{x\in\mathbb{Z}^{d}}\sum_{x^{\prime}\in\mathbf{K}+xN}p_{t}(o,x^{\prime})\leq\frac{C}{g(N)^{d-2}}.   (26)

(iii) If $y^{\prime}\in\mathbb{Z}^{d}$ satisfies $\varphi(y^{\prime})\not\in B(\mathbf{x}_{0},N^{\zeta})$, then for all sufficiently small $\varepsilon>0$ we have

\sum_{x\in\mathbb{Z}^{d}}\sum_{\begin{subarray}{c}x^{\prime}\in\mathbf{K}+xN\\ |x^{\prime}-y^{\prime}|\leq n^{\frac{1}{2}+\varepsilon}\end{subarray}}G(y^{\prime},x^{\prime})\leq\frac{C}{N^{d-\delta-2\delta\varepsilon}}.   (27)
Proof.

(i) We split the sum according to whether $|y^{\prime}-x^{\prime}|>N^{1+4\varepsilon}$ or $|y^{\prime}-x^{\prime}|\leq N^{1+4\varepsilon}$. In the first case we use (5) and write $r=\lfloor|x^{\prime}-y^{\prime}|\rfloor$ to get

\begin{split}\sum_{t=0}^{N^{2+6\varepsilon}}\sum_{\begin{subarray}{c}x^{\prime}\in\varphi^{-1}(\mathbf{K})\\ |x^{\prime}-y^{\prime}|>N^{1+4\varepsilon}\end{subarray}}p_{t}(y^{\prime},x^{\prime})&\leq\sum_{t=0}^{N^{2+6\varepsilon}}\sum_{r=\lfloor N^{1+4\varepsilon}\rfloor}^{\infty}C\,r^{d-1}\exp\left(-\frac{c\,r^{2}}{N^{2+6\varepsilon}}\right)\\ &\leq N^{2+6\varepsilon}\sum_{r=\lfloor N^{1+4\varepsilon}\rfloor}^{\infty}C\,r^{d-1}\exp\left(-\frac{c\,r^{2}}{N^{2+6\varepsilon}}\right)\\ &\leq N^{O(1)}\exp(-cN^{2\varepsilon}).\end{split}

For the remaining terms, we have the upper bound

\sum_{t=0}^{N^{2+6\varepsilon}}\sum_{\begin{subarray}{c}x^{\prime}\in\varphi^{-1}(\mathbf{K})\\ |x^{\prime}-y^{\prime}|\leq N^{1+4\varepsilon}\end{subarray}}p_{t}(y^{\prime},x^{\prime})\leq\sum_{\begin{subarray}{c}x^{\prime}\in\varphi^{-1}(\mathbf{K})\\ |x^{\prime}-y^{\prime}|\leq N^{1+4\varepsilon}\end{subarray}}G(y^{\prime},x^{\prime}).

Let $Q(kN)$ be the cube of radius $kN$ centred at $o$; then the sets $y^{\prime}+\left(Q(kN)\backslash Q((k-1)N)\right)$, $k=1,2,\dots$, are disjoint annuli. We decompose the sum over $x^{\prime}$ according to which annulus $x^{\prime}$ falls into. For $k\geq 2$ we have

\sum_{\begin{subarray}{c}x^{\prime}\in\varphi^{-1}(\mathbf{K})\\ x^{\prime}-y^{\prime}\in Q(kN)\backslash Q((k-1)N)\end{subarray}}\frac{C_{G}}{|y^{\prime}-x^{\prime}|^{d-2}}\leq|\mathbf{K}|\,Ck^{d-1}\,C_{G}(Nk)^{2-d}\leq|\mathbf{K}|\,CkN^{2-d},

where $C_{G}$ is the Green's function constant. The contribution from any copy of $\mathbf{K}$ in $y^{\prime}+Q(N)$ will be of order $N^{2-d}$ if its distance from $y^{\prime}$ is at least $N/3$, say. Note that there is at most one copy of $\mathbf{K}$ within distance $N/3$ of $y^{\prime}$, which may have a distance as small as $N^{\zeta}$.

We have to sum over the following values of $k$:

k=1,\dots,\frac{N^{1+4\varepsilon}}{N}=N^{4\varepsilon}.

Since $x^{\prime}\in\varphi^{-1}(\mathbf{K})$ and $y^{\prime}\notin\varphi^{-1}\left(B(\mathbf{x}_{0},N^{\zeta})\right)$ for $\mathbf{x}_{0}\in\mathbf{K}$, the distance between $x^{\prime}$ and $y^{\prime}$ is at least $N^{\zeta}$. Therefore, we get the following upper bound:

\begin{split}\sum_{\begin{subarray}{c}x^{\prime}\in\varphi^{-1}(\mathbf{K})\\ |x^{\prime}-y^{\prime}|\leq N^{1+4\varepsilon}\end{subarray}}G(y^{\prime},x^{\prime})&\leq|\mathbf{K}|N^{\zeta(2-d)}+\sum_{k=1}^{N^{4\varepsilon}}|\mathbf{K}|CkN^{2-d}\\ &\leq|\mathbf{K}|N^{\zeta(2-d)}+C|\mathbf{K}|N^{2-d}\times N^{8\varepsilon}\leq C|\mathbf{K}|N^{\zeta(2-d)}.\end{split}

Here the last inequality follows from the choice of $\zeta$ in (8), for sufficiently small $\varepsilon>0$.

(ii) The proof is essentially the same, except for the contribution of the "nearest" copy of $\mathbf{K}$, which is now $C|\mathbf{K}|g(N)^{2-d}$.

(iii) The proof is very similar to that in part (i). Recall that $n=\lfloor N^{\delta}\rfloor$. This time we need to sum over $k=1,\dots,n^{\frac{1}{2}+\varepsilon}/N$, which results in the bound

C|\mathbf{K}|N^{-\zeta(d-2)}+C|\mathbf{K}|N^{2-d}\times N^{\delta+2\delta\varepsilon-2}=C|\mathbf{K}|\left[N^{-\zeta(d-2)}+N^{\delta-d+2\delta\varepsilon}\right].

Here, for $\varepsilon>0$ small enough, the second term dominates due to the choice of $\zeta$; see (8). ∎

Proof of Proposition 2.3.

Since

\begin{split}&\mathbf{P}_{y^{\prime}_{\ell-1}}\left[Y_{n}=y^{\prime}_{\ell},\,\text{$Y_{t}\not\in\varphi^{-1}(\mathbf{K})$ for $0\leq t<n$}\right]\\ &\qquad=\mathbf{P}_{y^{\prime}_{\ell-1}}\left[Y_{n}=y^{\prime}_{\ell}\right]-\mathbf{P}_{y^{\prime}_{\ell-1}}\left[Y_{n}=y^{\prime}_{\ell},\,\text{$Y_{t}\in\varphi^{-1}(\mathbf{K})$ for some $0\leq t<n$}\right],\end{split}

we need to show that

\mathbf{P}_{y^{\prime}_{\ell-1}}\left[Y_{n}=y^{\prime}_{\ell},\,\text{$Y_{t}\in\varphi^{-1}(\mathbf{K})$ for some $0\leq t<n$}\right]=O\left(\frac{n}{N^{d}}\right)\mathbf{P}_{y^{\prime}_{\ell-1}}\left[Y_{n}=y^{\prime}_{\ell}\right].

Define $A(x)=\left\{Y_{n}=y^{\prime}_{\ell},\ \text{$Y_{t}\in xN+\mathbf{K}$ for some $0\leq t<n$}\right\}$, so that

\mathbf{P}_{y^{\prime}_{\ell-1}}\left[Y_{n}=y^{\prime}_{\ell},\,\text{$Y_{t}\in\varphi^{-1}(\mathbf{K})$ for some $0\leq t<n$}\right]\leq\sum_{x\in\mathbb{Z}^{d}}\mathbf{P}_{y^{\prime}_{\ell-1}}[A(x)].   (28)

We have

\mathbf{P}_{y^{\prime}_{\ell-1}}[A(x)]\leq\sum_{n_{1}+n_{2}=n}\sum_{x^{\prime}\in\mathbf{K}+xN}p_{n_{1}}(y^{\prime}_{\ell-1},x^{\prime})p_{n_{2}}(x^{\prime},y^{\prime}_{\ell}).   (29)

We bound this by splitting up the sum into different contributions. Let $\varepsilon>0$ be a parameter that will be chosen sufficiently small in the course of the proof.

Case 1. $n_{1},n_{2}\geq N^{2+6\varepsilon}$ and $|y^{\prime}_{\ell-1}-x^{\prime}|\leq n_{1}^{\frac{1}{2}+\varepsilon}$, $|x^{\prime}-y^{\prime}_{\ell}|\leq n_{2}^{\frac{1}{2}+\varepsilon}$. By the LCLT we have that

\begin{split}p_{n_{1}}(y^{\prime}_{\ell-1},x^{\prime})&\leq Cp_{n_{1}}(y^{\prime}_{\ell-1},u^{\prime})\quad\text{for any $u^{\prime}\in\mathbb{T}_{N}+xN$},\\ p_{n_{2}}(x^{\prime},y^{\prime}_{\ell})&\leq Cp_{n_{2}}(u^{\prime},y^{\prime}_{\ell})\quad\text{for any $u^{\prime}\in\mathbb{T}_{N}+xN$}.\end{split}   (30)

For this note that we have

\begin{split}\left|\frac{d|y^{\prime}_{\ell-1}-x^{\prime}|^{2}}{n_{1}}-\frac{d|y^{\prime}_{\ell-1}-u^{\prime}|^{2}}{n_{1}}\right|&\leq\frac{d|x^{\prime}-u^{\prime}|^{2}}{n_{1}}+\frac{2d|\langle x^{\prime}-u^{\prime},y^{\prime}_{\ell-1}-x^{\prime}\rangle|}{n_{1}}\\ &\leq C\frac{N^{2}}{n_{1}}+\frac{CN\cdot n_{1}^{\frac{1}{2}+\varepsilon}}{n_{1}},\end{split}

where the first term tends to $0$ and the second term equals

CNn_{1}^{-\frac{1}{2}+\varepsilon}\leq CN\cdot N^{(2+6\varepsilon)(-\frac{1}{2}+\varepsilon)}=CN^{-\varepsilon+6\varepsilon^{2}}\to 0,\quad\text{as $N\to\infty$.}

Here we choose $\varepsilon$ so that $-\varepsilon+6\varepsilon^{2}<0$. A similar observation shows the estimate for $p_{n_{2}}(x^{\prime},y^{\prime}_{\ell})$.

The way we are going to use (30) is to replace the summation over $x^{\prime}$ by a summation over all $u^{\prime}\in\mathbb{T}_{N}+xN$, and at the same time insert a factor $|\mathbf{K}|/N^{d}$. Hence the contribution of the values of $n_{1},n_{2}$ and $x$ in Case 1 to the right-hand side of (28) is at most

\begin{split}\frac{C|\mathbf{K}|}{N^{d}}\sum_{n_{1}+n_{2}=n}\sum_{u^{\prime}\in\mathbb{Z}^{d}}p_{n_{1}}(y^{\prime}_{\ell-1},u^{\prime})p_{n_{2}}(u^{\prime},y^{\prime}_{\ell})&=\frac{C|\mathbf{K}|}{N^{d}}\sum_{n_{1}+n_{2}=n}p_{n}(y^{\prime}_{\ell-1},y^{\prime}_{\ell})\\ &\leq\frac{C|\mathbf{K}|n}{N^{d}}p_{n}(y^{\prime}_{\ell-1},y^{\prime}_{\ell}).\end{split}

This completes the bound in Case 1. For future use, note that if $\varepsilon_{n}\to 0$ is any sequence, and we add the restriction $n_{1}\leq\varepsilon_{n}n$ to the conditions in Case 1, we obtain the upper bound

C|\mathbf{K}|\frac{\varepsilon_{n}n}{N^{d}}p_{n}(y^{\prime}_{\ell-1},y^{\prime}_{\ell})=o(1)\frac{n}{N^{d}}p_{n}(y^{\prime}_{\ell-1},y^{\prime}_{\ell}).   (31)

Case 2a. $n_{1},n_{2}\geq N^{2+6\varepsilon}$ but $|x^{\prime}-y^{\prime}_{\ell-1}|>n_{1}^{\frac{1}{2}+\varepsilon}$. In this case we bound $p_{n_{2}}(x^{\prime},y^{\prime}_{\ell})\leq 1$ and have that the contribution of this case to the right-hand side of (28) is at most

\begin{split}\sum_{\begin{subarray}{c}n_{1}+n_{2}=n\\ n_{1},n_{2}\geq N^{2+6\varepsilon}\end{subarray}}\mathbf{P}_{y^{\prime}_{\ell-1}}[|Y_{n_{1}}-y^{\prime}_{\ell-1}|>n_{1}^{1/2+\varepsilon}]&\leq\sum_{\begin{subarray}{c}n_{1}+n_{2}=n\\ n_{1},n_{2}\geq N^{2+6\varepsilon}\end{subarray}}C\exp(-cn_{1}^{2\varepsilon})\\ &\leq Cn\exp(-cN^{4\varepsilon})=o\left(\frac{n}{N^{d}}\right)p_{n}(y^{\prime}_{\ell-1},y^{\prime}_{\ell}),\end{split}

where in the first step we used (3) and in the last step we used the Gaussian lower bound (5) for $p_{n}$. Indeed, the requirement for the Gaussian lower bound is satisfied for sufficiently large $N$ because $|y^{\prime}_{\ell}-y^{\prime}_{\ell-1}|\leq C_{1}\sqrt{\log n}\sqrt{n}\leq c\,n. Therefore, we have

p_{n}(y^{\prime}_{\ell-1},y^{\prime}_{\ell})\geq\frac{c}{n^{d/2}}\exp\left(-\frac{C|y^{\prime}_{\ell}-y^{\prime}_{\ell-1}|^{2}}{n}\right)\geq\frac{c}{n^{d/2}}\exp\left(-C\,\log n\right).   (32)

Then we have

\frac{Cn\exp(-cN^{4\varepsilon})}{cn^{-d/2}\exp\left(-C\,\log n\right)}\leq Cn^{1+d/2}\exp\left(-cN^{4\varepsilon}+C\,\log n\right)=o\left(\frac{n}{N^{d}}\right),\quad\text{as $N\to\infty$.}

Case 2b. $n_{1},n_{2}\geq N^{2+6\varepsilon}$ but $|y^{\prime}_{\ell}-x^{\prime}|>n_{2}^{1/2+\varepsilon}$. This case can be handled very similarly to Case 2a.

Case 3a. $n_{1}<N^{2+6\varepsilon}$ and $|x^{\prime}-y^{\prime}_{\ell-1}|\leq N^{\frac{\delta}{2}-\varepsilon}$. By the LCLT we have

\begin{split}p_{n_{2}}(x^{\prime},y^{\prime}_{\ell})&=\frac{C}{n_{2}^{d/2}}\exp\left(-\frac{d|y^{\prime}_{\ell}-x^{\prime}|^{2}}{n_{2}}\right)(1+o(1)),\\ p_{n}(y^{\prime}_{\ell-1},y^{\prime}_{\ell})&=\frac{C}{n^{d/2}}\exp\left(-\frac{d|y^{\prime}_{\ell}-y^{\prime}_{\ell-1}|^{2}}{n}\right)(1+o(1)).\end{split}

We claim that

p_{n_{2}}(x^{\prime},y^{\prime}_{\ell})\leq C\,p_{n}(y^{\prime}_{\ell-1},y^{\prime}_{\ell}).   (33)

We first note that $n_{2}=n-n_{1}=n(1+o(1))$, from which we deduce that $n_{2}^{-d/2}=O(n^{-d/2})$ and

\exp\left(-\frac{d|y^{\prime}_{\ell}-y^{\prime}_{\ell-1}|^{2}}{n}\right)\geq\exp\left(-\frac{d|y^{\prime}_{\ell}-y^{\prime}_{\ell-1}|^{2}}{n_{2}}\right).

Since we have $|x^{\prime}-y^{\prime}_{\ell-1}|\leq N^{\frac{\delta}{2}-\varepsilon}$, we have, as $N\to\infty$,

\frac{|x^{\prime}-y^{\prime}_{\ell-1}|^{2}}{n_{2}}\leq\frac{N^{\delta-2\varepsilon}}{n_{2}}\to 0

and

\frac{|y^{\prime}_{\ell}-y^{\prime}_{\ell-1}|\,|x^{\prime}-y^{\prime}_{\ell-1}|}{n_{2}}\leq\frac{n^{\frac{1}{2}}C_{1}\sqrt{\log n}\,N^{\frac{\delta}{2}-\varepsilon}}{n_{2}}\to 0.

These imply that

\begin{split}\left|\frac{|y^{\prime}_{\ell}-y^{\prime}_{\ell-1}|^{2}-|y^{\prime}_{\ell}-x^{\prime}|^{2}}{n_{2}}\right|&=\left|\frac{|y^{\prime}_{\ell}-y^{\prime}_{\ell-1}|^{2}-|(y^{\prime}_{\ell}-y^{\prime}_{\ell-1})+(y^{\prime}_{\ell-1}-x^{\prime})|^{2}}{n_{2}}\right|\\ &\leq\frac{|x^{\prime}-y^{\prime}_{\ell-1}|^{2}}{n_{2}}+\frac{2|y^{\prime}_{\ell}-y^{\prime}_{\ell-1}|\,|x^{\prime}-y^{\prime}_{\ell-1}|}{n_{2}}\to 0.\end{split}

Thus (33) follows from comparing the LCLT approximations of the two sides.

We now have that the contribution of this case to the right-hand side of (28) is at most

\begin{split}Cp_{n}(y^{\prime}_{\ell-1},y^{\prime}_{\ell})\sum_{n_{1}<N^{2+6\varepsilon}}\sum_{x\in\mathbb{Z}^{d}}\sum_{x^{\prime}\in\mathbf{K}+xN}p_{n_{1}}(y^{\prime}_{\ell-1},x^{\prime})&\leq\frac{C}{N^{\zeta(d-2)}}p_{n}(y^{\prime}_{\ell-1},y^{\prime}_{\ell})\\ &\leq o(1)\frac{n}{N^{d}}p_{n}(y^{\prime}_{\ell-1},y^{\prime}_{\ell}),\end{split}

where in the first step we used Lemma 4.1(i) and the last step holds for the value of $\zeta$ we chose; cf. (8).

Case 3b. $n_{1}<N^{2+6\varepsilon}$ but $|x^{\prime}-y^{\prime}_{\ell-1}|>N^{\frac{\delta}{2}-\varepsilon}$. Use the Gaussian upper bound (5) to bound $p_{n_{1}}$ and bound the sum over all $x^{\prime}\in\mathbb{Z}^{d}$ of $p_{n_{2}}$ by $1$ using the symmetry of $p_{n_{2}}$ to get

\begin{split}&\sum_{\begin{subarray}{c}n_{1}+n_{2}=n\\ n_{1}<N^{2+6\varepsilon}\end{subarray}}\sum_{x\in\mathbb{Z}^{d}}\sum_{x^{\prime}\in\mathbf{K}+xN}p_{n_{1}}(y^{\prime}_{\ell-1},x^{\prime})p_{n_{2}}(x^{\prime},y^{\prime}_{\ell})\\ &\qquad\leq\sum_{n_{1}<N^{2+6\varepsilon}}\frac{C}{n_{1}^{d/2}}\exp\left(-\frac{N^{\delta-2\varepsilon}}{N^{2+6\varepsilon}}\right)\sum_{x^{\prime}\in\mathbb{Z}^{d}}p_{n-n_{1}}(x^{\prime},y^{\prime}_{\ell})\\ &\qquad\leq\sum_{n_{1}<N^{2+6\varepsilon}}\frac{C}{n_{1}^{d/2}}\exp\left(-\frac{N^{\delta-2\varepsilon}}{N^{2+6\varepsilon}}\right)\\ &\qquad\leq CN^{O(1)}\exp(-N^{\delta-2-8\varepsilon})=o\left(\frac{n}{N^{d}}\right)p_{n}(y^{\prime}_{\ell-1},y^{\prime}_{\ell}),\quad\text{as $N\to\infty$.}\end{split}

In the last step we used a Gaussian lower bound for $p_{n}$; cf. (32).

Case 4a. $n_{2}<N^{2+6\varepsilon}$ and $|y^{\prime}_{\ell}-x^{\prime}|\leq N^{\frac{\delta}{2}-\varepsilon}$. This case can be handled very similarly to Case 3a.

Case 4b. $n_{2}<N^{2+6\varepsilon}$ and $|y^{\prime}_{\ell}-x^{\prime}|>N^{\frac{\delta}{2}-\varepsilon}$. This case can be handled very similarly to Case 3b.

Therefore, we have discussed all possible cases and proved statement (14) of the proposition, as required.

The proof of (15) is similar to the first part with only a few modifications. In this part we have to show that

\mathbf{P}_{o}\left[Y_{n}=y^{\prime}_{1},\,\text{$Y_{t}\in\varphi^{-1}(\mathbf{K})$ for some $0\leq t<n$}\right]=o(1)\,\mathbf{P}_{o}\left[Y_{n}=y^{\prime}_{1}\right].

Define $A_{0}(x)=\left\{Y_{n}=y^{\prime}_{1},\ \text{$Y_{t}\in xN+\mathbf{K}$ for some $0\leq t<n$}\right\}$, so that

\mathbf{P}_{o}\left[Y_{n}=y^{\prime}_{1},\,\text{$Y_{t}\in\varphi^{-1}(\mathbf{K})$ for some $0\leq t<n$}\right]\leq\sum_{x\in\mathbb{Z}^{d}}\mathbf{P}_{o}[A_{0}(x)].   (34)

We have

\mathbf{P}_{o}[A_{0}(x)]\leq\sum_{n_{1}+n_{2}=n}\sum_{x^{\prime}\in\mathbf{K}+xN}p_{n_{1}}(o,x^{\prime})p_{n_{2}}(x^{\prime},y^{\prime}_{1}).   (35)

We bound the term above by splitting up the sum into the same cases as in the proof of (14). The different cases can be handled very similarly to the first part. The only difference is in Case 3a, when applying the Green's function bound of Lemma 4.1.

In Case 3a, by the LCLT, we can deduce that

pn2(x,y1)Cpn(o,y1).p_{n_{2}}(x^{\prime},y^{\prime}_{1})\leq C\,p_{n}(o,y^{\prime}_{1}).

If g(N)>Nζg(N)>N^{\zeta}, the bound of Lemma 4.1(i) can be used as before. If g(N)Nζg(N)\leq N^{\zeta}, by Lemma 4.1(ii), we have that the contribution of this case to the right-hand side of (34) is at most

Cpn(o,y1)n1<N2+6εxdx𝐊+xNpn1(o,x)Cg(N)d2pn(o,y1)=o(1)pn(o,y1).\begin{split}Cp_{n}(o,y^{\prime}_{1})\sum_{\begin{subarray}{c}n_{1}<N^{2+6\varepsilon}\end{subarray}}\sum_{x\in\mathbb{Z}^{d}}\sum_{x^{\prime}\in\mathbf{K}+xN}p_{n_{1}}(o,x^{\prime})\leq\frac{C}{g(N)^{d-2}}p_{n}(o,y^{\prime}_{1})=o(1)p_{n}(o,y^{\prime}_{1}).\end{split}

Here we used that g(N)g(N)\to\infty.

Note that Case 4a can be handled in the same way as in the proof of (14), since the distance between y1y^{\prime}_{1} and any copy of 𝐊\mathbf{K} is at least NζN^{\zeta}.

This covers all possible cases and proves (15), as required. ∎

For future use, we extract a few corollaries of the proof of Proposition 2.3.

Corollary 4.1.

Assume that y,y′′dy^{\prime},y^{\prime\prime}\in\mathbb{Z}^{d} are points such that |y′′y|2C1lognn|y^{\prime\prime}-y^{\prime}|\leq 2C_{1}\sqrt{\log n}\sqrt{n}. Then for all n/2mnn/2\leq m\leq n we have

n1+n2=mxdx𝐊+xN|yx|,|y′′x|>Nζpn1(y,x)pn2(x,y′′)=O(nNd)pm(y,y′′).\sum_{n_{1}+n_{2}=m}\sum_{x\in\mathbb{Z}^{d}}\sum_{\begin{subarray}{c}x^{\prime}\in\mathbf{K}+xN\\ |y^{\prime}-x^{\prime}|,|y^{\prime\prime}-x^{\prime}|>N^{\zeta}\end{subarray}}p_{n_{1}}(y^{\prime},x^{\prime})p_{n_{2}}(x^{\prime},y^{\prime\prime})=O\left(\frac{n}{N^{d}}\right)p_{m}(y^{\prime},y^{\prime\prime}). (36)
Proof.

In the course of the proof of Proposition 2.3, we established the above with m=nm=n, where y=y1y^{\prime}=y^{\prime}_{\ell-1} and y′′=yy^{\prime\prime}=y^{\prime}_{\ell}, and with C1C_{1} in place of 2C12C_{1} in the upper bound on the displacement |y′′y||y^{\prime\prime}-y^{\prime}|. Note that in this case the restriction |yx|,|y′′x|>Nζ|y^{\prime}-x^{\prime}|,|y^{\prime\prime}-x^{\prime}|>N^{\zeta} in the summation holds for all xx, due to conditions imposed on y1y^{\prime}_{\ell-1} and yy^{\prime}_{\ell} in the definition of 𝒢ζ,C1\mathcal{G}_{\zeta,C_{1}}.

The arguments when n/2m<nn/2\leq m<n and with the upper bound increased by a factor of 22 are essentially the same. The information that y1y^{\prime}_{\ell-1} and yy^{\prime}_{\ell} are at least distance NζN^{\zeta} from φ1(𝐊)\varphi^{-1}(\mathbf{K}) was only used in Cases 3a and 4a to handle terms xx^{\prime} close to these points. Since in (36) we exclude such xx^{\prime} from the summation, the statement follows without restricting the location of y,y′′y^{\prime},y^{\prime\prime}. ∎

The following is merely a restatement of what was observed in (31) (with part (ii) below holding by symmetry).

Corollary 4.2.

(i) For 2\ell\geq 2 and any sequence εn0\varepsilon_{n}\to 0, we have

n1+n2=nn1εnnn1,n2N2+6εxdx𝐊+xN:|y1x|n112+ε|xy|n212+εpn1(y1,x)pn2(x,y)=o(1)nNdpn(y1,y).\begin{split}\sum_{\begin{subarray}{c}n_{1}+n_{2}=n\\ n_{1}\leq\varepsilon_{n}n\\ n_{1},n_{2}\geq N^{2+6\varepsilon}\end{subarray}}\ \sum_{x\in\mathbb{Z}^{d}}\ \sum_{\begin{subarray}{c}x^{\prime}\in\mathbf{K}+xN:\\ |y^{\prime}_{\ell-1}-x^{\prime}|\leq n_{1}^{\frac{1}{2}+\varepsilon}\\ |x^{\prime}-y^{\prime}_{\ell}|\leq n_{2}^{\frac{1}{2}+\varepsilon}\end{subarray}}p_{n_{1}}(y^{\prime}_{\ell-1},x^{\prime})p_{n_{2}}(x^{\prime},y^{\prime}_{\ell})=o(1)\frac{n}{N^{d}}p_{n}(y^{\prime}_{\ell-1},y^{\prime}_{\ell}).\end{split} (37)

(ii) The same right-hand side expression is valid if we replace the restriction n1εnnn_{1}\leq\varepsilon_{n}n by n2εnnn_{2}\leq\varepsilon_{n}n.

The following is a restatement of the bounds of Cases 2a and 2b.

Corollary 4.3.

For 2\ell\geq 2 we have
(i)

n1+n2=nn1,n2N2+6εxdx𝐊+xN:|y1x|>n112+εpn1(y1,x)pn2(x,y)=o(1)nNdpn(y1,y).\begin{split}\sum_{\begin{subarray}{c}n_{1}+n_{2}=n\\ n_{1},n_{2}\geq N^{2+6\varepsilon}\end{subarray}}\ \sum_{x\in\mathbb{Z}^{d}}\ \sum_{\begin{subarray}{c}x^{\prime}\in\mathbf{K}+xN:\\ |y^{\prime}_{\ell-1}-x^{\prime}|>n_{1}^{\frac{1}{2}+\varepsilon}\end{subarray}}p_{n_{1}}(y^{\prime}_{\ell-1},x^{\prime})p_{n_{2}}(x^{\prime},y^{\prime}_{\ell})=o(1)\frac{n}{N^{d}}p_{n}(y^{\prime}_{\ell-1},y^{\prime}_{\ell}).\end{split} (38)

(ii)

n1+n2=nn1,n2N2+6εxdx𝐊+xN:|xy|>n212+εpn1(y1,x)pn2(x,y)=o(1)nNdpn(y1,y).\begin{split}\sum_{\begin{subarray}{c}n_{1}+n_{2}=n\\ n_{1},n_{2}\geq N^{2+6\varepsilon}\end{subarray}}\ \sum_{x\in\mathbb{Z}^{d}}\ \sum_{\begin{subarray}{c}x^{\prime}\in\mathbf{K}+xN:\\ |x^{\prime}-y^{\prime}_{\ell}|>n_{2}^{\frac{1}{2}+\varepsilon}\end{subarray}}p_{n_{1}}(y^{\prime}_{\ell-1},x^{\prime})p_{n_{2}}(x^{\prime},y^{\prime}_{\ell})=o(1)\frac{n}{N^{d}}p_{n}(y^{\prime}_{\ell-1},y^{\prime}_{\ell}).\end{split} (39)

The following is a restatement of the bounds of Cases 3a and 3b combined.

Corollary 4.4.

For 2\ell\geq 2 we have

n1+n2=nn1<N2+6εxdx𝐊+xNpn1(y1,x)pn2(x,y)=o(1)nNdpn(y1,y).\begin{split}\sum_{\begin{subarray}{c}n_{1}+n_{2}=n\\ n_{1}<N^{2+6\varepsilon}\end{subarray}}\ \sum_{x\in\mathbb{Z}^{d}}\ \sum_{\begin{subarray}{c}x^{\prime}\in\mathbf{K}+xN\end{subarray}}p_{n_{1}}(y^{\prime}_{\ell-1},x^{\prime})p_{n_{2}}(x^{\prime},y^{\prime}_{\ell})=o(1)\frac{n}{N^{d}}p_{n}(y^{\prime}_{\ell-1},y^{\prime}_{\ell}).\end{split} (40)

4.2 Proof of Proposition 2.2

In this section we need C1C_{1} large enough so that we have

edf(n)2Ndn1+3d/20.e^{-df(n)^{2}}N^{d}n^{1+3d/2}\to 0. (41)

We have

𝐏y1[Yn=y,Ytφ1(𝐊) for some 0t<n]=𝐏y1[xdA(x)],\mathbf{P}_{y^{\prime}_{\ell-1}}[Y_{n}=y^{\prime}_{\ell},\,\text{$Y_{t}\in\varphi^{-1}(\mathbf{K})$ for some $0\leq t<n$}]=\mathbf{P}_{y^{\prime}_{\ell-1}}\left[\cup_{x\in\mathbb{Z}^{d}}A(x)\right],

where

A(x)={Yn=y,YtxN+𝐊 for some 0t<n}.A(x)=\left\{Y_{n}=y^{\prime}_{\ell},\text{$Y_{t}\in xN+\mathbf{K}$ for some $0\leq t<n$}\right\}.

The strategy is to estimate the probability via the Bonferroni inequalities:

x𝐏y1[A(x)]x1x2𝐏y1[A(x1)A(x2)]𝐏y1[xdA(x)]x𝐏y1[A(x)].\begin{split}\sum_{x}\mathbf{P}_{y^{\prime}_{\ell-1}}[A(x)]-\sum_{x_{1}\not=x_{2}}\mathbf{P}_{y^{\prime}_{\ell-1}}[A(x_{1})\cap A(x_{2})]&\leq\mathbf{P}_{y^{\prime}_{\ell-1}}\left[\cup_{x\in\mathbb{Z}^{d}}A(x)\right]\\ &\leq\sum_{x}\mathbf{P}_{y^{\prime}_{\ell-1}}[A(x)].\end{split} (42)

We are going to use a parameter AnA_{n} that we choose as An=10loglognA_{n}=10\log\log n so that in particular AnA_{n}\to\infty.

4.2.1 The main contribution

In this section, we consider only stretches with |yy1|Ann|y^{\prime}_{\ell}-y^{\prime}_{\ell-1}|\leq A_{n}\sqrt{n}. We will show that the main contribution in (42) comes from xx in the set:

G={xd:|y1xN|An2n,|xNy|An2n}.G=\left\{x\in\mathbb{Z}^{d}:|y^{\prime}_{\ell-1}-xN|\leq A_{n}^{2}\sqrt{n},\,|xN-y^{\prime}_{\ell}|\leq A_{n}^{2}\sqrt{n}\right\}.

We first examine 𝐏y1[A(x)]\mathbf{P}_{y^{\prime}_{\ell-1}}[A(x)] for xGx\in G. Putting B0,x=B(𝐱0+xN,Nζ)B_{0,x}=B(\mathbf{x}_{0}+xN,N^{\zeta}), let n1n_{1} be the time of the last visit to B0,x\partial B_{0,x} before hitting 𝐊+xN\mathbf{K}+xN, let n1+n2n_{1}+n_{2} be the time of the first hit of 𝐊+xN\mathbf{K}+xN, and let n3=nn1n2n_{3}=n-n_{1}-n_{2}. See Figure 2 for an illustration of this decomposition.

Figure 2: The decomposition of a path hitting a copy of $\mathbf{K}$ into three subpaths (not to scale). The figure marks the ball $B_{0,x}$, the set $\mathbf{K}+xN$, the points $x^{\prime}$ and $z^{\prime}$, and the endpoints $y_{\ell-1}^{\prime}$ and $y_{\ell}^{\prime}$.

Then we can write:

𝐏y1[A(x)]=n1+n2+n3=nzB0,xx𝐊+xNp~n1(x)(y1,z)×𝐏z[H𝐊+xN=n2<ξB0,x,YH𝐊+xN=x]pn3(x,y),\begin{split}\mathbf{P}_{y^{\prime}_{\ell-1}}[A(x)]&=\sum_{n_{1}+n_{2}+n_{3}=n}\sum_{z^{\prime}\in\partial B_{0,x}}\sum_{x^{\prime}\in\mathbf{K}+xN}\widetilde{p}^{(x)}_{n_{1}}(y^{\prime}_{\ell-1},z^{\prime})\\ &\qquad\quad\times\mathbf{P}_{z^{\prime}}[H_{\mathbf{K}+xN}=n_{2}<\xi_{B_{0,x}},\,Y_{H_{\mathbf{K}+xN}}=x^{\prime}]\,p_{n_{3}}(x^{\prime},y^{\prime}_{\ell}),\end{split} (43)

where

p~n1(x)(y1,z)=𝐏y1[Yn1=z,Yt𝐊+xN for 0tn1].\widetilde{p}^{(x)}_{n_{1}}(y^{\prime}_{\ell-1},z^{\prime})=\mathbf{P}_{y^{\prime}_{\ell-1}}[Y_{n_{1}}=z^{\prime},\,\text{$Y_{t}\not\in\mathbf{K}+xN$ for $0\leq t\leq n_{1}$}].

We are going to use another parameter εn\varepsilon_{n} that will need to go to 0 slowly. We choose it as εn=(10loglogn)10\varepsilon_{n}=(10\log\log n)^{-1}\to 0. The main contribution to (43) will be when n1εnnn_{1}\geq\varepsilon_{n}n, n3εnnn_{3}\geq\varepsilon_{n}n and n2N2δ/dn2/dn_{2}\leq N^{2\delta/d}\sim n^{2/d}. Therefore, we split the sum over n1,n2,n3n_{1},n_{2},n_{3} in (43) into a main contribution I(x)I(x) and an error term II(x)II(x). In order to define these, let

F(n1,n2,n3,x,y1,y)=zB0,xx𝐊+xNp~n1(x)(y1,z)𝐏z[H𝐊+xN=n2<ξB0,x,YH𝐊+xN=x]pn3(x,y).\begin{split}&F(n_{1},n_{2},n_{3},x,y^{\prime}_{\ell-1},y^{\prime}_{\ell})\\ &\quad=\sum_{z^{\prime}\in\partial B_{0,x}}\sum_{x^{\prime}\in\mathbf{K}+xN}\widetilde{p}^{(x)}_{n_{1}}(y^{\prime}_{\ell-1},z^{\prime})\mathbf{P}_{z^{\prime}}[H_{\mathbf{K}+xN}=n_{2}<\xi_{B_{0,x}},\,Y_{H_{\mathbf{K}+xN}}=x^{\prime}]p_{n_{3}}(x^{\prime},y^{\prime}_{\ell}).\end{split} (44)

Then with

I(x):=n1+n2+n3=nn1,n3εnn,n2N2δ/dF(n1,n2,n3,x,y1,y)II(x):=n1+n2+n3=nn1<εnn or n3<εnnor n2>N2δ/dF(n1,n2,n3,x,y1,y)\begin{split}I(x)&:=\sum_{\begin{subarray}{c}n_{1}+n_{2}+n_{3}=n\\ n_{1},n_{3}\geq\varepsilon_{n}n,\,n_{2}\leq N^{2\delta/d}\end{subarray}}F(n_{1},n_{2},n_{3},x,y^{\prime}_{\ell-1},y^{\prime}_{\ell})\\ II(x)&:=\sum_{\begin{subarray}{c}n_{1}+n_{2}+n_{3}=n\\ \text{$n_{1}<\varepsilon_{n}n$ or $n_{3}<\varepsilon_{n}n$}\\ \text{or $n_{2}>N^{2\delta/d}$}\end{subarray}}F(n_{1},n_{2},n_{3},x,y^{\prime}_{\ell-1},y^{\prime}_{\ell})\end{split} (45)

we have

𝐏y1[A(x)]=I(x)+II(x).\begin{split}\mathbf{P}_{y^{\prime}_{\ell-1}}[A(x)]=I(x)+II(x).\end{split}
Lemma 4.2.

When xGx\in G and n3εnnn_{3}\geq\varepsilon_{n}n, we have

pn3(x,y)=(1+o(1))pn3(u,y)for all x𝐊+xN and all u𝕋N+xN,p_{n_{3}}(x^{\prime},y^{\prime}_{\ell})=(1+o(1))p_{n_{3}}(u^{\prime},y^{\prime}_{\ell})\quad\text{for all $x^{\prime}\in\mathbf{K}+xN$ and all $u^{\prime}\in\mathbb{T}_{N}+xN$},

with the o(1)o(1) term uniform in xx^{\prime} and uu^{\prime}.

Proof.

By the LCLT, we have

pn3(x,y)=Cn3d/2exp(d|yx|2n3)(1+o(1)),pn3(u,y)=Cn3d/2exp(d|yu|2n3)(1+o(1)).\begin{split}p_{n_{3}}(x^{\prime},y^{\prime}_{\ell})&=\frac{C}{n_{3}^{d/2}}\exp\left(-\frac{d|y^{\prime}_{\ell}-x^{\prime}|^{2}}{n_{3}}\right)(1+o(1)),\\ p_{n_{3}}(u^{\prime},y^{\prime}_{\ell})&=\frac{C}{n_{3}^{d/2}}\exp\left(-\frac{d|y^{\prime}_{\ell}-u^{\prime}|^{2}}{n_{3}}\right)(1+o(1)).\end{split}

We compare the exponents

|d|yx|2n3d|yu|2n3|d|xu|2n3+2d|xu,yx|n3CN2n3+CNAn2nn30,\begin{split}\left|\frac{d|y^{\prime}_{\ell}-x^{\prime}|^{2}}{n_{3}}-\frac{d|y^{\prime}_{\ell}-u^{\prime}|^{2}}{n_{3}}\right|&\leq\frac{d|x^{\prime}-u^{\prime}|^{2}}{n_{3}}+\frac{2d|\langle x^{\prime}-u^{\prime},y^{\prime}_{\ell}-x^{\prime}\rangle|}{n_{3}}\\ &\leq C\frac{N^{2}}{n_{3}}+\frac{CN\cdot A_{n}^{2}\sqrt{n}}{n_{3}}\to 0,\end{split}

as NN\to\infty. ∎

Lemma 4.3.

When xGx\in G and n1εnnn_{1}\geq\varepsilon_{n}n, we have

p~n1(x)(y1,z)=(1+o(1))pn1(y1,u)for all zB0,x and all u𝕋N+xN,\widetilde{p}^{(x)}_{n_{1}}(y^{\prime}_{\ell-1},z^{\prime})=(1+o(1))p_{n_{1}}(y^{\prime}_{\ell-1},u^{\prime})\quad\text{for all $z^{\prime}\in\partial B_{0,x}$ and all $u^{\prime}\in\mathbb{T}_{N}+xN$},

with the o(1)o(1) term uniform in zz^{\prime} and uu^{\prime}.

Proof.

The statement will follow if we show the following claim:

𝐏y1[Yn1=z,Yt𝐊+xN for some 0tn1]=o(1)pn1(y1,z).\mathbf{P}_{y^{\prime}_{\ell-1}}[Y_{n_{1}}=z^{\prime},\,\text{$Y_{t}\in\mathbf{K}+xN$ for some $0\leq t\leq n_{1}$}]=o(1)p_{n_{1}}(y^{\prime}_{\ell-1},z^{\prime}).

For this, observe that by (5) we have

pn1(y1,z)cn1d/2exp(C|zy1|2n1)cnd/2exp(cAn2n+Nζεnn)cnd/2exp(C(loglogn)O(1))=nd/2+o(1).\begin{split}p_{n_{1}}(y^{\prime}_{\ell-1},z^{\prime})&\geq\frac{c}{n_{1}^{d/2}}\exp\left(-C\frac{|z^{\prime}-y^{\prime}_{\ell-1}|^{2}}{n_{1}}\right)\\ &\geq\frac{c}{n^{d/2}}\exp\left(-c\frac{A_{n}^{2}n+N^{\zeta}}{\varepsilon_{n}n}\right)\\ &\geq\frac{c}{n^{d/2}}\exp\left(-C(\log\log n)^{O(1)}\right)\\ &=n^{-d/2+o(1)}.\end{split} (46)

On the other hand, using the Markov property, (5), and the fact that for x𝐊+xNx^{\prime}\in\mathbf{K}+xN we have |y1x|cNζ|y^{\prime}_{\ell-1}-x^{\prime}|\geq cN^{\zeta} and |xz|cNζ|x^{\prime}-z^{\prime}|\geq cN^{\zeta}, we get

𝐏y1[Yn1=z,Yt𝐊+xN for some 0tn1]1mn11x𝐊+xNpm(y1,x)pn1m(x,z)C1mn111md/21(n1m)d/2exp(cN2ζm)exp(cN2ζn1m)C1mn1/21md/21(n1m)d/2exp(cN2ζm)exp(cN2ζn1m).\begin{split}&\mathbf{P}_{y^{\prime}_{\ell-1}}[Y_{n_{1}}=z^{\prime},\,\text{$Y_{t}\in\mathbf{K}+xN$ for some $0\leq t\leq n_{1}$}]\\ &\qquad\leq\sum_{1\leq m\leq n_{1}-1}\sum_{x^{\prime}\in\mathbf{K}+xN}p_{m}(y^{\prime}_{\ell-1},x^{\prime})\,p_{n_{1}-m}(x^{\prime},z^{\prime})\\ &\qquad\leq C\sum_{1\leq m\leq n_{1}-1}\frac{1}{m^{d/2}}\frac{1}{(n_{1}-m)^{d/2}}\exp\left(-c\frac{N^{2\zeta}}{m}\right)\exp\left(-c\frac{N^{2\zeta}}{n_{1}-m}\right)\\ &\qquad\leq C\sum_{1\leq m\leq n_{1}/2}\frac{1}{m^{d/2}}\frac{1}{(n_{1}-m)^{d/2}}\exp\left(-c\frac{N^{2\zeta}}{m}\right)\exp\left(-c\frac{N^{2\zeta}}{n_{1}-m}\right).\end{split} (47)

We note here that the sum over 1mn1/21\leq m\leq n_{1}/2 and the sum over n1/2mn11n_{1}/2\leq m\leq n_{1}-1 are symmetric. Bounding the sum over 1mn1/21\leq m\leq n_{1}/2 gives

Cn1d/21mn1/21md/2exp(cN2ζm)=Cn1d/2[1mN2ζ1md/2exp(cN2ζm)+N2ζ<mn1/21md/2exp(cN2ζm)].\begin{split}&\frac{C}{n_{1}^{d/2}}\sum_{1\leq m\leq n_{1}/2}\frac{1}{m^{d/2}}\exp\left(-c\frac{N^{2\zeta}}{m}\right)\\ &\quad=\frac{C}{n_{1}^{d/2}}\left[\sum_{1\leq m\leq N^{2\zeta}}\frac{1}{m^{d/2}}\exp\left(-c\frac{N^{2\zeta}}{m}\right)+\sum_{N^{2\zeta}<m\leq n_{1}/2}\frac{1}{m^{d/2}}\exp\left(-c\frac{N^{2\zeta}}{m}\right)\right].\end{split} (48)

In the second sum we can bound the exponential by 11, and get the upper bound

Cn1d/2Nζ(2d)=o(nd/2+o(1)).\frac{C}{n_{1}^{d/2}}N^{\zeta(2-d)}=o(n^{-d/2+o(1)}).

In the first sum, we group terms on dyadic scales kk so that 2kN2ζ/m2k+12^{k}\leq N^{2\zeta}/m\leq 2^{k+1}, k=0,,log2N2ζ+1k=0,\dots,\lfloor\log_{2}N^{2\zeta}\rfloor+1. This gives the bound

Cn1d/2k=0log2N2ζ+1(2k+1)d/2(N2ζ)d/2exp(c2k)Cn1d/21Nζd,\frac{C}{n_{1}^{d/2}}\sum_{k=0}^{\lfloor\log_{2}N^{2\zeta}\rfloor+1}\frac{(2^{k+1})^{d/2}}{(N^{2\zeta})^{d/2}}\exp\left(-c2^{k}\right)\leq\frac{C}{n_{1}^{d/2}}\frac{1}{N^{\zeta d}},

which is also o(nd/2+o(1))o(n^{-d/2+o(1)}). ∎

In order to apply the previous two lemmas to analyze $I(x)$ in (45), we first define a modification of $F$ in (44), in which $z^{\prime}$ and $x^{\prime}$ are both replaced by a vertex $u^{\prime}\in\mathbb{T}_{N}+xN$. That is, we define

F~(n1,n2,n3,u,x,y1,y)=zB0,xx𝐊+xNpn1(y1,u)𝐏z[H𝐊+xN=n2<ξB0,x,YH𝐊+xN=x]pn3(u,y).\begin{split}&\widetilde{F}(n_{1},n_{2},n_{3},u^{\prime},x,y^{\prime}_{\ell-1},y^{\prime}_{\ell})\\ &\quad=\sum_{z^{\prime}\in\partial B_{0,x}}\sum_{x^{\prime}\in\mathbf{K}+xN}p_{n_{1}}(y^{\prime}_{\ell-1},u^{\prime})\mathbf{P}_{z^{\prime}}[H_{\mathbf{K}+xN}=n_{2}<\xi_{B_{0,x}},\,Y_{H_{\mathbf{K}+xN}}=x^{\prime}]p_{n_{3}}(u^{\prime},y^{\prime}_{\ell}).\end{split}

Then Lemmas 4.2 and 4.3 allow us to write, for $x\in G$, the main term $I(x)$ in (45) as

I(x)=n1+n2+n3=nn1,n3εnn,n2N2δ/dF(n1,n2,n3,x,y1,y)=1+o(1)Ndu𝕋N+xNn1+n2+n3=nn1,n3εnn,n2N2δ/dF~(n1,n2,n3,u,x,y1,y)=1+o(1)Ndu𝕋N+xNn1+n2+n3=nn1,n3εnnn2N2δ/dpn1(y1,u)pn3(u,y)×zB0,x𝐏z[H𝐊+xN=n2<ξB0,x],\begin{split}I(x)&=\sum_{\begin{subarray}{c}n_{1}+n_{2}+n_{3}=n\\ n_{1},n_{3}\geq\varepsilon_{n}n,\,n_{2}\leq N^{2\delta/d}\end{subarray}}F(n_{1},n_{2},n_{3},x,y^{\prime}_{\ell-1},y^{\prime}_{\ell})\\ &=\frac{1+o(1)}{N^{d}}\sum_{u^{\prime}\in\mathbb{T}_{N}+xN}\sum_{\begin{subarray}{c}n_{1}+n_{2}+n_{3}=n\\ n_{1},n_{3}\geq\varepsilon_{n}n,\,n_{2}\leq N^{2\delta/d}\end{subarray}}\widetilde{F}(n_{1},n_{2},n_{3},u^{\prime},x,y^{\prime}_{\ell-1},y^{\prime}_{\ell})\\ &=\frac{1+o(1)}{N^{d}}\sum_{u^{\prime}\in\mathbb{T}_{N}+xN}\sum_{\begin{subarray}{c}n_{1}+n_{2}+n_{3}=n\\ n_{1},n_{3}\geq\varepsilon_{n}n\\ n_{2}\leq N^{2\delta/d}\end{subarray}}p_{n_{1}}(y^{\prime}_{\ell-1},u^{\prime})\,p_{n_{3}}(u^{\prime},y^{\prime}_{\ell})\\ &\qquad\times\sum_{z^{\prime}\in\partial B_{0,x}}\mathbf{P}_{z^{\prime}}[H_{\mathbf{K}+xN}=n_{2}<\xi_{B_{0,x}}],\end{split} (49)

where in the second equality we used that, by Lemmas 4.2 and 4.3, replacing $z^{\prime}$ and $x^{\prime}$ by $u^{\prime}$ costs only a factor $1+o(1)$, uniformly in $u^{\prime}$, so that we may average over the $N^{d}$ points $u^{\prime}\in\mathbb{T}_{N}+xN$; in the last equality the sum over $x^{\prime}$ is removed since

x𝐊+xN𝐏z[H𝐊+xN=n2<ξB0,x,YH𝐊+xN=x]=𝐏z[H𝐊+xN=n2<ξB0,x].\sum_{x^{\prime}\in\mathbf{K}+xN}\mathbf{P}_{z^{\prime}}[H_{\mathbf{K}+xN}=n_{2}<\xi_{B_{0,x}},\,Y_{H_{\mathbf{K}+xN}}=x^{\prime}]=\mathbf{P}_{z^{\prime}}[H_{\mathbf{K}+xN}=n_{2}<\xi_{B_{0,x}}].
Lemma 4.4.

Assume that n1,n3εnnn_{1},n_{3}\geq\varepsilon_{n}n and n2N2δ/dn_{2}\leq N^{2\delta/d}.
(i) We have

pn1+n3(y1,y)=(1+o(1))pn(y1,y).p_{n_{1}+n_{3}}(y^{\prime}_{\ell-1},y^{\prime}_{\ell})=(1+o(1))p_{n}(y^{\prime}_{\ell-1},y^{\prime}_{\ell}).

(ii) We have

xGu𝕋N+xNpn1(y1,u)pn3(u,y)=(1+o(1))pn1+n3(y1,y).\sum_{x\in G}\sum_{u^{\prime}\in\mathbb{T}_{N}+xN}p_{n_{1}}(y^{\prime}_{\ell-1},u^{\prime})\,p_{n_{3}}(u^{\prime},y^{\prime}_{\ell})=(1+o(1))p_{n_{1}+n_{3}}(y^{\prime}_{\ell-1},y^{\prime}_{\ell}). (50)
Proof.

(i) When n2N2δ/dn2/dn_{2}\leq N^{2\delta/d}\sim n^{2/d}, we have

n1+n3=n(1O(n1+2/d)).n_{1}+n_{3}=n\left(1-O\left(n^{-1+2/d}\right)\right).

Hence the exponential term in the LCLT for pn1+n3(y1,y)p_{n_{1}+n_{3}}(y^{\prime}_{\ell-1},y^{\prime}_{\ell}) is

exp(|yy1|2n(1+O(n1+2/d)))=(1+o(1))exp(|yy1|2n),\exp\left(-\frac{|y^{\prime}_{\ell}-y^{\prime}_{\ell-1}|^{2}}{n}\left(1+O(n^{-1+2/d})\right)\right)=(1+o(1))\,\exp\left(-\frac{|y^{\prime}_{\ell}-y^{\prime}_{\ell-1}|^{2}}{n}\right),

where we used that An=10loglognA_{n}=10\log\log n, and hence |yy1|2An2n=no(n12/d)|y^{\prime}_{\ell}-y^{\prime}_{\ell-1}|^{2}\leq A_{n}^{2}n=n\,o(n^{1-2/d}).

(ii) If we summed over all xdx\in\mathbb{Z}^{d}, we would get exactly pn1+n3(y1,y)p_{n_{1}+n_{3}}(y^{\prime}_{\ell-1},y^{\prime}_{\ell}). Thus the claim amounts to showing that

xdGu𝕋N+xNpn1(y1,u)pn3(u,y)=o(1)pn1+n3(y1,y).\sum_{x\in\mathbb{Z}^{d}\setminus G}\sum_{u^{\prime}\in\mathbb{T}_{N}+xN}p_{n_{1}}(y^{\prime}_{\ell-1},u^{\prime})\,p_{n_{3}}(u^{\prime},y^{\prime}_{\ell})=o(1)p_{n_{1}+n_{3}}(y^{\prime}_{\ell-1},y^{\prime}_{\ell}). (51)

First, note that from the Local CLT we have

pn1+n3(y1,y)=(1+o(1))p¯n1+n3(y1,y).p_{n_{1}+n_{3}}(y^{\prime}_{\ell-1},y^{\prime}_{\ell})=(1+o(1))\overline{p}_{n_{1}+n_{3}}(y^{\prime}_{\ell-1},y^{\prime}_{\ell}).

In order to estimate the left-hand side of (51), note that every $x\in\mathbb{Z}^{d}\setminus G$ satisfies $\max\{|y^{\prime}_{\ell-1}-xN|,|xN-y^{\prime}_{\ell}|\}>A_{n}^{2}\sqrt{n}$; we estimate the contribution of such $x$ using (5) as follows. First, we have

pn1+n3(y1,y)cnd/2exp(CAn2(1+o(1)))cnd/2exp(C(loglogn)2).p_{n_{1}+n_{3}}(y^{\prime}_{\ell-1},y^{\prime}_{\ell})\geq\frac{c}{n^{d/2}}\exp(-CA_{n}^{2}(1+o(1)))\geq\frac{c}{n^{d/2}}\exp(-C(\log\log n)^{2}).

Here we used |yy1|2An2n|y^{\prime}_{\ell}-y^{\prime}_{\ell-1}|^{2}\leq A_{n}^{2}n and n1+n3=n(1o(1))n_{1}+n_{3}=n(1-o(1)).

On the other hand, note that either $n_{1}\geq n/3$ or $n_{3}\geq n/3$; without loss of generality, assume that $n_{3}\geq n/3$. Then, using (5) and summing in dyadic shells with radii $2^{k}A_{n}^{2}\sqrt{n}$, $k=0,1,2,\dots$, the contribution to the left-hand side of (51) is bounded by

k=0C(An2n)d2dkCn1d/2exp(c22kAn4n/n1)1n3d/2exp(c22kAn4n/n3)k=0CAn2d 2dk1εnd/2exp(c22kAn4)1nd/2exp(c22kAn4)Cnd/2An2dεnd/2k=0exp(c22k(loglogn)4+dklog2)=Cnd/2o(exp(100(loglogn)2)).\begin{split}&\sum_{k=0}^{\infty}C(A_{n}^{2}\sqrt{n})^{d}2^{dk}\,\frac{C}{n_{1}^{d/2}}\,\exp(-c2^{2k}A_{n}^{4}n/n_{1})\,\frac{1}{n_{3}^{d/2}}\,\exp(-c2^{2k}A_{n}^{4}n/n_{3})\\ &\qquad\leq\sum_{k=0}^{\infty}C\,A_{n}^{2d}\,2^{dk}\,\frac{1}{\varepsilon_{n}^{d/2}}\,\exp(-c2^{2k}A_{n}^{4})\frac{1}{n^{d/2}}\,\exp(-c2^{2k}A_{n}^{4})\\ &\qquad\leq\frac{C}{n^{d/2}}\,\frac{A_{n}^{2d}}{\varepsilon_{n}^{d/2}}\,\sum_{k=0}^{\infty}\exp(-c2^{2k}(\log\log n)^{4}+dk\log 2)\\ &\qquad=\frac{C}{n^{d/2}}\,o\left(\exp(-100(\log\log n)^{2})\right).\end{split} (52)

Since the last expression is $o(1)\,p_{n_{1}+n_{3}}(y^{\prime}_{\ell-1},y^{\prime}_{\ell})$ by the lower bound above, this proves (51). ∎

The above lemma allows us to write

xGI(x)=1+o(1)Ndpn(y1,y)n1+n2+n3=nn1,n3εnnn2N2δ/dzB0,x𝐏z[H𝐊+xN=n2<ξB0,x]=(1+o(1))nNdpn(y1,y)n2N2δ/dzB0,x𝐏z[H𝐊+xN=n2<ξB0,x].\begin{split}\sum_{x\in G}I(x)&=\frac{1+o(1)}{N^{d}}\,p_{n}(y^{\prime}_{\ell-1},y^{\prime}_{\ell})\,\sum_{\begin{subarray}{c}n_{1}+n_{2}+n_{3}=n\\ n_{1},n_{3}\geq\varepsilon_{n}n\\ n_{2}\leq N^{2\delta/d}\end{subarray}}\sum_{z^{\prime}\in\partial B_{0,x}}\mathbf{P}_{z^{\prime}}[H_{\mathbf{K}+xN}=n_{2}<\xi_{B_{0,x}}]\\ &=\frac{(1+o(1))\,n}{N^{d}}\,p_{n}(y^{\prime}_{\ell-1},y^{\prime}_{\ell})\,\sum_{n_{2}\leq N^{2\delta/d}}\sum_{z^{\prime}\in\partial B_{0,x}}\mathbf{P}_{z^{\prime}}[H_{\mathbf{K}+xN}=n_{2}<\xi_{B_{0,x}}].\end{split} (53)
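For clarity, the factor $n$ in the second equality above can be traced to counting the admissible pairs $(n_{1},n_{3})$: for each fixed $n_{2}\leq N^{2\delta/d}$,

\#\left\{(n_{1},n_{3}):n_{1}+n_{3}=n-n_{2},\ n_{1},n_{3}\geq\varepsilon_{n}n\right\}=n-n_{2}-2\lceil\varepsilon_{n}n\rceil+O(1)=(1+o(1))\,n,

since $\varepsilon_{n}\to 0$ and $n_{2}\leq N^{2\delta/d}\sim n^{2/d}=o(n)$; after Lemma 4.4 the summand no longer depends on the individual values of $n_{1}$ and $n_{3}$.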

The next lemma will help us extract the $\mathrm{Cap}(K)$ contribution from the right-hand side of (53).

Lemma 4.5.

We have

n2=0N2δ/dzB0,x𝐏z[H𝐊+xN=n2<ξB0,x]=12Cap(K)(1+o(1)).\begin{split}\sum_{n_{2}=0}^{N^{2\delta/d}}\sum_{z^{\prime}\in\partial B_{0,x}}\mathbf{P}_{z^{\prime}}[H_{\mathbf{K}+xN}=n_{2}<\xi_{B_{0,x}}]=\frac{1}{2}\mathrm{Cap}(K)\,(1+o(1)).\end{split} (54)
Proof.

Performing the sum over n2n_{2} allows us to re-write the expression in the left-hand side of (54) as

(zB0,x𝐏z[H𝐊+xN<ξB0,x])(zB0,x𝐏z[N2δ/d<H𝐊+xN<ξB0,x])=12Cap(K)+o(1)zB0,x𝐱𝐊𝐏z[N2δ/d<H𝐊+xN<ξB0,x,YH𝐊+xN=𝐱+xN].\begin{split}&\left(\sum_{z^{\prime}\in\partial B_{0,x}}\mathbf{P}_{z^{\prime}}[H_{\mathbf{K}+xN}<\xi_{B_{0,x}}]\right)-\left(\sum_{z^{\prime}\in\partial B_{0,x}}\mathbf{P}_{z^{\prime}}[N^{2\delta/d}<H_{\mathbf{K}+xN}<\xi_{B_{0,x}}]\right)\\ &\qquad=\frac{1}{2}\mathrm{Cap}(K)+o(1)-\sum_{z^{\prime}\in\partial B_{0,x}}\sum_{\mathbf{x}\in\mathbf{K}}\mathbf{P}_{z^{\prime}}[N^{2\delta/d}<H_{\mathbf{K}+xN}<\xi_{B_{0,x}},\,Y_{H_{\mathbf{K}+xN}}=\mathbf{x}+xN].\end{split}

Here the 1/21/2 before Cap(K)\mathrm{Cap}(K) comes from the random walk being lazy; see (4). Using time-reversal for the summand in the last term we get the expression

=12Cap(K)+o(1)𝐱𝐊zB0,x𝐏𝐱+xN[N2δ/d<ξB0,x<H𝐊+xN,YξB0,x=z]=12Cap(K)+o(1)𝐱𝐊𝐏𝐱+xN[N2δ/d<ξB0,x<H𝐊+xN].\begin{split}&=\frac{1}{2}\mathrm{Cap}(K)+o(1)-\sum_{\mathbf{x}\in\mathbf{K}}\sum_{z^{\prime}\in\partial B_{0,x}}\mathbf{P}_{\mathbf{x}+xN}[N^{2\delta/d}<\xi_{B_{0,x}}<H_{\mathbf{K}+xN},\,Y_{\xi_{B_{0,x}}}=z^{\prime}]\\ &\qquad=\frac{1}{2}\mathrm{Cap}(K)+o(1)-\sum_{\mathbf{x}\in\mathbf{K}}\mathbf{P}_{\mathbf{x}+xN}[N^{2\delta/d}<\xi_{B_{0,x}}<H_{\mathbf{K}+xN}].\end{split} (55)
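For the reader's convenience, we record the elementary identity behind the time-reversal step (it is not spelled out in the computation above): since the one-step kernel of the lazy walk is symmetric, every path has the same probability as its reversal, that is, for any path $\gamma=(\gamma_{0},\dots,\gamma_{n_{2}})$ in $\mathbb{Z}^{d}$,

\mathbf{P}_{\gamma_{0}}\left[Y_{t}=\gamma_{t},\,0\leq t\leq n_{2}\right]=\prod_{t=1}^{n_{2}}p_{1}(\gamma_{t-1},\gamma_{t})=\prod_{t=1}^{n_{2}}p_{1}(\gamma_{t},\gamma_{t-1})=\mathbf{P}_{\gamma_{n_{2}}}\left[Y_{t}=\gamma_{n_{2}-t},\,0\leq t\leq n_{2}\right].

Summing this identity over the paths contributing to the subtracted term before reversal yields the reversed events appearing in (55).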

The subtracted term in the right-hand side of (55) is at most

|𝐊|max𝐱𝐊𝐏𝐱+xN[ξB0,x>N2δ/d].|\mathbf{K}|\max_{\mathbf{x}\in\mathbf{K}}\mathbf{P}_{\mathbf{x}+xN}[\xi_{B_{0,x}}>N^{2\delta/d}].

Since $\zeta<\delta/d$, iterating the Markov property over consecutive blocks of $N^{2\zeta}$ steps (in each of which the walk exits $B_{0,x}$ with probability bounded away from zero) shows that this expression is $o(1)$. ∎

From the above lemma we get that the main contribution equals

xGI(x)=(1+o(1))nNd12Cap(K)pn(y1,y).\sum_{x\in G}I(x)=(1+o(1))\frac{n}{N^{d}}\,\frac{1}{2}\,\mathrm{Cap}(K)\,p_{n}(y^{\prime}_{\ell-1},y^{\prime}_{\ell}). (56)

It remains to estimate the error terms.

4.2.2 The error terms

Lemma 4.6.

We have

xGII(x)=o(1)nNdpn(y1,y).\sum_{x\in G}II(x)=o(1)\frac{n}{N^{d}}p_{n}(y^{\prime}_{\ell-1},y^{\prime}_{\ell}).
Proof.

We split the estimates according to which condition is violated in the sum. Recall that in the proof of Proposition 2.3 we chose ε>0\varepsilon>0 such that ε+6ε2<0-\varepsilon+6\varepsilon^{2}<0. Here we make the further restriction that ε<2δ/d2ζ\varepsilon<2\delta/d-2\zeta.

Case 1. n2>N2δ/dn_{2}>N^{2\delta/d}. We claim that

𝐏z[H𝐊+xN=n2<ξB0,x,YH𝐊+xN=x]Cexp(Nε/2)pn2(z,x).\mathbf{P}_{z^{\prime}}[H_{\mathbf{K}+xN}=n_{2}<\xi_{B_{0,x}},Y_{H_{\mathbf{K}+xN}}=x^{\prime}]\leq C\exp(-N^{\varepsilon/2})p_{n_{2}}(z^{\prime},x^{\prime}). (57)

Since in every time interval of duration $N^{2\zeta}$ the walk has a probability bounded away from zero, uniformly over its position in $B_{0,x}$, of exiting the ball $B_{0,x}$, we have

𝐏z[H𝐊+xN=n2<ξB0,x,YH𝐊+xN=x]𝐏z[ξB0,x>N2δ/d]Cexp(cN2δ/dN2ζ)Cexp(Nε).\begin{split}\mathbf{P}_{z^{\prime}}[H_{\mathbf{K}+xN}=n_{2}<\xi_{B_{0,x}},Y_{H_{\mathbf{K}+xN}}=x^{\prime}]&\leq\mathbf{P}_{z^{\prime}}[\xi_{B_{0,x}}>N^{2\delta/d}]\leq C\exp(-c\frac{N^{2\delta/d}}{N^{2\zeta}})\\ &\leq C\exp(-N^{\varepsilon}).\end{split}
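To spell out the second inequality in the display above, write $q\in(0,1)$ for a uniform upper bound, over all starting points in $B_{0,x}$, on the probability of not exiting $B_{0,x}$ within $N^{2\zeta}$ steps (this notation is ours). Iterating the Markov property over consecutive blocks of $N^{2\zeta}$ steps gives

\mathbf{P}_{z^{\prime}}\left[\xi_{B_{0,x}}>N^{2\delta/d}\right]\leq q^{\lfloor N^{2\delta/d}/N^{2\zeta}\rfloor}\leq C\exp\left(-c\,\frac{N^{2\delta/d}}{N^{2\zeta}}\right).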

By (5) on pn2p_{n_{2}} and since ζ<δ/d\zeta<\delta/d and N2δ/d<n2<nN^{2\delta/d}<n_{2}<n we have

pn2(z,x)cn2d/2exp(CN2ζn2)cexp(Nε/2).p_{n_{2}}(z^{\prime},x^{\prime})\geq\frac{c}{n_{2}^{d/2}}\exp\left(-C\frac{N^{2\zeta}}{n_{2}}\right)\geq c\exp(-N^{\varepsilon/2}).

Here we lower bounded exp(CN2ζn2)\exp\left(-C\frac{N^{2\zeta}}{n_{2}}\right) by cc. The claim (57) is proved.

We also have the bound

p~n1(x)(y1,z)pn1(y1,z).\widetilde{p}_{n_{1}}^{(x)}(y^{\prime}_{\ell-1},z^{\prime})\leq p_{n_{1}}(y^{\prime}_{\ell-1},z^{\prime}).

We then get (summing over zz^{\prime} and xx^{\prime}) that the contribution to xdII(x)\sum_{x\in\mathbb{Z}^{d}}II(x) from Case 1 is at most

n1+n2+n3=nzdxdpn1(y1,z)Cexp(Nε/2)pn2(z,x)pn3(x,y)Cexp(Nε/2)n1+n2+n3=npn(y1,y)Cn2exp(Nε/2)pn(y1,y)=o(1)nNdpn(y1,y).\begin{split}&\sum_{n_{1}+n_{2}+n_{3}=n}\sum_{z^{\prime}\in\mathbb{Z}^{d}}\sum_{x^{\prime}\in\mathbb{Z}^{d}}p_{n_{1}}(y^{\prime}_{\ell-1},z^{\prime})\,C\exp(-N^{\varepsilon/2})p_{n_{2}}(z^{\prime},x^{\prime})\,p_{n_{3}}(x^{\prime},y^{\prime}_{\ell})\\ &\qquad\leq C\exp(-N^{\varepsilon/2})\sum_{n_{1}+n_{2}+n_{3}=n}p_{n}(y^{\prime}_{\ell-1},y^{\prime}_{\ell})\\ &\qquad\leq Cn^{2}\exp(-N^{\varepsilon/2})p_{n}(y^{\prime}_{\ell-1},y^{\prime}_{\ell})\\ &\qquad=o(1)\frac{n}{N^{d}}p_{n}(y^{\prime}_{\ell-1},y^{\prime}_{\ell}).\end{split}

Case 2. n2N2δ/dn_{2}\leq N^{2\delta/d} and n1<εnnn_{1}<\varepsilon_{n}n. Note that since n2N2εnnn_{2}\leq N^{2}\leq\varepsilon_{n}n for large enough NN, if we put n1=n1+n2n^{\prime}_{1}=n_{1}+n_{2} and n2=n3n^{\prime}_{2}=n_{3}, we can upper bound the contribution of this case by

n1+n2=nn12εnnxdx𝐊+xNpn1(y1,x)pn2(x,y).\sum_{\begin{subarray}{c}n^{\prime}_{1}+n^{\prime}_{2}=n\\ n^{\prime}_{1}\leq 2\varepsilon_{n}n\end{subarray}}\sum_{x\in\mathbb{Z}^{d}}\sum_{x^{\prime}\in\mathbf{K}+xN}p_{n^{\prime}_{1}}(y^{\prime}_{\ell-1},x^{\prime})p_{n^{\prime}_{2}}(x^{\prime},y^{\prime}_{\ell}).

Now we can make use of the corollaries stated after the proof of Proposition 2.3 as follows.

Case 2–(i). N2+6εn12εnnN^{2+6\varepsilon}\leq n^{\prime}_{1}\leq 2\varepsilon_{n}n and |y1x|(n1)12+ε|y^{\prime}_{\ell-1}-x^{\prime}|\leq(n^{\prime}_{1})^{\frac{1}{2}+\varepsilon} and |xy|(n2)12+ε|x^{\prime}-y^{\prime}_{\ell}|\leq(n^{\prime}_{2})^{\frac{1}{2}+\varepsilon}. Note that for large enough NN we have n2(n2εnn)N2+6εn^{\prime}_{2}\geq(n-2\varepsilon_{n}n)\geq N^{2+6\varepsilon}. Hence due to Corollary 4.2(i) (with εn\varepsilon_{n} there replaced by 2εn2\varepsilon_{n}) the contribution of this case is

o(1)nNdpn(y1,y).\begin{split}o(1)\frac{n}{N^{d}}p_{n}(y^{\prime}_{\ell-1},y^{\prime}_{\ell}).\end{split}

Case 2–(ii). N2+6εn12εnnN^{2+6\varepsilon}\leq n^{\prime}_{1}\leq 2\varepsilon_{n}n but either |y1x|>(n1)12+ε|y^{\prime}_{\ell-1}-x^{\prime}|>(n^{\prime}_{1})^{\frac{1}{2}+\varepsilon} or |xy|>(n2)12+ε|x^{\prime}-y^{\prime}_{\ell}|>(n^{\prime}_{2})^{\frac{1}{2}+\varepsilon}. Again, we have n2N2+6εn^{\prime}_{2}\geq N^{2+6\varepsilon}. Hence, neglecting the requirement n12εnnn^{\prime}_{1}\leq 2\varepsilon_{n}n, Corollary 4.3 immediately implies that the contribution of this case is

o(1)nNdpn(y1,y).\begin{split}o(1)\frac{n}{N^{d}}p_{n}(y^{\prime}_{\ell-1},y^{\prime}_{\ell}).\end{split}

Case 2–(iii). n1<N2+6εn^{\prime}_{1}<N^{2+6\varepsilon}. It follows immediately from Corollary 4.4 that the contribution of this case is

o(1)nNdpn(y1,y).\begin{split}o(1)\frac{n}{N^{d}}p_{n}(y^{\prime}_{\ell-1},y^{\prime}_{\ell}).\end{split}

Case 3. n2N2δ/dn_{2}\leq N^{2\delta/d} and n3<εnnn_{3}<\varepsilon_{n}n. Due to symmetry, this case can be handled very similarly to Case 2. ∎

Lemma 4.7.

We have

xdG𝐏[A(x)]=o(1)nNdpn(y1,y).\sum_{x\in\mathbb{Z}^{d}\setminus G}\mathbf{P}[A(x)]=o(1)\frac{n}{N^{d}}p_{n}(y^{\prime}_{\ell-1},y^{\prime}_{\ell}).
Proof.

By the same arguments as in Lemma 4.4(ii), we have

pn(y1,y)Cnd/2exp(100(loglogn)2).p_{n}(y^{\prime}_{\ell-1},y^{\prime}_{\ell})\geq\frac{C}{n^{d/2}}\exp\left(-100(\log\log n)^{2}\right).

For xdGx\in\mathbb{Z}^{d}\setminus G, let kk be the dyadic scale that satisfies

2kAn2n|xy1|<2k+1An2n.2^{k}A_{n}^{2}\sqrt{n}\leq|x^{\prime}-y^{\prime}_{\ell-1}|<2^{k+1}A_{n}^{2}\sqrt{n}.

The same bounds hold up to constants for |xy||x^{\prime}-y^{\prime}_{\ell}|.

Then we have

𝐏[A(x)]1mn1x𝐊+xNpm(y1,x)pnm(x,y)C|𝐊|1mn11md/21(nm)d/2exp(c22kAn4nm)exp(c22kAn4nnm).\begin{split}\mathbf{P}[A(x)]&\leq\sum_{1\leq m\leq n-1}\sum_{x^{\prime}\in\mathbf{K}+xN}p_{m}(y^{\prime}_{\ell-1},x^{\prime})p_{n-m}(x^{\prime},y^{\prime}_{\ell})\\ &\leq C|\mathbf{K}|\sum_{1\leq m\leq n-1}\frac{1}{m^{d/2}}\frac{1}{(n-m)^{d/2}}\exp\left(-c\frac{2^{2k}A_{n}^{4}n}{m}\right)\exp\left(-c\frac{2^{2k}A_{n}^{4}n}{n-m}\right).\end{split}

Due to symmetry of the right-hand side, it is enough to consider the contribution of 1mn/21\leq m\leq n/2, which is bounded by

Cnd/2exp(c22kAn4)1mn/21md/2exp(c22kAn4nm)Cnd/2exp(c22kAn4)k=1log2nm:2kn/m<2k+12kd/2nd/2exp(c22kAn42k)Cndexp(c22kAn4)k=1n2kexp(c22kAn42k+kd/2log2)Cnndexp(c22kAn4).\begin{split}&\frac{C}{n^{d/2}}\exp\left(-c2^{2k}A_{n}^{4}\right)\sum_{1\leq m\leq n/2}\frac{1}{m^{d/2}}\exp\left(-c\frac{2^{2k}A_{n}^{4}n}{m}\right)\\ &\qquad\leq\frac{C}{n^{d/2}}\exp\left(-c2^{2k}A_{n}^{4}\right)\sum_{k^{\prime}=1}^{\lfloor\log_{2}n\rfloor}\sum_{m:2^{k^{\prime}}\leq n/m<2^{k^{\prime}+1}}\frac{2^{k^{\prime}d/2}}{n^{d/2}}\exp\left(-c2^{2k}A_{n}^{4}2^{k^{\prime}}\right)\\ &\qquad\leq\frac{C}{n^{d}}\exp\left(-c2^{2k}A_{n}^{4}\right)\sum_{k^{\prime}=1}^{\infty}\frac{n}{2^{k^{\prime}}}\exp\left(-c2^{2k}A_{n}^{4}2^{k^{\prime}}+k^{\prime}d/2\log 2\right)\\ &\qquad\leq\frac{Cn}{n^{d}}\exp\left(-c2^{2k}A_{n}^{4}\right).\end{split}

Now summing over $x\in\mathbb{Z}^{d}\setminus G$, we use that the number of copies of the torus at dyadic scale $2^{k}A_{n}^{2}\sqrt{n}$ is at most $C\frac{1}{N^{d}}\left(2^{k}A_{n}^{2}\sqrt{n}\right)^{d}$. Hence

xdG𝐏[A(x)]Cnndk=01Nd(2kAn2n)dexp(c22kAn4)Cnd/2nNdk=0exp(c22kAn4+kdlog2+2dlogAn)=o(1)1nd/2nNdexp(100(loglogn)2).\begin{split}\sum_{x\in\mathbb{Z}^{d}\setminus G}\mathbf{P}[A(x)]&\leq\frac{Cn}{n^{d}}\sum_{k=0}^{\infty}\frac{1}{N^{d}}\left(2^{k}A_{n}^{2}\sqrt{n}\right)^{d}\exp\left(-c2^{2k}A_{n}^{4}\right)\\ &\leq\frac{C}{n^{d/2}}\frac{n}{N^{d}}\sum_{k=0}^{\infty}\exp\left(-c2^{2k}A_{n}^{4}+kd\log 2+2d\log A_{n}\right)\\ &=o(1)\frac{1}{n^{d/2}}\frac{n}{N^{d}}\exp\left(-100(\log\log n)^{2}\right).\end{split}

Lemma 4.8.

We have

x1x2d𝐏[A(x1)A(x2)]=o(1)nNdpn(y1,y).\sum_{x_{1}\not=x_{2}\in\mathbb{Z}^{d}}\mathbf{P}[A(x_{1})\cap A(x_{2})]=o(1)\frac{n}{N^{d}}p_{n}(y^{\prime}_{\ell-1},y^{\prime}_{\ell}).
Proof.

Decomposing according to which of the two copies is visited first, and at which times, the summand on the left-hand side is bounded above by

𝐏[A(x1)A(x2)]m1+m2+m3=nx1𝐊+x1Nx2𝐊+x2N[pm1(y1,x1)pm2(x1,x2)pm3(x2,y)+pm1(y1,x2)pm2(x2,x1)pm3(x1,y)].\begin{split}\mathbf{P}[A(x_{1})\cap A(x_{2})]\leq\sum_{m_{1}+m_{2}+m_{3}=n}\sum_{\begin{subarray}{c}x^{\prime}_{1}\in\mathbf{K}+x_{1}N\\ x^{\prime}_{2}\in\mathbf{K}+x_{2}N\end{subarray}}&\left[p_{m_{1}}(y^{\prime}_{\ell-1},x^{\prime}_{1})p_{m_{2}}(x^{\prime}_{1},x^{\prime}_{2})p_{m_{3}}(x^{\prime}_{2},y^{\prime}_{\ell})\right.\\ &\left.+p_{m_{1}}(y^{\prime}_{\ell-1},x^{\prime}_{2})p_{m_{2}}(x^{\prime}_{2},x^{\prime}_{1})p_{m_{3}}(x^{\prime}_{1},y^{\prime}_{\ell})\right].\end{split}

Due to symmetry it is enough to consider the first term inside the summation. The estimates are again modelled on the proof of Proposition 2.3.

Case 1. m1+m2n/2m_{1}+m_{2}\geq n/2 and |x2y1|2C1nlogn|x^{\prime}_{2}-y^{\prime}_{\ell-1}|\leq 2C_{1}\sqrt{n}\sqrt{\log n}. In this case we can use Corollary 4.1 with y=y1y^{\prime}=y^{\prime}_{\ell-1} and y′′=x2y^{\prime\prime}=x^{\prime}_{2} to perform the summation over x1x^{\prime}_{1} and x1x_{1} and get the upper bound:

CnNdm1+m2=nx2dx2𝐊+x2Npm1(y1,x2)pm2(x2,y),C\frac{n}{N^{d}}\sum_{m^{\prime}_{1}+m^{\prime}_{2}=n}\sum_{x_{2}\in\mathbb{Z}^{d}}\sum_{x^{\prime}_{2}\in\mathbf{K}+x_{2}N}p_{m^{\prime}_{1}}(y^{\prime}_{\ell-1},x^{\prime}_{2})p_{m^{\prime}_{2}}(x^{\prime}_{2},y^{\prime}_{\ell}), (58)

where we have written m1=m1+m2m^{\prime}_{1}=m_{1}+m_{2} and m2=m3m^{\prime}_{2}=m_{3}. Using again Corollary 4.1, this time with y=y1y^{\prime}=y^{\prime}_{\ell-1} and y′′=yy^{\prime\prime}=y^{\prime}_{\ell} yields the upper bound

C(nNd)2pn(y1,y)=o(1)nNdpn(y1,y).C\left(\frac{n}{N^{d}}\right)^{2}p_{n}(y^{\prime}_{\ell-1},y^{\prime}_{\ell})=o(1)\frac{n}{N^{d}}p_{n}(y^{\prime}_{\ell-1},y^{\prime}_{\ell}). (59)

Case 2. m1+m2n/2m_{1}+m_{2}\geq n/2 and 2C1nlogn<|x2y1|n12+ε2C_{1}\sqrt{n}\sqrt{\log n}<|x^{\prime}_{2}-y^{\prime}_{\ell-1}|\leq n^{\frac{1}{2}+\varepsilon}. We are going to use that ε1\varepsilon\leq 1, which we can clearly assume. First sum over all x1dx^{\prime}_{1}\in\mathbb{Z}^{d} to get the upper bound

Cnm1+m2=nx2dx2𝐊+x2Npm1(y1,x2)pm2(x2,y),Cn\sum_{m^{\prime}_{1}+m^{\prime}_{2}=n}\sum_{x_{2}\in\mathbb{Z}^{d}}\sideset{}{{}^{\prime}}{\sum}_{x^{\prime}_{2}\in\mathbf{K}+x_{2}N}p_{m^{\prime}_{1}}(y^{\prime}_{\ell-1},x^{\prime}_{2})p_{m^{\prime}_{2}}(x^{\prime}_{2},y^{\prime}_{\ell}), (60)

where the primed summation denotes the restriction 2C1nlogn<|x2y1|n12+ε2C_{1}\sqrt{n}\sqrt{\log n}<|x^{\prime}_{2}-y^{\prime}_{\ell-1}|\leq n^{\frac{1}{2}+\varepsilon}. The choice of C1C_{1} (recall (41)) implies that pm1p_{m^{\prime}_{1}} is o(1/n1+3d/2Nd)o(1/n^{1+3d/2}N^{d}). Due to the triangle inequality we also have |yx2|>C1nlogn|y^{\prime}_{\ell}-x^{\prime}_{2}|>C_{1}\sqrt{n}\sqrt{\log n}. Using the LCLT for pm2p_{m^{\prime}_{2}} we get that

pm2(x2,y)C(m2)d/2exp(dC12nlogn/m2)Cnd/2exp(dC12logn)Cpn(y1,y).p_{m^{\prime}_{2}}(x^{\prime}_{2},y^{\prime}_{\ell})\leq\frac{C}{(m^{\prime}_{2})^{d/2}}\exp(-dC_{1}^{2}n\log n/m^{\prime}_{2})\leq\frac{C}{n^{d/2}}\exp(-dC_{1}^{2}\log n)\leq Cp_{n}(y^{\prime}_{\ell-1},y^{\prime}_{\ell}). (61)

Substituting this bound and pm1=o(1/n1+3d/2Nd)p_{m^{\prime}_{1}}=o(1/n^{1+3d/2}N^{d}) into (60) we get

Cno(1)(1nn3d/2Nd)m1+m2=nx2pn(y1,y)o(1)(nNd)pn(y1,y)x21(n1/2+ε)d=o(nNd)pn(y1,y).\begin{split}&Cn\,o(1)\left(\frac{1}{n\cdot n^{3d/2}\cdot N^{d}}\right)\sum_{m^{\prime}_{1}+m^{\prime}_{2}=n}\sideset{}{{}^{\prime}}{\sum}_{x^{\prime}_{2}}p_{n}(y^{\prime}_{\ell-1},y^{\prime}_{\ell})\\ &\qquad\leq o(1)\left(\frac{n}{N^{d}}\right)p_{n}(y^{\prime}_{\ell-1},y^{\prime}_{\ell})\sideset{}{{}^{\prime}}{\sum}_{x^{\prime}_{2}}\frac{1}{(n^{1/2+\varepsilon})^{d}}\\ &\qquad=o\left(\frac{n}{N^{d}}\right)p_{n}(y^{\prime}_{\ell-1},y^{\prime}_{\ell}).\end{split}

Case 3. m1+m2n/2m_{1}+m_{2}\geq n/2 and |x2y1|>n12+ε|x^{\prime}_{2}-y^{\prime}_{\ell-1}|>n^{\frac{1}{2}+\varepsilon}. Summing over all x1dx^{\prime}_{1}\in\mathbb{Z}^{d}, we get the transition probability pm1+m2(y1,x2)p_{m_{1}+m_{2}}(y^{\prime}_{\ell-1},x^{\prime}_{2}). This is stretched-exponentially small, and hence this case satisfies the required bound.

Case 4. $m_{2}+m_{3}\geq n/2$. Due to symmetry, this case can be handled analogously to Cases 1–3. ∎

4.3 Proof of Proposition 2.1

Proof of Proposition 2.1.

We start with the proof of the second claim. We denote the error term in (12) by $E$, and claim that $|E|\leq E_{1}+E_{2}+E_{3}+E_{4}$, with

E1=𝐏o[{SnNd<(loglogn)1}{SnNd>logN}]E2=𝐏o[: 1logNNdn such that Ynφ1(B(𝐱𝟎,Nζ))]E3=𝐏o[t:Tt<Sn such that Ytφ1(𝐊)].E4=𝐏o[: 1logNNdn such that |YnY(1)n|>f(n)n12].\begin{split}E_{1}&=\mathbf{P}_{o}\left[\left\{\frac{Sn}{N^{d}}<(\sqrt{\log\log n})^{-1}\right\}\cup\left\{\frac{Sn}{N^{d}}>\log N\right\}\right]\\ E_{2}&=\mathbf{P}_{o}\left[\text{$\exists\ell:\ 1\leq\ell\leq\log N\frac{N^{d}}{n}$ such that $Y_{\ell n}\in\varphi^{-1}(B(\mathbf{x_{0}},N^{\zeta}))$}\right]\\ E_{3}&=\mathbf{P}_{o}\left[\text{$\exists\ t:T\leq t<Sn$ such that $Y_{t}\in\varphi^{-1}(\mathbf{K})$}\right].\\ E_{4}&=\mathbf{P}_{o}\left[\text{$\exists\ell:\ 1\leq\ell\leq\log N\frac{N^{d}}{n}$ such that $|Y_{\ell n}-Y_{(\ell-1)n}|>f(n)n^{\frac{1}{2}}$}\right].\end{split}

Since TSnT\leq Sn, we have

|𝐏o[Ytφ1(𝐊), 0t<T]𝐏o[Ytφ1(𝐊), 0t<Sn]|E3.\displaystyle\left|\mathbf{P}_{o}\left[Y_{t}\not\in\varphi^{-1}(\mathbf{K}),\,0\leq t<T\right]-\mathbf{P}_{o}\left[Y_{t}\not\in\varphi^{-1}(\mathbf{K}),\,0\leq t<Sn\right]\right|\leq E_{3}.

By the Markov property, for (τ,(y,𝐲)=1τ)𝒢ζ,C1(\tau,(y_{\ell},\mathbf{y}_{\ell})_{\ell=1}^{\tau})\in\mathcal{G}_{\zeta,C_{1}},

=1τ𝐏y1[Yn=y,Ytφ1(𝐊) for 0t<n]=𝐏o[Yn=y for 0τYtφ1(𝐊) for 0t<τnYnφ1(B(𝐱𝟎,Nζ)) for 0<τ].\begin{split}&\prod_{\ell=1}^{\tau}\mathbf{P}_{y^{\prime}_{\ell-1}}\left[Y_{n}=y^{\prime}_{\ell},\,\text{$Y_{t}\not\in\varphi^{-1}(\mathbf{K})$ for $0\leq t<n$}\right]\\ &=\mathbf{P}_{o}\left[\parbox{256.0748pt}{$Y_{n\ell}=y^{\prime}_{\ell}$ for $0\leq\ell\leq\tau$; $Y_{t}\not\in\varphi^{-1}(\mathbf{K})$ for $0\leq t<\tau n$; $Y_{n\ell}\not\in\varphi^{-1}(B(\mathbf{x_{0}},N^{\zeta}))$ for $0<\ell\leq\tau$}\right].\end{split}

We denote the probability on the right-hand side by p(τ,(y,𝐲)=1τ)p(\tau,(y_{\ell},\mathbf{y}_{\ell})_{\ell=1}^{\tau}). On the event of the right-hand side, since yτDcy_{\tau}\in D^{c}, we have S=τS=\tau. We claim that

|(τ,(y,𝐲)=1τ)𝒢ζ,C1p(τ,(y,𝐲)=1τ)𝐏o[Ytφ1(𝐊) for 0t<Sn]|E1+E2+E4.\begin{split}\left|\sum_{(\tau,(y_{\ell},\mathbf{y}_{\ell})_{\ell=1}^{\tau})\in\mathcal{G}_{\zeta,C_{1}}}p(\tau,(y_{\ell},\mathbf{y}_{\ell})_{\ell=1}^{\tau})-\mathbf{P}_{o}\left[\text{$Y_{t}\not\in\varphi^{-1}(\mathbf{K})$ for $0\leq t<Sn$}\right]\right|\leq E_{1}+E_{2}+E_{4}.\end{split} (62)

Let 1,2,4\mathcal{E}_{1},\mathcal{E}_{2},\mathcal{E}_{4} be the events in the definitions of E1,E2,E4E_{1},E_{2},E_{4} respectively. Let

A(τ,(y,𝐲)=1τ):={Yn=y for 0τYtφ1(𝐊) for 0t<τnYnφ1(B(𝐱𝟎,Nζ)) for 0<τ}.A(\tau,(y_{\ell},\mathbf{y}_{\ell})_{\ell=1}^{\tau}):=\left\{\parbox{256.0748pt}{$Y_{n\ell}=y^{\prime}_{\ell}$ for $0\leq\ell\leq\tau$; $Y_{t}\not\in\varphi^{-1}(\mathbf{K})$ for $0\leq t<\tau n$; $Y_{n\ell}\not\in\varphi^{-1}(B(\mathbf{x_{0}},N^{\zeta}))$ for $0<\ell\leq\tau$}\right\}.

Then, we have

𝐏o[{Ytφ1(𝐊) for 0t<Sn}1c2c4c]=NdnloglognτNdlogNn𝐏o[{S=τ}{Ytφ1(𝐊) for 0t<τn}2c4c]=(τ,(y,𝐲)=1τ)𝒢ζ,C1𝐏o[A(τ,(y,𝐲)=1τ)2c4c].\begin{split}&\mathbf{P}_{o}\left[\left\{\text{$Y_{t}\not\in\varphi^{-1}(\mathbf{K})$ for $0\leq t<Sn$}\right\}\cap\mathcal{E}^{c}_{1}\cap\mathcal{E}^{c}_{2}\cap\mathcal{E}^{c}_{4}\right]\\ &\quad=\sum_{\frac{N^{d}}{n\sqrt{\log\log n}}\leq\tau\leq\frac{N^{d}\log N}{n}}\mathbf{P}_{o}\left[\left\{S=\tau\right\}\cap\left\{\text{$Y_{t}\not\in\varphi^{-1}(\mathbf{K})$ for $0\leq t<\tau n$}\right\}\cap\mathcal{E}^{c}_{2}\cap\mathcal{E}^{c}_{4}\right]\\ &\quad=\sum_{(\tau,(y_{\ell},\mathbf{y}_{\ell})_{\ell=1}^{\tau})\in\mathcal{G}_{\zeta,C_{1}}}\mathbf{P}_{o}\left[A(\tau,(y_{\ell},\mathbf{y}_{\ell})_{\ell=1}^{\tau})\cap\mathcal{E}^{c}_{2}\cap\mathcal{E}^{c}_{4}\right].\end{split}

From above, the claim (62) follows.

The proof of the second claim of Proposition 2.1 is then completed by Lemmas 4.9, 4.11 and 4.12 below, which show that $E_{j}\to 0$ for $j=2,3,4$. We have already shown $E_{1}\to 0$ in Lemma 2.2.

Similarly, the first claim of the Proposition follows from Lemmas 2.2, 4.9, 4.12. ∎

We bound E2,E3E_{2},E_{3} and E4E_{4} in the following lemmas.

Lemma 4.9.

We have E2ClogNNdζnE_{2}\leq C\log N\frac{N^{d\zeta}}{n} for some CC. Consequently, E20E_{2}\to 0.

Proof.

Since the number of points in $B(\mathbf{x_{0}},N^{\zeta})$ is $O(N^{d\zeta})$, and since we are considering times after the first stretch, the random walk is well mixed, so by Lemma 2.1 the probability that it is at any given point of the torus at time $\ell n$ is $O(1/N^{d})$. Using a union bound we have

E2ClogNNdnNdζ1Nd=ClogNNdζn,\displaystyle E_{2}\leq C\log N\frac{N^{d}}{n}N^{d\zeta}\frac{1}{N^{d}}=C\log N\frac{N^{d\zeta}}{n},

where the right-hand side tends to $0$ since $\zeta<\delta/d$ and $n^{2/d}\sim N^{2\delta/d}$; that is,

E_{2}\to 0,\quad\text{as $N\to\infty$.} ∎

Before we bound the error term E3E_{3}, we first introduce the following lemma. Let T0(i)=inf{t0:Yt(i)=0}T^{(i)}_{0}=\inf\{t\geq 0:Y^{(i)}_{t}=0\}, where recall that Y(i)Y^{(i)} denotes the ii-th coordinate of the dd-dimensional lazy random walk. We will denote by t0t_{0} an instance of T0(i)T^{(i)}_{0}.

Lemma 4.10.

For all 1id1\leq i\leq d, for any integer yy such that t0y<0-t_{0}\leq y<0 and 0<t0n0<t_{0}\leq n we have

𝐄y[Yn(i)|T0(i)=t0,Yn(i)>0]Cn12𝐄y[(Yn(i))2|T0(i)=t0,Yn(i)>0]Cn.\begin{split}\mathbf{E}_{y}\left[Y^{(i)}_{n}\,|\,T^{(i)}_{0}=t_{0},\,Y_{n}^{(i)}>0\right]&\leq Cn^{\frac{1}{2}}\\ \mathbf{E}_{y}\left[(Y^{(i)}_{n})^{2}\,|\,T^{(i)}_{0}=t_{0},\,Y_{n}^{(i)}>0\right]&\leq Cn.\end{split}
Proof.

Using the Markov property at time t0t_{0}, we get

𝐄y[Yn(i)|T0(i)=t0,Yn(i)>0]\displaystyle\mathbf{E}_{y}\left[Y^{(i)}_{n}\,|\,T^{(i)}_{0}=t_{0},Y_{n}^{(i)}>0\right] =𝐄0[Ynt0(i)|Ynt0(i)>0]=𝐄0[Ynt0(i)𝟏{Ynt0(i)>0}]𝐏0[Ynt0(i)>0]\displaystyle=\mathbf{E}_{0}\left[Y_{n-t_{0}}^{(i)}\,|\,Y_{n-t_{0}}^{(i)}>0\right]=\frac{\mathbf{E}_{0}\left[Y^{(i)}_{n-t_{0}}\mathbf{1}_{\{Y^{(i)}_{n-t_{0}}>0\}}\right]}{\mathbf{P}_{0}\left[Y^{(i)}_{n-t_{0}}>0\right]}
C0(𝐄0[(Ynt0(i))2])12C(nt0)12Cn12,\displaystyle\leq C_{0}\left(\mathbf{E}_{0}\left[(Y^{(i)}_{n-t_{0}})^{2}\right]\right)^{\frac{1}{2}}\leq C(n-t_{0})^{\frac{1}{2}}\leq Cn^{\frac{1}{2}},

where the third step is due to the Cauchy–Schwarz inequality and $\mathbf{P}_{0}\left[Y^{(i)}_{n-t_{0}}>0\right]\geq c_{0}$ for some $c_{0}>0$, and the second-to-last step is due to $\mathbf{E}_{0}\left[(Y^{(i)}_{n-t_{0}})^{2}\right]=(n-t_{0})/(2d)$.

We can similarly bound the conditional expectation of (Yn(i))2(Y^{(i)}_{n})^{2} as follows:

𝐄y[(Yn(i))2|T0(i)=t0,Yn(i)>0]=𝐄0[(Ynt0(i))2|Ynt0(i)>0]\displaystyle\mathbf{E}_{y}\left[(Y^{(i)}_{n})^{2}|T^{(i)}_{0}=t_{0},Y^{(i)}_{n}>0\right]=\mathbf{E}_{0}\left[(Y^{(i)}_{n-t_{0}})^{2}|Y^{(i)}_{n-t_{0}}>0\right]
=𝐄0[(Ynt0(i))2𝟏{Ynt0(i)>0}]𝐏0[Ynt0(i)>0]C0(𝐄0[(Ynt0(i))2])C(nt0)Cn.\displaystyle\qquad=\frac{\mathbf{E}_{0}\left[(Y^{(i)}_{n-t_{0}})^{2}\mathbf{1}_{\{Y^{(i)}_{n-t_{0}}>0\}}\right]}{\mathbf{P}_{0}\left[Y^{(i)}_{n-t_{0}}>0\right]}\leq C_{0}\left(\mathbf{E}_{0}\left[(Y^{(i)}_{n-t_{0}})^{2}\right]\right)\leq C(n-t_{0})\leq Cn.

Lemma 4.11.

We have E30E_{3}\to 0 as NN\to\infty.

Proof.

First we are going to bound the time difference between $T$ and $Sn$. We consider separately the cases according to which face of the cube $(-L,L)^{d}$ contains $Y_{T}$. Assume that $Y^{(i)}_{T}=L$ for some $1\leq i\leq d$. (The arguments are very similar when $Y^{(i)}_{T}=-L$ for some $1\leq i\leq d$, and these will not be given.)

Let us consider the lazy random walk (Yt)t0(Y_{t})_{t\geq 0} in multiples of nn steps. Let

s1=min{n:nT}T,s_{1}=\min\{\ell n:\ell n\geq T\}-T,

and similarly, let

sr+1=rn+min{n:nT}T,r1.s_{r+1}=rn+\min\{\ell n:\ell n\geq T\}-T,\quad r\geq 1.

We let M0=LYT+s1(i)M_{0}=L-Y^{(i)}_{T+s_{1}} and Mr=LYT+sr+1(i)M_{r}=L-Y^{(i)}_{T+s_{r+1}} for r1r\geq 1. We have that (Mr)r0(M_{r})_{r\geq 0} is a martingale. Let S~=inf{r0:Mr0}\tilde{S}=\inf\{r\geq 0:M_{r}\leq 0\}, and we are going to bound 𝐏[S~>Nε1]\mathbf{P}[\tilde{S}>N^{\varepsilon_{1}}] for some small ε1\varepsilon_{1} that we are going to choose in the course of the proof. We are going to adapt an argument in [10, Proposition 17.19] to this purpose.

Define

Th=inf{r0:Mr0 or Mrh},T_{h}=\inf\{r\geq 0:\text{$M_{r}\leq 0$ or $M_{r}\geq h$}\},

where we set h=nNε1h=\sqrt{n}\sqrt{N^{\varepsilon_{1}}}. Let (r)r0(\mathcal{F}_{r})_{r\geq 0} denote the filtration generated by (Mr)r0(M_{r})_{r\geq 0}. We have

Var(Mr+1|r)=nσ2for all r0,\mathrm{Var}(M_{r+1}\,|\,\mathcal{F}_{r})=n\sigma^{2}\quad\text{for all $r\geq 0$}, (63)

where recall that σ2\sigma^{2} is the variance of Y1(i)Y^{(i)}_{1}.
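For clarity, both the martingale property of $(M_{r})_{r\geq 0}$ and (63) come from the fact that consecutive observation times differ by exactly $n$ steps: for every $r\geq 0$,

M_{r+1}-M_{r}=Y^{(i)}_{T+s_{r+1}}-Y^{(i)}_{T+s_{r+2}}=-\sum_{t=T+s_{r+1}+1}^{T+s_{r+2}}\left(Y^{(i)}_{t}-Y^{(i)}_{t-1}\right),

which, by the Markov property, is a sum of $n$ i.i.d. centred increments of the lazy walk, each with variance $\sigma^{2}$, independent of $\mathcal{F}_{r}$.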

We first estimate 𝐄[M0|S~>0]\mathbf{E}[M_{0}\,|\,\tilde{S}>0]. Since 0s1<n0\leq s_{1}<n, by the same argument as in Lemma 4.10 we have that

𝐄[M0|YT+s1(i)<L]=𝐄[LYT+s1(i)|LYT+s1(i)>0]Cn12.\begin{split}\mathbf{E}[M_{0}\,|\,Y^{(i)}_{T+s_{1}}<L]&=\mathbf{E}[L-Y^{(i)}_{T+s_{1}}\,|\,L-Y^{(i)}_{T+s_{1}}>0]\leq Cn^{\frac{1}{2}}.\end{split}

Next, we bound $\mathbf{P}[M_{T_{h}}\geq h\,|\,M_{0}]$. Since $(M_{r\wedge T_{h}})$ is bounded, the Optional Stopping Theorem gives

M0=𝐄[MTh|M0]=𝐄[MTh𝟏{MTh0}|M0]+𝐄[MTh𝟏{MThh}|M0]=m(h)+𝐄[MTh𝟏{MThh}|M0]m(h)+h𝐏[MThh|M0],\begin{split}M_{0}&=\mathbf{E}[M_{T_{h}}|M_{0}]=\mathbf{E}\left[M_{T_{h}}\mathbf{1}_{\{M_{T_{h}}\leq 0\}}|M_{0}\right]+\mathbf{E}\left[M_{T_{h}}\mathbf{1}_{\{M_{T_{h}}\geq h\}}|M_{0}\right]\\ &=-m_{-}(h)+\mathbf{E}\left[M_{T_{h}}\mathbf{1}_{\{M_{T_{h}}\geq h\}}|M_{0}\right]\\ &\geq-m_{-}(h)+h\mathbf{P}\left[M_{T_{h}}\geq h|M_{0}\right],\end{split}

where we denote 𝐄[MTh𝟏{MTh0}|M0]\mathbf{E}[M_{T_{h}}\mathbf{1}_{\{M_{T_{h}}\leq 0\}}|M_{0}] by m(h)0-m_{-}(h)\leq 0 and the last step is due to MTh𝟏{MThh}h𝟏{MThh}M_{T_{h}}\mathbf{1}_{\{M_{T_{h}}\geq h\}}\geq h\mathbf{1}_{\{M_{T_{h}}\geq h\}}. Hence, we have

M0+m(h)h𝐏[MThh|M0].\displaystyle M_{0}+m_{-}(h)\geq h\mathbf{P}\left[M_{T_{h}}\geq h|M_{0}\right].

We bound m(h)m_{-}(h) using Lemma 4.10

m(h)maxyL𝐄y[Yn(i)L|Yn(i)>L]Cn12.\displaystyle m_{-}(h)\leq\max_{y\leq L}\mathbf{E}_{y}\left[Y^{(i)}_{n}-L|Y^{(i)}_{n}>L\right]\leq Cn^{\frac{1}{2}}.

Hence, we have

𝐏[MThh|M0]M0h+Cn12h.\displaystyle\mathbf{P}[M_{T_{h}}\geq h\,|\,M_{0}]\leq\frac{M_{0}}{h}+\frac{Cn^{\frac{1}{2}}}{h}.

We now estimate $\mathbf{E}[T_{h}\,|\,M_{0}]$. Let $G_{r}=M^{2}_{r}-hM_{r}-\sigma^{2}nr$. The sequence $(G_{r})$ is a martingale by (63): indeed, $\mathbf{E}[G_{r+1}\,|\,\mathcal{F}_{r}]=(M_{r}^{2}+\sigma^{2}n)-hM_{r}-\sigma^{2}n(r+1)=G_{r}$. We can bound both the ‘overshoot’ above $h$ and the ‘undershoot’ below $0$ by Lemma 4.10. For the ‘undershoot’ below $0$ we have

𝐄[(MThh)MTh|MTh0,M0]=𝐄[MTh2|MTh0,M0]+𝐄[hMTh|MTh0,M0]Cn+Chn1/2.\begin{split}\mathbf{E}[(M_{T_{h}}-h)M_{T_{h}}\,|\,M_{T_{h}}\leq 0,M_{0}]&=\mathbf{E}[M_{T_{h}}^{2}\,|\,M_{T_{h}}\leq 0,M_{0}]+\mathbf{E}[-hM_{T_{h}}\,|\,M_{T_{h}}\leq 0,M_{0}]\\ &\leq Cn+Chn^{1/2}.\end{split}

For the ‘overshoot’ above $h$, write $M_{T_{h}}=:N_{T_{h}}+h$; then we have

(M_{T_{h}}-h)M_{T_{h}}=N_{T_{h}}(h+N_{T_{h}}).

Hence

𝐄[(MThh)MTh|MThh,M0]=𝐄[hNTh|NTh0,M0]+𝐄[NTh2|NTh0,M0]Chn1/2+Cn.\begin{split}\mathbf{E}[(M_{T_{h}}-h)M_{T_{h}}\,|\,M_{T_{h}}\geq h,M_{0}]&=\mathbf{E}[hN_{T_{h}}\,|\,N_{T_{h}}\geq 0,M_{0}]+\mathbf{E}[N_{T_{h}}^{2}\,|\,N_{T_{h}}\geq 0,M_{0}]\\ &\leq Chn^{1/2}+Cn.\end{split}

For $r<T_{h}$, we have $(M_{r}-h)M_{r}<0$. Combining this with the overshoot and undershoot bounds above, we therefore have

𝐄[MrTh2hMrTh|M0]\displaystyle\mathbf{E}\left[M^{2}_{r\wedge T_{h}}-hM_{r\wedge T_{h}}|M_{0}\right] Chn1/2+Cn.\displaystyle\leq Chn^{1/2}+Cn.

Since (GrTh)(G_{r\wedge T_{h}}) is a martingale

hM0G0𝐄[GrTh|M0]=𝐄[MrTh2hMrTh|M0]σ2n𝐄[rTh|M0]Cn12h+Cnσ2n𝐄[rTh|M0].\begin{split}-hM_{0}\leq G_{0}\leq\mathbf{E}[G_{r\wedge T_{h}}|M_{0}]&=\mathbf{E}[M^{2}_{r\wedge T_{h}}-hM_{r\wedge T_{h}}|M_{0}]-\sigma^{2}n\mathbf{E}[r\wedge T_{h}|M_{0}]\\ &\leq Cn^{\frac{1}{2}}h+Cn-\sigma^{2}n\mathbf{E}[r\wedge T_{h}|M_{0}].\end{split}

We conclude that 𝐄[rTh|M0]h(M0+Cn12)+Cnσ2n\mathbf{E}[r\wedge T_{h}\,|\,M_{0}]\leq\frac{h(M_{0}+Cn^{\frac{1}{2}})+Cn}{\sigma^{2}n}. Letting rr\to\infty, by the Monotone Convergence Theorem,

𝐄[Th|M0]h(M0+Cn12)+Cnσ2n,\mathbf{E}[T_{h}\,|\,M_{0}]\leq\frac{h(M_{0}+Cn^{\frac{1}{2}})+Cn}{\sigma^{2}n},

where h=nNε1h=\sqrt{n}\sqrt{N^{\varepsilon_{1}}}. This gives

𝐏[Th>Nε1|M0]1Nε1[nNε1M0+CnNε1+Cnσ2n].\begin{split}\mathbf{P}[T_{h}>N^{\varepsilon_{1}}|M_{0}]\leq\frac{1}{N^{\varepsilon_{1}}}\left[\frac{\sqrt{n}\sqrt{N^{\varepsilon_{1}}}M_{0}+Cn\sqrt{N^{\varepsilon_{1}}}+Cn}{\sigma^{2}n}\right].\end{split}

Taking expectations of both sides, we have

𝐏[Th>Nε1]1Nε1[nNε1𝐄M0+CnNε1+Cnσ2n]=𝐄M0σ2nNε1+Cσ2Nε1+Cσ2Nε1CNε1.\begin{split}\mathbf{P}[T_{h}>N^{\varepsilon_{1}}]&\leq\frac{1}{N^{\varepsilon_{1}}}\left[\frac{\sqrt{n}\sqrt{N^{\varepsilon_{1}}}\mathbf{E}M_{0}+Cn\sqrt{N^{\varepsilon_{1}}}+Cn}{\sigma^{2}n}\right]\\ &=\frac{\mathbf{E}M_{0}}{\sigma^{2}\sqrt{n}\sqrt{N^{\varepsilon_{1}}}}+\frac{C}{\sigma^{2}\sqrt{N^{\varepsilon_{1}}}}+\frac{C}{\sigma^{2}N^{\varepsilon_{1}}}\leq\frac{C}{\sqrt{N^{\varepsilon_{1}}}}.\end{split}

Combining the above bounds, we get

𝐏[S~>Nε1]\displaystyle\mathbf{P}[\tilde{S}>N^{\varepsilon_{1}}] 𝐏[MThh]+𝐏[Th>Nε1]𝐄[M0]h+Cn12h+CNε1CNε1.\displaystyle\leq\mathbf{P}[M_{T_{h}}\geq h]+\mathbf{P}[T_{h}>N^{\varepsilon_{1}}]\leq\frac{\mathbf{E}[M_{0}]}{h}+\frac{Cn^{\frac{1}{2}}}{h}+\frac{C}{\sqrt{N^{\varepsilon_{1}}}}\leq\frac{C}{\sqrt{N^{\varepsilon_{1}}}}.
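For clarity, the first inequality above uses the inclusion of events

\{\tilde{S}>N^{\varepsilon_{1}}\}\subseteq\{M_{T_{h}}\geq h\}\cup\{T_{h}>N^{\varepsilon_{1}}\},

which holds since, if $T_{h}\leq N^{\varepsilon_{1}}$ and $M_{T_{h}}<h$, then by the definition of $T_{h}$ we must have $M_{T_{h}}\leq 0$, and hence $\tilde{S}\leq T_{h}\leq N^{\varepsilon_{1}}$.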

We now bound the probability that a copy of $\mathbf{K}$ is hit between times $T$ and $T+s_{N^{\varepsilon_{1}}}$.

We first show that the probability that the lazy random walk on the torus is in the ball B(𝐱0,Nζ)B(\mathbf{x}_{0},N^{\zeta}) at time TT goes to 0. Indeed, we have

𝐏o[YTφ1(B(𝐱0,Nζ))]=y(L,L)dφ1(B(𝐱0,Nζ))𝐏o[YT=y]\displaystyle\mathbf{P}_{o}\left[Y_{T}\in\varphi^{-1}\left(B(\mathbf{x}_{0},N^{\zeta})\right)\right]=\sum_{y^{\prime}\in\partial(-L,L)^{d}\cap\varphi^{-1}(B(\mathbf{x}_{0},N^{\zeta}))}\mathbf{P}_{o}\left[Y_{T}=y^{\prime}\right]
CNζ(d1)Ld1Nd1CLd1=CN(ζ1)(d1),\displaystyle\qquad\leq CN^{\zeta(d-1)}\frac{L^{d-1}}{N^{d-1}}\frac{C}{L^{d-1}}=CN^{(\zeta-1)(d-1)},

where we have $\zeta<\delta/d<1$, so the last expression goes to $0$. Here we used that $\mathbf{P}_{o}[Y_{T}=y^{\prime}]\leq C/L^{d-1}$, which follows, for example, from a half-space Poisson kernel estimate [9, Theorem 8.1.2]. As for the number of terms in the sum, there are $CL^{d-1}/N^{d-1}$ copies of the torus within $\ell_{\infty}$ distance $\leq N$ of the boundary $\partial(-L,L)^{d}$, and within a single copy of the torus the intersection of $\varphi^{-1}(B(\mathbf{x}_{0},N^{\zeta}))$ with $\partial(-L,L)^{d}$ has size at most $CN^{\zeta(d-1)}$.

Condition on the location $y^{\prime}$ of the walk at the exit time $T$. For $y^{\prime}\notin\varphi^{-1}\left(B(\mathbf{x}_{0},N^{\zeta})\right)$ we first bound the probability of hitting a copy of $\mathbf{K}$ between times $0$ and $s_{2}$ (time measured from $T$). After time $s_{2}$, the random walk is well mixed, and we can apply a simpler union bound.

We thus have the upper bound

t=0s2xφ1(𝐊)pt(y,x)𝐏[max0ts2|Yt(i)y|n12+ε]+xφ1(𝐊)|xy|n12+εG(y,x).\displaystyle\sum_{t=0}^{s_{2}}\sum_{x^{\prime}\in\varphi^{-1}(\mathbf{K})}p_{t}(y^{\prime},x^{\prime})\leq\mathbf{P}\left[\max_{0\leq t\leq s_{2}}|Y^{(i)}_{t}-y^{\prime}|\geq n^{\frac{1}{2}+\varepsilon}\right]+\sum_{\begin{subarray}{c}x^{\prime}\in\varphi^{-1}(\mathbf{K})\\ |x^{\prime}-y^{\prime}|\leq n^{\frac{1}{2}+\varepsilon}\end{subarray}}G(y^{\prime},x^{\prime}).

The first term is stretched-exponentially small due to the Martingale maximal inequality (3). The Green’s function term is bounded by Lemma 4.1(iii).

After time s2s_{2}, by Lemma 2.1, we have that

t=s2sNε1𝐏y[Ytφ1(K)]nNε1|𝐊|CNd=CNδ+ε1d.\displaystyle\sum_{t=s_{2}}^{s_{N^{\varepsilon_{1}}}}\mathbf{P}_{y}\left[Y_{t}\in\varphi^{-1}(K)\right]\leq n\cdot N^{\varepsilon_{1}}|\mathbf{K}|\frac{C}{N^{d}}=C\,N^{\delta+\varepsilon_{1}-d}.

Therefore, combining the above upper bounds, we obtain

E3CNε12+CNδd+2δε+CNδd+ε10,as N,\displaystyle E_{3}\leq C\cdot N^{-\frac{\varepsilon_{1}}{2}}+C\cdot N^{\delta-d+2\delta\varepsilon}+C\cdot N^{\delta-d+\varepsilon_{1}}\to 0,\quad\text{as $N\to\infty$,}

if ε\varepsilon and ε1\varepsilon_{1} are sufficiently small. ∎

Lemma 4.12.

We have E4Cecf(n)2NdlogNnE_{4}\leq Ce^{-cf(n)^{2}}\frac{N^{d}\log N}{n} for some CC. Consequently, there exists C1C_{1} such that if f(n)C1logNf(n)\geq C_{1}\sqrt{\log N}, then E40E_{4}\to 0.

Proof.

By the Martingale maximal inequality (3), we have that

E4Cecf(n)2NdlogNn.\displaystyle E_{4}\leq Ce^{-cf(n)^{2}}\frac{N^{d}\log N}{n}.

Taking, say, $C_{1}>\sqrt{d/c}$, we have $e^{-cf(n)^{2}}\leq N^{-cC_{1}^{2}}$ with $cC_{1}^{2}>d$ whenever $f(n)\geq C_{1}\sqrt{\log N}$, and hence

E40,as N.\displaystyle E_{4}\to 0,\quad\text{as $N\to\infty$}.

4.4 Proof of Proposition 2.4

Proof.

By the Martingale maximal inequality (3), used in the second step below, we have

𝐏y1[|Yny1|>n(10loglogn)]=𝐏0[|Yn|>n(10loglogn)]exp(c100(loglogn)2).\begin{split}\mathbf{P}_{y^{\prime}_{\ell-1}}[|Y_{n}-y^{\prime}_{\ell-1}|>\sqrt{n}(10\log\log n)]&=\mathbf{P}_{0}[|Y_{n}|>\sqrt{n}(10\log\log n)]\\ &\leq\exp(-c100(\log\log n)^{2}).\end{split}

Hence we have

𝐄[#{1NdnC1logN:|YnYn(1)|>10nloglogn}]NdnC1logNexp(c(loglogn)2)NdnCexp((c/2)(loglogn)2).\begin{split}&\mathbf{E}\left[\#\left\{1\leq\ell\leq\frac{N^{d}}{n}C_{1}\log N:|Y_{n\ell}-Y_{n(\ell-1)}|>10\sqrt{n}\log\log n\right\}\right]\\ &\qquad\leq\frac{N^{d}}{n}C_{1}\log N\exp(-c(\log\log n)^{2})\leq\frac{N^{d}}{n}C\exp(-(c/2)(\log\log n)^{2}).\end{split}

By Markov’s inequality, it follows that

𝐏[#{1NdnC1logN:|YnYn(1)|>10nloglogn}Ndn(loglogn)1]NdnCexp((c/2)(loglogn)2)Ndn(loglogn)1Cexp((c/2)(loglogn)2)(loglogn)10,\begin{split}&\mathbf{P}\left[\#\left\{1\leq\ell\leq\frac{N^{d}}{n}C_{1}\log N:|Y_{n\ell}-Y_{n(\ell-1)}|>10\sqrt{n}\log\log n\right\}\geq\frac{N^{d}}{n}(\log\log n)^{-1}\right]\\ &\qquad\leq\frac{\frac{N^{d}}{n}C\exp(-(c/2)(\log\log n)^{2})}{\frac{N^{d}}{n}(\log\log n)^{-1}}\leq C\frac{\exp(-(c/2)(\log\log n)^{2})}{(\log\log n)^{-1}}\to 0,\end{split}

as $N\to\infty$. ∎

Acknowledgements. We thank two anonymous referees for their constructive criticism. The research of Minwei Sun was supported by an EPSRC doctoral training grant to the University of Bath with project reference EP/N509589/1/2377430.

References

  • [1] Jiří Černý and Augusto Teixeira. Random walks on torus and random interlacements: Macroscopic coupling and phase transition. Annals of Applied Probability, 26(5):2883–2914, 2016.
  • [2] Deepak Dhar. Theoretical studies of self-organized criticality. Phys. A, 369(1):29–70, 2006.
  • [3] Alexander Drewitz, Balázs Ráth, and Artëm Sapozhnikov. An Introduction to Random Interlacements. SpringerBriefs in Mathematics. Springer, 2014.
  • [4] Rick Durrett. Probability: Theory and Examples. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 5th edition, 2019.
  • [5] W. Hebisch and L. Saloff-Coste. Gaussian estimates for Markov chains and random walks on groups. The Annals of Probability, 21(2):673–709, 1993.
  • [6] Antal A. Járai. Sandpile models. Probab. Surv., 15:243–306, 2018.
  • [7] Antal A. Járai and Minwei Sun. Toppling and height probabilities in sandpiles. Journal of Statistical Mechanics: Theory and Experiment, 2019(11):113204, 2019.
  • [8] Gregory F. Lawler. Intersections of Random Walks. Modern Birkhäuser Classics. Springer, New York, 2013.
  • [9] Gregory F. Lawler and Vlada Limic. Random Walk: A Modern Introduction. Cambridge Studies in Advanced Mathematics 123. Cambridge University Press, Cambridge, 2010.
  • [10] David A. Levin, Yuval Peres, and Elizabeth L. Wilmer. Markov Chains and Mixing Times. American Mathematical Society, Providence, RI, second edition, 2017.
  • [11] Frank Redig. Mathematical aspects of the abelian sandpile model. In Mathematical Statistical Physics, pages 657–729. Elsevier B.V., Amsterdam, 2006.
  • [12] Alain-Sol Sznitman. Random walks on discrete cylinders and random interlacements. Probability Theory and Related Fields, 145(1–2):143–175, 2009.
  • [13] Alain-Sol Sznitman. Vacant set of random interlacements and percolation. Annals of Mathematics, 171(3):2039–2087, 2010.
  • [14] Alain-Sol Sznitman. Topics in Occupation Times and Gaussian Free Fields. Zurich Lectures in Advanced Mathematics. European Mathematical Society (EMS), Zürich, 2012.
  • [15] David Windisch. Random walk on a discrete torus and random interlacements. Electronic Communications in Probability, 13, 2008.

Antal A. Járai and Minwei Sun
Address: Department of Mathematical Sciences, University of Bath, Claverton Down, Bath, BA2 7AY, United Kingdom
Email: [email protected], [email protected]