
Speed function for biased random walks with traps

Volker Betz, Matthias Meiners and Ivana Tomic
Abstract.

We consider a biased nearest-neighbor random walk on $\mathbb{Z}$ which at each step is trapped for some random time with random, site-dependent mean. We derive a simple formula for the speed function in terms of the model parameters.

Keywords: Bouchaud’s trap model, random walk in random environment, rate of escape, speed function
MSC Subclass: 60K37, 60F15

1. Introduction

Biased random walks on random graphs have been popular objects of study in recent years. The most relevant effect is that the random walk can run into traps, i. e. portions of the random graph that can only be exited by making a large number of steps in the direction opposite to the drift, which can take a long time. This leads to a decrease of the speed of the ballistic motion that is typical for a biased random walk in the absence of traps. Explicitly, if we embed the random graph into $\mathbb{R}^{d}$ and define the ballistic speed of the walk $(X_{t})_{t\geq 0}$ as $\overline{\mathrm{v}}=\lim_{t\to\infty}\frac{1}{t}X_{t}$ (provided the limit exists almost surely), then in many models $\overline{\mathrm{v}}$ (or rather its projection in the direction of the bias) is not a monotone function of the bias, and often equals zero beyond a critical value of the bias. The latter behaviour is known to hold for the biased walk on the infinite connected cluster of supercritical bond percolation on $\mathbb{Z}^{d}$, $d\geq 2$ [4, 8, 13], on the Galton-Watson tree with leaves [1, 6, 12] and on certain one-dimensional percolation models [2, 10]. We refer to the lecture notes [3] and the introduction of [10] for further details on the models and relations to physical models.

The purpose of this note is to derive an explicit formula for the speed in a class of toy models that includes the biased Bouchaud’s trap model on the integers. In these models, a random walk $(Y_{n})_{n\in\mathbb{N}_{0}}$ runs on $\mathbb{Z}$, and the geometry of the random graph is replaced by a family of random average holding times. Explicitly, a random average holding time $w_{x}$ is sampled for each $x\in\mathbb{Z}$, and when the biased random walk hits the point $x$ at time $k$, it waits for a random time $\tau_{w_{x},x,k}$, mimicking the time it takes to leave a trap with entrance at $x$. The stochastic processes $(\tau_{r,x,k})_{r\geq 0}$, $x\in\mathbb{Z}$, $k\in\mathbb{N}_{0}$ are i. i. d. (independent and identically distributed) and $\tau_{r,x,k}$ has mean $r$, but the $w_{x}$, $x\in\mathbb{Z}$ are not re-sampled during the dynamics, and thus the waiting times $\tau_{w_{Y_{k}},Y_{k},k}$, $k\in\mathbb{N}_{0}$ are not independent. This means that a trivial application of the law of large numbers to compute the average waiting time per site, $\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\tau_{w_{Y_{k}},Y_{k},k}$, is not possible. Under the assumption that $(w_{x})_{x\in\mathbb{Z}}$ is an ergodic family, we will show that the law of large numbers nevertheless holds, and derive an explicit expression for $\overline{\mathrm{v}}$ in terms of the waiting time distributions. We illustrate our results with several examples that also demonstrate how close our toy model is to various models of biased walks on random graphs.

2. Model, result and examples

For $\lambda>0$, let $\xi,\xi_{1},\xi_{2},\ldots$ be a sequence of i. i. d. random variables with

$$\mathbf{P}(\xi=1)=1-\mathbf{P}(\xi=-1)=\frac{{\rm e}^{\lambda}}{{\rm e}^{\lambda}+{\rm e}^{-\lambda}}=\vcentcolon p_{\lambda}. \qquad (2.1)$$

We define $Y_{n}\vcentcolon=\sum_{k=1}^{n}\xi_{k}$ for $n\in\mathbb{N}_{0}=\{0,1,2,\ldots\}$. The process $(Y_{n})_{n\in\mathbb{N}_{0}}$ is a biased nearest-neighbor random walk on $\mathbb{Z}$ starting at $0$.

Let $(w_{x})_{x\in\mathbb{Z}}$ be an ergodic sequence of positive random variables, modeling the average waiting times at the points $x\in\mathbb{Z}$. Let $((\tau_{r,x,k})_{r\geq 0})_{x\in\mathbb{Z},k\in\mathbb{N}_{0}}$ be a family of independent random processes with $\mathbf{E}[\tau_{r,x,k}]=r$ for all $r\geq 0$ and all $x\in\mathbb{Z}$, $k\in\mathbb{N}_{0}$. We assume that $(\tau_{r,x,k})_{r\geq 0}$ for fixed $x\in\mathbb{Z}$ and $k\in\mathbb{N}_{0}$ is a stochastic process taking values in the Skorohod space $D=D[0,\infty)$ of right-continuous functions with existing left limits at all points in $(0,\infty)$, see e. g. [5, Section 16]. Define $(X_{t})_{t\geq 0}$ to be the continuous-time process that follows the trajectory of the walk $(Y_{n})_{n\in\mathbb{N}_{0}}$ but, following its $k^{\mathrm{th}}$ step to site $x\in\mathbb{Z}$, say, spends the random time $\tau_{w_{x},x,k}$ at $x$ before making the next step. More precisely, let $T_{0}\vcentcolon=0$ and

$$T_{n}\vcentcolon=\sum_{k=0}^{n-1}\tau_{w_{Y_{k}},Y_{k},k} \qquad (2.2)$$

for $n\in\mathbb{N}$. Then we define $(X_{t})_{t\geq 0}$ via

$$X_{t}\vcentcolon=Y_{k}\quad\text{for }T_{k}\leq t<T_{k+1}. \qquad (2.3)$$

The biased Bouchaud’s trap model on $\mathbb{Z}$ is the special case where $(w_{x})_{x\in\mathbb{Z}}$ is an i. i. d. sequence, where $\tau_{r,x,k}=r\mathbf{e}_{x,k}$, and where $(\mathbf{e}_{x,k})_{x\in\mathbb{Z},\,k\in\mathbb{N}}$ is a family of i. i. d. exponentially distributed random variables with mean $1$; see [3] for further details on Bouchaud’s trap model. Note that the exponential distribution is necessary (and sufficient) to make $(X_{t})_{t\geq 0}$ a continuous-time Markov process, but this property is irrelevant for our purposes.

There is a strong law of large numbers for our model. To formulate it, we introduce the notation $p_{\mathrm{esc}}=p_{\mathrm{esc}}(\lambda)$ for the escape probability of $(Y_{n})_{n\in\mathbb{N}_{0}}$ from the origin, i.e.,

$$p_{\mathrm{esc}}\vcentcolon=\mathbf{P}(Y_{n}\not=0\text{ for all }n\in\mathbb{N})=2p_{\lambda}-1=\frac{2{\rm e}^{\lambda}}{{\rm e}^{\lambda}+{\rm e}^{-\lambda}}-1=\frac{{\rm e}^{\lambda}-{\rm e}^{-\lambda}}{{\rm e}^{\lambda}+{\rm e}^{-\lambda}}.$$
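For completeness, we recall the standard gambler's-ruin computation behind the identity $p_{\mathrm{esc}}=2p_{\lambda}-1$ (a well-known fact that is not spelled out in the text): started at $1$, the walk ever hits $0$ with probability $(1-p_{\lambda})/p_{\lambda}$, whence
$$p_{\mathrm{esc}}=p_{\lambda}\Big{(}1-\frac{1-p_{\lambda}}{p_{\lambda}}\Big{)}=p_{\lambda}-(1-p_{\lambda})=2p_{\lambda}-1.$$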
Theorem 2.1.

Let $(w_{x})_{x\in\mathbb{Z}}$ be ergodic under $\mathbf{P}$, and let $\mathbf{E}$ be the expectation with respect to $\mathbf{P}$. Then $\overline{\mathrm{v}}:=\lim_{t\to\infty}\frac{X_{t}}{t}$ exists almost surely, and we have

$$\overline{\mathrm{v}}=\frac{p_{\mathrm{esc}}(\lambda)}{\mathbf{E}[w_{0}]}\quad\text{almost surely.} \qquad (2.4)$$

Formula (2.4) has the simple interpretation that $(Y_{n})_{n\in\mathbb{N}_{0}}$ travels, by the strong law of large numbers, with linear speed $2p_{\lambda}-1=p_{\mathrm{esc}}(\lambda)$. The average duration of each visit of the walk $(X_{t})_{t\geq 0}$ to a site $x$ is equal to $\mathbf{E}[w_{x}]=\mathbf{E}[w_{0}]$. So on average, the walk makes a step every $\mathbf{E}[w_{0}]$ units of time, i.e., it steps at rate $1/\mathbf{E}[w_{0}]$, resulting in a speed of $p_{\mathrm{esc}}(\lambda)/\mathbf{E}[w_{0}]$.
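As a quick illustration of Theorem 2.1 (our sketch, not part of the paper), the following Monte Carlo simulation of the Bouchaud case reproduces formula (2.4); the choices $\lambda=0.8$, an exponential environment with $\mathbf{E}[w_{0}]=2$, and the run length are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

lam = 0.8
p = np.exp(lam) / (np.exp(lam) + np.exp(-lam))   # p_lambda from (2.1)

w = {}                                           # environment, drawn lazily
def w_at(x):
    """Average holding time w_x; i.i.d. exponential with mean 2 (our choice)."""
    if x not in w:
        w[x] = rng.exponential(2.0)
    return w[x]

pos, t = 0, 0.0
for _ in range(500_000):
    t += rng.exponential(w_at(pos))   # Bouchaud case: tau = w_x * Exp(1)
    pos += 1 if rng.random() < p else -1

print("empirical speed X_t / t:", pos / t)
print("formula (2.4):          ", (2 * p - 1) / 2.0)   # p_esc / E[w_0]
```

Note that the environment $(w_{x})$ is drawn once and kept fixed during the run, which is exactly the quenched dynamics described above.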

Example 1: One-dimensional walk with vertical traps.

Consider the graph obtained by attaching below each point $x$ of $\mathbb{Z}$ (pictured as lying on a horizontal line, called the backbone) a finite one-dimensional ‘branch’ (branches are pictured as lying on vertical lines, called traps, see Figure 1 below) of random length $L_{x}\in\mathbb{N}_{0}$. Assume that the $L_{x}$, $x\in\mathbb{Z}$ are i. i. d. We run a biased discrete-time random walk on this graph, with the following transition probabilities: when the walk is at a backbone vertex $x$ and the length of the trap there is positive, it goes to the right with probability $qp_{\lambda}$, to the left with probability $q(1-p_{\lambda})$, and into the trap with probability $1-q$ for some fixed $q\in(0,1)$. When in a trap, it goes down the trap with probability $p_{\lambda}$ and up the trap with probability $1-p_{\lambda}$, unless it is at the very bottom of the trap, at which point it goes up with probability one.

Figure 1. Part of the graph, where the walk is on the backbone at the highlighted vertex $x$.

This model can be embedded into our model by simply considering only the horizontal coordinate of the walk. Then, the walk spends a random amount of time on or below the vertex $x$ of the backbone before moving to a neighboring vertex, the times at different vertices are independent and have possibly different distributions, and the times spent at consecutive visits to the same vertex are also independent, with the same distribution at every visit. All that remains to be done is to calculate $w_{x}$ and $(\tau_{r,x,k})_{r\geq 0}$. To this end, assume that we have a trap of length $\ell\geq 1$ at some vertex. Let $t_{\ell}$ be the random time needed by the walk to exit the trap conditional on making one step into it. Although the distribution of $t_{\ell}$ is complicated, its expectation can be computed by electrical network methods, and the result is

$$\mathbf{E}[t_{\ell}]=2\,\frac{{\rm e}^{2\lambda\ell}-1}{{\rm e}^{2\lambda}-1},$$

see [9, Eq. (1.5)], and note that in their notation, $\beta={\rm e}^{2\lambda}$. Since the walk can enter a trap multiple times without moving horizontally, the distribution of the total time spent at a trap of depth $\ell$ is equal to the distribution of $S_{\ell}=1+\sum_{k=1}^{N}t_{\ell,k}$, where $N$ is geometrically distributed with success probability $q$ (i. e., $\mathbf{P}(N=n)=(1-q)^{n}q$, $n\in\mathbb{N}_{0}$) and where the $t_{\ell,k}$, $k\in\mathbb{N}$ are independent copies of $t_{\ell}$. It follows that the expected time spent at a site with a trap of length $\ell$ is equal to

$$\bar{S}_{\ell}\vcentcolon=\mathbf{E}[S_{\ell}]=1+\frac{1-q}{q}\,\mathbf{E}[t_{\ell}]=1+\frac{2(1-q)}{q}\,\frac{{\rm e}^{2\lambda\ell}-1}{{\rm e}^{2\lambda}-1}.$$
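As a sanity check (ours, not from [9]), the following short simulation estimates $\mathbf{E}[t_{\ell}]$ by Monte Carlo and compares it with the formula above; the values $\lambda=0.5$, $\ell=4$ and the sample size are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
lam, ell = 0.5, 4                       # arbitrary illustrative choices
p = np.exp(lam) / (np.exp(lam) + np.exp(-lam))   # probability of stepping deeper

def exit_time():
    """t_ell: steps from entering the trap (depth 1) until back on the backbone."""
    d, n = 1, 1                         # the entering step is counted
    while d > 0:
        if d == ell:
            d -= 1                      # at the bottom: up with probability one
        else:
            d += 1 if rng.random() < p else -1
        n += 1
    return n

est = np.mean([exit_time() for _ in range(50_000)])
print("Monte Carlo:", est)
print("formula:    ", 2 * (np.exp(2 * lam * ell) - 1) / (np.exp(2 * lam) - 1))
```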

Therefore, with $S_{\ell}=1$ if $\ell=0$, we must choose the $w_{x}$ as i. i. d. random variables with

$$\mathbf{P}(w_{x}=\bar{S}_{\ell})=\mathbf{P}(L=\ell), \qquad (2.5)$$

and $\tau_{\bar{S}_{\ell},x,k}=S_{\ell,x,k}$, where the $S_{\ell,x,k}$ are independent copies of $S_{\ell}$. We extend $\tau_{r,x,k}$ to $r\notin\{\bar{S}_{\ell}:\ell\in\mathbb{N}_{0}\}$ in a piecewise linear fashion to make $(\tau_{r,x,k})_{r\geq 0}$ (right-) continuous and to ensure $\mathbf{E}[\tau_{r,x,k}]=r$ for all $r\geq 0$. (Actually, we can choose the $t_{\ell,k}$ non-decreasing in $\ell$ and hence $(\tau_{r,x,k})_{r\geq 0}$ non-decreasing in $r$, but this is not required here.) We then get

$$\overline{\mathrm{v}}=\frac{p_{\mathrm{esc}}(\lambda)}{\mathbf{E}[w_{0}]}=\frac{p_{\mathrm{esc}}(\lambda)}{\sum_{\ell=0}^{\infty}\bar{S}_{\ell}\mathbf{P}(L=\ell)}=\frac{{\rm e}^{\lambda}-{\rm e}^{-\lambda}}{{\rm e}^{\lambda}+{\rm e}^{-\lambda}}\cdot\frac{1}{1+\frac{2(1-q)}{q}\frac{\mathbf{E}[{\rm e}^{2\lambda L}]-1}{{\rm e}^{2\lambda}-1}}.$$

We observe that the distribution of the trap length $L$ plays a crucial role. If $L$ has all exponential moments, then the speed is strictly positive for all $\lambda>0$, while in the case where $L$ has no exponential moments, the speed is always equal to zero. The phase transition from positive to zero speed occurs if $L$ has some but not all exponential moments. This is the case, for instance, if $L$ is geometrically distributed, which seems to be the case (at least approximately) in most of the important models (see Example 3 below and the discussion following it). We set $\mathbf{P}(L=\ell)=(1-{\rm e}^{-\alpha}){\rm e}^{-\alpha\ell}$, $\ell\in\mathbb{N}_{0}$. In [9], precise tail estimates for the random variable $S_{L}$ are obtained for this choice of $L$, but for our purposes, simple calculations suffice. We obtain

$$\mathbf{E}[{\rm e}^{2\lambda L}]-1=\frac{1-{\rm e}^{-\alpha}}{1-{\rm e}^{2\lambda-\alpha}}-1=\frac{{\rm e}^{2\lambda}-1}{{\rm e}^{\alpha}-{\rm e}^{2\lambda}}\qquad\text{if }\lambda<\alpha/2,$$

and equal to infinity otherwise. We conclude that for fixed $\alpha>0$ and $0<\lambda<\alpha/2$,

$$\overline{\mathrm{v}}(\lambda)=\frac{{\rm e}^{\lambda}-{\rm e}^{-\lambda}}{{\rm e}^{\lambda}+{\rm e}^{-\lambda}}\cdot\frac{{\rm e}^{\alpha}-{\rm e}^{2\lambda}}{{\rm e}^{\alpha}-{\rm e}^{2\lambda}+\frac{2(1-q)}{q}},$$

whereas $\overline{\mathrm{v}}=0$ if $\lambda\geq\alpha/2$.
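As a numerical cross-check (ours, not from the paper), one can compare this closed form with the defining series $\mathbf{E}[w_{0}]=\sum_{\ell}\bar{S}_{\ell}\mathbf{P}(L=\ell)$; the parameters $\lambda=0.4$, $\alpha=2$, $q=1/2$ and the truncation at 400 terms are arbitrary.

```python
import math

def vbar_closed(lam, alpha, q):
    """Closed-form speed from the display above (zero for lam >= alpha/2)."""
    if lam >= alpha / 2:
        return 0.0
    d = math.exp(alpha) - math.exp(2 * lam)
    return math.tanh(lam) * d / (d + 2 * (1 - q) / q)

def vbar_series(lam, alpha, q, terms=400):
    """p_esc(lam) / E[w_0] with E[w_0] = sum_l S_bar_l * P(L = l), truncated;
    exponents are combined so that deep traps cause no overflow."""
    assert lam < alpha / 2
    c = 2 * (1 - q) / q / (math.exp(2 * lam) - 1)
    Ew0 = 0.0
    for l in range(terms):
        pl = (1 - math.exp(-alpha)) * math.exp(-alpha * l)          # P(L = l)
        Ew0 += pl + c * (1 - math.exp(-alpha)) * (
            math.exp((2 * lam - alpha) * l) - math.exp(-alpha * l))
    return math.tanh(lam) / Ew0        # tanh(lam) = p_esc(lam)

print(vbar_closed(0.4, 2.0, 0.5))      # both print ~0.2739
print(vbar_series(0.4, 2.0, 0.5))
```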

Example 2: Bouchaud’s trap model with drift-dependent holding times.

We can reduce the previous example to Bouchaud’s trap model by replacing the precise distribution of $S_{L}$ with a general heavy-tailed distribution for $w_{0}$, and by setting $\tau_{r,x,k}=r\mathbf{e}_{x,k}$ with $\mathbf{e}_{x,k}$ exponentially distributed with mean $1$. The main features are the same: inspired by the known tail behaviour of $S_{\ell}$ as a function of the bias [9, 11], we set

$$\mathbf{P}(w_{0}\geq t)=t^{-\alpha}\quad\text{for }t\geq 1, \qquad (2.6)$$

for some $\alpha>0$. Since $\mathbf{E}[w_{0}]=\int_{0}^{\infty}\mathbf{P}(w_{0}\geq t)\,{\rm d}t=1+\int_{1}^{\infty}t^{-\alpha}\,{\rm d}t=\frac{\alpha}{\alpha-1}$ for $\alpha>1$ and $\mathbf{E}[w_{0}]=\infty$ for $\alpha\leq 1$, we obtain

$$\overline{\mathrm{v}}(\alpha,\lambda)=\frac{e^{\lambda}-e^{-\lambda}}{e^{\lambda}+e^{-\lambda}}\cdot\frac{(\alpha-1)^{+}}{\alpha}.$$

We choose $\alpha=\lambda_{\mathrm{crit}}/\lambda$, where $\lambda_{\mathrm{crit}}>0$ is a parameter, and write $\overline{\mathrm{v}}(\lambda)$ for $\overline{\mathrm{v}}(\alpha,\lambda)$ if (2.6) holds with this choice of $\alpha$. In particular,

$$\overline{\mathrm{v}}(\lambda)=\frac{e^{\lambda}-e^{-\lambda}}{e^{\lambda}+e^{-\lambda}}\cdot\frac{(\lambda_{\mathrm{crit}}-\lambda)^{+}}{\lambda_{\mathrm{crit}}}.$$
Figure 2. The function $\lambda\mapsto\overline{\mathrm{v}}(\lambda)$ with $\lambda_{\mathrm{crit}}\vcentcolon=1$.
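The following few lines (our illustration, not part of the paper) evaluate this speed function for $\lambda_{\mathrm{crit}}=1$ and reproduce the qualitative shape of Figure 2: the speed first increases, then decreases, and vanishes for all $\lambda\geq\lambda_{\mathrm{crit}}$.

```python
import math

lam_crit = 1.0

def vbar(lam):
    """Speed in Example 2 with alpha = lam_crit / lam; zero for lam >= lam_crit."""
    return math.tanh(lam) * max(lam_crit - lam, 0.0) / lam_crit

for lam in (0.1, 0.25, 0.5, 0.75, 0.9, 1.0, 1.2):
    print(f"lambda = {lam:4.2f}   vbar = {vbar(lam):.4f}")
```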

Example 3: One-dimensional percolation with horizontal traps.

As in Example 1, we consider a random graph with a backbone that is just $\mathbb{Z}$, pictured as lying on a horizontal line, and we attach traps at each point of the backbone. The difference now is that while the first vertex of a trap is still below the vertex of $\mathbb{Z}$ at which the trap is attached, all further vertices of the trap are to the right of that first vertex, below some other vertices of the backbone. We demand that no other traps of positive length can occur for $\ell-1$ steps to the right of a trap of length $\ell$, meaning that we can represent the random graph as a subgraph of $\mathbb{Z}\times\{0,1\}$ (with nearest-neighbor edges).

Figure 3. Part of the graph.

In fact, this is the model studied in [2, 10] with traps only or single edges to the right. If we assume that edges not in the backbone appear independently with probability ${\rm e}^{-\alpha}$, then conditionally on having only a backbone and traps, the trap length $L$ is geometric with success probability $1-{\rm e}^{-\alpha}$. Inside a trap, we assume the same dynamics as in Example 1, i.e. a bias towards the end of the trap (which is now to the right, so the bias has the same direction and strength as in the backbone). The only point where greater generality than given in Example 1 might be desirable is the probability of the final jump out of the trap, which is in the vertical direction, and one might want to assign it a different probability than ${\rm e}^{-\lambda}/({\rm e}^{\lambda}+{\rm e}^{-\lambda})$. We refrain from doing so in order not to complicate matters further.

The model described above can again be mapped exactly into the context of Theorem 2.1. To do this, consider a Markov chain $(Z_{x})_{x\in\mathbb{Z}}$ with values in $\mathbb{N}_{0}$ and with transition probabilities

$$p(i,j)=\begin{cases}\mathbf{P}(L=j)&\text{if }i\in\{0,1\},\\ 1&\text{if }i>1,\;j=i-1,\\ 0&\text{otherwise.}\end{cases}$$

The interpretation of this chain is that when $Z_{x}\geq 1$, then there is a trap below $x$ (possibly beginning to the left of $x$) with $Z_{x}$ remaining vertices (including the vertex below $x$). Traps of length $1$ do not obstruct further traps, but traps of length $\ell\geq 2$ prevent further traps for $\ell-1$ steps. Therefore, if we define $f:\mathbb{N}_{0}^{2}\to\mathbb{N}_{0}$ with

$$f(i,j)=\begin{cases}j&\text{if }j\geq i,\\ 0&\text{if }j<i,\end{cases}$$

then $\Lambda_{x}=f(Z_{x-1},Z_{x})$ models the length of the trap rooted at site $x$.

The Markov chain $(Z_{x})_{x\in\mathbb{Z}}$ has a unique invariant measure if and only if $\mathbf{E}[L]<\infty$: from the transition probabilities, we conclude that the weight function $\pi$ of an invariant measure must fulfil the equations $\pi(0)\mathbf{P}(L=0)+\pi(1)\mathbf{P}(L=0)=\pi(0)$, and

$$\pi(0)\mathbf{P}(L=j)+\pi(1)\mathbf{P}(L=j)+\pi(j+1)=\pi(j)\qquad\text{for }j\in\mathbb{N}.$$

The unique probability weight function solving these equations is given by

$$\pi(0)=\frac{\mathbf{P}(L=0)}{\mathbf{P}(L=0)+\mathbf{E}[L]},\qquad\pi(j)=\frac{\mathbf{P}(L\geq j)}{\mathbf{P}(L=0)+\mathbf{E}[L]}\quad\text{ for }j\in\mathbb{N}.$$
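Before proceeding, here is a quick numerical check (ours) that these weights are indeed stationary; the geometric trap law with $\alpha=1.5$ and the truncation at $K=60$ states are arbitrary choices.

```python
import numpy as np

alpha = 1.5
K = 60                                             # truncation of the trap-length law
ell = np.arange(K)
pL = (1 - np.exp(-alpha)) * np.exp(-alpha * ell)   # P(L = l), geometric case
pL /= pL.sum()                                     # renormalize after truncation

# Transition matrix of (Z_x) on {0, ..., K-1}.
P = np.zeros((K, K))
P[0], P[1] = pL, pL
for i in range(2, K):
    P[i, i - 1] = 1.0

# Claimed stationary weights pi.
EL = (ell * pL).sum()
C = pL[0] + EL
pi = np.empty(K)
pi[0] = pL[0] / C
pi[1:] = pL[::-1].cumsum()[::-1][1:] / C           # P(L >= j) / C for j >= 1

print(np.abs(pi @ P - pi).max())   # ~ 0: pi is stationary (exactly, also truncated)
print(pi.sum())                    # ~ 1
```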

The stationary Markov chain $(Z_{x})_{x\in\mathbb{Z}}$ is thus ergodic (see e.g. Theorem 5.2.6 of [7]), and by [2, Lemma 5.6(c)] and the representation $\Lambda_{x}=f(Z_{x-1},Z_{x})$, also the process $(\Lambda_{x})_{x\in\mathbb{Z}}$ is ergodic. Defining $\bar{S}_{\ell}$ as in Example 1 and setting $w_{x}=\bar{S}_{\Lambda_{x}}$, we see (e.g. again by [2, Lemma 5.6(c)]) that also $(w_{x})_{x\in\mathbb{Z}}$ is ergodic. After defining $\tau_{\bar{S}_{\ell},x,k}=S_{\ell,x,k}$ in the same way as in Example 1, we are back in the setting of Theorem 2.1. It remains to compute $\mathbf{E}[w_{0}]$. We have

$$\mathbf{P}(\Lambda_{0}=j)=\sum_{k=0}^{\infty}\mathbf{P}(f(Z_{-1},Z_{0})=j\,|\,Z_{-1}=k)\,\pi(k)=\mathbf{P}(L=j)(\pi(0)+\pi(1))+\delta_{j,0}(1-\pi(0)-\pi(1)).$$

Since $\pi(0)+\pi(1)=\frac{1}{\mathbf{P}(L=0)+\mathbf{E}[L]}$ and $\bar{S}_{0}=1$, we obtain

$$\mathbf{E}[w_{0}]=\sum_{j=0}^{\infty}\bar{S}_{j}\mathbf{P}(\Lambda_{0}=j)=\frac{\mathbf{E}[\bar{S}_{L}]}{\mathbf{P}(L=0)+\mathbf{E}[L]}+\bar{S}_{0}\Big{(}1-\tfrac{1}{\mathbf{P}(L=0)+\mathbf{E}[L]}\Big{)}=1+\frac{\mathbf{E}[\bar{S}_{L}]-1}{\mathbf{P}(L=0)+\mathbf{E}[L]}.$$

Assuming again that $L$ is geometric with success probability $1-{\rm e}^{-\alpha}$, and that $\lambda<\alpha/2$, we know from Example 1 that

$$\mathbf{E}[\bar{S}_{L}]=1+\frac{2(1-q)}{q}\,\frac{1}{{\rm e}^{\alpha}-{\rm e}^{2\lambda}},$$

and we calculate $\mathbf{P}(L=0)+\mathbf{E}[L]=1+\frac{{\rm e}^{-2\alpha}}{1-{\rm e}^{-\alpha}}$. This gives

$$\mathbf{E}[w_{0}]=1+\frac{2(1-q)}{q}\,\frac{1}{({\rm e}^{\alpha}-{\rm e}^{2\lambda})\big{(}1+\frac{{\rm e}^{-2\alpha}}{1-{\rm e}^{-\alpha}}\big{)}},$$

and finally we obtain an explicit formula for

$$\overline{\mathrm{v}}(\lambda)=\frac{{\rm e}^{\lambda}-{\rm e}^{-\lambda}}{{\rm e}^{\lambda}+{\rm e}^{-\lambda}}\cdot\frac{1}{\mathbf{E}[w_{0}]}.$$
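To illustrate the end result (our sketch, with the arbitrary choices $\alpha=2$ and $q=1/2$), the following snippet evaluates this formula and compares it with the i. i. d. trap model of Example 1. Both speeds vanish for $\lambda\geq\alpha/2$; the speed here is larger, since the exclusion rule makes traps sparser and hence $\mathbf{E}[w_{0}]$ smaller.

```python
import math

def vbar_ex1(lam, alpha, q):
    """Example 1: i.i.d. geometric traps (zero speed for lam >= alpha/2)."""
    if lam >= alpha / 2:
        return 0.0
    d = math.exp(alpha) - math.exp(2 * lam)
    return math.tanh(lam) * d / (d + 2 * (1 - q) / q)

def vbar_ex3(lam, alpha, q):
    """Example 3: traps exclude each other; E[w_0] from the display above."""
    if lam >= alpha / 2:
        return 0.0
    denom = (math.exp(alpha) - math.exp(2 * lam)) * (
        1 + math.exp(-2 * alpha) / (1 - math.exp(-alpha)))
    Ew0 = 1 + 2 * (1 - q) / q / denom
    return math.tanh(lam) / Ew0        # tanh(lam) = p_esc(lam)

for lam in (0.2, 0.5, 0.8):
    print(lam, vbar_ex1(lam, 2.0, 0.5), vbar_ex3(lam, 2.0, 0.5))
```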
Remark 2.2.

We have seen in Examples 1 and 3 that the tail behaviour of the geometric distribution of the trap length is crucial for the slowdown of the speed to zero at a finite bias. Proposition 1.1 of [8] shows that this tail is also present in the case of the biased walk on a percolation cluster, although in this case a direct mapping to our reduced model is probably not possible. For the full model of one-dimensional percolation on the ladder graph [2, 10], besides the traps that we treat in Example 3 there are areas that are not traps, i. e. where it is possible to go to the next trap entrance to the right without making a step to the left. In such areas, the speed of the biased random walk should be non-decreasing as a function of the bias. For an explicit formula for the speed in the ladder graph model, one would need to find the probabilities of all possible non-trap areas along with the average time it takes to travel through them, and then combine this with the considerations of Example 3. Using some of the considerations in [2], this seems possible, but would be messy, so we do not carry it out. In any case, we expect the slowdown behavior to be of the same type, since it is dominated by the trap regions.

3. Proof of Theorem 2.1

While Theorem 2.1 is highly plausible, a rigorous proof is required. We use a coupling-from-the-past argument adapted from [2], where a formula for the speed of biased random walk in a one-dimensional percolation environment is derived which, however, is not tractable for explicit calculations.

On a suitable probability space, consider independent families of random variables $(w_{x})_{x\in\mathbb{Z}}$, $((\tau_{r,x,k})_{r\geq 0})_{x\in\mathbb{Z},k\in\mathbb{N}_{0}}$ and $(U_{x}(k))_{x\in\mathbb{Z},\,k\in\mathbb{N}}$ where $U_{x}(k)$ is uniform on $(0,1)$, $(w_{x})_{x\in\mathbb{Z}}$ is ergodic, the processes $(\tau_{r,x,k})_{r\geq 0}$, $x\in\mathbb{Z}$, $k\in\mathbb{N}_{0}$ are i. i. d. and $\mathbf{E}[\tau_{r,x,k}]=r$ for all $r\geq 0$. We note already here that by [2, Lemma 5.6(b)], the family

$$(\boldsymbol{V}_{x})_{x\in\mathbb{Z}}\vcentcolon=\big{(}(U_{x}(k))_{k\in\mathbb{N}},\,w_{x},\,((\tau_{r,x,k})_{r\geq 0})_{k\in\mathbb{N}}\big{)}_{x\in\mathbb{Z}}$$

is ergodic. Let us write $E\vcentcolon=(0,1)^{\mathbb{N}}\times\mathbb{R}^{+}\times D^{\mathbb{N}}$ for the target space of the $\boldsymbol{V}_{\!x}$. By [2, Lemma 5.6(b)], for any measurable function $f:E^{\mathbb{N}}\times E^{\mathbb{N}}\to\mathbb{R}$, the sequence

$$\big{(}f(\boldsymbol{V}_{\!x},\boldsymbol{V}_{\!x+1},\boldsymbol{V}_{\!x+2},\ldots;\boldsymbol{V}_{\!x-1},\boldsymbol{V}_{\!x-2},\ldots)\big{)}_{x\in\mathbb{Z}}$$

is ergodic. We will make extensive use of this fact.

First, for any $x\in\mathbb{Z}$, we may define $(Y_{n}^{x})_{n\in\mathbb{N}_{0}}$ as the discrete-time random walk started at $x$ that, when hitting a state $y\in\mathbb{Z}$ for the $k^{\mathrm{th}}$ time, makes the next step from $y$ to $y-1$ if

$$U_{y}(k)\leq\frac{e^{-\lambda}}{e^{\lambda}+e^{-\lambda}},$$

and steps to $y+1$ otherwise. A point $x\in\mathbb{Z}$ is called a regeneration point if $Y_{n}^{x}>x$ for all $n\in\mathbb{N}$. Write $I_{x}\vcentcolon=\mathbbm{1}_{\{Y_{n}^{x}>x\text{ for all }n\in\mathbb{N}\}}$ for the indicator of the event that $x$ is a regeneration point.
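To make the coupling construction concrete, here is a small simulation sketch (ours, with the arbitrary parameters $\lambda=0.6$ and finite horizons): all walks are driven by one shared family of uniforms $U_{y}(k)$, the regeneration property $I_{x}=1$ is only checked empirically over a finite horizon, and two walks started at different points follow identical paths after hitting the first common regeneration point.

```python
import numpy as np

rng = np.random.default_rng(7)
lam = 0.6
p_left = np.exp(-lam) / (np.exp(lam) + np.exp(-lam))

# Shared environment of uniforms U_y(k): the k-th visit to site y uses the same
# uniform for every walk; this is what couples the trajectories. The site range
# and the 200 uniforms per site are chosen generously for these short walks.
U = {y: rng.random(200) for y in range(-120, 1400)}

def walk(x, n_steps=2000):
    """The walk Y^x driven by the shared uniforms U_y(k)."""
    visits, path, y = {}, [x], x
    for _ in range(n_steps):
        k = visits.get(y, 0)          # y has been hit k times before
        visits[y] = k + 1
        y = y - 1 if U[y][k] <= p_left else y + 1
        path.append(y)
    return path

def is_regen(x):
    """Finite-horizon proxy for I_x = 1 (truly, Y_n^x > x is needed for ALL n)."""
    return all(y > x for y in walk(x)[1:])

a, b = walk(0), walk(-10)
r = next(x for x in range(0, 100) if is_regen(x))   # candidate for R_0
ia, ib = a.index(r), b.index(r)
print("first regeneration point >= 0:", r)
print("paths coincide after hitting it:", a[ia:ia + 50] == b[ib:ib + 50])
```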

Lemma 3.1.

The sequence $(I_{x})_{x\in\mathbb{Z}}$ is ergodic and

$$\lim_{n\to\infty}\frac{1}{n}\sum_{x=0}^{n-1}I_{x}=r_{\lambda}\quad\text{almost surely} \qquad (3.1)$$

for $r_{\lambda}\vcentcolon=\mathbf{P}(I_{0}=1)>0$.

We write $\boldsymbol{U}_{\!x}$ for $(U_{x}(k))_{k\in\mathbb{N}}$.

Proof.

Notice that $I_{x}=f(\boldsymbol{U}_{\!x},\boldsymbol{U}_{\!x+1},\ldots;\boldsymbol{U}_{\!x-1},\boldsymbol{U}_{\!x-2},\ldots)$ for some Borel measurable function $f:((0,1)^{\mathbb{N}})^{\mathbb{Z}}\to[0,1]$ (actually, $f$ only depends on the variables $\boldsymbol{U}_{\!x},\boldsymbol{U}_{\!x+1},\ldots$). Then, since the family $(\boldsymbol{U}_{\!y})_{y\in\mathbb{Z}}$ is ergodic as a family of i.i.d. random variables on $(0,1)^{\mathbb{N}}$, we conclude that $(I_{x})_{x\in\mathbb{Z}}$ is also ergodic by [2, Lemma 5.6(c)].

Since $\lambda>0$, each of the walks $(Y_{n}^{x})_{n\in\mathbb{N}_{0}}$ is transient to the right and, thus, $r_{\lambda}=\mathbf{P}(I_{0}=1)>0$. Now (3.1) follows from Birkhoff’s ergodic theorem. ∎

By (3.1), $\mathbf{P}(I_{x}=1\text{ for infinitely many }x\in\mathbb{N}_{0})=1$ and hence, by shift invariance and with some effort, we conclude $\mathbf{P}(I_{x}=1\text{ for infinitely many }x<0)=1$. We enumerate the random points $x\in\mathbb{Z}$ with $I_{x}=1$ from left to right as $\ldots,R_{-1},R_{0},R_{1},R_{2},\ldots$ such that $R_{k}<R_{k+1}$ for all $k\in\mathbb{Z}$ and $R_{-1}<0\leq R_{0}<R_{1}$. Now notice that for any $x,y\in\mathbb{Z}$, the trajectories of $(Y_{n}^{x})_{n\in\mathbb{N}_{0}}$ and $(Y_{n}^{y})_{n\in\mathbb{N}_{0}}$ coincide once they hit the first $R_{k}\geq x\vee y$. For $k\in\mathbb{Z}$, define $\rho_{k}\vcentcolon=\inf\{n\in\mathbb{N}_{0}:Y_{n}^{R_{k}}=0\}$ to be the first time at which the random walk started at $R_{k}$ hits $0$. By the transience to the right, $\rho_{k}<\infty$ for all $k<0$. Also, at time $\rho_{k-1}-\rho_{k}$ the walk started at $R_{k-1}$ hits $R_{k}$ and its trajectory then coincides with that of the walk started at $R_{k}$, for $k<0$, i.e.,

$$(Y^{R_{k-1}}_{\rho_{k-1}-\rho_{k}},\,Y^{R_{k-1}}_{\rho_{k-1}-\rho_{k}+1},\,\ldots)=(Y^{R_{k}}_{0},\,Y^{R_{k}}_{1},\,\ldots). \qquad (3.2)$$

Consequently, we may set

$$(Y_{-\rho_{k}}^{-\infty},\,Y_{-\rho_{k}+1}^{-\infty},\,\ldots)\vcentcolon=(Y_{0}^{R_{k}},\,Y_{1}^{R_{k}},\,\ldots)$$

for $k<0$. As $\rho_{k}\to\infty$ almost surely as $k\to-\infty$, this defines $Y_{n}^{-\infty}$ for all $n\in\mathbb{Z}$ almost surely, as we may choose $k<0$ (randomly) such that $-\rho_{k}<n$. Notice that when $-\rho_{k}<n$, then also $-\rho_{k-1}<n$ and $Y_{n}^{-\infty}$ is defined twice (actually, infinitely often). Then (3.2) guarantees that the definitions coincide, namely,

$$Y^{R_{k-1}}_{\rho_{k-1}+n}=Y^{R_{k-1}}_{\rho_{k-1}-\rho_{k}+\rho_{k}+n}=Y^{R_{k}}_{\rho_{k}+n}.$$

We may assume without loss of generality that $(Y_{n})_{n\in\mathbb{N}_{0}}=(Y_{n}^{0})_{n\in\mathbb{N}_{0}}$. We then define $(X_{t})_{t\geq 0}$ as the continuous-time process (with right-continuous paths) starting at the origin at time $0$ that follows the trajectory of $(Y_{n})_{n\in\mathbb{N}_{0}}$ but stays at $x\in\mathbb{Z}$ upon its $k^{\mathrm{th}}$ visit to this point for time $\tau_{w_{x},x,k}$. (This process has the same law as the process defined in Section 2.) Similarly, we define $(X_{t}^{-\infty})_{t\in\mathbb{R}}$ as the continuous-time process (with right-continuous paths) that follows the trajectory of $(Y_{n}^{-\infty})_{n\in\mathbb{Z}}$, hits the origin for the first time at time $0$ and stays at $x\in\mathbb{Z}$ upon its $k^{\mathrm{th}}$ visit to this point for time $\tau_{w_{x},x,k}$.

For xx\in\mathbb{Z}, we define

$$N_{x}\vcentcolon=\sum_{n\in\mathbb{Z}}\mathbbm{1}_{\{Y_{n}^{-\infty}=x\}}\quad\text{and}\quad Z_{x}\vcentcolon=\sum_{k=1}^{N_{x}}\tau_{w_{x},x,k}$$

to be the number of visits of the walk $(Y_{n}^{-\infty})_{n\in\mathbb{Z}}$ to $x$ and the time the continuous-time random walk $(X_{t}^{-\infty})_{t\in\mathbb{R}}$ spends at $x$, respectively.

Lemma 3.2.

The families $(N_{x})_{x\in\mathbb{Z}}$ and $(Z_{x})_{x\in\mathbb{Z}}$ are ergodic. In particular, almost surely as $n\to\infty$,

$$\frac{1}{n}\sum_{x=0}^{n-1}N_{x}\to\mathbf{E}[N_{0}]=\frac{e^{\lambda}+e^{-\lambda}}{e^{\lambda}-e^{-\lambda}}\quad\text{and}\quad\frac{1}{n}\sum_{x=0}^{n-1}Z_{x}\to\mathbf{E}[Z_{0}]=\frac{e^{\lambda}+e^{-\lambda}}{e^{\lambda}-e^{-\lambda}}\,\mathbf{E}[w_{0}]. \qquad (3.3)$$
Proof.

The limiting relations in (3.3) follow from Birkhoff’s ergodic theorem once we have shown the ergodicity of $(N_{x})_{x\in\mathbb{Z}}$ and $(Z_{x})_{x\in\mathbb{Z}}$ and have calculated $\mathbf{E}[N_{0}]$ and $\mathbf{E}[Z_{0}]$. It is worth mentioning that, using truncation techniques, Birkhoff’s ergodic theorem also applies in the case $\mathbf{E}[w_{0}]=\infty$. Regarding $\mathbf{E}[N_{0}]$, notice that the number of returns to the origin of $(Y_{n}^{-\infty})_{n\in\mathbb{N}_{0}}$ is geometric with success probability being the escape probability $p_{\mathrm{esc}}$. Consequently,

$$\mathbf{E}[N_{0}]=\sum_{n=1}^{\infty}\mathbf{P}(N_{0}\geq n)=\sum_{n=1}^{\infty}(1-p_{\mathrm{esc}})^{n-1}=\frac{1}{p_{\mathrm{esc}}}=\frac{e^{\lambda}+e^{-\lambda}}{e^{\lambda}-e^{-\lambda}}.$$

Further, since $N_{x}$ and $(\tau_{w_{x},x,k})_{k\in\mathbb{N}}$ are independent, by Wald’s identity,

$$\mathbf{E}[Z_{0}]=\mathbf{E}[N_{0}]\,\mathbf{E}[\tau_{w_{0},0,1}]=\mathbf{E}[N_{0}]\,\mathbf{E}[w_{0}].$$

It remains to prove the ergodicity of $(N_{x})_{x\in\mathbb{Z}}$ and $(Z_{x})_{x\in\mathbb{Z}}$. To this end, first notice that $N_{0}=g(\boldsymbol{U}_{\!0},\boldsymbol{U}_{\!1},\ldots;\boldsymbol{U}_{\!-1},\boldsymbol{U}_{\!-2},\ldots)$ for some Borel measurable function $g:((0,1)^{\mathbb{N}})^{\mathbb{Z}}\to\mathbb{R}$. Then, since $N_{x}=g(\boldsymbol{U}_{\!x},\boldsymbol{U}_{\!x+1},\ldots;\boldsymbol{U}_{\!x-1},\boldsymbol{U}_{\!x-2},\ldots)$ and since the family $(\boldsymbol{U}_{\!y})_{y\in\mathbb{Z}}$ is ergodic, so is $(N_{x})_{x\in\mathbb{Z}}$, again by Lemma 5.6(c) of [2]. Regarding the ergodicity of $(Z_{x})_{x\in\mathbb{Z}}$, define

$$h:\mathbb{N}\times\mathbb{R}\times D^{\mathbb{N}}\to\mathbb{R},\quad(n,r,((x_{s,k})_{s\geq 0})_{k\in\mathbb{N}})\mapsto\sum_{j=1}^{n}x_{r,j};$$

then $h$ is product-measurable. This is true since the projections are measurable with respect to the Borel-$\sigma$-field $\mathcal{D}$ on the Skorohod space $D$ equipped with the Skorohod topology, see [5, Theorem 16.6]. By the right-continuity of the elements of the Skorohod space, also $\mathbb{R}^{+}\times D\ni(t,x)\mapsto x(t)\in\mathbb{R}$ is $(\mathcal{B}(\mathbb{R}^{+})\otimes\mathcal{D})$-$\mathcal{B}(\mathbb{R})$-measurable, and using this, the product-measurability follows from standard arguments. Since we have

$$Z_{x}=h\big{(}g(\boldsymbol{U}_{\!x},\boldsymbol{U}_{\!x+1},\ldots;\boldsymbol{U}_{\!x-1},\boldsymbol{U}_{\!x-2},\ldots),\,w_{x},\,((\tau_{s,x,k})_{s\geq 0})_{k\in\mathbb{N}}\big{)},$$

the ergodicity of $(Z_{x})_{x\in\mathbb{Z}}$ follows as above from the ergodicity of $(\boldsymbol{U}_{x},w_{x},((\tau_{r,x,k})_{r\geq 0})_{k\in\mathbb{N}})_{x\in\mathbb{Z}}$ and again Lemma 5.6(c) of [2]. ∎

Proof of Theorem 2.1.

For $n\in\mathbb{N}_{0}$, let $\varrho_{n}\vcentcolon=\inf\{t\geq 0:X_{t}=R_{n}\}$ and, similarly, $\varrho_{n}^{-\infty}\vcentcolon=\inf\{t\geq 0:X_{t}^{-\infty}=R_{n}\}$. Once the random walks hit $R_{0}$, the first regeneration point on the nonnegative axis, the trajectories of $(X_{t})_{t\geq 0}$ and $(X_{t}^{-\infty})_{t\geq 0}$ coincide, i.e.,

$$(X_{\varrho_{0}+t})_{t\geq 0}=(X^{-\infty}_{\varrho^{-\infty}_{0}+t})_{t\geq 0}.$$

(Notice that this is not the case with $\varrho_{0}$ and $\varrho^{-\infty}_{0}$ replaced by $0$.) Let $\Delta\vcentcolon=\varrho_{0}-\varrho_{0}^{-\infty}$. Then, for $t>\varrho_{0}\vee\varrho^{-\infty}_{0}$,

$$\frac{X_{t}}{t}=\frac{X_{\varrho_{0}+t-\varrho_{0}}}{t}=\frac{X_{\varrho_{0}^{-\infty}+t-\varrho_{0}}^{-\infty}}{t}=\frac{X_{t-\Delta}^{-\infty}}{t-\Delta}\cdot\frac{t-\Delta}{t}.$$

Therefore, it suffices to prove that

$$\lim_{t\to\infty}\frac{X_{t}^{-\infty}}{t}=\frac{1}{\mathbf{E}[N_{0}]\,\mathbf{E}[w_{0}]}\quad\text{almost surely}. \qquad (3.4)$$

Since the time the continuous-time random walk $(X_{t}^{-\infty})_{t\geq 0}$ spends on the negative half-axis is finite by transience, and since $R_{n}\to\infty$ almost surely as $n\to\infty$, we conclude that

$$\lim_{n\to\infty}\frac{\varrho_{n}^{-\infty}}{R_{n}}=\lim_{n\to\infty}\frac{1}{R_{n}}\sum_{x=0}^{R_{n}-1}Z_{x}=\mathbf{E}[Z_{0}]=\mathbf{E}[N_{0}]\,\mathbf{E}[w_{0}]\quad\text{almost surely.} \qquad (3.5)$$

This may be rewritten as

$$\lim_{n\to\infty}\frac{1}{\varrho_{n}^{-\infty}}X^{-\infty}_{\varrho_{n}^{-\infty}}=\lim_{n\to\infty}\frac{R_{n}}{\varrho_{n}^{-\infty}}=\frac{1}{\mathbf{E}[N_{0}]\,\mathbf{E}[w_{0}]}\quad\text{almost surely} \qquad (3.6)$$

and gives the desired convergence (3.4), but only as $t\to\infty$ along the regeneration times $\varrho_{1}^{-\infty},\varrho_{2}^{-\infty},\ldots$. This convergence along a subsequence can be lifted to (3.4) by a standard sandwich argument. For the reader’s convenience, we give this argument. For $t>0$, let $t_{+}\vcentcolon=\inf\{\varrho_{n}^{-\infty}:n\in\mathbb{N}_{0},\,R_{n}>X_{t}^{-\infty}\}$ and $t_{-}\vcentcolon=\max\{\varrho_{n}^{-\infty}:n\in\mathbb{N}_{0},\,R_{n}\leq X_{t}^{-\infty}\}$, where the maximum of the empty set is defined to be $0$. From (3.1), we infer

$$\lim_{n\to\infty}\frac{R_{n}}{n}=\lim_{n\to\infty}\bigg{(}\frac{1}{R_{n}}\sum_{x=0}^{R_{n}-1}I_{x}\bigg{)}^{-1}=\frac{1}{r_{\lambda}}\quad\text{almost surely}. \qquad (3.7)$$

In particular, $R_{n+1}/R_{n}\to 1$ almost surely.

We distinguish two cases, namely $\mathbf{E}[w_{0}]<\infty$ and $\mathbf{E}[w_{0}]=\infty$. First suppose that $\mathbf{E}[w_{0}]=\infty$. Then, for $t>\varrho_{0}^{-\infty}$, we have

$$\frac{X_{t}^{-\infty}}{t}=\frac{X_{t}^{-\infty}}{X_{t_{-}}^{-\infty}}\cdot\frac{X_{t_{-}}^{-\infty}}{t_{-}}\cdot\frac{t_{-}}{t}. \qquad (3.8)$$

The first factor tends to $1$ as $t\to\infty$ almost surely since $R_{n+1}/R_{n}\to 1$ almost surely. Further, $t_{-}/t$ is bounded by one, whereas $X_{t_{-}}^{-\infty}/t_{-}\to 0$ almost surely by (3.6), so (3.4) holds.

Now suppose $\mathbf{E}[w_{0}]<\infty$. Then, for $t>\varrho_{0}^{-\infty}$, we have

$$\frac{X_{t_{-}}^{-\infty}}{t_{-}}\cdot\frac{t_{-}}{t_{+}}=\frac{X_{t_{-}}^{-\infty}}{t_{+}}\leq\frac{X_{t}^{-\infty}}{t}\leq\frac{X_{t_{+}}^{-\infty}}{t_{-}}=\frac{X_{t_{+}}^{-\infty}}{t_{+}}\cdot\frac{t_{+}}{t_{-}}. \qquad (3.9)$$

Here, $\lim_{t\to\infty}X_{t_{-}}^{-\infty}/t_{-}=\lim_{t\to\infty}X_{t_{+}}^{-\infty}/t_{+}=1/(\mathbf{E}[N_{0}]\,\mathbf{E}[w_{0}])$ almost surely by (3.5). It thus remains to show that $\lim_{t\to\infty}t_{+}/t_{-}=1$ almost surely or, equivalently, $\lim_{n\to\infty}\varrho^{-\infty}_{n+1}/\varrho_{n}^{-\infty}=1$ almost surely. The latter, however, follows from (3.5) in combination with $\mathbf{E}[w_{0}]<\infty$ and $R_{n+1}/R_{n}\to 1$ almost surely. ∎

References

  • [1] E. Aïdékon. Speed of the biased random walk on a Galton-Watson tree. Probab. Theory Related Fields, 159(3-4):597–617, 2014.
  • [2] M. Axelson-Fisk and O. Häggström. Biased random walk in a one-dimensional percolation model. Stochastic Process. Appl., 119(10):3395–3415, 2009.
  • [3] G. Ben Arous and A. Fribergh. Biased random walks on random graphs. In Probability and statistical physics in St. Petersburg, volume 91 of Proc. Sympos. Pure Math., pages 99–153. Amer. Math. Soc., Providence, RI, 2016.
  • [4] N. Berger, N. Gantert, and Y. Peres. The speed of biased random walk on percolation clusters. Probab. Theory Related Fields, 126(2):221–242, 2003.
  • [5] P. Billingsley. Convergence of probability measures. Wiley Series in Probability and Statistics: Probability and Statistics. John Wiley & Sons, Inc., New York, second edition, 1999. A Wiley-Interscience Publication.
  • [6] A. Bowditch and Y. Tokushige. Differentiability of the speed of biased random walks on Galton-Watson trees. ALEA Lat. Am. J. Probab. Math. Stat., 17(1):609–642, 2020.
  • [7] R. Douc, E. Moulines, P. Priouret, and P. Soulier. Markov chains. Springer Series in Operations Research and Financial Engineering. Springer, Cham, 2018.
  • [8] A. Fribergh and A. Hammond. Phase transition for the speed of the biased random walk on the supercritical percolation cluster. Comm. Pure Appl. Math., 67(2):173–245, 2014.
  • [9] N. Gantert and A. Klenke. The tail of the length of an excursion in a trap of random size. J. Stat. Phys., 188(3):Paper No. 27, 20, 2022.
  • [10] N. Gantert, M. Meiners, and S. Müller. Regularity of the speed of biased random walk in a one-dimensional percolation model. J. Stat. Phys., 170(6):1123–1160, 2018.
  • [11] J.-E. Lübbers and M. Meiners. The speed of critically biased random walk in a one-dimensional percolation model. Electron. J. Probab., 24:Paper No. 23, 29, 2019.
  • [12] R. Lyons, R. Pemantle, and Y. Peres. Biased random walks on Galton-Watson trees. Probab. Theory Related Fields, 106(2):249–264, 1996.
  • [13] A.-S. Sznitman. On the anisotropic walk on the supercritical percolation cluster. Comm. Math. Phys., 240(1-2):123–148, 2003.