
Ballisticity of Random Walks in Random Environments on \mathbb{Z} with Bounded Jumps

Daniel J. Slonim (Purdue University, Department of Mathematics, 150 N. University Street, West Lafayette, IN 47907, [email protected], ORCID: 0000-0002-9554-155X)
Abstract

We characterize ballistic behavior for general i.i.d. random walks in random environments on \mathbb{Z} with bounded jumps. The two characterizations we provide do not use uniform ellipticity conditions. They are natural in the sense that they both relate to formulas for the limiting speed in the nearest-neighbor case.

MSC 2020 subject classifications: 60G50, 60J10, 60K37

Keywords: random walk, random environment, bounded jumps, ballisticity

1 Introduction

In this paper, we provide two characterizations of ballisticity for random walks in random environments (RWRE) on \mathbb{Z} with bounded jumps. Most previous characterizations of ballisticity for such RWRE (or for RWRE on a strip, a generalization of the one-dimensional bounded-jump model) are in terms of limits of norms of products of random matrices that are difficult or impossible to check in practice, and involve strong ellipticity assumptions that preclude certain types of environments. (See, for example, [1], [2], [5], [8], [3]).

Our characterizations of ballisticity are intuitively very easy to understand (if not to check in general), and do not use strong ellipticity assumptions. The primary motivation for our characterizations is that although we do not have a way to check them in general, we are able to check them in the case of Dirichlet environments, a special, weakly elliptic model of RWRE where certain exact computations are often possible (sometimes not for the walk one wants to study, but for walks on finite graphs that can be related to the desired model). The author does this in [9]. Here, we are focused only on the general case.

1.1 Model

An environment on \mathbb{Z} is a nonnegative function \omega:\mathbb{Z}\times\mathbb{Z}\to[0,1] such that for all x\in\mathbb{Z}, \sum_{y\in\mathbb{Z}}\omega(x,y)=1. For a fixed x and \omega, we will let \omega^{x} be the measure on \mathbb{Z} defined by \omega^{x}(y)=\omega(x,x+y). Then we can identify the function \omega with the tuple (\omega^{x})_{x\in\mathbb{Z}}. Let \mathcal{M}_{1}(\mathbb{Z}) be the set of probability measures on \mathbb{Z} (endowed with the topology of weak convergence); then \Omega:=\prod_{x\in\mathbb{Z}}\mathcal{M}_{1}(\mathbb{Z}) is the set of all environments on \mathbb{Z}.

For a given environment \omega and x\in\mathbb{Z}, we can define the quenched measure P_{\omega}^{x} on \mathbb{Z}^{\mathbb{N}} (where we assume 0\in\mathbb{N}) to be the law of a Markov chain {\bf X}=(X_{n})_{n\geq 0} on \mathbb{Z}, started at x, with transition probabilities given by \omega. That is, P_{\omega}^{x}(X_{0}=x)=1, and for n\geq 0, P_{\omega}^{x}(X_{n+1}=y|X_{0},\ldots,X_{n})=\omega(X_{n},y).

Let \mathcal{F} be the Borel sigma field with respect to the product topology on \Omega, and let P be a probability measure on (\Omega,\mathcal{F}). For a given x\in\mathbb{Z}, we define the annealed measure \mathbb{P}^{x}=P\times P_{\omega}^{x} on \Omega\times\mathbb{Z}^{\mathbb{N}} by

\mathbb{P}^{x}(A\times B)=\int_{A}P_{\omega}^{x}(B)P(d\omega)

for measurable A\subset\Omega, B\subset\mathbb{Z}^{\mathbb{N}}. In particular, for all measurable events B\subset\mathbb{Z}^{\mathbb{N}}, we have \mathbb{P}^{x}(\Omega\times B)=E[P_{\omega}^{x}(B)]. We often abuse notation by writing \mathbb{P}^{x}(B) instead of \mathbb{P}^{x}(\Omega\times B).
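The following is a minimal simulation sketch of the model, purely for illustration: it samples an i.i.d. environment with jumps supported on \{-L,\ldots,R\} (a Dirichlet draw at each site, an arbitrary choice not taken from the paper) and runs the quenched walk; averaging over environment draws corresponds to the annealed measure. All function names and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

L, R = 2, 2                      # maximal left/right jump sizes (illustrative choice)
SUPPORT = np.arange(-L, R + 1)   # allowed jumps y with omega^x(y) > 0

def sample_environment(width):
    """Sample i.i.d. site measures omega^x for x in the finite window [-width, width].

    Each omega^x is a random probability vector on {-L, ..., R}; a Dirichlet draw
    is used only for concreteness (any law satisfying (C1)-(C3) would do).
    """
    sites = np.arange(-width, width + 1)
    probs = rng.dirichlet(np.ones(len(SUPPORT)), size=len(sites))
    return dict(zip(sites, probs))

def quenched_walk(omega, start=0, steps=200):
    """Run the Markov chain X_n with transition probabilities omega(X_n, X_n + .)."""
    x, path = start, [start]
    for _ in range(steps):
        if x not in omega:       # stop if the walk leaves the finite window
            break
        x = int(x + rng.choice(SUPPORT, p=omega[x]))
        path.append(x)
    return path

# Annealed sampling: first draw omega from P, then run the walk under P_omega^0.
env = sample_environment(width=500)
print(quenched_walk(env)[:20])
```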

As another notational convenience, we will use interval notation to denote sets of consecutive integers in the state space \mathbb{Z}, rather than subsets of \mathbb{R}. Thus, for example, we will use [1,\infty) or (0,\infty) to denote the set of integers strictly to the right of 0. However, we make one exception, using [0,1] to denote the set of all real numbers from 0 to 1.

For a subset S\subseteq\mathbb{Z}, let \omega^{S}=(\omega^{x})_{x\in S}. In the case where S is a half-infinite interval, we simplify our notation by using \omega^{\leq x} to denote \omega^{(-\infty,x]}, and similarly with \omega^{<x}, \omega^{\geq x}, and \omega^{>x}.

We consider the following conditions for a probability measure P on \Omega:

  1. (C1)

    The \{\omega^{x}\}_{x\in\mathbb{Z}} are i.i.d. under P.

  2. (C2)

    For P–a.e. environment \omega, the Markov chain induced by P_{\omega}^{0} is irreducible.

  3. (C3)

    There exist L and R such that for P–a.e. environment \omega, \omega(a,b)=0 whenever b is outside [a-L,a+R].

1.2 Results

It was shown in [7] that under the above assumptions, a 0-1 law holds for directional transience. That is, the walk is either almost surely transient to the right, almost surely transient to the left, or almost surely recurrent. We can also show that under these assumptions, a limiting velocity necessarily exists.

Proposition 1.1.

Let P be a probability measure on \Omega satisfying (C1), (C2), and (C3). Then there is a \mathbb{P}^{0}–almost sure limiting velocity v=\lim_{n\to\infty}\frac{X_{n}}{n}. Moreover, \lim_{x\to\infty}\frac{H_{\geq x}}{x}=\frac{1}{v}, where \frac{1}{v} is understood to be \infty if v=0.

It was seen in [2] that this limiting velocity exists under a uniform ellipticity assumption, but it can be proven in the more general case with standard techniques, which we outline in Section 2.1. We then provide a characterization of ballisticity, making the following additional assumption for convenience.

  1. (C4)

    For P–a.e. environment \omega, \lim_{n\to\infty}X_{n}=\infty, P_{\omega}^{0}–a.s.

By symmetry, our characterization also handles the case where the walk is transient to the left, and thus by the 0-1 law for directional transience, completely characterizes the regime v\neq 0 for all measures P satisfying (C1), (C2), and (C3).

To formally state our characterization, we must establish notation for hitting times, as well as notation that counts the number of visits to a given site. For a given walk {\bf X}=(X_{n})_{n=0}^{\infty}, we define H_{x}({\bf X}) to be the first time the walk hits x\in\mathbb{Z}. That is,

H_{x}({\bf X})=\inf\{n\in\mathbb{N}:X_{n}=x\}.

We usually write it as H_{x} when we can do so without ambiguity. For a subset S\subset\mathbb{Z}, let H_{S}=\min_{x\in S}H_{x}. First positive hitting times are denoted as \tilde{H}_{x} or \tilde{H}_{S}. That is,

\tilde{H}_{x}({\bf X})=\inf\{n\geq 1:X_{n}=x\},

and \tilde{H}_{S}=\min_{x\in S}\tilde{H}_{x}. If the set is the half-infinite interval [x,\infty), we use H_{\geq x} to denote its hitting time, and similarly with H_{>x}, H_{\leq x}, and H_{<x}. For a walk {\bf X}=(X_{n})_{n=0}^{\infty} on \mathbb{Z} and x\in\mathbb{Z}, N_{x}({\bf X})=\#\{n\in\mathbb{N}:X_{n}=x\} is the number of times the walk is at site x. We usually write it as N_{x} if we are able to do so without ambiguity. For a subset S\subset\mathbb{Z}, let N_{S}=\sum_{x\in S}N_{x}.
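As an informal illustration of this notation, the following hypothetical helpers compute finite-path versions of H_{x}, H_{\geq x}, and N_{x} from a simulated trajectory (such as one produced by the quenched_walk sketch above); on a finite path they only approximate the corresponding quantities for the infinite walk.

```python
def hitting_time(path, x):
    """H_x: first index n with X_n = x (None if x is not hit along the finite path)."""
    for n, pos in enumerate(path):
        if pos == x:
            return n
    return None

def hitting_time_geq(path, x):
    """H_{>=x}: first index n with X_n >= x (None if [x, infinity) is never reached)."""
    for n, pos in enumerate(path):
        if pos >= x:
            return n
    return None

def occupation_time(path, x):
    """N_x: number of indices n with X_n = x along the finite path."""
    return sum(1 for pos in path if pos == x)
```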

Theorem 1.2.

Let P be a probability measure on \Omega satisfying (C1), (C2), (C3), and (C4). Then the following are equivalent:

  1. (a)

    The walk is ballistic: v>0.

  2. (b)

    \mathbb{E}^{0}[H_{\geq 1}]<\infty.

  3. (c)

    \mathbb{E}^{0}[N_{0}]=E[E_{\omega}^{0}[N_{0}]]<\infty.

Remark 1.1.

The equivalence of (a) and (b) was proven under a strong ellipticity assumption by Brémont (see [1, Theorem 3.7], [2, Proposition 9.1]). We prove the equivalence of (a), (b), and (c) without such an assumption.

Remark 1.2.

These characterizations are quite natural, given that in the nearest-neighbor case we in fact have the identity

v=\frac{1}{\mathbb{E}^{0}[N_{0}]}=\frac{1}{\mathbb{E}^{0}[H_{\geq 1}]}. (1)

In general, the above turns into

v\geq\frac{1}{\mathbb{E}^{0}[N_{0}]}\geq\frac{1}{\mathbb{E}^{0}[H_{\geq 1}]}, (2)

as we shall see from Lemmas 2.1, 2.5, and 2.6.
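For a concrete sanity check of (1), consider the simplest possible special case: a homogeneous nearest-neighbor walk with \omega(x,x+1)=p=0.7 at every site (a deterministic environment, chosen only for illustration), for which v=2p-1=0.4 and \mathbb{E}^{0}[N_{0}]=\mathbb{E}^{0}[H_{\geq 1}]=1/(2p-1)=2.5. The hedged Monte Carlo sketch below estimates both reciprocals.

```python
import numpy as np

rng = np.random.default_rng(1)
p, trials, horizon = 0.7, 20000, 2000   # illustrative parameters

visits_to_0, first_hit_of_1 = [], []
for _ in range(trials):
    steps = rng.choice([1, -1], size=horizon, p=[p, 1 - p])
    path = np.concatenate(([0], np.cumsum(steps)))
    visits_to_0.append(np.sum(path == 0))        # N_0, truncated at the horizon
    first_hit_of_1.append(np.argmax(path >= 1))  # H_{>=1} = H_1 here (horizon is ample)

# Both estimates should be close to v = 0.4.
print(1 / np.mean(visits_to_0), 1 / np.mean(first_hit_of_1))
```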

2 Proofs

We discuss the proof of Proposition 1.1, and then provide a full proof of Theorem 1.2.

2.1 Existence of limiting velocity

Because the proof of Proposition 1.1 is only a slight modification of work that has already been done, we simply outline some details of the argument rather than giving a full proof.

The proof for the recurrent case (where, necessarily, v=0) can be done by a slight modification of arguments in [12], which we leave to the reader. The proof for the directionally transient case follows [6] in defining regeneration times (\tau_{k})_{k=0}^{\infty}. Let \tau_{0}:=0, and for k\geq 1, define

\tau_{k}:=\min\{n>\tau_{k-1}:X_{n}>X_{j}\text{ for all }j<n,\ X_{n}\leq X_{j}\text{ for all }j>n\}. (3)

We can then show

v:=\lim_{n\to\infty}\frac{X_{n}}{n}=\frac{\mathbb{E}^{0}[X_{\tau_{2}}-X_{\tau_{1}}]}{\mathbb{E}[\tau_{2}-\tau_{1}]}, (4)

where the numerator is always finite and the fraction is understood to be 0 if the denominator is infinite. It is standard (e.g., [6], [10]) to prove an LLN like (4) by the following steps:

  1. (a)

    Show that \frac{X_{\tau_{k}}}{k} approaches \mathbb{E}[X_{\tau_{2}}-X_{\tau_{1}}].

  2. (b)

    Show that \frac{\tau_{k}}{k} approaches \mathbb{E}[\tau_{2}-\tau_{1}].

  3. (c)

    Show that \mathbb{E}[X_{\tau_{2}}-X_{\tau_{1}}]<\infty.

  4. (d)

    Conclude that the limit (4) holds for the subsequence \left(\frac{X_{\tau_{k}}}{\tau_{k}}\right)_{k}.

  5. (e)

    Use straightforward bounds that come from the definitions of the \tau_{k} to get the limit for the entire sequence \left(\frac{X_{n}}{n}\right)_{n}.

The identity \lim_{x\to\infty}\frac{H_{\geq x}}{x}=\frac{1}{v} then comes from a comparison of \frac{x}{H_{\geq x}} with a subsequence of \frac{X_{n}}{n}.

The definition of the regeneration times is precisely set up so that both of the sequences (\tau_{k}-\tau_{k-1})_{k\geq 2} and (X_{\tau_{k}}-X_{\tau_{k-1}})_{k\geq 2} are i.i.d., so proving the limits (a) and (b) is a matter of tracing how the i.i.d. feature follows from the definitions and applying the strong law of large numbers. In fact, arguing as in [6, Lemma 1], one can show that the triples

\xi_{n}:=\left(\tau_{n}-\tau_{n-1},\;(X_{\tau_{n-1}+i}-X_{\tau_{n-1}})_{i=1}^{\tau_{n}-\tau_{n-1}},\;(\omega^{x})_{x=X_{\tau_{n-1}}}^{X_{\tau_{n}}-1}\right)

are i.i.d. under \mathbb{P}^{0}=P\times P_{\omega}^{0} for n\geq 2.
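For intuition, here is a hedged sketch (a hypothetical helper, not from the paper) that extracts the times satisfying the condition in (3) from a finite simulated trajectory; recall that \tau_{0}:=0 by convention, and note that on a finite path the condition on the future can only be checked up to the horizon, so the last few reported times may be spurious.

```python
def regeneration_times(path):
    """Indices n with X_n > X_j for all j < n and X_n <= X_j for all observed j > n."""
    path = list(path)
    # future_min[n] = min of X_n, ..., X_end; comparing X_n against it is
    # equivalent to comparing against the minimum over the strictly later indices.
    future_min = [0] * len(path)
    m = float("inf")
    for n in range(len(path) - 1, -1, -1):
        m = min(m, path[n])
        future_min[n] = m
    taus, running_max = [], float("-inf")   # running_max = max of X_0, ..., X_{n-1}
    for n, x in enumerate(path):
        if x > running_max and x <= future_min[n]:
            taus.append(n)
        running_max = max(running_max, x)
    return taus
```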

The finiteness in (c) can be shown using arguments along the lines of those in [11, Lemma 3.2.5]. Because we have assumed almost-sure transience to the right, the measure \mathbb{Q} introduced there is unnecessary. Another difference is that in our model, transience to the right does not imply that every vertex to the right of the origin is hit. There is a point in the argument from [11] where Zeitouni argues that the \mathbb{Q}-probability, for a given x, that a regeneration occurs at site x approaches \mathbb{P}^{0}(H_{<0}=\infty). Instead, we focus on the probability that a regeneration occurs on a given interval of length R. For z\geq 0, let B_{z} be the event that for some k, X_{\tau_{k}}\in[zR,(z+1)R). Then

\mathbb{P}^{0}(B_{z}) =E\left[P_{\omega}^{0}(B_{z})\right]
\geq E\left[\sum_{i=0}^{R-1}P_{\omega}^{0}(X_{H_{[zR,(z+1)R)}}=zR+i)P_{\omega}^{zR+i}(H_{<zR+i}=\infty)\right]
=\sum_{i=0}^{R-1}E\left[P_{\omega}^{0}(X_{H_{[zR,(z+1)R)}}=zR+i)P_{\omega}^{zR+i}(H_{<zR+i}=\infty)\right]
=\sum_{i=0}^{R-1}\mathbb{P}^{0}(X_{H_{[zR,(z+1)R)}}=zR+i)\mathbb{P}^{zR+i}(H_{<zR+i}=\infty)
=\mathbb{P}^{0}(H_{<0}=\infty), (5)

where the second to last equality comes from the fact that \omega^{<zR} is independent of \omega^{\geq zR+i}, and the last comes from translation invariance and the fact that H_{[zR,(z+1)R)}<\infty, \mathbb{P}^{0}–a.s. The rest of the argument from [11] goes through to prove (c), and (d) and (e) easily follow.

2.2 Ballisticity

For the rest of this paper, assume P satisfies (C1), (C2), (C3), and (C4). Our goal is to prove Theorem 1.2. We begin with the following lemma, from which (b)\Rightarrow(c) follows immediately.

Lemma 2.1.

\mathbb{E}^{0}[N_{0}]\leq\mathbb{E}^{0}[H_{\geq 1}].

Proof.

The visits to 0 may be sorted based on the farthest point to the right that the walk has hit in the past at the time of each visit. For given y<x\in\mathbb{Z}, define N_{y}^{(-\infty,x)} to be the amount of time the walk spends at y before H_{\geq x}. Thus, for a walk started at 0 we get

N_{0}=\sum_{x=0}^{\infty}\left(N_{0}^{(-\infty,x+1)}-N_{0}^{(-\infty,x)}\right). (6)

Taking expectations on both sides, we get

\mathbb{E}^{0}[N_{0}]=\sum_{x=0}^{\infty}E\left[E_{\omega}^{0}\left[N_{0}^{(-\infty,x+1)}-N_{0}^{(-\infty,x)}\right]\right]. (7)

Now N_{0}^{(-\infty,x)} and N_{0}^{(-\infty,x+1)} can only differ if the walk hits [x,\infty) at x. Conditioned on this event, the distribution under P_{\omega}^{0} of (X_{n+H_{\geq x}})_{n=0}^{\infty} is the distribution of {\bf X} under P_{\omega}^{x}. Thus,

E_{\omega}^{0}\left[N_{0}^{(-\infty,x+1)}-N_{0}^{(-\infty,x)}\right]=P_{\omega}^{0}(X_{H_{\geq x}}=x)E_{\omega}^{x}\left[N_{0}^{(-\infty,x+1)}\right]. (8)

Combining (7) and (8), we get

\mathbb{E}^{0}[N_{0}] =\sum_{x=0}^{\infty}E\left[P_{\omega}^{0}(X_{H_{\geq x}}=x)E_{\omega}^{x}\left[N_{0}^{(-\infty,x+1)}\right]\right]
\leq\sum_{x=0}^{\infty}E\left[E_{\omega}^{x}\left[N_{0}^{(-\infty,x+1)}\right]\right]
=\sum_{x=0}^{\infty}\mathbb{E}^{x}\left[N_{0}^{(-\infty,x+1)}\right].

By stationarity,

\mathbb{E}^{0}[N_{0}] \leq\sum_{x=0}^{\infty}\mathbb{E}^{0}\left[N_{-x}^{(-\infty,1)}\right]
=\mathbb{E}^{0}\left[\sum_{x=0}^{\infty}N_{-x}^{(-\infty,1)}\right]
=\mathbb{E}^{0}[H_{\geq 1}].

This completes the proof. ∎

Our next goal is to prove (c) \Rightarrow (a). The proof of the following lemma will use the regeneration times defined in (3).

Lemma 2.2.

For any a,c\in\mathbb{Z},

\lim_{x\to\infty}\frac{1}{x}\sum_{y=c}^{x}N_{y}=\frac{1}{v}, \mathbb{P}^{a}–a.s.

If v=0, then the limit is infinity.

Proof.

Fix c. As in the proof of Lemma 2.1, we use N_{y}^{(-\infty,x)} to denote the amount of time the walk spends at y before H_{\geq x}. Then for any x>c, write

\frac{H_{\geq x}}{x}=\frac{1}{x}\sum_{y=-\infty}^{c-1}N_{y}^{(-\infty,x)}+\frac{1}{x}\sum_{y=c}^{x-1}N_{y}^{(-\infty,x)}.

The first sum is bounded above by \sum_{y=-\infty}^{c-1}N_{y}, which is almost surely finite by assumption (C4). Dividing by x therefore sends the first term to 0 as x\to\infty; hence, by Proposition 1.1,

\lim_{x\to\infty}\frac{1}{x}\sum_{y=c}^{x-1}N_{y}^{(-\infty,x)}=\frac{1}{v}, \mathbb{P}^{a}–a.s. (9)

We note that N_{y} and N_{y}^{(-\infty,x)} differ only if the walk backtracks and visits y after reaching [x,\infty). The sum, over all y<x, of these differences is the total amount of time the walk spends to the left of x after H_{\geq x}, and it is bounded above by the time from H_{\geq x} to the next regeneration time (defined as in (3)), which is in turn bounded above by \tau_{J(x)}-\tau_{J(x)-1}, where J(x) is the (random) j such that \tau_{j-1}\leq H_{\geq x}<\tau_{j}. Hence

\frac{1}{x}\sum_{y=c}^{x-1}N_{y}^{(-\infty,x)}\leq\frac{1}{x}\sum_{y=c}^{x-1}N_{y}\leq\frac{1}{x}\sum_{y=c}^{x-1}N_{y}^{(-\infty,x)}+\frac{1}{x}[\tau_{J(x)}-\tau_{J(x)-1}]. (10)

Assume v=0. Then by (9), the left side of (10) approaches \infty as x approaches \infty, and therefore so does the middle. On the other hand, suppose v>0. By (4), \mathbb{E}[\tau_{2}-\tau_{1}]<\infty. Then by the strong law of large numbers, \frac{\tau_{n}}{n}\to\mathbb{E}[\tau_{2}-\tau_{1}]<\infty, which implies that \frac{\tau_{n}-\tau_{n-1}}{n} approaches 0. Since J(x)\leq x+1, the term \frac{1}{x}[\tau_{J(x)}-\tau_{J(x)-1}] approaches zero almost surely; hence the squeeze theorem yields the desired result. ∎

Next, we will define a “walk from -\infty to \infty.” Call the set of vertices ((k-1)R,kR] the kth level of \mathbb{Z}, and for x\in\mathbb{Z}, let [[x]]_{R} denote the level containing x. Let \omega be a given environment. From each point a\in\mathbb{Z}, run a walk according to the transition probabilities given by \omega until it reaches the next level (i.e., [[a+R]]_{R}). This will happen P_{\omega}^{a}–a.s. for P–a.e. \omega, by assumption (C4) and because it is not possible to jump over a set of length R. Do this independently at every point for every level. This gives what we will call a cascade: a set of (almost surely finite) walks indexed by \mathbb{Z}, where the walk indexed by a\in\mathbb{Z} starts at a and ends upon reaching level [[a+R]]_{R}. Equip the set of cascades with the natural sigma field, let P_{\omega} be the probability measure we have just described on the space of cascades, and let \mathbb{P}=P\times P_{\omega}.

For \mathbb{P}–almost every cascade (i.e., those where the walk started from each vertex hits the level to its right), we can concatenate an appropriate chain of these finite walks to generate a walk started at any point a\in\mathbb{Z}. To the walk started from a=a_{0}, append the walk started from the point a_{1}\in[[a+R]]_{R} where that walk lands. And to that, append the walk started from the point a_{2}\in[[a+2R]]_{R} where the finite walk from a_{1} lands, and so on. This gives, for each point a, a right-infinite walk {\bf X}^{a}=(X_{n}^{a})_{n=0}^{\infty}. It is crucial to note that by the strong Markov property, the law of {\bf X}^{a} under P_{\omega} is the same as the law of {\bf X} under P_{\omega}^{a}, which also implies that the law of {\bf X}^{a} under \mathbb{P} is the same as the law of {\bf X} under \mathbb{P}^{a}.
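The construction can be made concrete with a short sketch, again purely illustrative and with hypothetical names: cascade_segment runs a walk from a until it first enters the next level, and concatenated_walk chains such segments to produce an initial piece of {\bf X}^{a}. (In the true cascade each site's segment is drawn once and shared by every walk passing through it; redrawing segments, as below, gives the same law for a single {\bf X}^{a} but loses that coupling.)

```python
import math
import numpy as np

rng = np.random.default_rng(2)
L, R = 2, 2
SUPPORT = np.arange(-L, R + 1)   # as in the earlier model sketch

def level(x):
    """Index k of the level ((k-1)R, kR] containing x."""
    return math.ceil(x / R)

def cascade_segment(omega, a, max_steps=10**6):
    """Finite cascade walk: start at a, stop on first entry to level [[a + R]]_R.

    Assumes omega (e.g., from the sample_environment sketch above, with a wide
    window) covers every site visited; max_steps is a safety cap for the sketch.
    """
    target = level(a) + 1
    x, seg = a, [a]
    for _ in range(max_steps):
        if level(x) >= target:
            break
        x = int(x + rng.choice(SUPPORT, p=omega[x]))
        seg.append(x)
    return seg

def concatenated_walk(omega, a, levels=10):
    """Initial piece of X^a: chain cascade segments, each starting where the last ended."""
    walk = [a]
    for _ in range(levels):
        seg = cascade_segment(omega, walk[-1])
        walk.extend(seg[1:])   # drop the repeated starting point
    return walk
```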

For each x\in\mathbb{Z}, let the “coalescence event” C_{x} be the event that all the walks from level [[x-R]]_{R} first hit level [[x]]_{R} at x. On the event C_{x}, we say a coalescence occurs at x.

Lemma 2.3.

Let \mathcal{E}_{1} be the event that all the {\bf X}^{a} are transient to the right, that all steps to the left and right are bounded by L and R, respectively, and that infinitely many coalescences occur to the left and to the right of 0. Then \mathbb{P}(\mathcal{E}_{1})=1.

Proof.

Boundedness of steps has probability 1 by assumption (C3), and by assumption (C4) all the walks {\bf X}^{a} are transient to the right with probability 1. Now for k\geq 2 and x\in\mathbb{Z}, let C_{x,k} be the event that all the walks from level [[x-R]]_{R} first hit level [[x]]_{R} at x without ever having reached [[x-kR]]_{R}. Choose k large enough that \mathbb{P}(C_{0,k})>0; then under the law \mathbb{P}, the events \{C_{nkR,k}\}_{n\in\mathbb{Z}} are all independent and have equal, positive probability. Thus, infinitely many of them will occur in both directions, \mathbb{P}–a.s. By definition, C_{x,k}\subset C_{x}, and so infinitely many of the events C_{x} occur in both directions, \mathbb{P}–a.s. ∎

Assume the environment and cascade are in the event \mathcal{E}_{1} defined in the above lemma. Let (x_{k})_{k\in\mathbb{Z}} be the locations of coalescence events (with x_{0} the smallest non-negative x such that C_{x} occurs). By definition of the x_{k}, for every k and for every a to the left of [[x_{k}]]_{R}, H_{[[x_{k}]]_{R}}({\bf X}^{a})=H_{x_{k}}({\bf X}^{a})<\infty. Now for j<k, it necessarily holds that x_{j} is to the left of [[x_{k}]]_{R}, since there can be only one x_{k} per level. Define \nu(j,k):=H_{x_{k}}({\bf X}^{x_{j}}). By definition of the walks {\bf X}^{a}, we have for j<k, n\geq 0,

X_{n+\nu(j,k)}^{x_{j}}=X_{n}^{x_{k}}. (11)

From this one can easily check that the \nu(j,k) are additive; that is, for j<k<\ell, we have \nu(j,\ell)=\nu(j,k)+\nu(k,\ell). Because all the {\bf X}^{x_{k}} agree with each other in the sense of (11), we may define a single, bi-infinite walk \overline{\bf X}=(\overline{X}_{n})_{n\in\mathbb{Z}} that agrees with all of the {\bf X}^{x_{k}}. We choose to let \overline{X}_{0}=x_{0}. For n\geq 0, let \overline{X}_{n}=X_{n}^{x_{0}}. For n<0, choose j<0 such that \nu(j,0)>|n|, and let \overline{X}_{n}=X_{\nu(j,0)-|n|}^{x_{j}}. This definition is independent of the choice of j, because if j<k<0 with \nu(k,0)>|n|, then by (11) and the additivity of the \nu(j,k), we have

X_{\nu(j,0)-|n|}^{x_{j}}=X_{\nu(j,k)+\nu(k,0)-|n|}^{x_{j}}=X_{\nu(k,0)-|n|}^{x_{k}}.

We may then define \overline{N}_{x}:=\#\{n\in\mathbb{Z}:\overline{X}_{n}=x\} to be the amount of time the walk \overline{\bf X} spends at x. Thus, \overline{N}_{x}=\lim_{a\to-\infty}N_{x}({\bf X}^{a}).

Lemma 2.4.

Both of the sequences ({\bf X}^{a})_{a\in\mathbb{Z}} and (\overline{N}_{x})_{x\in\mathbb{Z}} are stationary and ergodic.

Proof.

For a given environment, the cascade that defines \overline{\bf X} may be generated by a (countable) family {\bf U}=\left(U_{n}^{a}\right)_{n\in\mathbb{N},a\in\mathbb{Z}} of i.i.d. uniform random variables on [0,1]. For such a collection, and an a\in\mathbb{Z}, let {\bf U}^{a} be the projection \left(U_{n}^{a}\right)_{n\in\mathbb{N}}. Given an environment \omega, the finite walk from a to level [[a+R]]_{R} may be generated using the first several U_{n}^{a}. (One of the U_{n}^{a} is used for each step. Once the walk terminates, the rest of the U_{n}^{a} are not needed, but one does not know in advance how many will be needed.) Let \hat{\omega}^{x}=(\omega^{x},{\bf U}^{x}), and \hat{\omega}=(\hat{\omega}^{x})_{x\in\mathbb{Z}}. Define the left shift \hat{\theta} by \hat{\theta}(\hat{\omega}):=(\hat{\omega}^{x+1})_{x\in\mathbb{Z}}. Then (\hat{\omega}^{x})_{x\in\mathbb{Z}} is an i.i.d. sequence. We have {\bf X}^{0}={\bf X}^{0}(\hat{\omega}) and {\bf X}^{a}={\bf X}^{0}(\hat{\theta}^{a}\hat{\omega}). Similarly, \overline{N}_{0}=\overline{N}_{0}(\hat{\omega}) and \overline{N}_{x}=\overline{N}_{0}(\hat{\theta}^{x}\hat{\omega}). So it suffices to show that {\bf X}^{0} and \overline{N}_{0} are measurable. The measurability of {\bf X}^{0} is obvious. For \overline{N}_{0}, let A_{k,\ell,B,r} be the event that:

  1. (a)

    for some x<0, a coalescence event C_{x,k} (as defined in the proof of Lemma 2.3) occurs with -B\leq x-kR<x<0, so that \overline{\bf X} agrees with {\bf X}^{x} to the right of x;

  2. (b)

    N_{0}^{[-B,B]}({\bf X}^{x})\geq\ell, where N_{0}^{[-B,B]} is the amount of time the walk spends at 0 before exiting [-B,B]; and

  3. (c)

    none of the walks from sites a\in[-B,B] uses more than r of the random variables U_{n}^{a}.

On this event, \overline{N}_{0} is seen to be at least \ell by looking only within [-B,B] and only at the first r uniform random variables at each site. The event A_{k,\ell,B,r} is measurable, because it is a measurable function of finitely many random variables, and the event \{\overline{N}_{0}\geq\ell\} is, up to a null set, simply the union over all r, then over all B, and then over all k of these events. Thus, \overline{N}_{0} is measurable. ∎

We now give the connection between \overline{N}_{0} and the limiting velocity v.

Lemma 2.5.

v=\frac{1}{\mathbb{E}[\overline{N}_{0}]}. Consequently, the walk is ballistic if and only if \mathbb{E}[\overline{N}_{0}]<\infty.

We note that a similar formula for the limiting speed in the ballistic case can be obtained from [4, Theorem 6.12] for discrete-time RWRE on a strip, but under a stronger ellipticity assumption (and with a less explicit probabilistic interpretation).

Proof.

By Lemma 2.4 and Birkhoff’s ergodic theorem, for any c\in\mathbb{Z} we have

\lim_{n\to\infty}\frac{1}{n}\sum_{y=c}^{n}\overline{N}_{y}=\mathbb{E}[\overline{N}_{0}], \mathbb{P}–a.s.

Fix a\in\mathbb{Z}. For large enough y, N_{y}({\bf X}^{a})=\overline{N}_{y}. We therefore get

\lim_{n\to\infty}\frac{1}{n}\sum_{y=c}^{n}N_{y}({\bf X}^{a})=\mathbb{E}[\overline{N}_{0}], \mathbb{P}–a.s.

It follows that

\lim_{n\to\infty}\frac{1}{n}\sum_{y=c}^{n}N_{y}({\bf X})=\mathbb{E}[\overline{N}_{0}], \mathbb{P}^{a}–a.s.

By Lemma 2.2, we get v=\frac{1}{\mathbb{E}[\overline{N}_{0}]}. ∎

We can now see that the walk is ballistic if and only if \mathbb{E}[\overline{N}_{0}]<\infty; we next compare \mathbb{E}[\overline{N}_{0}] with \mathbb{E}^{0}[N_{0}].

Lemma 2.6.

\mathbb{E}[\overline{N}_{0}]\leq\mathbb{E}^{0}[N_{0}].

Proof.

If \mathbb{E}^{0}[N_{0}]=\infty, the inequality is trivial. Assume, therefore, that \mathbb{E}^{0}[N_{0}]<\infty.

Note that \lim_{x\to\infty}N_{0}({\bf X}^{-x})=\overline{N}_{0}, \mathbb{P}–a.s. Using Fatou’s lemma (we can get equality in (12) using dominated convergence, but it is a bit more cumbersome and unnecessary), we have

\mathbb{E}[\overline{N}_{0}] =\mathbb{E}\left[\lim_{x\to\infty}N_{0}({\bf X}^{-x})\right]
\leq\lim_{x\to\infty}\mathbb{E}\left[N_{0}({\bf X}^{-x})\right] (12)
=\lim_{x\to\infty}\mathbb{E}^{-x}[N_{0}({\bf X})].

But each term \mathbb{E}^{-x}[N_{0}({\bf X})]=E[E_{\omega}^{-x}[N_{0}]] is at most \mathbb{E}^{0}[N_{0}]=E[E_{\omega}^{0}[N_{0}]], since E_{\omega}^{-x}[N_{0}]=P_{\omega}^{-x}(H_{0}<\infty)E_{\omega}^{0}[N_{0}]. Therefore, we may conclude \mathbb{E}[\overline{N}_{0}]\leq\mathbb{E}^{0}[N_{0}]. ∎

We are now in a position to prove our main theorem, and in fact two of the three implications are already done.

Proof of Theorem 1.2.

(b) \Rightarrow (c) This is an immediate consequence of Lemma 2.1.

(c) \Rightarrow (a) Assume \mathbb{E}^{0}[N_{0}]<\infty. Then combining Lemmas 2.5 and 2.6 gives v>0.

(a) \Rightarrow (b) Suppose \mathbb{E}^{0}[H_{\geq 1}]=\infty. We will show that v=0. We claim

\mathbb{E}\left[\min_{1\leq i\leq R}H_{\geq R+1}({\bf X}^{i})\right]=\infty. (13)

By assumptions (C1), (C2), and (C3), there is an m_{0}\geq\max(L,R) large enough that every interval of length m_{0} is irreducible with positive P-probability. Let A be the event that

  • For each i=1,\ldots,R-1, the walk {\bf X}^{i} hits R before leaving [R-m_{0}+1,R].

  • The walk {\bf X}^{R} first exits [R-m_{0}+1,R] by hitting R-m_{0}.

Then under P, the random variable P_{\omega}(A) is independent of \omega^{\leq R-m_{0}}. On the event A, the minimum \min_{1\leq i\leq R}H_{\geq R+1}({\bf X}^{i}) is attained for i=R, since all the other walks take time to get to R and then simply follow {\bf X}^{R}. Moreover, on A, H_{\geq R+1}({\bf X}^{R}) is greater than the amount of time it takes for the walk {\bf X}^{R} to cross back to [R-m_{0}+1,\infty) after first hitting R-m_{0}. The quenched expectation of this time, conditioned on A, is E_{\omega}^{R-m_{0}}[H_{\geq R-m_{0}+1}] by the strong Markov property, and this depends only on \omega^{\leq R-m_{0}}. Hence

\mathbb{E}\left[\min_{1\leq i\leq R}H_{\geq R+1}({\bf X}^{i})\right] \geq E\left[P_{\omega}(A)E_{\omega}[H_{\geq R+1}({\bf X}^{R})|A]\right]
\geq E\left[P_{\omega}(A)E_{\omega}^{R-m_{0}}[H_{\geq R-m_{0}+1}({\bf X})]\right]
=\mathbb{P}(A)\mathbb{E}^{R-m_{0}}[H_{\geq R-m_{0}+1}({\bf X})]
=\mathbb{P}(A)\mathbb{E}^{0}[H_{\geq 1}({\bf X})]
=\infty.

This proves (13). Now for x\geq 1,

H_{\geq xR+1}({\bf X}^{0})\geq H_{\geq 1}({\bf X}^{0})+\sum_{k=1}^{x}\min_{1\leq i\leq R}H_{\geq kR+1}({\bf X}^{(k-1)R+i}).

Dividing by xR and taking limits as x\to\infty, we get \lim_{x\to\infty}\frac{H_{\geq xR+1}({\bf X}^{0})}{xR}=\infty, \mathbb{P}–a.s. by Birkhoff’s ergodic theorem. Hence \lim_{x\to\infty}\frac{H_{\geq xR+1}({\bf X})}{(x+1)R}=\infty, \mathbb{P}^{0}–a.s. For n_{k}=H_{\geq kR+1}({\bf X}), we have X_{n_{k}}\leq(k+1)R, and so \frac{X_{n_{k}}}{n_{k}}\leq\frac{(k+1)R}{H_{\geq kR+1}({\bf X})}\to 0 as k\to\infty, \mathbb{P}^{0}–a.s. Since \frac{X_{n_{k}}}{n_{k}} is a subsequence of \frac{X_{n}}{n}, it must \mathbb{P}^{0}–a.s. approach v, and therefore v=0. ∎

References

  • [1] Julien Brémont. On some random walks on \mathbb{Z} in random medium. Ann. Probab., 30(3):1266–1312, 2002.
  • [2] Julien Brémont. Random walks in random medium on \mathbb{Z} and Lyapunov spectrum. Ann. Inst. H. Poincaré Probab. Statist., 40(3):309–336, 2004.
  • [3] Julien Brémont. One-dimensional finite range random walk in random medium and invariant measure equation. Ann. Inst. H. Poincaré Probab. Statist., 45(1):70–103, 2009.
  • [4] Dmitry Dolgopyat and Ilya Goldsheid. Invariant measure for random walks on ergodic environments on a strip. Ann. Probab., 47(4):2494–2528, 2019.
  • [5] Ilya Goldsheid. Linear and sub-linear growth and the CLT for hitting times of a random walk in random environment on a strip. Probab. Theory Related Fields, 141:471–511, 2008.
  • [6] H. Kesten. A renewal theorem for random walk in a random environment. Proc. Sympos. Pure Math., 31:67–77, 1977.
  • [7] Eric S. Key. Recurrence and transience criteria for random walk in a random environment. Ann. Probab., 12(2):529–560, 1984.
  • [8] Alexander Roitershtein. Transient random walks on a strip in a random environment. Ann. Probab., 36(6):2354–2387, 2008.
  • [9] Daniel J. Slonim. Random walks in Dirichlet random environments on \mathbb{Z} with bounded jumps. Preprint, submitted May 2021. https://arxiv.org/abs/2104.14950.
  • [10] Alain-Sol Sznitman and Martin Zerner. A law of large numbers for random walks in random environment. Ann. Probab., 27(4):1851–1869, 1999.
  • [11] O. Zeitouni. Random walks in random environment. In J. Picard, editor, Lecture notes in probability theory and statistics: École d’été de probabilités de Saint-Flour XXXI-2001, volume 1837 of Lect. Notes Math., pages 190–313. Springer, 2004.
  • [12] Martin Zerner. A non-ballistic law of large numbers for random walks in i.i.d. random environment. Electron. Commun. Probab., 7:191–197, 2002.