Speed function for biased random walks with traps
Abstract.
We consider a biased nearest-neighbor random walk on $\mathbb{Z}$ which at each step is trapped for some random time with random, site-dependent mean. We derive a simple formula for the speed function in terms of the model parameters.
Keywords: Bouchaud’s trap model, random walk in random environment, rate of escape, speed function
MSC: 60K37, 60F15
1. Introduction
Biased random walks on random graphs have been popular objects of study in recent years. The most relevant effect is that the random walk can run into traps, i. e. portions of the random graph that can only be exited by making a large number of steps in the direction opposite to the drift, which can take a long time. This leads to a decrease of the speed of the ballistic motion that is typical for a biased random walk in the absence of traps. Explicitly, if we embed the random graph into Euclidean space and define the ballistic speed of the walk as the almost sure limit of the rescaled position of the walk (provided the limit exists), then in many models the speed (or rather its projection in the direction of the bias) is not a monotone function of the bias, and often equals zero beyond a critical value of the bias. The latter behaviour is known to hold for the biased walk on the infinite connected cluster of supercritical bond percolation on $\mathbb{Z}^d$ [4, 8, 13], on the Galton-Watson tree with leaves [1, 6, 12] and on certain one-dimensional percolation models [2, 10]. We refer to the lecture notes [3] and the introduction of [10] for further details on the models and relations to physical models.
The purpose of this note is to derive an explicit formula for the speed in a class of toy models that includes the biased Bouchaud’s trap model on the integers. In these models, a random walk runs on $\mathbb{Z}$, and the geometry of the random graph is replaced by a family of random average holding times. Explicitly, a random average holding time is sampled for each site of $\mathbb{Z}$, and each time the biased random walk hits a site, it waits there for a random time with the sampled mean, mimicking the time it takes to leave a trap with entrance at that site. The holding-time processes attached to the sites are i. i. d. (independent and identically distributed), but the average holding times are not re-sampled during the dynamics, and thus the actual waiting times are not independent. This means that a naive application of the law of large numbers to compute the average waiting time per site is not possible. Under the assumption that the average holding times form an ergodic family, we will show that the law of large numbers nevertheless holds, and derive an explicit expression for the speed in terms of the waiting time distributions. We illustrate our results with several examples that also demonstrate how close our toy model is to various models of biased walks on random graphs.
2. Model, result and examples
For a fixed bias parameter, let the increments of the walk be a sequence of i. i. d. random variables with
(2.1)
We define the walk as the corresponding partial-sum process. The process is a biased nearest-neighbor random walk on $\mathbb{Z}$ starting at $0$.
Let be an ergodic sequence of positive random variables, modeling the average waiting times at the points of $\mathbb{Z}$. Let be a family of independent random processes with for all and all , . We assume that for fixed and , is a stochastic process taking values in the Skorohod space of right-continuous functions with left limits at all points in , see e. g. [5, Section 16]. Define to be the continuous-time process that follows the trajectory of the walk but, following its step to site , say, spends the random time at before making the next step. More precisely, let and
(2.2)
for . Then we define via
(2.3)
The biased Bouchaud’s trap model on $\mathbb{Z}$ is the special case where is an i. i. d. sequence, and where , and is a family of i. i. d. exponentially distributed random variables with mean ; see [3] for further details on Bouchaud’s trap model. Note that the exponential distribution is necessary (and sufficient) to make a continuous-time Markov process, but this property is irrelevant for our purposes.
There is a strong law of large numbers for our model. To formulate it, we introduce the notation for the escape probability of the walk from the origin, i.e.,
Theorem 2.1.
Let the family of average waiting times be ergodic under , and let be the expectation with respect to . Then the speed exists almost surely, and we have
(2.4)
Formula (2.4) has the simple interpretation that the discrete-time walk travels, by the strong law of large numbers, with linear speed given by its drift. The average duration of each visit of the walk to a site is equal to the expected average waiting time. So on average, the walk makes a step every such unit of time, resulting in a speed equal to the drift divided by the expected average waiting time.
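To illustrate Theorem 2.1, the following Python sketch simulates the special case of the biased Bouchaud’s trap model and compares the empirical speed with the reading of (2.4) suggested by the above interpretation, namely the drift of the discrete walk divided by the expected average holding time. All concrete parameter choices (the bias, the uniform distribution of the site means) are ours and serve only as an illustration.

```python
import random

def empirical_speed(p=0.75, n_steps=200_000, seed=0):
    """Simulate a biased Bouchaud-type walk on Z and return position / elapsed time.

    Site means lambda_x are i.i.d. Uniform(0.5, 1.5) (so E[lambda_0] = 1), sampled on
    the first visit to x and kept fixed afterwards; the holding time at every visit
    to x is exponential with mean lambda_x.  All distributional choices are ours.
    """
    rng = random.Random(seed)
    means = {}                                       # site -> average holding time lambda_x
    pos, clock = 0, 0.0
    for _ in range(n_steps):
        if pos not in means:
            means[pos] = rng.uniform(0.5, 1.5)       # lambda_x, never re-sampled
        clock += rng.expovariate(1.0 / means[pos])   # holding time with mean lambda_x
        pos += 1 if rng.random() < p else -1         # biased nearest-neighbor step
    return pos / clock

if __name__ == "__main__":
    p = 0.75
    print("empirical speed     :", round(empirical_speed(p), 3))
    print("drift / E[lambda_0] :", round((2 * p - 1) / 1.0, 3))
```

With these choices the drift is 0.5 and the expected average holding time is 1, so the two printed numbers should be close.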
Example 1: One-dimensional walk with vertical traps.
Consider the graph obtained by attaching below each point of $\mathbb{Z}$ (pictured as lying on a horizontal line, called the backbone) a finite one-dimensional ‘branch’ (branches are pictured as lying on vertical lines, called traps, see Figure 1 below) of random length.
Assume that the trap lengths are i. i. d. We run a biased discrete-time random walk on this graph, with the following transition probabilities: when the walk is on the backbone and the length of the trap there is positive, it goes to the right with probability , to the left with probability , and into the trap with probability for some fixed . When in a trap, it goes down the trap with probability and up the trap with probability , unless it is at the very bottom of the trap, at which point it goes up with probability one.
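The dynamics just described can also be simulated directly. The following sketch (in Python) does so and records the horizontal displacement per step; since the precise transition probabilities and the trap-length distribution are not reproduced above, all concrete choices in the sketch (the bias p, the trap-entry probability q, and geometric trap lengths with parameter s) are ours.

```python
import random

def geom0(rng, s):
    """Geometric random variable with values 0, 1, 2, ... (failures before a success)."""
    k = 0
    while rng.random() > s:
        k += 1
    return k

def horizontal_speed(p=0.6, q=0.2, s=0.5, n_steps=500_000, seed=1):
    """Simulate the biased walk on the graph with vertical traps (our parameterization):
    on the backbone, enter an available trap with probability q, otherwise step right or
    left with probabilities proportional to p and 1-p; inside a trap, step down with
    probability p, up with probability 1-p, and up with probability one from the bottom.
    Returns the horizontal displacement per discrete time step."""
    rng = random.Random(seed)
    lengths = {}            # backbone site x -> trap length, sampled on first visit
    x, depth = 0, 0         # horizontal position; depth > 0 means inside the trap at x
    for _ in range(n_steps):
        if x not in lengths:
            lengths[x] = geom0(rng, s)
        ell = lengths[x]
        if depth == 0:                                  # on the backbone
            if ell > 0 and rng.random() < q:
                depth = 1                               # step into the trap
            else:
                x += 1 if rng.random() < p else -1      # horizontal step
        elif depth == ell:                              # bottom of the trap: forced step up
            depth -= 1
        else:                                           # strictly inside the trap
            depth += 1 if rng.random() < p else -1
    return x / n_steps

if __name__ == "__main__":
    print("horizontal speed:", round(horizontal_speed(), 4))
```

For p = 0.6, q = 0.2 and s = 0.5 the printed value should be close to 0.1; a matching closed-form computation is sketched at the end of this example.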
This model can be embedded into our model by considering only the horizontal coordinate of the walk. Then the walk spends a random amount of time on or below the vertex of the backbone before moving on to a neighboring vertex, the times at different vertices are independent and have possibly different distributions, and the times spent at consecutive visits to the same vertex are also independent, with the same distribution at every visit. All that remains to be done is to calculate and . To this end, assume that we have a trap of length at some vertex. Let be the random time needed by the walk to exit the trap conditional on making one step into it. Although the distribution of is complicated, its expectation can be computed by electrical network methods, and the result is
see [9, Eq. (1.5)], and note that in their notation, . Since the walk can enter a trap multiple times without moving horizontally, the distribution of the total time spent at a trap of depth is equal to the distribution of , where is geometrically distributed with success probability and where the , are independent copies of . It follows that the expected time spent at a site with a trap of length is equal to
Therefore, with if , we must choose as i. i. d. random variables with
(2.5)
and , where are independent copies of . We extend piecewise linearly to make (right-)continuous and to ensure for all . (Actually, we can choose the non-decreasing in and hence non-decreasing in , but this is not required here.) We then get
We observe that the distribution of the trap length plays a crucial role. If has all exponential moments, then the speed is strictly positive for all , while in the case where has no exponential moments, the speed is always equal to zero. The phase transition from positive to zero speed occurs if has some but not all exponential moments. This happens, for instance, if is geometrically distributed, which seems to be (at least approximately) the case in most of the important models; see Example 3 below and the discussion following it. We set , . In [9], precise tail estimates for the random variable are obtained for this choice of , but for our purposes, simple calculations suffice. We obtain
and equal to infinity otherwise. We conclude that for fixed and ,
whereas if .
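Because the closed-form expressions from [9] are not reproduced above, the following sketch (in Python, with the same illustrative parameterization as in the simulation sketch earlier in this example) redoes the computation numerically: the expected time to exit a trap of a given length is obtained by first-step analysis, the geometric number of trap entries per stay is accounted for, and the resulting expected holding time per site is averaged over geometric trap lengths; the speed is then read off from Theorem 2.1 as drift divided by expected holding time.

```python
def expected_exit_time(ell, p):
    """Expected number of steps to return to the trap entrance after one step into a
    trap of length ell, when the walk steps deeper with probability p, back with
    probability 1-p, and is pushed back with probability one at the bottom.
    First-step analysis: m_d = E[steps from depth d to depth d-1] satisfies m_ell = 1
    and m_d = (1 + p * m_{d+1}) / (1 - p); the answer is m_1."""
    m = 1.0
    for _ in range(ell - 1):
        m = (1.0 + p * m) / (1.0 - p)
    return m if ell > 0 else 0.0

def mean_holding_time(ell, p, q):
    """Expected duration of one stay at a backbone site with trap length ell, under the
    parameterization of the simulation sketch above: per stay, a geometric number of
    trap entries with mean q/(1-q), each entry costing 1 + expected_exit_time steps,
    plus one step for each decision taken on the backbone."""
    if ell == 0:
        return 1.0
    entries = q / (1.0 - q)
    return 1.0 + entries * (1.0 + expected_exit_time(ell, p))

def speed_estimate(p, q, s, max_ell=400):
    """Drift divided by the expected holding time, with trap lengths geometric
    (values 0, 1, 2, ..., parameter s).  The exit time grows like (p/(1-p))**ell, so
    the expected holding time is infinite (and the speed zero) once
    (1-s) * p / (1-p) >= 1; otherwise the truncated sum below converges quickly."""
    if (1.0 - s) * p / (1.0 - p) >= 1.0:
        return 0.0
    mean_lambda = sum(s * (1.0 - s) ** ell * mean_holding_time(ell, p, q)
                      for ell in range(max_ell + 1))
    return (2.0 * p - 1.0) / mean_lambda

if __name__ == "__main__":
    for p in (0.6, 0.75):     # below and above the critical bias for s = 0.5
        print(f"p = {p}: estimated speed = {speed_estimate(p, q=0.2, s=0.5):.4f}")
```

With p = 0.6, q = 0.2 and s = 0.5 this yields 0.1, consistent with the direct simulation sketched after the model description, while p = 0.75 falls into the zero-speed regime for the same trap-length distribution.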
Example 2: Bouchaud’s trap model with drift-dependent holding times.
We can reduce the previous example to Bouchaud’s trap model by replacing the precise distribution of with a general heavy-tailed distribution for , and by setting with exponentially distributed with mean . The main features are the same: inspired by the known tail behaviour of as a function of the bias [9, 11], we set
(2.6)
for some . We obtain
We choose where is a parameter, and write for if (2.6) holds with . In particular, if ,
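As a purely illustrative numerical companion (the distribution in (2.6) is not reproduced above, so the Pareto choice below, including its tail index alpha and scale, is ours), the following sketch evaluates the speed from Theorem 2.1, read as drift divided by the expected average holding time, for heavy-tailed average holding times; the speed is positive exactly when the tail index exceeds one.

```python
def bouchaud_speed(p, alpha, x_min=1.0):
    """Speed when the average holding time has a Pareto(alpha, x_min) distribution
    (an illustrative stand-in for the heavy-tailed choice (2.6), not the text's
    distribution): E[lambda_0] = alpha * x_min / (alpha - 1) for alpha > 1 and is
    infinite otherwise, in which case the speed vanishes."""
    if alpha <= 1.0:
        return 0.0
    return (2.0 * p - 1.0) * (alpha - 1.0) / (alpha * x_min)

if __name__ == "__main__":
    for alpha in (3.0, 1.5, 1.1, 1.0, 0.8):
        print(f"tail index {alpha}: speed = {bouchaud_speed(0.75, alpha):.4f}")
```

If, as suggested by the tail behaviour quoted from [9, 11], the tail index is a decreasing function of the bias, this reproduces the transition from positive to zero speed at a finite critical bias.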
Example 3: One-dimensional percolation with horizontal traps.
As in Example 1, we consider a random graph with a backbone that is just $\mathbb{Z}$, pictured as lying on a horizontal line, and we attach traps at each point of the backbone. The difference now is that while the first vertex of a trap is still below the vertex of the backbone at which the trap is attached, all further vertices of the trap are to the right of that first vertex, below some other vertices of the backbone. We demand that no other traps of positive length can occur for steps to the right of a trap of length , meaning that we can represent the random graph as a subgraph of (with nearest-neighbor edges).
In fact, this is the model studied in [2, 10] with traps only or single edges to the right. If we assume that edges not in the backbone appear independently with probability , then conditionally on having only a backbone and traps, the trap length is geometric with success probability . Inside a trap, we assume the same dynamics as in Example 1, i.e. a bias towards the end of the trap (which is now to the right, so the bias has the same direction and strength as in the backbone). The only point where greater generality than in Example 1 might be desirable is the probability of the final jump out of the trap, which is in the vertical direction; one might want to assign it a different probability than . We refrain from doing so in order not to complicate matters further.
The model described above can again be mapped exactly into the context of Theorem 2.1. To do this, consider a Markov chain with values in the nonnegative integers and with transition probabilities
The interpretation of this chain is that when , then there is a trap below (possibly beginning to the left of ) with remaining vertices (including the vertex below ). Traps of length do not obstruct further traps, but traps of length prevent further traps for steps. Therefore, if we define with
then models the length of the trap rooted at site .
The Markov chain has a unique invariant measure if and only if : from the transition probabilities, we conclude that the weight function of an invariant measure must fulfil the equations , and
The unique probability weight function solving these equations is given by
The stationary Markov chain is thus ergodic (see e.g. Theorem 5.2.6 of [7]), and by [2, Lemma 5.6(c)] and the representation , also the process is ergodic. Defining as in Example 1 and setting , we see (e.g. again by [2, Lemma 5.6(c)]) that also is ergodic. After defining in the same way as in Example 1, we are back in the setting of Theorem 2.1. It remains to compute . We have
Since and , we obtain
Assuming again that is geometric with success probability , and , we know from Example 1 that
and we calculate . This gives
and finally we obtain an explicit formula for
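The transition probabilities of the auxiliary chain and its invariant measure are not reproduced above, but the blocking mechanism itself is easy to simulate. The following sketch (in Python) generates a trap-length sequence with the property described in the text, namely that a trap of positive length excludes further traps for as many sites to its right, from i.i.d. geometric proposals; whether this reproduces exactly the conditional law of the percolation model is not claimed here. Since the dynamics inside a trap are the same as in Example 1, the helper functions of the sketch there are reused to turn the trap-length statistics into a speed estimate.

```python
import random

def expected_exit_time(ell, p):       # as in the Example 1 sketch
    m = 1.0
    for _ in range(ell - 1):
        m = (1.0 + p * m) / (1.0 - p)
    return m if ell > 0 else 0.0

def mean_holding_time(ell, p, q):     # as in the Example 1 sketch
    if ell == 0:
        return 1.0
    return 1.0 + (q / (1.0 - q)) * (1.0 + expected_exit_time(ell, p))

def trap_lengths_with_blocking(n_sites, s, seed=2):
    """Trap lengths along the backbone: i.i.d. geometric proposals (values 0, 1, 2, ...,
    parameter s), where a planted trap of length ell >= 1 blocks the next ell sites,
    which receive length 0.  A sketch of the mechanism described in the text; it is not
    claimed to reproduce the conditional law of the percolation model exactly."""
    rng = random.Random(seed)
    lengths = [0] * n_sites
    blocked_until = -1
    for x in range(n_sites):
        if x <= blocked_until:
            continue
        ell = 0
        while rng.random() > s:
            ell += 1
        lengths[x] = ell
        if ell >= 1:
            blocked_until = x + ell
    return lengths

if __name__ == "__main__":
    p, q, s = 0.6, 0.2, 0.5
    lengths = trap_lengths_with_blocking(200_000, s)
    mean_lambda = sum(mean_holding_time(ell, p, q) for ell in lengths) / len(lengths)
    print("mean trap length:", round(sum(lengths) / len(lengths), 3))
    print("estimated speed :", round((2.0 * p - 1.0) / mean_lambda, 4))
```

Since the blocking thins out the traps, the resulting estimate of the expected holding time is smaller, and the speed correspondingly larger, than in the unblocked computation of Example 1 with the same parameters.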
Remark 2.2.
We have seen in Examples 1 and 3 that the tail behaviour of the geometric distribution of the trap length is crucial for the slowdown of the speed to zero at a finite bias. Proposition 1.1 of [8] shows that this tail is also present in the case of the biased walk on a percolation cluster, although in this case a direct mapping to our reduced model is probably not possible. For the full model of one-dimensional percolation on the ladder graph [2, 10], besides the traps that we treat in Example 3 there are areas that are not traps, i. e. where it is possible to reach the next trap entrance to the right without making a step to the left. In such areas, the speed of the biased random walk should be non-decreasing as a function of the bias. For an explicit formula for the speed in the ladder graph model, one would need to find the probabilities of all possible non-trap areas along with the average time it takes to travel through them, and then combine this with the considerations of Example 3. Using some of the arguments in [2], this seems possible, but it would be messy, so we do not pursue it. In any case, we expect the slowdown behaviour to be of the same type, since it is dominated by the trap regions.
3. Proof of Theorem 2.1
While Theorem 2.1 is highly plausible, a rigorous proof is required. We use a coupling-from-the-past argument adapted from [2], where a formula for the speed of biased random walk in a one-dimensional percolation environment is derived, albeit one that is not tractable for explicit calculations.
On a suitable probability space, consider independent families of random variables , and where is uniform on , is ergodic, the processes , are i. i. d. and for all . We note already here that by [2, Lemma 5.6(b)], the family
is ergodic. Let us write for the target space of the . By [2, Lemma 5.6(b)], for any measurable function , the sequence
is ergodic. We will make extensive use of this fact.
First, for any , we may define as the discrete-time random walk started at that, when hitting a state for the time, makes the next step from to if
and steps to , otherwise. A point is called a regeneration point if for all . Write for the indicator of the event that is a regeneration point.
Lemma 3.1.
The sequence is ergodic and
(3.1)
for .
We write for .
Proof.
Notice that for some Borel measurable function (actually, only depends on the variables ). Then, since the family is ergodic as a family of i.i.d. random variables on , we conclude that is also ergodic by [2, Lemma 5.6(c)].
Since , each of the walks is transient to the right and, thus, . Now (3.1) follows from Birkhoff’s ergodic theorem. ∎
By (3.1), and hence, by shift invariance and with some effort, we conclude . We enumerate the random points with from left to right with such that for all and . Now notice that for any , the trajectories of and coincide once they hit the first . For , define to be the first time at which the random walk started at hits . By the transience to the right, for all . Also, at time the walk started at hits and its trajectory then coincides with the walk started at for , i.e.,
(3.2)
Consequently, we may set
for . As almost surely as , this defines for all almost surely, as we may choose (randomly) such that . Notice that when , then also and is defined twice (actually infinitely often). Then (3.2) guarantees that the definitions coincide, namely,
We may assume without loss of generality that . We then define as the continuous-time process (with right-continuous paths) starting at the origin at time that follows the trajectory of but stays at upon its visit to this point for time . (This process has the same law as the process defined in Section 2.) Similarly, we define as the continuous-time process (with right-continuous paths) that follows the trajectory of , hits the origin for the first time at time and stays at upon its visit to this point for time .
For , we define
to be the number of visits of the walk to and the time the continuous-time random walk spends at , respectively.
Lemma 3.2.
The families and are ergodic. In particular, almost surely as ,
(3.3)
Proof.
The limiting relations in (3.3) follow from Birkhoff’s ergodic theorem once we have shown the ergodicity of and and have calculated and . It is worth mentioning that, using truncation techniques, Birkhoff’s ergodic theorem also applies in the case . Regarding , notice that the number of returns to the origin of is geometric with success probability equal to the escape probability . Consequently,
Further, since and are independent, by Wald’s identity,
It remains to prove the ergodicity of and . To this end, first notice that for some Borel measurable function . Then, since and since the family is ergodic, so is again by Lemma 5.6(c) of [2]. Regarding the ergodicity of , define
then is product-measurable. This is true since the projections are measurable with respect to the Borel $\sigma$-field on the Skorohod space equipped with the Skorohod topology, see [5, Theorem 16.6]. By the right-continuity of the elements of the Skorohod space, also is measurable, and using this, the product-measurability follows from standard arguments. Since we have
the ergodicity of follows as above from the ergodicity of and again Lemma 5.6(c) of [2]. ∎
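The two facts used in the proof above, the geometric number of returns to the origin (equivalently, a mean number of visits equal to one over the escape probability) and the application of Wald’s identity, can be checked numerically. The following sketch (in Python, with our own parameter choices) simulates a biased nearest-neighbor walk whose visits to the origin each last an independent exponential time of mean lam, and compares the empirical mean number of visits and mean total time at the origin with 1/P_esc and lam/P_esc, where P_esc = 2p - 1 is the escape probability of the biased nearest-neighbor walk.

```python
import random

def visits_and_time_at_origin(p=0.7, lam=2.0, n_steps=2_000, seed=3):
    """One realization: the number of visits of the biased walk to the origin within
    n_steps steps, and the total continuous time spent there when each visit lasts an
    independent exponential time with mean lam (parameter choices are ours)."""
    rng = random.Random(seed)
    pos, visits, time_at_origin = 0, 0, 0.0
    for _ in range(n_steps):
        if pos == 0:
            visits += 1
            time_at_origin += rng.expovariate(1.0 / lam)
        pos += 1 if rng.random() < p else -1
    return visits, time_at_origin

if __name__ == "__main__":
    p, lam, runs = 0.7, 2.0, 2_000
    samples = [visits_and_time_at_origin(p, lam, seed=k) for k in range(runs)]
    mean_visits = sum(v for v, _ in samples) / runs
    mean_time = sum(t for _, t in samples) / runs
    p_esc = 2 * p - 1                     # escape probability of the biased walk
    print("mean visits:", round(mean_visits, 3), " vs 1/P_esc =", round(1 / p_esc, 3))
    print("mean time  :", round(mean_time, 3), " vs lam/P_esc =", round(lam / p_esc, 3))
```

For p = 0.7 and lam = 2 both empirical averages should be close to 2.5 and 5, respectively.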
Proof of Theorem 2.1.
For , let and, similarly, . Once the random walks hit , the first regeneration point on the nonnegative axis, the trajectories of and coincide, i.e.,
(Notice that this is not the case with and replaced by .) Let . Then, for ,
Therefore, it suffices to prove that
(3.4)
Since the time the continuous-time random walk spends on the negative half-axis is finite by transience and since almost surely as , we conclude that
(3.5)
This may be rewritten as
(3.6)
and gives the desired convergence (3.4), but only as along the regeneration times . This convergence along a subsequence can be lifted to (3.4) by a standard sandwich argument. For the reader’s convenience, we give this argument. For , let and where the maximum of the empty set is defined to be . From (3.1), we infer
(3.7)
in particular, almost surely.
References
- [1] E. Aïdékon. Speed of the biased random walk on a Galton-Watson tree. Probab. Theory Related Fields, 159(3-4):597–617, 2014.
- [2] M. Axelson-Fisk and O. Häggström. Biased random walk in a one-dimensional percolation model. Stochastic Process. Appl., 119(10):3395–3415, 2009.
- [3] G. Ben Arous and A. Fribergh. Biased random walks on random graphs. In Probability and statistical physics in St. Petersburg, volume 91 of Proc. Sympos. Pure Math., pages 99–153. Amer. Math. Soc., Providence, RI, 2016.
- [4] N. Berger, N. Gantert, and Y. Peres. The speed of biased random walk on percolation clusters. Probab. Theory Related Fields, 126(2):221–242, 2003.
- [5] P. Billingsley. Convergence of probability measures. Wiley Series in Probability and Statistics: Probability and Statistics. John Wiley & Sons, Inc., New York, second edition, 1999. A Wiley-Interscience Publication.
- [6] A. Bowditch and Y. Tokushige. Differentiability of the speed of biased random walks on Galton-Watson trees. ALEA Lat. Am. J. Probab. Math. Stat., 17(1):609–642, 2020.
- [7] R. Douc, E. Moulines, P. Priouret, and P. Soulier. Markov chains. Springer Series in Operations Research and Financial Engineering. Springer, Cham, 2018.
- [8] A. Fribergh and A. Hammond. Phase transition for the speed of the biased random walk on the supercritical percolation cluster. Comm. Pure Appl. Math., 67(2):173–245, 2014.
- [9] N. Gantert and A. Klenke. The tail of the length of an excursion in a trap of random size. J. Stat. Phys., 188(3):Paper No. 27, 20, 2022.
- [10] N. Gantert, M. Meiners, and S. Müller. Regularity of the speed of biased random walk in a one-dimensional percolation model. J. Stat. Phys., 170(6):1123–1160, 2018.
- [11] J.-E. Lübbers and M. Meiners. The speed of critically biased random walk in a one-dimensional percolation model. Electron. J. Probab., 24:Paper No. 23, 29, 2019.
- [12] R. Lyons, R. Pemantle, and Y. Peres. Biased random walks on Galton-Watson trees. Probab. Theory Related Fields, 106(2):249–264, 1996.
- [13] A.-S. Sznitman. On the anisotropic walk on the supercritical percolation cluster. Comm. Math. Phys., 240(1-2):123–148, 2003.