Local time, upcrossing time and weak cutpoints of a spatially inhomogeneous random walk on the line
School of Mathematics and Statistics, Anhui Normal University, Wuhu 241003, China. Email: [email protected]

Abstract
In this paper, we study a transient spatially inhomogeneous random walk with asymptotically zero drifts on the lattice of the positive half line. We give criteria for the finiteness of the number of points having exactly the same local time and/or upcrossing time and weak cutpoints (a point is called a weak cutpoint if the walk never returns to after its first upcrossing from to ). In addition, for the walk with some special local drifts, we also give the order of the expected number of these points in Finally, we show that, when properly scaled, the number of these points in converges in distribution to a random variable with the standard exponential distribution. Our results answer three conjectures related to the local time, the upcrossing time, and the weak cutpoints proposed by E. Csáki, A. Földes, P. Révész [J. Theoret. Probab. 23 (2) (2010) 624-638].
Keywords: Random walk; Local time; Upcrossing time; Cutpoints; Moment method
MSC 2020: 60J10, 60G50, 60J55
1 Introduction
In this paper, we study the local time, upcrossing time and weak cutpoints of a spatially inhomogeneous random walk on with asymptotically zero drifts. To introduce the model precisely, suppose that are numbers such that and for all Let be a Markov chain on starting from with transition probabilities
for and Since the transition probabilities are spatially inhomogeneous, we call the chain a spatially inhomogeneous random walk. We remark that the chain is also called a birth and death chain in the literature.
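The dynamics just described can be sketched in a few lines of code. The concrete drift below (an up-jump probability of 1/2 + 1/(4x), with forced moves from 0 to 1) is only an illustrative assumption for this sketch, not the precise choice made in the paper.

```python
import random

def simulate_chain(p, n_steps, seed=2024):
    """Simulate a birth and death chain on {0, 1, 2, ...} started at 0.

    p(x) is the probability of a jump x -> x+1 from a site x >= 1; with
    probability 1 - p(x) the walk jumps to x - 1.  From 0 the walk moves
    to 1 with probability 1 (an assumption of this sketch).
    """
    rng = random.Random(seed)
    path = [0]
    x = 0
    for _ in range(n_steps):
        if x == 0:
            x = 1
        else:
            x = x + 1 if rng.random() < p(x) else x - 1
        path.append(x)
    return path

# An asymptotically zero drift: p(x) - 1/2 = 1/(4x) -> 0 as x -> infinity.
path = simulate_chain(lambda x: 0.5 + 0.25 / x, 10_000)
```

Under these assumptions the chain is a nearest-neighbor walk, so consecutive positions always differ by exactly 1.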
For let Then we have the following criterion of recurrence or transience for the chain which can be found, e.g., in Chung’s book [1, Part I. §12].
Proposition 1.
The chain is transient if and only if
Notice that the local drift of the walk at is We are concerned with random walks with asymptotically zero drifts, that is, the case as whose limiting behaviors differ in many aspects from those of the simple random walk. Such random walks date back to a series of works of Lamperti. In [8, 10], Lamperti gave criteria for recurrence or transience and for the existence of moments. Moreover, in [9], he proved an invariance principle which says that, under certain conditions, converges weakly in to a Bessel process. Such an invariance principle was further investigated and strengthened in Csáki et al. [3]. It is well known that a transient simple random walk grows linearly to infinity. But for a transient spatially inhomogeneous random walk with asymptotically zero drifts, intuitively, the walk grows much more slowly than the simple random walk. The number of cutpoints on the path of the walk reflects the speed of the walk. Roughly speaking, if the walk never returns to after its first entry into then is a cutpoint. Intuitively, the faster the walk runs to infinity, the more cutpoints there are. But for a spatially inhomogeneous random walk, things are very different. James et al. [6] gave an example of a transient random walk which has only finitely many cutpoints. Such a phenomenon never happens for the simple random walk, since it is known that the number of cutpoints of the simple random walk is either 0 (recurrent case) or (transient case). Along this line, Csáki et al. [4] provided a criterion for the finiteness of the number of cutpoints, which we quote below. We first give the following definition.
Definition 1.
For we call the local time of the chain at and the upcrossing time of the chain from to respectively. If we call a cutpoint of If we call a strong cutpoint of
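On a finite stretch of the path, the quantities in Definition 1 can be computed directly; the following minimal sketch (the function names are ours) counts visits and upward jumps.

```python
def local_time(path, k):
    """Number of time indices at which the path sits at site k."""
    return sum(1 for x in path if x == k)

def upcrossing_time(path, k):
    """Number of jumps of the path from k to k + 1."""
    return sum(1 for a, b in zip(path, path[1:]) if a == k and b == k + 1)

# On the path 0,1,2,1,2,3,2,3,4 the site 2 is visited three times and
# upcrossed twice (the two jumps 2 -> 3).
example = [0, 1, 2, 1, 2, 3, 2, 3, 4]
```

For a path that ends strictly above k, every visit to k is followed by exactly one jump away from k, so the local time at k is the number of upward jumps from k plus the number of downward jumps from k.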
For write
(4)
and denote
(5)
The theorem below gives a criterion for the finiteness of the number of cutpoints.
Theorem A (Csáki et al. [4]) Suppose and let be as in (5). If
then the chain has finitely many cutpoints almost surely. If is increasing in there exist and such that for all and
then has infinitely many strong cutpoints almost surely.
Remark 1.
Csáki et al. proposed in [4] four conjectures related to the weak cutpoints, local time and upcrossing time of the walk. We quote here the first three of them word for word.
Open Problems (Csáki et al. [4])
1. It would be interesting to know whether Theorem A also holds for the number of sites with or for any fixed integer , i.e., whether we have the same criteria for and to occur infinitely often almost surely for any positive integer
2. Call the site a weak cutpoint if, for some , we have and One would like to know whether Theorem A can be extended for the number of weak cutpoints.
3. It would be interesting to know whether Theorem A holds for cutpoints with a given local time, i.e., for or, in general, infinitely often almost surely, with positive integers
The main task of the paper is to answer the above open problems.
Definition 2.
Let and be two subsets of We denote by
the collection of nonnegative sites at which the local times and the upcrossing times of the walk belong to and respectively. Moreover, consider the weak cutpoint defined in the above open problem 2. We denote by
the collection of all weak cutpoints.
In what follows, if instead of we write simply The notations and can be understood similarly. Also, for simplicity, we write
(6) |
Notice that is the collection of sites at which the upcrossing times of the walk are exactly while is the collection of sites at which the local times of the walk are exactly Finally, we use to denote the cardinality for a set
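For illustration, membership in the weak-cutpoint set can be tested on a finite path; this finite-horizon check (the function name is ours) is only a proxy for the infinite-horizon event in the above open problem 2, since a longer path could still return.

```python
def is_weak_cutpoint(path, k):
    """True if the path never revisits k after its first jump k -> k+1.

    Finite-horizon proxy: a continuation of the path could still return
    to k, so this only approximates the infinite-horizon event.
    """
    for i, (a, b) in enumerate(zip(path, path[1:])):
        if a == k and b == k + 1:
            return k not in path[i + 2:]
    return False  # k was never upcrossed within this path
```

On the path 0,1,2,1,2,3,2,3,4 the site 3 qualifies (after the jump 3 -> 4 the path never revisits 3), while the sites 1 and 2 do not.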
The theorem below provides criteria to tell whether or are finite or not, answering the above-mentioned open problems.
Theorem 1.
Let be as in (5). Suppose that is increasing in for some and as If
then almost surely, we have
(i) for each and
(ii) for each and
(iii)
If for some and and
then almost surely, we have
(i) for each and
(ii) for each and
(iii)
Remark 2.
We explain how Theorem 1 answers the above open problems. Noticing that and the criteria for and answer the above open problem 1. Clearly, the criterion for gives an answer to the above open problem 2, and the criterion for the finiteness of provides an answer to the above open problem 3.
Fix and If for some and hence all then it follows from Proposition 1 that the chain is recurrent. Thus, for any site we have As a consequence, we get which coincides with the convergent part of Theorem 1 since in this case, we always have The interesting phenomenon arises when for all and In this case, the walk is transient, but almost surely, we have
Finally, we mention that and are collections of the cutpoints and strong cutpoints, respectively. From this point of view, Theorem 1 is a generalization of Theorem A.
As pointed out in Csáki et al. [4], the condition “ for some and ” is a technical one. We now give a concrete example for which such a condition is satisfied naturally.
Fix Clearly, there exists such that for all For set
(9)
which serves as a perturbation that will be added to the transition probability of the simple recurrent random walk. We emphasize that the perturbation in (9) is taken from [4] without change.
Lemma 1.
For let be the one in (9) and set Then
The proof of Lemma 1 is given in Section 7. It is easy to see from Lemma 1 that there is such that Thus, applying Theorem 1, we have the following corollary, which gives sharp criteria for the finiteness of the sets , and
Corollary 1.
Fix and For let be the one in (9) and set If we have almost surely. If we have almost surely.
Another interesting question is to estimate the cardinality of the set for or and large enough. We first compute the expectations of the cardinalities of these sets.
Proposition 2.
Remark 3.
Inspired by Proposition 2, one may expect that for or almost surely as However, this is not the case. The main reason is that the events are not independent. We have the following theorem.
Theorem 2.
Suppose are positive integers. Let and for Set for Then
as where (and in what follows) the notation denotes convergence in distribution and is an exponential random variable with
Remark 4.
Theorem 2 is proved by the standard moment method. The key step is to find the limits of the moments of or which depend on the joint probability of for , or Except for the special case it is hard to give explicit formulae for such joint probabilities. But we can show that the events satisfy certain Markov properties, that is, for , or So it suffices to compute and estimate the two-dimensional joint probabilities of these events. Fix an integer For general and we currently do not know how to obtain the limit distributions of and For example, for is the collection of sites in at which the local times of the walk are exactly But neither is a Markov chain, nor is it possible for us to compute and estimate the joint probability of Therefore, by our method, we cannot find the limit distribution of
Outline of the paper. The present paper is organized as follows. Firstly, besides introducing the exit probabilities of the walk, we prove some combinatorial results in Section 2, which play key roles in proving the main results. Then, by summarizing the proofs of Csáki et al. [4, Theorem 1.1] and James et al. [6, Theorem 3.1], we obtain a version of the Borel-Cantelli lemma in Section 3, which will be used to prove Theorem 1. In Section 4, we express the upcrossing times in terms of loops, moves and escapes. The probability of and the joint probability and dependence of and for or are studied in Section 5. Section 6 is devoted to proving Theorem 1. Finally, Proposition 2 and Theorem 2 are proved in Sections 7 and 8, respectively.
Notations: When writing , we mean as . The notation means there is a constant such that for all large enough. We also adopt the convention that an empty product equals identity and an empty sum equals Finally, throughout the paper, denotes a constant whose value may change from line to line.
2 Auxiliary results
In this section, we introduce first the exit probabilities of the walk from certain intervals and then give some combinatorial results that are helpful for the proofs of both Theorem 1 and Theorem 2.
2.1 Exit probabilities
For set
(14)
(15)
which are the exit probabilities of the walk from Evidently, we have
Set also
Since we must have
The following lemma, taken from Csáki et al. [2] (see Lemma 2.2 therein), provides formulae for the exit probabilities.
Lemma 2.
For we have
It is easily seen from Lemma 2 that for
(16)
Notice that these exit or escape probabilities are written in terms of and . By some direct computation, we see that
Furthermore, it is shown in Csáki et al. [4] (see display (2.3) therein) that
(17)
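Exit probabilities of a birth and death chain, such as those in Lemma 2, admit the standard harmonic-function representation in terms of the ratios of down- to up-jump probabilities. The sketch below (our notation, the classical formula rather than the paper's exact display) implements it and can be cross-checked against the gambler's ruin probabilities for constant up-jump probability.

```python
def exit_up_probability(p, a, i, b):
    """P(chain started at i hits b before a), for a < i < b.

    p(x) is the up-jump probability at site x.  Uses the classical
    birth-death harmonic function
        h(m) = sum_{j=a}^{m-1} prod_{l=a+1}^{j} (1 - p(l)) / p(l),
    so that the exit probability equals h(i) / h(b).
    """
    def h(m):
        total, prod = 0.0, 1.0
        for j in range(a, m):
            total += prod                      # prod = prod_{l=a+1}^{j} rho_l
            prod *= (1.0 - p(j + 1)) / p(j + 1)
        return total
    return h(i) / h(b)
```

For constant p this reduces to the gambler's ruin formula (1 - rho^(i-a)) / (1 - rho^(b-a)) with rho = (1 - p) / p, and to (i - a) / (b - a) in the symmetric case.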
Finally, we give the following estimates.
Lemma 3.
Suppose that as Then, for each there exists which is independent of such that for we have
Proof. Fix Choose which is independent of such that for all Since there is a number such that for we have Let Then, we have
and
Moreover, since we have
Therefore, fixing the lemma is proved by choosing small enough.
2.2 Some combinatorial results
To begin with, we study the cardinality of certain simplexes. For positive integers set
(18)
Lemma 4.
For we have and
Proof. The first assertion is obvious. Indeed, arrange balls in a line. This lineup has spaces between adjacent balls. Choose of those spaces and erect a wall at each of the selected places to obtain an arrangement of nonempty sets of balls. Obviously, this can be done in ways. Thus
Next, we compute the cardinality of Consider a vector There are exactly strictly positive coordinates. Since by definition, for some we have and But there are ways to choose coordinates from coordinates. Therefore, we must have
The lemma is proved.
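The stars-and-bars count used in the proof of Lemma 4 (the number of ways to write n as an ordered sum of k strictly positive integers is the binomial coefficient C(n - 1, k - 1)) can be verified by brute-force enumeration for small n and k.

```python
from itertools import product
from math import comb

def compositions(n, k):
    """All k-tuples of strictly positive integers summing to n."""
    return [t for t in product(range(1, n + 1), repeat=k) if sum(t) == n]

# For n = 5, k = 3 the six compositions (3,1,1), (1,3,1), (1,1,3),
# (2,2,1), (2,1,2), (1,2,2) match comb(4, 2) = 6.
```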
The next lemma is crucial for proving Theorem 2.
Lemma 5.
Write and fix We have
(19)
Remark 5.
We have not seen formula (19) in the literature, so we do not know whether it is actually new. If it is, such a formula may be of independent interest in mathematical analysis.
3 A version of the Borel-Cantelli lemma
The following Theorem 3 can be seen as a version of the Borel-Cantelli lemma. Its proof is essentially a summary of the proofs of Csáki et al. [4, Theorem 1.1] and James et al. [6, Theorem 3.1], but some details are different, so we provide a complete proof here. We remark that to prove Theorem 1, we just need to check one by one that all conditions of Theorem 3 are fulfilled.
Theorem 3 ([4], [6]).
For let and be certain numbers. Assume that is a probability space and are -measurable events such that
(27)
for some
(i) Suppose that
(28)
and there exists a number such that
(29)
Then we have
(ii) Suppose that there exist and such that for all and
(30)
In addition, assume that for each there exists such that is increasing in and
(31)
Then we have
Proof. We prove part (i) first. Suppose all conditions of part (i) are fulfilled. For set and write
Let be as in (29). Taking (28) and (29) into consideration, for (or equivalently ), and we have
Thus, for
(32)
Using (29), we get for Since as thus, from (32), we infer that
Applying the Borel-Cantelli lemma, we get Thus, Part (i) is proved.
Next, we prove part (ii). Suppose all conditions of part (ii) hold. For set Since as there exists a number such that for Notice that is increasing in It follows from Csáki et al. [4, Lemma 2.2] that and are equiconvergent. Therefore, we get
(33)
Now fix Obviously, for Then, there exists a number such that for all Therefore, it follows from (30) and (31) that for
(34)
Define
Then for we have Therefore, it follows from (3) that
(35)
Next, we consider Note that for , we have . Since is increasing in thus, using again the fact as from (3) we deduce that
Since for all it can be shown that for some constant depending only on see [4, p.635, display (4.2)]. Thus,
(36)
Taking (35) and (36) together, we have
Writing , owing to (33), we have
An application of a generalized version of the Borel-Cantelli lemma (see Petrov [11, p. 235]) yields that As a consequence, we get Since is arbitrary, letting we conclude that Part (ii) is proved.
4 Upcrossing times represented by loops, moves and escapes
In this section, by decomposing the path of the walk, we represent the upcrossing times by certain loops, moves and escapes. Throughout this section, we assume As a consequence, we see from Proposition 1 that almost surely.
4.1 Loops, moves and escapes
Now, considering the random walk we introduce the following events. For integers define
and
Definition 3.
If the event or occurs, we say that a forward loop at a forward loop at avoiding a forward move from to or an escape from to appears, respectively. Similarly, if the event or occurs, we say that a backward loop at a backward loop at avoiding or a backward move from to appears, respectively. When there is no danger of confusion, these notations of events are also used to represent the loop, move, or escape that they define. For example, if we say there is a forward loop we mean that a forward loop appears at
4.2 Upcrossing times and branching processes with immigration
Since the chain is transient, for each there must be an escape at For if then there must be forward loops and one escape at
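On a finite path this decomposition can be checked mechanically: among the upward jumps from a site k, every one that is later followed by a return to k begins a forward loop, and (if the path never revisits k after its last upward jump) the final one begins the escape, so the number of forward loops at k is the upcrossing time at k minus one. The helper names below are ours, and the check is only a finite-horizon proxy.

```python
def upcrossings(path, k):
    """Indices i at which the path jumps from k to k + 1."""
    return [i for i, (a, b) in enumerate(zip(path, path[1:]))
            if a == k and b == k + 1]

def forward_loops(path, k):
    """Upward jumps from k that are followed by a return to k within
    the path; the remaining upward jump (if any) plays the escape."""
    return sum(1 for i in upcrossings(path, k) if k in path[i + 2:])
```

On the path 0,1,2,1,2,3,2,3,4 the site 2 is upcrossed twice: the first upward jump starts a forward loop and the second one starts the escape.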
Fixing and on the event let
(44)
and
(45)
Conditioned on the event by the strong Markov property, the paths contained in those forward loops are i.i.d., and moreover, they are all independent of the path contained in the escape Therefore, we conclude that
(46)
where are i.i.d. and they are all independent of the number represents the escape from to Indeed, from the definition in (4.2) and (4.2), by the strong Markov property, are mutually independent. Thus, (46) says indeed that
(47)
To study the weak cutpoints, we need to introduce a new chain related to the upcrossing times. Let and for let
Then, in view of (4.2), (4.2) and (46), we have
(48)
Clearly, by the strong Markov property, and are mutually independent. Thus, we come to the conclusion that
(49)
Now, we consider the sets and defined in Definition 2 and (6). Clearly,
(50)
(51)
(52)
From (47) and (49) we see that both and are Markov chains. Thus, by the Markov property, we get directly the following lemma.
Lemma 6.
Suppose Then we have
for or
What we really need is Lemma 6. Of course, we can say more about the Markov chains and For write
(53)
(54)
By the Markov property, it follows that
from which we get Thus, both and are candidates for probability generating functions. By checking carefully, we can show that and are indeed the probability generating functions of the random variables and defined in (4.2) and (4.2), respectively. Consequently, from (46) and (48) we get the following proposition.
Proposition 3.
5 Probability, joint probability and dependence
As seen in (29) and (31), to apply Theorem 3, we need to compute the probability of and characterize the dependence of and for or The following proposition is the main result of this section.
Proposition 4.
Suppose as Let or Then
(58)
and
(59)
(60)
(61)
Furthermore, there exists such that for all we have
(62)
(63)
and for we have
(64)
5.1 Weak cutpoints: Proof of Proposition 4 for
To warm up, we consider first the weak cutpoints. The following lemma gives the probability of and the joint probability of
Lemma 7.
We have
(65)
(66)
Proof. To begin with, we prove (65). Notice that the event occurs if and only if after hitting for the first time, the walk forms a number of (possibly ) backward loops at one after another and then never visits Thus, on account of (41), we get
which proves (65).
Next, we show (66). Notice that the event occurs if and only if the following events occur consecutively: after hitting for the first time, the walk forms a number of (possibly ) backward loops at one after another; then, restarting from it hits before it hits restarting from it forms a number of (possibly ) backward loops at avoiding one after another; finally, restarting from it never hits
Therefore, it follows from the Markov property and (14)-(16) and (41)-(42) that
which finishes the proof of (66).
Next, we give the proof of Proposition 4 for
Proof of Proposition 4 for Let Suppose as Then we have as and for all Thus, it follows from (65) that
(67)
which proves (58).
Next, taking (65) and (66) together, for we get
(68)
and
We thus get (60). Since for and for then from (68), we obtain
from which we get (59). Notice that by (67) we have Thus, due to (49) and (52), (61) is a direct consequence of (59).
Now fix By (67), there exists a number such that
(69)
Using again (65) and (66), we get
Applying Lemma 3, we can find a number such that
for all Therefore, we get
(70)
Taking (69) and (70) together, we infer that
(71)
for Choosing small enough such that and and letting from (70) and (71), we get (62) and (63), respectively. Finally, putting (69) and (71) together, we get (64). Proposition 4 is proved for
5.2 Local times and upcrossing times: Proof of Proposition 4 for or
Compared with the case it is much more involved to prove Proposition 4 for or The main difficulty is to compute the joint probability of for or
5.2.1 Probabilities and joint probabilities
To begin with, we compute the probabilities of and
Lemma 8.
For and we have
(72)
(73)
Proof. For and if and only if the following events occur consecutively: starting from the walk hits for the first time (with probability 1); restarting from the walk forms backward loops at and forward loops at finally, the walk forms an escape from to
We next determine the possible orders of those backward loops forward loops and one escape To this end, notice that there are jumps of the walk from to or We know exactly that when standing at for the last time, the walk must take a jump from to which forms the beginning of the escape at But there are possible ways to choose jumps from the other jumps to jump from to which are the beginnings of those forward loops
To sum up the above discussion, owing to (37), (40) and (41), we get
Finally, taking summation on both sides of (72) over from to we obtain (73). The lemma is proved.
The lemma below gives the joint probability of or
Lemma 9.
For integers and we have
(74)
and
(75)
Remark 6.
Setting and taking summation on both sides of (9) over and from to by some tedious manipulation, we get
But we do not need such a formula because is finite if and only if is finite for all while is infinite if and only if is infinite for some
We give two proofs of Lemma 9 from two different perspectives: a combinatorial one and a path-decomposition one.
Proof of Lemma 9 (Combinatorial perspective). For simplicity, write
Let and be the loops, moves and escapes defined in Definition 3 whose probabilities are given in (37)-(43).
Notice that the event occurs if and only if for some there are backward loops at forward loops at avoiding forward moves from to , forward loops at backward loops at avoiding backward moves from to and finally an escape from to See Figure 3 for the case and
Clearly,
(76)
If the order of those events is determined, in view of (37)-(43), by the Markov property, the probability that these events occur one by one is
(77)
which does not depend on the order of these events.
It remains to prove (78). Indeed, fix On one hand, since the local time at is the walk should visit for times, or in other words, it must jump away from to or for times. On the other hand, since the upcrossing time at is the walk should jump from to for times and jump from to for times. Those jumps from to form the beginnings of those backward loops while those jumps from to form the beginnings of those forward moves and forward loops But we know exactly that when standing at for the last time, it jumps from to and never returns to Except for such a jump, we do not know exactly which jumps of the other jumps are from to There are ways to choose jumps from those jumps to jump from to and form the beginnings of those backward loops
Up to now, we already know which jumps (out of jumps) of the walk are from to They form the beginnings of those forward loops and forward moves As mentioned above, the last jump of those jumps is from to which forms the beginning of a forward move There are ways to choose jumps from the remaining jumps to form the beginnings of those forward loops and the other jumps to form the beginnings of the other forward moves
We now know there are jumps that the walk takes from to and already know which jumps of them form the beginnings of the forward loops (which end with a jump from to ) and which jumps of them form the beginnings of the forward moves But since the upcrossing time at is except for the last time, every time the walk jumps forward from to after a number of steps, it must return to Thus, except for the last forward move from to the other forward moves from to must be followed by a number (possibly ) of forward loops and/or a number (possibly ) of backward loops and then, with probability a backward move from to The walk jumps from to or for times. We also know that when standing at for the last time, the walk jumps from to and never returns to There are possible ways to choose jumps from the other jumps to form the beginnings of those backward moves from to
Until now, we have fixed jumps from to which are the beginnings of those backward moves from to and one jump from to which is the beginning of the escape from to But the walk will leave for times. Therefore, except for the above already fixed jumps, the other jumps that the walk jumps away from should be the beginnings of those forward loops and backward loops at There are ways to choose jumps from those jumps to serve as the beginnings of those forward loops at So far, we have already determined the order of that escape and those loops and moves.
Finally, taking summation on both sides of (9) over and from to and to respectively, and using the formula by some tedious but elementary computation, we get (75). The lemma is proved.
Remark 7.
The above combinatorial proof of Lemma 9, related to loops, moves and escapes, is relatively short and easy to understand. The idea of such a proof comes from an anonymous referee of an early unpublished version of our manuscript [13], which contains only the criterion for the finiteness of and the limit distribution of But since some readers may doubt the rigor of such a combinatorial proof, we also provide a path-decomposition proof here, which is much longer.
Proof of Lemma 9 (Path-decomposition perspective). In this proof, for a -dimensional vector we stipulate that its th coordinate is For two -dimensional vectors and we write if and
We show only (9), since (75) can be derived from (9). Denote again Let and for let
Then we have
where for if Notice that on For let
(79)
(80)
Furthermore, let
(81)
and for set
(82)
Clearly, occurs if and only if
or equivalently,
Now fix and Denote
(83)
In order to compute the probability of the event for we define
It is easily seen that
(84)
Furthermore, we claim that
(85)
and
(86)
Indeed, the event occurs if and only if starting from the walk hits before it hits which occurs with probability Thus, we get (85). Next, fixing we show (5.2.1). Notice that the event occurs if and only if the following events occur:
(i) starting from the walk hits before it hits
(ii) restarting from before it hits it has jumps away from of which the last one is from to , and jumps are from to and the other jumps are from to .
Clearly, the probability of the first event equals To see the probability of the second event, notice that, standing at for the last time, the walk jumps to and, restarting from it hits before it hits with probability Next, we see what happens after the other times that the walk stands at For times, it moves from to and returns back to with probability And for times, it moves from to and restarting from it returns back to before it hits with probability But there are possible ways to choose jumps from those jumps to move forward from to Consequently, the probability of the second event equals
Therefore, (5.2.1) is true.
Notice that if occur, when standing at for the last time, the walk moves forward from to and never returns back to with probability Thus similar to (5.2.1), we obtain
(88)
We are now ready to compute the probability of the event defined in (83). Notice that Thus, if then for
5.2.2 Proof of Proposition 4 for or
Proof. Fix Suppose as Then we have as and for all Firstly, it follows directly from (72) and (73) that
(89)
and
We thus get (58) for or
Furthermore, it is straightforward to see from (72), (9) and (73), (75) that
which proves (59). Putting (59) and (89) together, on account of (47) and (50)-(51), we get (61).
Next, we show (62). For this purpose, on one hand, from (72) and (9), we get
(90)
On the other hand, taking (73) and (75) together, we infer that
(91)
Fix and . We claim that there exists such that , in (90) and in (91) satisfy
(92)
(93)
and
(94)
Indeed, choose such that
(95)
(96)
(97)
Since by Lemma 3, there exists such that
(98)
(99)
(100)
Thus, on accounting of (95) and (96), we have
for all We thus get (92). To prove (93) and (94), it is easy to see from (98)-(100) that
(101)
(102)
Consequently, putting (97), (101) and (102) together, we get (93) and (94). Finally, substituting (92)-(93) into (90) and (91), we get
(103)
for or But by (58), there exists a number such that
(104)
for or Therefore, letting from (103), we obtain
(105)
for or Taking (104) and (105) together, thanks to (47) and (50)-(51), we conclude that
(106)
for or Choosing small enough such that and from (103), (105) and (106), we get (62), (63) and (64).
6 Proof of Theorem 1
Proof. Let or or Let and be as in (4) and (5). Suppose that is increasing in for some and as Then clearly, we have
(107)
(108)
Suppose now If then by Proposition 1, the chain is recurrent almost surely. Therefore, almost surely, for Assume next From Lemma 6, we see that (28) is fulfilled. Putting (60) and (108) together, we deduce that (29) is fulfilled. Therefore, applying part (i) of Theorem 3, we conclude that
(109)
almost surely. Notice that for each we must have and Consequently, we infer from (109) that and almost surely for each and The convergent part of Theorem 1 is proved.
Next, we prove the divergent part. To this end, suppose that there exist and such that for all and From (17), we see that (30) is fulfilled. Moreover, from (107), we know that is increasing in Finally, from (62), we infer that (31) holds. Thus, it follows from part (ii) of Theorem 3 that
(110)
almost surely. Now, consider and Obviously, from (110), we obtain and almost surely. The divergent part of Theorem 1 is proved.
7 Expectations: Proof of Proposition 2
In this section, letting the perturbations be as in (9), we estimate the expectations of the cardinalities of the sets and For this purpose, we first give the proof of Lemma 1.
7.1 Proof of Lemma 1
Proof. It is shown in Csáki et al. [4] (see page 637 therein) that for some constants The proof of Lemma 1 is a refinement of such a result. We need the following lemma.
Lemma 10.
Suppose that is a nonnegative function, is decreasing in for some and Set Then,
(111)
Proof. Under the given conditions, by checking carefully, we can show that the series
is convergent. Thus, (111) is true.
Now we are ready to finish the proof of Lemma 1. Notice that as Letting then we have as Thus, applying Lemma 10 we get
(112)
Set for Fix By (112) there exists such that
(113)
Thus, we have
(114)
Let Since is decreasing in then we have for Thus, we deduce that
But by L’Hospital’s rule, Therefore, there exists a number such that
(115)
Substituting (115) into (7.1), we infer that for
Since
as (see Csáki et al. [4], page 637), there is such that for
7.2 Proof of Proposition 2
Proof. Fix integers For we set Then we have
It follows from Lemma 8 that
Noting that as and applying Lemma 1, we see that
Consequently, for there exists a number such that
But
Therefore, there exists a number such that for
(117)
(118)
As a result, we deduce from (117) that
Since is arbitrary, letting we get (10). Similarly, from (118) we get (11). Proposition 2 is proved.
8 Limit distributions: Proof of Theorem 2
This section is devoted to proving Theorem 2. The proof is based on the moment method. To start with, we give an auxiliary lemma to estimate and
8.1 An auxiliary lemma
Lemma 11.
Set and for let Suppose that Then there exists a number such that
(119)
(120)
Proof. Notice that
Then for choosing properly, we have
Fix We can find a number such that for all Thus we have
Similarly, we can find a number such that
Consequently, we get
As a result, we obtain
Notice that for some number we have
Therefore, we get
which proves (119). Letting in (119), we get (120). Lemma 11 is proved.
8.2 Convergence of the moments
An important step in proving Theorem 2 is to show the convergence of for or as We have the following lemma.
Lemma 12.
Fix integers Let and for Set for Then we have
Proof. Consider or Let be the one in (58). For set clearly we have
Let be the one defined in (18). Then
Since the values of are either or then taking Lemma 4 into account, we have
(121)
Fix Let and be as in Proposition 4 and Lemma 11, respectively. Set and denote temporarily
where and in the remainder of this proof we set Obviously, we have Therefore, we can write
(122)
Consider first the term From (64), (119) and (120), we get
Applying Lemma 5, we obtain
(123)
Similarly, we have
(124)
Next, we claim that
(125)
Indeed, for each from (61), we see that
(126)
Then, in view of (26), (119) and (120), we have
For from (119), (120) and (126), we deduce that
But it follows from Lemma 5 that
Therefore, we obtain
We thus finish the proof of (125).
8.3 Proof of Theorem 2
Proof. We have shown in Lemma 12 that for , and as Since as we thus have
Consequently, using Carleman’s test for the uniqueness of the moment problem (see e.g., [12, Chap.II, §12]), we conclude that
as where is an exponential random variable with Theorem 2 is proved.
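The moment method underlying this proof uses the fact that the r-th moment of an Exp(1) random variable is r!, a moment sequence satisfying Carleman's condition. That identity can be checked numerically; the truncation point and step count below are our arbitrary choices.

```python
from math import exp, factorial

def exp1_moment(r, upper=60.0, n=60_000):
    """Midpoint-rule approximation of E[X^r] = int_0^inf x^r e^{-x} dx
    for X ~ Exp(1); the tail beyond `upper` is negligible for small r."""
    h = upper / n
    return h * sum(((i + 0.5) * h) ** r * exp(-(i + 0.5) * h) for i in range(n))

# exp1_moment(r) is close to factorial(r) for small r.
```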
Acknowledgements: The authors are indebted to two anonymous referees, who carefully read an earlier unpublished version of our manuscript [13], provided very useful suggestions and comments, and in particular helped us improve the proofs of Lemma 5 and Lemma 9 in the current manuscript. We also thank Professor A. Földes for confirming to us that an additional monotonicity condition on is required for [4, Theorem 1.1]. This project is partially supported by the National Natural Science Foundation of China (Grant No. 11501008) and the Natural Science Foundation of Anhui Educational Committee (Grant No. 2023AH040025).
References
- [1] K.L. Chung, Markov chains with stationary transition probabilities, 2nd ed. Springer, New York, 1967.
- [2] E. Csáki, A. Földes, P. Révész, Transient nearest neighbor random walk on the line, J. Theoret. Probab. 22 (1) (2009) 100-122.
- [3] E. Csáki, A. Földes, P. Révész, Transient nearest neighbor random walk and Bessel process, J. Theoret. Probab. 22 (4) (2009) 992-1009.
- [4] E. Csáki, A. Földes, P. Révész, On the number of cutpoints of transient nearest neighbor random walk on the line, J. Theoret. Probab. 23 (2) (2010) 624-638.
- [5] M. Dwass, Branching processes in simple random walk, Proc. Amer. Math. Soc. 51 (2) (1975) 270-274.
- [6] N. James, R. Lyons, Y. Peres, A transient Markov chain with finitely many cutpoints, In: IMS Collections Probability and Statistics: Essays in Honor of David A. Freedman, 2 (2008) 24-29, Institute of Mathematical Statistics.
- [7] F.B. Knight, Random walks and a sojourn density process of Brownian motion, Trans. Amer. Math. Soc. 109 (1963) 56-86.
- [8] J. Lamperti, Criteria for the recurrence or transience of stochastic processes. I, J. Math. Anal. Appl. 1 (3-4) (1960) 314-330.
- [9] J. Lamperti, A new class of probability limit theorems, J. Math. Mech. 11 (5) (1962) 749-772.
- [10] J. Lamperti, Criteria for stochastic processes II: Passage-time moments, J. Math. Anal. Appl. 7 (1) (1963) 127-145.
- [11] V.V. Petrov, A generalization of the Borel-Cantelli lemma, Statist. Probab. Lett. 67 (3) (2004) 233-239.
- [12] A.N. Shiryaev, Probability, 2nd ed. Springer-Verlag, Berlin/Heidelberg/New York, 1996.
- [13] H.-M. Wang, Local time of transient random walk on the lattice of positive half line. Preprint, (2023) arXiv: 2306.14376.