
Local time, upcrossing time and weak cutpoints of a spatially inhomogeneous random walk on the line

Hua-Ming WANG†,§ and Lingyun WANG
†School of Mathematics and Statistics, Anhui Normal University, Wuhu 241003, China
§Email: [email protected]

Abstract

In this paper, we study a transient spatially inhomogeneous random walk with asymptotically zero drifts on the lattice of the positive half line. We give criteria for the finiteness of the number of points having exactly the same local time and/or upcrossing time, and of weak cutpoints (a point $x$ is called a weak cutpoint if the walk never returns to $x-1$ after its first upcrossing from $x$ to $x+1$). In addition, for the walk with some special local drifts, we also give the order of the expected number of these points in $[1,n]$. Finally, we show that, when properly scaled, the number of these points in $[1,n]$ converges in distribution to a random variable with the standard exponential distribution. Our results answer three conjectures related to the local time, the upcrossing time, and the weak cutpoints proposed by E. Csáki, A. Földes, P. Révész [J. Theoret. Probab. 23 (2) (2010) 624-638].

Keywords: Random walk; Local time; Upcrossing time; Cutpoints; Moment method

MSC 2020: 60J10, 60G50, 60J55

1 Introduction

In this paper, we study the local time, upcrossing time and weak cutpoints of a spatially inhomogeneous random walk on $\mathbb{Z}_{+}:=\{0,1,2,\ldots\}$ with asymptotically zero drifts. To introduce the model precisely, suppose that $p_{k},q_{k}$, $k\geq 0$, are numbers such that $p_{0}=1$, $q_{0}=0$, $p_{k}>0$, $q_{k}>0$ and $p_{k}+q_{k}=1$ for all $k\geq 1$. Let $X=\{X_{k}\}_{k\geq 0}$ be a Markov chain on $\mathbb{Z}_{+}$ starting from 0 with transition probabilities

\mathbb{P}(X_{k+1}=1\,|\,X_{k}=0)=p_{0},
\mathbb{P}(X_{k+1}=n+1\,|\,X_{k}=n)=p_{n},
\mathbb{P}(X_{k+1}=n-1\,|\,X_{k}=n)=q_{n},

for $n\geq 1$ and $k\geq 0$. Since the transition probabilities are spatially inhomogeneous, we call the chain $X$ a spatially inhomogeneous random walk. We remark that the chain $X$ is also called a birth and death chain in the literature.
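The dynamics above are straightforward to simulate; the sketch below assumes a hypothetical drift sequence $p_{n}=\frac{1}{2}+\frac{1}{4n}$ purely for illustration:

```python
import random

def simulate_walk(p, steps, seed=0):
    """Simulate the chain X started at 0: from a site n >= 1 it moves to
    n + 1 with probability p(n) and to n - 1 otherwise; from 0 it always
    moves to 1 (since p_0 = 1, q_0 = 0). Returns X_0, ..., X_steps."""
    rng = random.Random(seed)
    x, path = 0, [0]
    for _ in range(steps):
        x = x + 1 if x == 0 or rng.random() < p(x) else x - 1
        path.append(x)
    return path

# Hypothetical asymptotically-zero-drift choice: p_n = 1/2 + 1/(4n) -> 1/2.
path = simulate_walk(lambda n: 0.5 + 1 / (4 * n), 10_000)
```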

For $k\geq 1$, let $\rho_{k}=\frac{q_{k}}{p_{k}}$. Then we have the following criterion of recurrence or transience for the chain $X$, which can be found, e.g., in Chung’s book [1, Part I, §12].

Proposition 1.

The chain $X$ is transient if and only if $\sum_{k=1}^{\infty}\rho_{1}\cdots\rho_{k}<\infty$.

Notice that the local drift of the walk at $n\geq 1$ is $2p_{n}-1$. What we are concerned with are random walks with asymptotically zero drifts, that is, the case $p_{n}\rightarrow 1/2$ as $n\rightarrow\infty$, whose limit behaviors differ in many aspects from those of simple random walks. Such random walks date back to a series of works of Lamperti. In [8, 10], Lamperti gave criteria for recurrence or transience and for the existence of moments. Moreover, in [9], he proved an invariance principle which says that under certain conditions, $X_{[nt]}/\sqrt{n}$ converges weakly in $D[0,1]$ to a Bessel process. Such an invariance principle was further investigated and strengthened in Csáki et al. [3]. It is well known that a transient simple random walk grows linearly to infinity. But for a transient spatially inhomogeneous random walk, if the drifts are asymptotically zero, intuitively, the walk grows much more slowly than the simple random walk. The number of cutpoints on the path of the walk can reflect the speed of the walk. Roughly speaking, if the walk never returns to $[0,x]$ after its first entry into $[x+1,\infty)$, then $x$ is a cutpoint. By intuition, if the walk runs to infinity more quickly, there are more cutpoints. But for a spatially inhomogeneous random walk, things are very different. James et al. [6] gave an example of a transient random walk which has only finitely many cutpoints. Such a phenomenon never happens for the simple random walk, since it is known that the number of cutpoints of the simple random walk is either 0 (recurrent case) or $\infty$ (transient case). Along this line, Csáki et al. [4] provided a criterion to determine the finiteness of the number of cutpoints, which we quote below. We first give the following definition.

Definition 1.

For $x\in\mathbb{Z}_{+}$, we call $\xi(x):=\sum_{k=0}^{\infty}1_{\{X_{k}=x\}}$ the local time of the chain $X$ at $x$ and $\xi(x,\uparrow):=\sum_{k=0}^{\infty}1_{\{X_{k}=x,X_{k+1}=x+1\}}$ the upcrossing time of the chain $X$ from $x$ to $x+1$, respectively. If $\xi(x,\uparrow)=1$, we call $x$ a cutpoint of $X$. If $\xi(x)=1$, we call $x$ a strong cutpoint of $X$.
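In code, both quantities can be read off a (finite) trajectory directly; for a finite path this of course only approximates the infinite-horizon definitions:

```python
def local_time(path, x):
    """xi(x): number of visits of the trajectory to site x."""
    return sum(1 for v in path if v == x)

def upcrossing_time(path, x):
    """xi(x, up): number of transitions of the trajectory from x to x + 1."""
    return sum(1 for a, b in zip(path, path[1:]) if a == x and b == x + 1)

# On the short path 0,1,2,1,2,3: site 1 is visited twice and upcrossed twice;
# site 2 is visited twice but upcrossed only once.
demo = [0, 1, 2, 1, 2, 3]
```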

For $n\geq m\geq 0$, write

D(m,n):=\left\{\begin{array}{ll}0,&\text{if }n=m,\\ 1,&\text{if }n=m+1,\\ 1+\sum_{j=1}^{n-m-1}\rho_{m+1}\cdots\rho_{m+j},&\text{if }n\geq m+2,\end{array}\right. \qquad(4)

and denote

D(m):=\lim_{n\rightarrow\infty}D(m,n). \qquad(5)
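For later reference, $D(m,n)$ in (4) is easy to evaluate numerically (a sketch; the function `rho` below stands for any concrete choice of $\rho_{k}=q_{k}/p_{k}$):

```python
def D(rho, m, n):
    """D(m, n) from (4): 0 if n == m, and otherwise 1 plus the partial
    products rho_{m+1} * ... * rho_{m+j} for j = 1, ..., n - m - 1."""
    if n == m:
        return 0.0
    total, prod = 1.0, 1.0
    for j in range(1, n - m):
        prod *= rho(m + j)
        total += prod
    return total

# For constant rho_k = 1/2 the series is geometric: D(0, 3) = 1.75 and
# D(0) = lim_n D(0, n) = 2, matching (5).
```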

The theorem below gives a criterion for the finiteness of the number of cutpoints.

Theorem A (Csáki et al. [4]) Suppose $p_{i}\geq 1/2$, $i=1,2,\ldots$, and let $D(n)$, $n=1,2,\ldots$, be as in (5). If

\sum_{n=2}^{\infty}\frac{1}{D(n)\log n}<\infty,

then the chain $X$ has finitely many cutpoints almost surely. If $D(n)$ is increasing in $n$, there exist $n_{0}>0$ and $\delta>0$ such that $D(n)\leq\delta n\log n$ for all $n\geq n_{0}$, and

\sum_{n=2}^{\infty}\frac{1}{D(n)\log n}=\infty,

then $X$ has infinitely many strong cutpoints almost surely.

Remark 1.

We remark that the monotonicity condition on $D(n)$ is not contained in [4, Theorem 1.1]. But we learned from Professor A. Földes, one of the authors of [4], that such a condition is indeed required for proving the last display on page 634 of [4].

Csáki et al. proposed in [4] four conjectures related to the weak cutpoints, local time and upcrossing time of the walk. We quote here the first three of them word for word.

Open Problems (Csáki et al. [4])

  • 1.

    It would be interesting to know whether Theorem A also holds for the number of sites with $\xi(R)=a$ or $\xi(R,\uparrow)=a$ for any fixed integer $a>1$, i.e., whether we have the same criteria for $\{\xi(R)=a\}$ and $\{\xi(R,\uparrow)=a\}$ to occur infinitely often almost surely for any positive integer $a$.

  • 2.

    Call the site $R$ a weak cutpoint if, for some $k$, we have $X_{k}=R$, $X_{i}\leq R$, $i=0,1,\ldots,k-1$, and $X_{i}\geq R$, $i=k+1,k+2,\ldots$. One would like to know whether Theorem A can be extended for the number of weak cutpoints.

  • 3.

    It would be interesting to know whether Theorem A holds for cutpoints with a given local time, i.e., for $\{\xi(R)=a,\xi(R,\uparrow)=1\}$ or, in general, $\{\xi(R)=a,\xi(R,\uparrow)=b\}$ infinitely often almost surely, with positive integers $a,b$.

The main task of the paper is to answer the above open problems.

Definition 2.

Let $A$ and $B$ be two subsets of $\{1,2,\ldots\}$. We denote by

C(A,B):=\{x\in\mathbb{Z}_{+}:\xi(x)\in A,\ \xi(x,\uparrow)\in B\}

the collection of nonnegative sites at which the local time and the upcrossing time of the walk belong to $A$ and $B$, respectively. Moreover, consider the weak cutpoints defined in Open Problem 2 above. We denote by

C_{w}:=\{x\in\mathbb{Z}_{+}:x\text{ is a weak cutpoint}\}

the collection of all weak cutpoints.
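On a finite trajectory, the weak cutpoints of Open Problem 2 can be enumerated by brute force (a sketch; for a finite path the condition on the infinite future is only approximated):

```python
def weak_cutpoints(path):
    """Sites R such that for some k: X_k = R, X_i <= R for all i < k, and
    X_i >= R for all i > k. On a finite path this only approximates the
    event, which concerns the whole infinite trajectory."""
    found = set()
    for k, r in enumerate(path):
        if all(v <= r for v in path[:k]) and all(v >= r for v in path[k + 1:]):
            found.add(r)
    return found

# On 0,1,2,1,0,1,2,3 the site 1 is not a weak cutpoint: every visit to 1
# has either a larger site before it or a smaller site after it.
```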

In what follows, if $A=\{a\}$, we write simply $C(a,B)$ instead of $C(\{a\},B)$. The notations $C(A,a)$ and $C(a,b)$ can be understood similarly. Also, for simplicity, we write

C(*,a)=C(\{a,a+1,\ldots\},a)\ \text{ and }\ C(a,*)=C(a,\{1,\ldots,a\}). \qquad(6)

Notice that $C(*,a)$ is the collection of sites at which the upcrossing time of the walk is exactly $a$, while $C(a,*)$ is the collection of sites at which the local time of the walk is exactly $a$. Finally, we use $|C|$ to denote the cardinality of a set $C$.

The theorem below provides criteria to tell whether $C(A,a)$, $C(a,B)$ or $C_{w}$ are finite or not, answering the above-mentioned open problems.

Theorem 1.

Let $D(n)$, $n=1,2,\ldots$, be as in (5). Suppose that $\rho_{k}$ is increasing in $k$ for $k>N_{0}$, for some $N_{0}>0$, and that $\rho_{k}\rightarrow 1$ as $k\rightarrow\infty$. If

\sum_{n=2}^{\infty}\frac{1}{D(n)\log n}<\infty,

then almost surely, we have

  1. (i)

    $|C(A,a)|<\infty$ for each $a\in\{1,2,\ldots\}$ and $A\subseteq\{a,a+1,\ldots\}$;

  2. (ii)

    $|C(a,B)|<\infty$ for each $a\in\{1,2,\ldots\}$ and $B\subseteq\{1,2,\ldots,a\}$;

  3. (iii)

    $|C_{w}|<\infty$.

If $D(n)\leq\delta n\log n$, $n\geq n_{0}$ for some $n_{0}>0$ and $\delta>0$, and

\sum_{n=2}^{\infty}\frac{1}{D(n)\log n}=\infty,

then almost surely, we have

  • (i)

    $|C(A,a)|=\infty$ for each $a\in\{1,2,\ldots\}$ and $\emptyset\neq A\subseteq\{a,a+1,\ldots\}$;

  • (ii)

    $|C(a,B)|=\infty$ for each $a\in\{1,2,\ldots\}$ and $\emptyset\neq B\subseteq\{1,2,\ldots,a\}$;

  • (iii)

    $|C_{w}|=\infty$.

Remark 2.

We explain how Theorem 1 answers the above open problems. Noticing that $C(a,\{1,2,\ldots,a\})=\{R\in\mathbb{Z}_{+}:\xi(R)=a\}$ and $C(\{a,a+1,\ldots\},a)=\{R\in\mathbb{Z}_{+}:\xi(R,\uparrow)=a\}$, the criteria for $|C(a,\{1,2,\ldots,a\})|<\infty$ and $|C(\{a,a+1,\ldots\},a)|<\infty$ answer Open Problem 1. Clearly, the criterion for $|C_{w}|<\infty$ gives an answer to Open Problem 2, and the criterion for the finiteness of $C(a,b)$ provides an answer to Open Problem 3.

Fix $a\in\{1,2,\ldots\}$, $A\subseteq\{a,a+1,\ldots\}$ and $B\subseteq\{1,2,\ldots,a\}$. If $D(n)=\infty$ for some, and hence all, $n\geq 0$, then it follows from Proposition 1 that the chain $X$ is recurrent. Thus, for any site $x\geq 0$, we have $\xi(x)=\infty$. As a consequence, we get $|C(A,a)|=|C(a,B)|=|C_{w}|=0$, which coincides with the convergent part of Theorem 1 since in this case we always have $\sum_{n=2}^{\infty}\frac{1}{D(n)\log n}=0$. The interesting phenomenon arises when $D(n)<\infty$ for all $n$ and $\sum_{n=2}^{\infty}\frac{1}{D(n)\log n}<\infty$. In this case, the walk is transient, but almost surely we have $|C(A,a)|+|C(a,B)|+|C_{w}|<\infty$.

Finally, we mention that $C(*,1)$ and $C(1,1)$ are the collections of the cutpoints and strong cutpoints, respectively. From this point of view, Theorem 1 is a generalization of Theorem A.

As pointed out in Csáki et al. [4], the condition “$D(n)\leq\delta n\log n$, $n\geq n_{0}$ for some $n_{0}>0$ and $\delta>0$” is a technical one. We now give a concrete example for which such a condition is satisfied naturally.

Fix $\beta\in\mathbb{R}$. Clearly, there exists $n_{1}>0$ such that $\frac{1}{4}\left(\frac{1}{n}+\frac{1}{n(\log\log n)^{\beta}}\right)\in(0,1/2)$ for all $n>n_{1}$. For $n\geq 1$, set

r_{n}=\left\{\begin{array}{ll}\frac{1}{4}\left(\frac{1}{n}+\frac{1}{n(\log\log n)^{\beta}}\right),&\text{if }n\geq n_{1},\\ r_{n_{1}},&\text{if }n<n_{1},\end{array}\right. \qquad(9)

which serves as a perturbation to be added to the transition probability of the recurrent simple random walk. We emphasize that the perturbation $r_{n}$ in (9) is taken from [4] without change.

Lemma 1.

For $n\geq 1$, let $r_{n}$ be as in (9) and set $p_{n}=\frac{1}{2}+r_{n}$. Then $D(n)\sim n(\log\log n)^{\beta}$ as $n\rightarrow\infty$.

The proof of Lemma 1 is given in Section 7. It is easy to see from Lemma 1 that there is $n_{0}>0$ such that $D(n)<n\log n$, $n\geq n_{0}$. Thus, applying Theorem 1, we have the following corollary, which gives sharp criteria for the finiteness of the sets $C(A,a)$, $C(a,B)$ and $C_{w}$.

Corollary 1.

Fix $a\in\{1,2,\ldots\}$, $\emptyset\neq A\subseteq\{a,a+1,\ldots\}$ and $\emptyset\neq B\subseteq\{1,2,\ldots,a\}$. For $n\geq 1$, let $r_{n}$ be as in (9) and set $p_{n}=\frac{1}{2}+r_{n}$. If $\beta>1$, we have $|C(A,a)|+|C(a,B)|+|C_{w}|<\infty$ almost surely. If $\beta\leq 1$, we have $|C(A,a)|=|C(a,B)|=|C_{w}|=\infty$ almost surely.
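As a numerical sanity check on Lemma 1 (a sketch, not part of any proof): for $\beta=0$ the perturbation is essentially $r_{n}=1/(2n)$, so $p_{k}=\frac{1}{2}+\frac{1}{2k}$ and $\rho_{k}=(k-1)/(k+1)$; the partial products in (4) then telescope and $D(m)=m+1\sim m$, consistent with Lemma 1 since $(\log\log n)^{0}=1$.

```python
def D_series(rho, m, n):
    """Truncated series D(m, n) from (4); its limit in n is D(m)."""
    total, prod = 1.0, 1.0
    for j in range(1, n - m):
        prod *= rho(m + j)
        total += prod
    return total

# rho_k = (1/2 - 1/(2k)) / (1/2 + 1/(2k)) = (k - 1)/(k + 1); the partial
# products telescope, giving D(m) = m + 1 exactly in the limit.
d50 = D_series(lambda k: (k - 1) / (k + 1), 50, 100_000)
```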

Another interesting question is to estimate the cardinality of the set $C\cap[1,n]$ for $C=C_{w}$, $C(A,a)$ or $C(a,B)$ and $n$ large enough. We first compute the expectations of the cardinalities of these sets.

Proposition 2.

Suppose $a\geq b\geq 1$ are positive integers. For $n\geq 1$, let $r_{n}$ be as in (9) and set $p_{n}=\frac{1}{2}+r_{n}$. Then we have

\lim_{n\rightarrow\infty}\frac{(\log\log n)^{\beta}}{\log n}\mathbb{E}|C(a,b)\cap[1,n]|=\frac{1}{2^{a}}\binom{a-1}{b-1} \qquad(10)

and

\lim_{n\rightarrow\infty}\frac{(\log\log n)^{\beta}}{\log n}\mathbb{E}|C_{w}\cap[1,n]|=2. \qquad(11)
Remark 3.

Fix $a\in\{1,2,\ldots\}$, $A\subseteq\{a,a+1,\ldots\}$ and $B\subseteq\{1,2,\ldots,a\}$. By the linearity of expectation, it follows immediately from (10) that

\lim_{n\rightarrow\infty}\frac{(\log\log n)^{\beta}}{\log n}\mathbb{E}|C(A,a)\cap[1,n]|=\sum_{x\in A}\frac{1}{2^{x}}\binom{x-1}{a-1}, \qquad(12)
\lim_{n\rightarrow\infty}\frac{(\log\log n)^{\beta}}{\log n}\mathbb{E}|C(a,B)\cap[1,n]|=\frac{1}{2^{a}}\sum_{x\in B}\binom{a-1}{x-1}. \qquad(13)

Recall that $C(a,*)$ is the collection of sites at which the local time of the walk is exactly $a$, while $C(*,a)$ is the collection of sites at which the upcrossing time of the walk is exactly $a$. From (12) and (13), we get

\lim_{n\rightarrow\infty}\frac{(\log\log n)^{\beta}}{\log n}\mathbb{E}|C(*,a)\cap[1,n]|=1,
\lim_{n\rightarrow\infty}\frac{(\log\log n)^{\beta}}{\log n}\mathbb{E}|C(a,*)\cap[1,n]|=1/2.
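These two constants can be double-checked numerically from (12) and (13); they reduce to the binomial identities $\sum_{x\geq a}2^{-x}\binom{x-1}{a-1}=1$ and $2^{-a}\sum_{x=1}^{a}\binom{a-1}{x-1}=1/2$ (a sketch):

```python
from math import comb

a = 5
# Upcrossing-time constant: sum over x >= a of 2^-x * C(x-1, a-1); the tail
# beyond x = 300 is negligible.
upcross_const = sum(comb(x - 1, a - 1) / 2**x for x in range(a, 300))
# Local-time constant: 2^-a * sum over x = 1, ..., a of C(a-1, x-1).
local_const = sum(comb(a - 1, x - 1) for x in range(1, a + 1)) / 2**a
```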

Inspired by Proposition 2, one may expect that $\frac{(\log\log n)^{\beta}}{\log n}|C(a,b)\cap[1,n]|\rightarrow\frac{1}{2^{a}}\binom{a-1}{b-1}$ for $a\geq b\geq 1$ or $\frac{(\log\log n)^{\beta}}{\log n}|C_{w}\cap[1,n]|\rightarrow 2$ almost surely as $n\rightarrow\infty$. However, this is not the case. The main reason is that the events $\{x\in C(a,b)\}$ (or $\{x\in C_{w}\}$), $x=1,2,\ldots,n$, are not independent. We have the following theorem.

Theorem 2.

Suppose $a\geq b\geq 1$ are positive integers. Let $r_{1}=1/4$ and $r_{n}=\frac{1}{2n}$ for $n\geq 2$. Set $p_{n}=\frac{1}{2}+r_{n}$ for $n\geq 1$. Then

\frac{|C(a,b)\cap[1,n]|}{2^{-a}\binom{a-1}{b-1}\log n}\overset{d}{\rightarrow}S,\quad \frac{|C(*,a)\cap[1,n]|}{\log n}\overset{d}{\rightarrow}S\quad\text{ and }\quad\frac{|C_{w}\cap[1,n]|}{2\log n}\overset{d}{\rightarrow}S

as $n\rightarrow\infty$, where (and in what follows) the notation $\overset{d}{\rightarrow}$ denotes convergence in distribution and $S$ is an exponential random variable with $\mathbb{P}(S>t)=e^{-t}$, $t>0$.

Remark 4.

Theorem 2 is proved by the standard moment method. The key step is to find the limits of the moments of $\frac{2^{a}|C(a,b)\cap[1,n]|}{\binom{a-1}{b-1}\log n}$, $\frac{|C(*,a)\cap[1,n]|}{\log n}$ or $\frac{|C_{w}\cap[1,n]|}{2\log n}$, which depend on the joint probability of $\{j_{1}\in C,\ldots,j_{m}\in C\}$ for $C=C(a,b)$, $C(*,a)$ or $C_{w}$, $0<j_{1}<j_{2}<\cdots<j_{m}$. Except for the special case $a\geq b=1$, it is hard to give explicit formulae for such joint probabilities. But we can show that the events $\{j_{i}\in C\}$, $i=1,\ldots,m$, satisfy a certain Markov property, that is, $\mathbb{P}(j_{s+1}\in C|j_{s}\in C,\ldots,j_{1}\in C)=\mathbb{P}(j_{s+1}\in C|j_{s}\in C)$ for $1\leq s<m$, $C=C(a,b)$, $C(*,a)$ or $C_{w}$. So it suffices to compute and estimate the two-dimensional joint probabilities of these events. Fix an integer $a>1$. For general $A\subseteq\{a,a+1,\ldots\}$ and $B\subseteq\{1,2,\ldots,a\}$, we currently do not know how to get the limit distributions of $|C(A,a)\cap[1,n]|$ and $|C(a,B)\cap[1,n]|$. For example, for $a>1$, $C(a,*)\cap[1,n]$ is the collection of sites in $[1,n]$ at which the local time of the walk is exactly $a$. But $\{\xi(x)\}_{x\geq 0}$ is not a Markov chain, nor is it possible for us to compute and estimate the joint probability of $\{\xi(j_{1})=a,\ldots,\xi(j_{m})=a\}$. Therefore, by our method, we cannot find the limit distribution of $|C(a,*)\cap[1,n]|$.

Outline of the paper. The rest of the paper is organized as follows. Firstly, besides introducing the exit probabilities of the walk, we prove some combinatorial results in Section 2, which play key roles in proving the main results. Then, by summarizing the proofs of Csáki et al. [4, Theorem 1.1] and James et al. [6, Theorem 3.1], we obtain a version of the Borel-Cantelli lemma in Section 3, which will be used to prove Theorem 1. In Section 4, we express the upcrossing times in terms of loops, moves and escapes. The probability of $\{x\in C\}$, and the joint probability and dependence of $\{x\in C\}$ and $\{y\in C\}$ for $C=C_{w}$, $C(*,a)$ or $C(a,b)$, are studied in Section 5. Section 6 is devoted to proving Theorem 1. Finally, Proposition 2 and Theorem 2 are proved in Section 7 and Section 8, respectively.

Notations: When writing $A(n)\sim B(n)$, we mean $A(n)/B(n)\rightarrow 1$ as $n\rightarrow\infty$. The notation $A(n)=O(B(n))$ means that there is a constant $c>0$ such that $|A(n)|<cB(n)$ for all $n$ large enough. We also adopt the convention that an empty product equals 1 and an empty sum equals 0. Finally, throughout the paper, $0<c<\infty$ denotes a constant whose value may change from line to line.

2 Auxiliary results

In this section, we first introduce the exit probabilities of the walk from certain intervals and then give some combinatorial results that are helpful for the proofs of both Theorem 1 and Theorem 2.

2.1 Exit probabilities

For $n\geq k\geq m\geq 0$, set

P_{k}(m,n,-):=\mathbb{P}(X\text{ hits }m\text{ before it hits }n\,|\,X_{0}=k), \qquad(14)
P_{k}(m,n,+):=\mathbb{P}(X\text{ hits }n\text{ before it hits }m\,|\,X_{0}=k), \qquad(15)

which are the exit probabilities of the walk from $[m,n]$. Evidently, we have

P_{k}(m,n,-)+P_{k}(m,n,+)=1,\quad\text{for }n\geq k\geq m\geq 0.

Set also

P_{0}(-1,n,+)=\mathbb{P}(X\text{ hits }n\text{ before it hits }-1\,|\,X_{0}=0).

Since $q_{0}=0$, we must have

P_{0}(-1,n,+)=1\quad\text{for }n\geq 0.

The following lemma, which provides formulae for the exit probabilities, is from Csáki et al. [2]; see Lemma 2.2 therein.

Lemma 2.

For $n\geq k\geq m\geq 0$, we have

P_{k}(m,n,-)=1-\frac{D(m,k)}{D(m,n)}.

It is easily seen from Lemma 2 that for $0\leq x<y$,

P_{x+1}(x,y,+)=\frac{1}{D(x,y)},\quad P_{x+1}(x,\infty,+)=\frac{1}{D(x)},\quad P_{y-1}(x,y,+)=\frac{D(x,y-1)}{D(x,y)}. \qquad(16)
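The first formula in (16) can be verified by Monte Carlo simulation (a sketch; the constant drift $p_{x}=0.55$ below is a hypothetical choice for illustration):

```python
import random

def exit_up_prob(p, k, m, n, trials=20_000, seed=1):
    """Monte Carlo estimate of P_k(m, n, +): starting from k, the chance
    that the walk hits n before m, with up-step probability p(x) at x."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x = k
        while m < x < n:
            x += 1 if rng.random() < p(x) else -1
        hits += (x == n)
    return hits / trials

def D_val(rho, m, n):
    """D(m, n) from (4)."""
    total, prod = 1.0, 1.0
    for j in range(1, n - m):
        prod *= rho(m + j)
        total += prod
    return total

# Constant drift p_x = 0.55, so rho_x = 0.45/0.55; by (16) with x = 2, y = 8,
# the exact escape probability is P_3(2, 8, +) = 1/D(2, 8).
exact = 1 / D_val(lambda j: 0.45 / 0.55, 2, 8)
est = exit_up_prob(lambda x: 0.55, 3, 2, 8)
```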

Notice that these exit or escape probabilities are written in terms of $D(m)$ and $D(m,n)$. By some direct computation, we see that

D(m)=1+\rho_{m+1}D(m+1),\quad m\geq 0,
D(m,n)=1+\rho_{m+1}D(m+1,n),\quad n>m\geq 0,
D(m,n)=D(m,n-1)+\rho_{m+1}\cdots\rho_{n-1},\quad n>m\geq 0.

Furthermore, it is shown in Csáki et al. [4] (see display (2.3) therein) that

\frac{D(m,n)}{D(m)}=1-\prod_{i=m}^{n-1}\left(1-\frac{1}{D(i)}\right),\quad n>m\geq 0. \qquad(17)

Finally, we give the following estimates.

Lemma 3.

Suppose that $\rho_{k}\rightarrow 1$ as $k\rightarrow\infty$. Then, for each $\varepsilon>0$, there exists $N>0$, which is independent of $x$, such that for $x>N$, $y>x+N$, we have

\frac{1}{D(x)}\leq\varepsilon,\quad\frac{1}{D(x,y)}\leq\varepsilon,\quad 1-\varepsilon\leq\frac{D(x-1)}{D(x)}\leq 1+\varepsilon,
1-\varepsilon\leq\frac{D(x-1,y)}{D(x,y)}\leq 1+\varepsilon,\quad 1\leq\frac{D(x,y)}{D(x,y-1)}\leq 1+\varepsilon.

Proof.  Fix $0<\eta<1$. Choose $M_{1}>0$, which is independent of $x$, such that $(1-\eta)^{y-x-1}<\eta$ for all $y-x\geq M_{1}$. Since $\lim_{k\rightarrow\infty}\rho_{k}=1$, there is a number $M_{2}>0$ such that for $k\geq M_{2}$, we have $1+\eta>\rho_{k}^{-1}\vee\rho_{k}\geq\rho_{k}^{-1}\wedge\rho_{k}>1-\eta$. Let $N=M_{1}\vee M_{2}$. Then, we have

D(x,y)=1+\sum_{k=x+1}^{y-1}\rho_{x+1}\cdots\rho_{k}>1+\sum_{k=1}^{y-x-1}(1-\eta)^{k}
=\frac{1-(1-\eta)^{y-x}}{\eta}\geq\frac{1-\eta}{\eta},\quad x>N,\ y>x+N

and

1\leq\frac{D(x,y)}{D(x,y-1)}=1+\frac{\rho_{x+1}\cdots\rho_{y-1}}{1+\sum_{k=x+1}^{y-2}\rho_{x+1}\cdots\rho_{k}}
=1+\frac{1}{\sum_{k=x+1}^{y-1}\rho_{k}^{-1}\cdots\rho_{y-1}^{-1}}\leq 1+\frac{1}{\sum_{k=1}^{y-x-1}(1-\eta)^{k}}
=1+\frac{\eta}{(1-\eta)(1-(1-\eta)^{y-x-1})}
\leq 1+\frac{\eta}{(1-\eta)^{2}},\quad x>N,\ y>x+N.

Moreover, since $\frac{D(x-1,y)}{D(x,y)}=\rho_{x}+\frac{1}{D(x,y)}$, we have

1-\eta<\frac{D(x-1,y)}{D(x,y)}<1+\eta+\frac{\eta}{1-\eta},\quad x>N,\ y>x+N.

Therefore, fixing $\varepsilon>0$, the lemma is proved by choosing $\eta$ small enough. $\Box$

2.2 Some combinatorial results

To begin with, we study the cardinality of certain simplexes. For positive integers $a\geq j\geq i\geq 1$, set

S(a,j)=\left\{(v_{1},\ldots,v_{j}):\ v_{k}\in\mathbb{Z}_{+}\setminus\{0\},\ j\geq k\geq 1,\ \sum_{k=1}^{j}v_{k}=a\right\}, \qquad(18)
\tilde{S}(a,j)=\left\{(v_{1},\ldots,v_{j}):\ v_{k}\in\mathbb{Z}_{+},\ j\geq k\geq 1,\ v_{j}\geq 1,\ \sum_{k=1}^{j}v_{k}=a\right\},
\tilde{S}_{i}(a,j)=\left\{(v_{1},\ldots,v_{j})\in\tilde{S}(a,j):\ \sum_{k=1}^{j}1_{\{v_{k}\neq 0\}}=i\right\}.
Lemma 4.

For $a\geq j\geq i\geq 1$, we have $|S(a,j)|=\binom{a-1}{j-1}$ and $|\tilde{S}_{i}(a,j)|=\binom{j-1}{i-1}\binom{a-1}{i-1}$.

Proof.  The first assertion is obvious. Indeed, arrange $a$ balls in a row. This row has $a-1$ gaps between the balls. Choosing $j-1$ of those gaps and erecting a wall at each of the selected places yields an arrangement of $j$ nonempty groups of balls. Obviously, this can be done in $\binom{a-1}{j-1}$ ways. Thus $|S(a,j)|=\binom{a-1}{j-1}$.

Next, we compute the cardinality of $\tilde{S}_{i}(a,j)$. Consider a vector $(v_{1},\ldots,v_{j})\in\tilde{S}_{i}(a,j)$. It has exactly $i$ strictly positive coordinates, and by definition $v_{j}\geq 1$. Thus, for some $k_{1}<k_{2}<\cdots<k_{i-1}<j$, we have $v_{k_{l}}>0$, $l=1,\ldots,i-1$, and $v_{k_{1}}+\cdots+v_{k_{i-1}}+v_{j}=a$. But there are $\binom{j-1}{i-1}$ ways to choose these $i-1$ coordinates from the first $j-1$ coordinates. Therefore, we must have

|\tilde{S}_{i}(a,j)|=\binom{j-1}{i-1}|S(a,i)|=\binom{j-1}{i-1}\binom{a-1}{i-1}.

The lemma is proved. \Box
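Both counting formulae of Lemma 4 are easy to confirm by brute-force enumeration for small parameters (a sketch, not part of the proof):

```python
from itertools import product
from math import comb

def S(a, j):
    """All j-tuples of strictly positive integers summing to a."""
    return [v for v in product(range(1, a + 1), repeat=j) if sum(v) == a]

def S_tilde_i(a, j, i):
    """Tuples in S~(a, j): nonnegative entries summing to a with v_j >= 1,
    restricted to exactly i nonzero coordinates."""
    return [v for v in product(range(a + 1), repeat=j)
            if sum(v) == a and v[-1] >= 1 and sum(1 for u in v if u) == i]
```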

The next lemma is crucial for proving Theorem 2.

Lemma 5.

Write $j_{0}=0$ and fix $k\geq 1$, $n_{2}\geq 1$. We have

\lim_{n\rightarrow\infty}\frac{1}{(\log n)^{k}}\sum_{\substack{0<j_{1}<\cdots<j_{k}\leq n\\ j_{i}-j_{i-1}\geq n_{2},\ i=1,\ldots,k}}\frac{1}{j_{1}(j_{2}-j_{1})\cdots(j_{k}-j_{k-1})}=1. \qquad(19)
Remark 5.

We have not seen formula (19) in the literature we are aware of, so we do not know whether it is actually new. If it is, such a formula may be of independent interest in mathematical analysis.
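For $k=2$, the left-hand side of (19) can be evaluated numerically before taking the limit; convergence is only logarithmic, so moderate $n$ still gives values noticeably below 1 (a numerical illustration, not part of the proof):

```python
from math import log

def lhs_19(n, n2=3):
    """The quantity inside the limit in (19) for k = 2: the sum of
    1/(j1 * (j2 - j1)) over 0 < j1 < j2 <= n with j1 >= n2 and
    j2 - j1 >= n2, divided by (log n)^2."""
    total = 0.0
    for j1 in range(n2, n + 1):
        for j2 in range(j1 + n2, n + 1):
            total += 1.0 / (j1 * (j2 - j1))
    return total / log(n) ** 2
```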

Proof.  To start with, we prove (19) for $n_{2}=1$. If $k=1$, it is clear that

\lim_{n\rightarrow\infty}\frac{1}{\log n}\sum_{j_{1}=1}^{n}\frac{1}{j_{1}}=1.

Next, we fix $k\geq 2$. Notice that

\frac{1}{j_{1}(j_{2}-j_{1})\cdots(j_{k}-j_{k-1})}=\prod_{i=1}^{k-1}\left(\frac{1}{j_{i}}+\frac{1}{j_{i+1}-j_{i}}\right)\frac{1}{j_{k}}.

We thus have

\sum_{0<j_{1}<\cdots<j_{k}\leq n}\frac{1}{j_{1}(j_{2}-j_{1})\cdots(j_{k}-j_{k-1})}
=\sum_{j_{k}=k}^{n}\frac{1}{j_{k}}\sum_{j_{k-1}=k-1}^{j_{k}-1}\left(\frac{1}{j_{k-1}}+\frac{1}{j_{k}-j_{k-1}}\right)\cdots\sum_{j_{1}=1}^{j_{2}-1}\left(\frac{1}{j_{1}}+\frac{1}{j_{2}-j_{1}}\right). \qquad(20)

We claim that for $1\leq i\leq k-1$,

\sum_{j_{i}=i}^{j_{i+1}-1}\left(\frac{1}{j_{i}}+\frac{1}{j_{i+1}-j_{i}}\right)i(i-1+\log j_{i})^{i-1}\leq(i+1)(i+\log j_{i+1})^{i}, \qquad(21)
\sum_{j_{i}=i}^{j_{i+1}-1}\left(\frac{1}{j_{i}}+\frac{1}{j_{i+1}-j_{i}}\right)i(\log j_{i}-\log(i-1))^{i-1}\geq(i+1)(\log j_{i+1}-\log i)^{i}. \qquad(22)

Indeed, (21) holds trivially for $i=1$. For $2\leq i\leq k-1$, since

\sum_{j_{i}=i}^{j_{i+1}-1}\frac{i(i-1+\log j_{i})^{i-1}}{j_{i}}\leq i\int_{i-1}^{j_{i+1}-1}\frac{(i-1+\log(x+1))^{i-1}}{x}dx
\leq i\int_{i-1}^{j_{i+1}-1}\frac{(i+\log x)^{i-1}}{x}dx\leq(i+\log j_{i+1})^{i}

and

\sum_{j_{i}=i}^{j_{i+1}-1}\frac{i(i-1+\log j_{i})^{i-1}}{j_{i+1}-j_{i}}=\sum_{j_{i}=1}^{j_{i+1}-i}\frac{i(i-1+\log(j_{i+1}-j_{i}))^{i-1}}{j_{i}}
\leq\left(\sum_{j_{i}=1}^{[j_{i+1}/2]}+\sum_{j_{i}=1+[j_{i+1}/2]}^{j_{i+1}-i}\right)\frac{i(i-1+\log(j_{i+1}-j_{i}))^{i-1}}{j_{i}}
\leq i(i+\log j_{i+1})^{i-1}\sum_{j_{i}=1}^{[j_{i+1}/2]}\frac{1}{j_{i}}+i(i+\log(j_{i+1}/2))^{i-1}\sum_{j_{i}=1+[j_{i+1}/2]}^{j_{i+1}-i}\frac{1}{j_{i}}
\leq i(i+\log(j_{i+1}/2))^{i-1}(1+\log(j_{i+1}/2))+i\log 2\,(i+\log(j_{i+1}/2))^{i-1}
\leq i(i+\log j_{i+1})^{i},

then (21) is true. We now proceed to prove (22). To this end, note that

\sum_{j_{i}=i}^{j_{i+1}-1}\frac{i(\log j_{i}-\log(i-1))^{i-1}}{j_{i}}\geq i\int_{i}^{j_{i+1}}\frac{(\log(x-1)-\log(i-1))^{i-1}}{x}dx
\geq i\int_{i}^{j_{i+1}}\frac{(\log x-\log i)^{i-1}}{x}dx=(\log j_{i+1}-\log i)^{i}

and

\sum_{j_{i}=i}^{j_{i+1}-1}\frac{i(\log j_{i}-\log(i-1))^{i-1}}{j_{i+1}-j_{i}}=\sum_{j_{i}=1}^{j_{i+1}-i}\frac{i(\log(j_{i+1}-j_{i})-\log(i-1))^{i-1}}{j_{i}}
\geq\sum_{j_{i}=1}^{[j_{i+1}/i]}\frac{i(\log(j_{i+1}-j_{i})-\log(i-1))^{i-1}}{j_{i}}
\geq i\left(\log\frac{(i-1)j_{i+1}}{i}-\log(i-1)\right)^{i-1}\sum_{j_{i}=1}^{[j_{i+1}/i]}\frac{1}{j_{i}}
\geq i(\log j_{i+1}-\log i)^{i-1}\log([j_{i+1}/i]+1)
\geq i(\log j_{i+1}-\log i)^{i}.

We thus get (22).

Using (21) and (22) time and again, from (20), we get

\sum_{0<j_{1}<\cdots<j_{k}\leq n}\frac{1}{j_{1}(j_{2}-j_{1})\cdots(j_{k}-j_{k-1})}
\leq\sum_{j_{k}=k}^{n}\frac{1}{j_{k}}k(k-1+\log j_{k})^{k-1}\leq\int_{k-1}^{n}\frac{k(k-1+\log(x+1))^{k-1}}{x}dx
\leq\int_{k-1}^{n}\frac{k(k+\log x)^{k-1}}{x}dx\leq(k+\log n)^{k} \qquad(23)

and

\sum_{0<j_{1}<\cdots<j_{k}\leq n}\frac{1}{j_{1}(j_{2}-j_{1})\cdots(j_{k}-j_{k-1})}
\geq\sum_{j_{k}=k}^{n}\frac{1}{j_{k}}k(\log j_{k}-\log(k-1))^{k-1}\geq\int_{k}^{n+1}\frac{k(\log(x-1)-\log(k-1))^{k-1}}{x}dx
\geq\int_{k}^{n}\frac{k(\log x-\log k)^{k-1}}{x}dx=(\log n-\log k)^{k} \qquad(24)

Dividing (23) and (24) by $(\log n)^{k}$ and taking the upper and lower limits, we obtain

\lim_{n\rightarrow\infty}\frac{1}{(\log n)^{k}}\sum_{0<j_{1}<\cdots<j_{k}\leq n}\frac{1}{j_{1}(j_{2}-j_{1})\cdots(j_{k}-j_{k-1})}=1. \qquad(25)

Applying (25), for any $l_{0}\geq 1$ and $1\leq i\leq k$, we have

\lim_{n\rightarrow\infty}\frac{1}{(\log n)^{k}}\sum_{\substack{0<j_{1}<\cdots<j_{k}\leq n,\\ j_{i}-j_{i-1}=l_{0}}}\frac{1}{j_{1}(j_{2}-j_{1})\cdots(j_{k}-j_{k-1})}
=\frac{1}{l_{0}}\lim_{n\rightarrow\infty}\frac{(\log(n-l_{0}))^{k-1}}{(\log n)^{k}}\cdot\frac{1}{(\log(n-l_{0}))^{k-1}}\sum_{0<j_{1}<\cdots<j_{k-1}\leq n-l_{0}}\frac{1}{j_{1}(j_{2}-j_{1})\cdots(j_{k-1}-j_{k-2})}
=0.

Consequently, we infer that for any $1\leq i\leq k$,

\lim_{n\rightarrow\infty}\frac{1}{(\log n)^{k}}\sum_{\substack{0<j_{1}<\cdots<j_{k}\leq n,\\ j_{i}-j_{i-1}\leq n_{2}}}\frac{1}{j_{1}(j_{2}-j_{1})\cdots(j_{k}-j_{k-1})}=0. \qquad(26)

Putting (25) and (26) together, we get (19). Lemma 5 is proved. \Box

3 A version of the Borel-Cantelli lemma

The following Theorem 3 can be seen as a version of the Borel-Cantelli lemma. Its proof is essentially a summary of the proofs of Csáki et al. [4, Theorem 1.1] and James et al. [6, Theorem 3.1], but some details are different, so we provide a complete proof here. We remark that to prove Theorem 1, we just need to check one by one that all conditions of Theorem 3 are fulfilled.

Theorem 3 ([4], [6]).

For n>m0,n>m\geq 0, let D(m)1D(m)\geq 1 and D(m,n)1D(m,n)\geq 1 be certain numbers. Assume that (Ω,,)(\Omega,\mathcal{F},\mathbb{P}) is a probability space and Γn,n0\Gamma_{n},n\geq 0 are \mathcal{F}-measurable events such that

limnD(n)(Γn)=σ\displaystyle\lim_{n\rightarrow\infty}D(n)\mathbb{P}(\Gamma_{n})=\sigma (27)

for some 0<σ<.0<\sigma<\infty.

(i) Suppose that n=21D(n)logn<,\sum_{n=2}^{\infty}\frac{1}{D(n)\log n}<\infty,

(Γk+1cΓnc|ΓiΓk)=(Γk+1cΓnc|Γk), 0ik<n,\displaystyle\mathbb{P}(\Gamma_{k+1}^{c}\cdots\Gamma_{n}^{c}|\Gamma_{i}\Gamma_{k})=\mathbb{P}(\Gamma_{k+1}^{c}\cdots\Gamma_{n}^{c}|\Gamma_{k}),\ 0\leq i\leq k<n, (28)

and there exists a number N1>0N_{1}>0 such that

D(m,n)c(nm),(Γm|Γn)>cD(m,n),n>m>N1.\displaystyle D(m,n)\leq c(n-m),\ \mathbb{P}(\Gamma_{m}|\Gamma_{n})>\frac{c}{D(m,n)},\ n>m>N_{1}. (29)

Then we have (i=0j=iΓj)=0.\mathbb{P}\left(\bigcap_{i=0}^{\infty}\bigcup_{j=i}^{\infty}\Gamma_{j}\right)=0.

(ii) Suppose that n=21D(n)logn=,\sum_{n=2}^{\infty}\frac{1}{D(n)\log n}=\infty, there exist n0>0n_{0}>0 and δ>0\delta>0 such that D(n)δnlognD(n)\leq\delta n\log n for all nn0,n\geq n_{0}, and

D(m,n)D(m)=1i=mn1(11D(i)),n>m0.\displaystyle\frac{D(m,n)}{D(m)}=1-\prod_{i=m}^{n-1}\left(1-\frac{1}{D(i)}\right),\ n>m\geq 0. (30)

In addition, assume that for each ε>0,\varepsilon>0, there exists N>0N>0 such that D(k),kND(k),k\geq N is increasing in k,k, and

D(m,n)D(m)(ΓmΓn)(Γm)(Γn)1+ε,m>N,nm>N.\displaystyle\frac{D(m,n)}{D(m)}\frac{\mathbb{P}(\Gamma_{m}\Gamma_{n})}{\mathbb{P}(\Gamma_{m})\mathbb{P}(\Gamma_{n})}\leq 1+\varepsilon,\ m>N,\ n-m>N. (31)

Then we have (i=0j=iΓj)=1.\mathbb{P}\left(\bigcap_{i=0}^{\infty}\bigcup_{j=i}^{\infty}\Gamma_{j}\right)=1.

Proof.  We prove part (i) first. Suppose all conditions of part (i) are fulfilled. For 0m<k,0\leq m<k, set Am,k:=j=2m+12k1Γj,A_{m,k}:=\sum_{j=2^{m}+1}^{2^{k}}1_{\Gamma_{j}}, lm=max{2m+1<j2m+1: 1Γj=1}l_{m}=\max\left\{2^{m}+1<j\leq 2^{m+1}:\ 1_{\Gamma_{j}}=1\right\} and write

bm:=mink(2m,2m+1]i=12m11D(ki,k).b_{m}:=\min_{k\in(2^{m},2^{m+1}]}\sum_{i=1}^{2^{m-1}}\frac{1}{D(k-i,k)}.

Let N1N_{1} be as in (29). Taking (28) and (29) into consideration, for m1+logN1log2=:M1m\geq 1+\frac{\log N_{1}}{\log 2}=:M_{1} (or equivalently 2m1>N12^{m-1}>N_{1}), 2m<k2m+12^{m}<k\leq 2^{m+1} and 2m1<i<k2^{m-1}<i<k we have

\displaystyle\mathbb{P} (Γi|Am,m+1>0,lm=k)=(ΓiΓkΓk+1cΓ2m+1c)(ΓkΓk+1cΓ2m+1c)=(Γi|Γk)cD(i,k).\displaystyle(\Gamma_{i}|A_{m,m+1}>0,l_{m}=k)=\frac{\mathbb{P}\left(\Gamma_{i}\Gamma_{k}\Gamma_{k+1}^{c}\cdots\Gamma_{2^{m+1}}^{c}\right)}{\mathbb{P}\left(\Gamma_{k}\Gamma_{k+1}^{c}\cdots\Gamma_{2^{m+1}}^{c}\right)}=\mathbb{P}(\Gamma_{i}|\Gamma_{k})\geq\frac{c}{D(i,k)}.

Thus, for mM1,m\geq M_{1},

j=2m1+12m+1\displaystyle\sum_{j=2^{m-1}+1}^{2^{m+1}} (Γj)=𝔼(Am1,m+1)\displaystyle\mathbb{P}(\Gamma_{j})=\mathbb{E}\left(A_{m-1,m+1}\right)
k=2m+12m+1(Am,m+1>0,lm=k)𝔼(Am1,m+1|Am,m+1>0,lm=k)\displaystyle\geq\sum_{k=2^{m}+1}^{2^{m+1}}\mathbb{P}(A_{m,m+1}>0,l_{m}=k)\mathbb{E}(A_{m-1,m+1}|A_{m,m+1}>0,l_{m}=k)
=k=2m+12m+1(Am,m+1>0,lm=k)i=2m1+1kP(Γi|Am,m+1>0,lm=k)\displaystyle=\sum_{k=2^{m}+1}^{2^{m+1}}\mathbb{P}(A_{m,m+1}>0,l_{m}=k)\sum_{i={2^{m-1}+1}}^{k}P(\Gamma_{i}|A_{m,m+1}>0,l_{m}=k)
c(Am,m+1>0)min2m<k2m+1i=12m11D(ki,k)\displaystyle\geq c\mathbb{P}(A_{m,m+1}>0)\min_{2^{m}<k\leq 2^{m+1}}\sum_{i=1}^{2^{m-1}}\frac{1}{D(k-i,k)}
=cbm(Am,m+1>0).\displaystyle=cb_{m}\mathbb{P}(A_{m,m+1}>0). (32)

Using (29), we get bmci=12m11icmb_{m}\geq c\sum_{i=1}^{2^{m-1}}\frac{1}{i}\geq cm for m>M1.m>M_{1}. Since (Γn)σ/D(n)\mathbb{P}\left(\Gamma_{n}\right)\sim\sigma/D(n) as n,n\rightarrow\infty, from (32) we infer that

m=M1\displaystyle\sum_{m=M_{1}}^{\infty} (Am,m+1>0)cm=M11bmj=2m1+12m+1(Γj)\displaystyle\mathbb{P}(A_{m,m+1}>0)\leq c\sum_{m=M_{1}}^{\infty}\frac{1}{b_{m}}\sum_{j=2^{m-1}+1}^{2^{m+1}}\mathbb{P}(\Gamma_{j})
cm=M11mj=2m1+12m+11D(j)cm=1j=2m1+12m+11D(j)logj\displaystyle\leq c\sum_{m=M_{1}}^{\infty}\frac{1}{m}\sum_{j=2^{m-1}+1}^{2^{m+1}}\frac{1}{D(j)}\leq c\sum_{m=1}^{\infty}\sum_{j=2^{m-1}+1}^{2^{m+1}}\frac{1}{D(j)\log j}
cn=21D(n)logn<.\displaystyle\leq c\sum_{n=2}^{\infty}\frac{1}{D(n)\log n}<\infty.

Applying the Borel-Cantelli lemma, we get (n=1m=n{Am,m+1>0})=0.\mathbb{P}\left(\bigcap_{n=1}^{\infty}\bigcup_{m=n}^{\infty}\{A_{m,m+1}>0\}\right)=0. Thus, (i=0j=iΓj)=0.\mathbb{P}\left(\bigcap_{i=0}^{\infty}\bigcup_{j=i}^{\infty}\Gamma_{j}\right)=0. Part (i) is proved.

Next, we prove part (ii). Suppose all conditions of part (ii) hold. For k1,k\geq 1, set mk=[klogk].m_{k}=[k\log k]. Since (Γn)σ/D(n),\mathbb{P}\left(\Gamma_{n}\right)\sim\sigma/D(n), as n,n\rightarrow\infty, there exists a number M2>0M_{2}>0 such that (Γmk)>σ/2\mathbb{P}(\Gamma_{m_{k}})>\sigma/2 for kM2.k\geq M_{2}. Notice that D(n)D(n) is increasing in n>N.n>N. It follows from Csáki et al. [4, Lemma 2.2] that k=M21D([klogk])\sum_{k=M_{2}}^{\infty}\frac{1}{D([k\log k])} and k=M21D(k)logk\sum_{k=M_{2}}^{\infty}\frac{1}{D(k)\log k} are equiconvergent. Therefore, we get

k=M2(Γmk)σ2k=M21D([klogk])=.\displaystyle\sum_{k=M_{2}}^{\infty}\mathbb{P}(\Gamma_{m_{k}})\geq\frac{\sigma}{2}\sum_{k=M_{2}}^{\infty}\frac{1}{D([k\log k])}=\infty. (33)

Now fix ε>0.\varepsilon>0. Obviously, for l>k,l>k, mlmklloglklogklogk.m_{l}-m_{k}\geq l\log l-k\log k\geq\log k. Then, there exists a number M3>0M_{3}>0 such that for all kM3,k\geq M_{3}, mkN,mlmk>N.m_{k}\geq N,m_{l}-m_{k}>N. Therefore, it follows from (30) and (31) that for l>k>M3,l>k>M_{3},

(ΓmkΓml)\displaystyle\mathbb{P}(\Gamma_{m_{k}}\Gamma_{m_{l}}) (1+ε)(Γmk)(Γml)(D(mk,ml)D(mk))1\displaystyle\leq(1+\varepsilon)\mathbb{P}(\Gamma_{m_{k}})\mathbb{P}(\Gamma_{m_{l}})\left(\frac{D(m_{k},m_{l})}{D(m_{k})}\right)^{-1}
=(1+ε)(1i=mkml1(11D(i)))1(Γmk)(Γml)\displaystyle=(1+\varepsilon)\left(1-\prod_{i=m_{k}}^{m_{l}-1}\left(1-\frac{1}{D(i)}\right)\right)^{-1}\mathbb{P}(\Gamma_{m_{k}})\mathbb{P}(\Gamma_{m_{l}})
(1+ε)(1exp{i=mkml11D(i)})1(Γmk)(Γml).\displaystyle\leq(1+\varepsilon)\left(1-\exp\left\{-\sum_{i=m_{k}}^{m_{l}-1}\frac{1}{D(i)}\right\}\right)^{-1}\mathbb{P}(\Gamma_{m_{k}})\mathbb{P}(\Gamma_{m_{l}}). (34)

Define

1=min{lk:i=mkml11D(i)log1+εε}.\displaystyle\ell_{1}=\min\left\{l\geq k:\sum_{i=m_{k}}^{m_{l}-1}\frac{1}{D(i)}\geq\log\frac{1+\varepsilon}{\varepsilon}\right\}.

Then for l1,l\geq\ell_{1}, we have (1exp{i=mkml11D(i)})11+ε.\left(1-\exp\left\{-\sum_{i=m_{k}}^{m_{l}-1}\frac{1}{D(i)}\right\}\right)^{-1}\leq 1+\varepsilon. Therefore, it follows from (34) that

(ΓmkΓml)(1+ε)2(Γmk)(Γml), for l1.\displaystyle\mathbb{P}(\Gamma_{m_{k}}\Gamma_{m_{l}})\leq(1+\varepsilon)^{2}\mathbb{P}(\Gamma_{m_{k}})\mathbb{P}(\Gamma_{m_{l}}),\text{ for }l\geq\ell_{1}. (35)

Next, we consider k<l<1.k<l<\ell_{1}. Note that for 0ulog1+εε0\leq u\leq\log\frac{1+\varepsilon}{\varepsilon}, we have 1eucu1-e^{-u}\geq cu. Since D(i),imkD(i),i\geq m_{k} is increasing in i,i, using again the fact that (Γn)σ/D(n)\mathbb{P}\left(\Gamma_{n}\right)\sim\sigma/D(n) as n,n\rightarrow\infty, we deduce from (34) that

(ΓmkΓml)\displaystyle\mathbb{P}(\Gamma_{m_{k}}\Gamma_{m_{l}}) (1+ε)(1exp{i=mkml11D(i)})1(Γmk)(Γml)\displaystyle\leq(1+\varepsilon)\left(1-\exp\left\{-\sum_{i=m_{k}}^{m_{l}-1}\frac{1}{D(i)}\right\}\right)^{-1}\mathbb{P}(\Gamma_{m_{k}})\mathbb{P}(\Gamma_{m_{l}})
cP(Γmk)(Γml)i=mkml11D(i)cP(Γmk)(Γml)D(ml)mlmkcP(Γmk)lloglklogk.\displaystyle\leq\frac{cP(\Gamma_{m_{k}})\mathbb{P}(\Gamma_{m_{l}})}{\sum_{i=m_{k}}^{m_{l}-1}\frac{1}{D(i)}}\leq\frac{cP(\Gamma_{m_{k}})\mathbb{P}(\Gamma_{m_{l}})D(m_{l})}{m_{l}-m_{k}}\leq\frac{cP(\Gamma_{m_{k}})}{l\log l-k\log k}.

Since D(n)δnlogn,D(n)\leq\delta n\log n, for all nn0,n\geq n_{0}, it can be shown that log1logkγ\frac{\log\ell_{1}}{\log k}\leq\gamma for some constant γ\gamma depending only on ε,\varepsilon, see [4, p.635, display (4.2)]. Thus,

l=k+111\displaystyle\sum_{l=k+1}^{\ell_{1}-1} (ΓmkΓml)cP(Γmk)l=k+1111lloglklogk\displaystyle\mathbb{P}(\Gamma_{m_{k}}\Gamma_{m_{l}})\leq cP(\Gamma_{m_{k}})\sum_{l=k+1}^{\ell_{1}-1}\frac{1}{l\log l-k\log k}
cP(Γmk)1logkl=k+1111lkcP(Γmk)log1logkcP(Γmk).\displaystyle\leq cP(\Gamma_{m_{k}})\frac{1}{\log k}\sum_{l=k+1}^{\ell_{1}-1}\frac{1}{l-k}\leq cP(\Gamma_{m_{k}})\frac{\log\ell_{1}}{\log k}\leq cP(\Gamma_{m_{k}}). (36)

Taking (35) and (36) together, we have

k=M3nl=k+1n(ΓmkΓml)k=M3nl=k+1n(1+ε)2(Γmk)(Γml)+ck=M3n(Γmk).\sum_{k=M_{3}}^{n}\sum_{l=k+1}^{n}\mathbb{P}(\Gamma_{m_{k}}\Gamma_{m_{l}})\leq\sum_{k=M_{3}}^{n}\sum_{l=k+1}^{n}(1+\varepsilon)^{2}\mathbb{P}(\Gamma_{m_{k}})\mathbb{P}(\Gamma_{m_{l}})+c\sum_{k=M_{3}}^{n}\mathbb{P}(\Gamma_{m_{k}}).

Writing H(ε)=(1+ε)2H(\varepsilon)=(1+\varepsilon)^{2}, owing to (33), we have

αH(ε)\displaystyle\alpha_{H(\varepsilon)} :=lim¯nk=M3nl=k+1n(ΓmkΓml)k=M3nl=k+1nH(ε)(Γmk)(Γml)[k=M3n(Γmk)]2\displaystyle:=\varliminf_{n\rightarrow\infty}\frac{\sum_{k=M_{3}}^{n}\sum_{l=k+1}^{n}\mathbb{P}(\Gamma_{m_{k}}\Gamma_{m_{l}})-\sum_{k=M_{3}}^{n}\sum_{l=k+1}^{n}H(\varepsilon)\mathbb{P}(\Gamma_{m_{k}})\mathbb{P}(\Gamma_{m_{l}})}{[\sum_{k=M_{3}}^{n}\mathbb{P}(\Gamma_{m_{k}})]^{2}}
limnck=M3n(Γmk)=0.\displaystyle\leq\lim_{n\rightarrow\infty}\frac{c}{\sum_{k=M_{3}}^{n}\mathbb{P}(\Gamma_{m_{k}})}=0.

An application of a generalized version of the Borel-Cantelli lemma (see Petrov [11, p. 235]) yields that (k=M3j=kΓmj)1H(ε)+2αH(ε)1(1+ε)2.\mathbb{P}\left(\bigcap_{k=M_{3}}^{\infty}\bigcup_{j=k}^{\infty}\Gamma_{m_{j}}\right)\geq\frac{1}{H(\varepsilon)+2\alpha_{H(\varepsilon)}}\geq\frac{1}{(1+\varepsilon)^{2}}. As a consequence, we get (i=0j=iΓj)(k=M3j=kΓmj)1(1+ε)2.\mathbb{P}\left(\bigcap_{i=0}^{\infty}\bigcup_{j=i}^{\infty}\Gamma_{j}\right)\geq\mathbb{P}\left(\bigcap_{k=M_{3}}^{\infty}\bigcup_{j=k}^{\infty}\Gamma_{m_{j}}\right)\geq\frac{1}{(1+\varepsilon)^{2}}. Since ε\varepsilon is arbitrary, letting ε0,\varepsilon\rightarrow 0, we conclude that (i=0j=iΓj)=1.\mathbb{P}\left(\bigcap_{i=0}^{\infty}\bigcup_{j=i}^{\infty}\Gamma_{j}\right)=1. Part (ii) is proved. \Box

4 Upcrossing times represented by loops, moves and escapes

In this section, by decomposing the path of the walk, we represent the upcrossing times ξ(n,),n0\xi(n,\uparrow),n\geq 0 by certain loops, moves and escapes. Throughout this section, we assume D(0)<.D(0)<\infty. As a consequence, we see from Proposition 1 that limnXn=\lim_{n\rightarrow\infty}X_{n}=\infty almost surely.

4.1 Loops, moves and escapes

Now, considering the random walk X,X, we introduce the following events. For integers 0x<y,0\leq x<y, define

FLx\displaystyle FL_{x} ={starting from x, the walk jumps upward to x+1, and then, after a number of steps, it returns to x before going to },\displaystyle=\left\{\begin{array}[]{l}\text{starting from }x,\text{ the walk jumps upward to }x+1,\text{ and then, }\\ \text{after a number of steps, it returns to }x\text{ before going to }\infty\end{array}\right\},
FLxy\displaystyle FL_{xy} ={starting from x, the walk jumps upward to x+1, and then, after a number of steps, it returns to x before it hits y},\displaystyle=\left\{\begin{array}[]{l}\text{starting from }x,\text{ the walk jumps upward to }x+1,\text{ and then, }\\ \text{after a number of steps, it returns to }x\text{ before it hits }y\end{array}\right\},
Fxy\displaystyle F_{xy} ={starting from x, the walk jumps upward to x+1, and then, after a number of steps, it hits y before returning to x},\displaystyle=\left\{\begin{array}[]{l}\text{starting from }x,\text{ the walk jumps upward to }x+1,\text{ and then, }\\ \text{after a number of steps, it hits }y\text{ before returning to }x\end{array}\right\},
Fx\displaystyle F_{x} ={starting from x, the walk jumps to x+1 and never returns to x},\displaystyle=\{\text{starting from }x,\text{ the walk jumps to }x+1\text{ and never returns to }x\},

and

BLx\displaystyle BL_{x} ={starting from x, the walk jumps downward to x1,and then, after a number of steps, it returns to x},\displaystyle=\left\{\begin{array}[]{l}\text{starting from }x,\text{ the walk jumps downward to }x-1,\\ \text{and then,}\text{ after a number of steps,}\text{ it returns to }x\end{array}\right\},
BLyx\displaystyle BL_{yx} ={starting from y, the walk jumps downward to y1, and then, after a number of steps, it returns to y before hitting x},\displaystyle=\left\{\begin{array}[]{l}\text{starting from }y,\text{ the walk jumps downward to }y-1,\text{ and }\\ \text{then, after a number of steps, it returns to }y\text{ before hitting }x\end{array}\right\},
Byx\displaystyle B_{yx} ={starting from y, the walk jumps downward to y1, and then, after a number of steps, it hits x before returning to y}.\displaystyle=\left\{\begin{array}[]{l}\text{starting from }y,\text{ the walk jumps downward to }y-1,\text{ and }\\ \text{then, after a number of steps, it hits }x\text{ before returning to }y\end{array}\right\}.
Figure 1: The figure illustrates the sample paths of the loops, moves, and escapes defined in Definition 3. They all begin with a dashed arrow. A remark is needed for FLxFL_{x} and BLy.BL_{y}. The sample path of FLxFL_{x} in the figure reaches a site above y,y, but by definition, in a forward loop FLx,FL_{x}, the walk may or may not visit the sites above y.y. The path of BLyBL_{y} should be understood similarly.
Definition 3.

If the event FLx,FL_{x}, FLxy,FL_{xy}, Fxy,F_{xy}, or FxF_{x} occurs, we say that a forward loop at x,x, a forward loop at xx avoiding y,y, a forward move from xx to y,y, or an escape from xx to \infty appears, respectively. Similarly, if the event BLx,BL_{x}, BLyx,BL_{yx}, or ByxB_{yx} occurs, we say that a backward loop at x,x, a backward loop at yy avoiding x,x, or a backward move from yy to xx appears, respectively. When there is no danger of confusion, these notations of events are also used to represent the loop, move, or escape that they define. For example, if we say there is a forward loop FLx,FL_{x}, we mean that a forward loop appears at x.x.

The sample paths of these loops, moves and escapes are illustrated in Figure 1. Consulting (14), (15) and (16) repeatedly, we get

(FLx)\displaystyle\mathbb{P}(FL_{x}) =pxPx+1(x,,)=px(11D(x)),\displaystyle=p_{x}P_{x+1}(x,\infty,-)=p_{x}\left(1-\frac{1}{D(x)}\right), (37)
(FLxy)\displaystyle\mathbb{P}(FL_{xy}) =pxPx+1(x,y,)=px(11D(x,y)),\displaystyle=p_{x}P_{x+1}(x,y,-)=p_{x}\left(1-\frac{1}{D(x,y)}\right), (38)
(Fxy)\displaystyle\mathbb{P}(F_{xy}) =pxPx+1(x,y,+)=pxD(x,y),\displaystyle=p_{x}P_{x+1}(x,y,+)=\frac{p_{x}}{D(x,y)}, (39)
(Fx)\displaystyle\mathbb{P}(F_{x}) =pxPx+1(x,,+)=pxD(x),\displaystyle=p_{x}P_{x+1}(x,\infty,+)=\frac{p_{x}}{D(x)}, (40)

and

(BLx)\displaystyle\mathbb{P}(BL_{x}) =qx,\displaystyle=q_{x}, (41)
(BLyx)\displaystyle\mathbb{P}(BL_{yx}) =qyPy1(x,y,+)=qyD(x,y1)D(x,y),\displaystyle=q_{y}P_{y-1}(x,y,+)=\frac{q_{y}D(x,y-1)}{D(x,y)}, (42)
(Byx)\displaystyle\mathbb{P}(B_{yx}) =qyPy1(x,y,)=qy(1D(x,y1)D(x,y)).\displaystyle=q_{y}P_{y-1}(x,y,-)=q_{y}\left(1-\frac{D(x,y-1)}{D(x,y)}\right). (43)
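The escape probability (40) is easy to probe by simulation in the simplest homogeneous case. In the sketch below (our illustration, not the paper's setting) we take px0.7p_{x}\equiv 0.7 for x1,x\geq 1, for which D(x)D(x) reduces to p/(pq),p/(p-q), so (40) predicts (Fx)=pq=0.4.\mathbb{P}(F_{x})=p-q=0.4. Truncating at height x+60x+60 is an approximation: the chance of returning to xx from there is (q/p)60,(q/p)^{60}, which is negligible.

```python
import random

def escape_probability(p, x=10, cap=60, trials=40000, seed=1):
    # Estimate P(F_x): from x, jump upward to x+1 and never return to x.
    rng = random.Random(seed)
    escapes = 0
    for _ in range(trials):
        if rng.random() >= p:     # first step down: F_x fails
            continue
        pos = x + 1
        while x < pos < x + cap:  # walk until return to x or "escape"
            pos += 1 if rng.random() < p else -1
        if pos == x + cap:
            escapes += 1
    return escapes / trials

p = 0.7
est = escape_probability(p)
print(est)  # theory for this homogeneous walk: p - q = 0.4
```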

4.2 Upcrossing times and branching processes with immigration

Since the chain XX is transient, for each x+,x\in\mathbb{Z}_{+}, there must be an escape FxF_{x} at x.x. For x0,x\geq 0, k1,k\geq 1, if ξ(x,)=k,\xi(x,\uparrow)=k, then there must be k1k-1 forward loops FLxFL_{x} and one escape FxF_{x} at x.x.

Fixing x1x\geq 1 and k0,k\geq 0, on the event {ξ(x1,)=k+1},\{\xi(x-1,\uparrow)=k+1\}, let

η(x,i)=\displaystyle\eta(x,i)= the number of forward loops FLx at x contained\displaystyle\text{ the number of forward loops }FL_{x}\text{ at }x\text{ contained}
in the i-th forward loop FLx1 at x1, for i1\displaystyle\text{ in the }i\text{-th forward loop }FL_{x-1}\text{ at }x-1,\text{ for }i\geq 1 (44)

and

ζ(x)=\displaystyle\zeta(x)= the number of forward loops FLx at x contained\displaystyle\text{ the number of forward loops }FL_{x}\text{ at }x\text{ contained}
in the escape Fx1 at x1.\displaystyle\text{ in the escape }F_{x-1}\text{ at }x-1. (45)
Figure 2: The left part of the figure shows the sample path of a forward loop FLx1FL_{x-1} at x1;x-1; it contains 3 forward loops FLxFL_{x} at x.x. The right part shows the sample path of an escape Fx1F_{x-1} from x1x-1 to ;\infty; it contains 4 forward loops FLxFL_{x} at x.x.

Conditioned on the event {ξ(x1,)=k+1},\{\xi(x-1,\uparrow)=k+1\}, by the strong Markov property, the paths contained in those kk forward loops FLx1FL_{x-1} are i.i.d. and, moreover, they are all independent of the path contained in the escape Fx1.F_{x-1}. Therefore, we conclude that

ξ(x,)=i=1ξ(x1,)1η(x,i)+ζ(x)+1,\displaystyle\xi(x,\uparrow)=\sum_{i=1}^{\xi(x-1,\uparrow)-1}\eta(x,i)+\zeta(x)+1, (46)

where η(x,i),i1\eta(x,i),i\geq 1 are i.i.d. and they are all independent of ζ(x);\zeta(x); the summand 11 represents the escape FxF_{x} from xx to .\infty. Indeed, from the definitions in (44) and (45), by the strong Markov property, η(x,i),ζ(x),x1,i1\eta(x,i),\zeta(x),x\geq 1,i\geq 1 are mutually independent. Thus, (46) indeed says that

{ξ(x,)}x=0 is a Markov chain (or a branching process with immigration).\displaystyle\{\xi(x,\uparrow)\}_{x=0}^{\infty}\text{ is a Markov chain (or a branching process with immigration).} (47)

To study the weak cutpoints, we need to introduce a new chain related to the upcrossing times. Let Y0=0,Y_{0}=0, and for x1,x\geq 1, let

Yx=\displaystyle Y_{x}= number of forward loops FLx at x contained\displaystyle\text{ number of forward loops }FL_{x}\text{ at }x\text{ contained}
in all forward loops FLx1 at x1.\displaystyle\text{ in all forward loops }FL_{x-1}\text{ at }x-1.

Then, in view of (44), (45) and (46), we have

Yx=i=1Yx1+ζ(x1)η(x,i),x1.\displaystyle Y_{x}=\sum_{i=1}^{Y_{x-1}+\zeta(x-1)}\eta(x,i),\ x\geq 1. (48)

Clearly, by the strong Markov property, Yx1Y_{x-1} and ζ(x1)\zeta(x-1) are mutually independent. Thus, we come to the conclusion that

{Yx}x=0 is a Markov chain (or a branching process with immigration).\displaystyle\{Y_{x}\}_{x=0}^{\infty}\text{ is a Markov chain (or a branching process with immigration).} (49)

Now, we consider the sets C(a,b),C(a,b), CwC_{w} and C(,a)C(*,a) defined in Definition 2 and (6). Clearly,

{xC(,a)}={ξ(x,)=a},\displaystyle\{x\in C(*,a)\}=\{\xi(x,\uparrow)=a\}, (50)
{xC(a,b)}={ξ(x,)=b,ξ(x1,)=ab+1},\displaystyle\{x\in C(a,b)\}=\{\xi(x,\uparrow)=b,\ \xi(x-1,\uparrow)=a-b+1\}, (51)
{xCw}={Yx=0}.\displaystyle\{x\in C_{w}\}=\{Y_{x}=0\}. (52)

From (47) and (49) we see that both {ξ(x,)}x=0\{\xi(x,\uparrow)\}_{x=0}^{\infty} and {Yx}x=0\{Y_{x}\}_{x=0}^{\infty} are Markov chains. Thus, by the Markov property, we get directly the following lemma.

Lemma 6.

Suppose D(1)<.D(1)<\infty. Then we have

(j\displaystyle\mathbb{P}(j\notin C,k+1jn|iC,kC)=(jC,k+1jn|kC),\displaystyle C,k+1\leq j\leq n|i\in C,k\in C)=\mathbb{P}(j\notin C,k+1\leq j\leq n|k\in C),

for 0ik<n,0\leq i\leq k<n, C=C(a,b),C=C(a,b), C(,a)C(*,a) or Cw.C_{w}.

What we really need is Lemma 6. Of course, we can say more about the Markov chains {ξ(x,)}x=0\{\xi(x,\uparrow)\}_{x=0}^{\infty} and {Yx}x=0.\{Y_{x}\}_{x=0}^{\infty}. For x0,x\geq 0, write

fx(s):=1Px(x1,,)qx1spxPx+1(x,,),s[0,1],\displaystyle f_{x}(s):=\frac{1}{P_{x}(x-1,\infty,-)}\frac{q_{x}}{1-sp_{x}P_{x+1}(x,\infty,-)},\ s\in[0,1], (53)
gx(s):=1Px(x1,,+)pxPx+1(x,,+)1spxPx+1(x,,),s[0,1].\displaystyle g_{x}(s):=\frac{1}{P_{x}(x-1,\infty,+)}\frac{p_{x}P_{x+1}(x,\infty,+)}{1-sp_{x}P_{x+1}(x,\infty,-)},\ s\in[0,1]. (54)

By the Markov property, it follows that

Px(x1,,)\displaystyle P_{x}(x-1,\infty,-) =pxPx+1(x,,)Px(x1,,)+qx,\displaystyle=p_{x}P_{x+1}(x,\infty,-)P_{x}(x-1,\infty,-)+q_{x},
Px(x1,,+)\displaystyle P_{x}(x-1,\infty,+) =pxPx+1(x,,+),\displaystyle=p_{x}P_{x+1}(x,\infty,+),

from which we get fx(1)=gx(1)=1.f_{x}(1)=g_{x}(1)=1. Thus, both fx(s)f_{x}(s) and gx(s)g_{x}(s) are candidates for probability generating functions. By checking carefully, we can show that fx(s)f_{x}(s) and gx(s)g_{x}(s) are indeed the probability generating functions of the random variables η(x,i)\eta(x,i) and ζ(x)\zeta(x) defined in (44) and (45), respectively. Consequently, from (46) and (48) we get the following proposition.

Proposition 3.

Let ξ(1,)=1\xi(-1,\uparrow)=1 and fx(s),f_{x}(s), gx(s),g_{x}(s), x0x\geq 0 be as in (53) and (54). Then {ξ(x,)}x=1\{\xi(x,\uparrow)\}_{x=-1}^{\infty} and {Yx}x=0\{Y_{x}\}_{x=0}^{\infty} are branching processes with immigration such that

𝔼(sξ(x,)|ξ(x1,)=k+1)=sgx(s)(fx(s))k,k0,x0,\displaystyle\mathbb{E}\left(s^{\xi(x,\uparrow)}\Big{|}\xi(x-1,\uparrow)=k+1\right)=sg_{x}(s)\left(f_{x}(s)\right)^{k},\ k\geq 0,\ x\geq 0,
𝔼(sYx|Yx1=k)=gx1(fx(s))(fx(s))k,k0,x1.\displaystyle\mathbb{E}\left(s^{Y_{x}}\big{|}Y_{x-1}=k\right)=g_{x-1}(f_{x}(s))(f_{x}(s))^{k},\ k\geq 0,\ x\geq 1.

Proposition 3 can be seen as a generalization of the classical Ray-Knight theorem (see Knight [7, Theorem 1.1]) on the local time of a recurrent simple random walk or the branching structure in the path of a transient simple random walk (see Dwass [5, Theorem 2]).
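For a concrete feel for the offspring law fx,f_{x}, consider again a homogeneous walk with px0.7p_{x}\equiv 0.7 (an assumption made only for this illustration). Then Px(x1,,)=Px+1(x,,)=q/p,P_{x}(x-1,\infty,-)=P_{x+1}(x,\infty,-)=q/p, and (53) collapses to fx(s)=p/(1qs),f_{x}(s)=p/(1-qs), the generating function of a Geometric(pp) law on {0,1,2,}.\{0,1,2,...\}. The rejection-sampling sketch below draws η(x,i)\eta(x,i) by simulating excursions from x1x-1 and discarding those that escape; truncation at x1+capx-1+\mathrm{cap} approximates "before going to \infty".

```python
import random

def sample_eta(p, x=5, cap=120, rng=None):
    # One attempt at eta(x, i): start at x (just after the jump
    # x-1 -> x that opens the excursion) and walk until hitting x-1
    # (a forward loop FL_{x-1}) or x-1+cap (treated as an escape).
    # Return the number of upcrossings x -> x+1 if it is a loop, else None.
    pos, ups = x, 0
    while (x - 1) < pos < (x - 1 + cap):
        if rng.random() < p:
            if pos == x:
                ups += 1
            pos += 1
        else:
            pos -= 1
    return ups if pos == x - 1 else None

p = 0.7
rng = random.Random(2)
samples = []
while len(samples) < 10000:
    eta = sample_eta(p, rng=rng)
    if eta is not None:          # keep only completed forward loops
        samples.append(eta)
mean = sum(samples) / len(samples)
print(mean)  # offspring mean of Geometric(p) on {0,1,...}: q/p
```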

5 Probability, joint probability and dependence

As seen in (29) and (31), to apply Theorem 3, we need to compute the probability of {xC}\{x\in C\} and characterize the dependence between {xC}\{x\in C\} and {yC}\{y\in C\} for C=C(a,b),C=C(a,b), CwC_{w} or C(,a).C(*,a). The following proposition is the main result of this section.

Proposition 4.

Suppose ρk1\rho_{k}\rightarrow 1 as k.k\rightarrow\infty. Let C=C(a,b),C(,a)C=C(a,b),C(*,a) or Cw.C_{w}. Then

limx(xC)D(x)=λC={2,if C=Cw,(a1b1)12a,if C=C(a,b),1,if C=C(,a),\displaystyle\lim_{x\rightarrow\infty}\mathbb{P}(x\in C)D(x)=\lambda_{C}=\left\{\begin{array}[]{cl}2,&\text{if }C=C_{w},\\ \binom{a-1}{b-1}\frac{1}{2^{a}},&\text{if }C=C(a,b),\\ 1,&\text{if }C=C(*,a),\end{array}\right. (58)

and

(yC|xC)cD(x)D(x,y)D(y),yx0,\displaystyle\mathbb{P}(y\in C|x\in C)\leq\frac{cD(x)}{D(x,y)D(y)},\ y\geq x\geq 0, (59)
(xC|yC)cD(x,y),yx0,\displaystyle\mathbb{P}(x\in C|y\in C)\geq\frac{c}{D(x,y)},\ y\geq x\geq 0, (60)
(j1C,,jkC)c(i=1k1D(ji,ji+1))D(jk), 0<j1<<jk.\displaystyle\mathbb{P}(j_{1}\in C,...,j_{k}\in C)\leq\frac{c}{\left(\prod_{i=1}^{k-1}D(j_{i},j_{i+1})\right)D(j_{k})},\ 0<j_{1}<...<j_{k}. (61)

Furthermore, there exists N>0N>0 such that for all x>N,y>x+N,x>N,y>x+N, we have

1εD(x,y)D(x)(xC,yC)(xC)(yC)1+ε,\displaystyle 1-\varepsilon\leq\frac{D(x,y)}{D(x)}\frac{\mathbb{P}(x\in C,y\in C)}{\mathbb{P}(x\in C)\mathbb{P}(y\in C)}\leq 1+\varepsilon, (62)
(λCε)D(x)D(x,y)D(y)(yC|xC)(λC+ε)D(x)D(x,y)D(y),\displaystyle(\lambda_{C}-\varepsilon)\frac{D(x)}{D(x,y)D(y)}\leq\mathbb{P}(y\in C|x\in C)\leq(\lambda_{C}+\varepsilon)\frac{D(x)}{D(x,y)D(y)}, (63)

and for 0=j0<j1<<jk,0=j_{0}<j_{1}<...<j_{k}, jiji1>N,j_{i}-j_{i-1}>N, i=1,,ki=1,...,k we have

(λCε)kD(jk)i=1k1D(ji,ji+1)\displaystyle\frac{(\lambda_{C}-\varepsilon)^{k}}{D(j_{k})\prod_{i=1}^{k-1}D(j_{i},j_{i+1})} (j1C,,jkC)(λC+ε)kD(jk)i=1k1D(ji,ji+1).\displaystyle\leq\mathbb{P}(j_{1}\in C,...,j_{k}\in C)\leq\frac{(\lambda_{C}+\varepsilon)^{k}}{D(j_{k})\prod_{i=1}^{k-1}D(j_{i},j_{i+1})}. (64)

5.1 Weak cutpoints: Proof of Proposition 4 for C=CwC=C_{w}

To warm up, we consider first the weak cutpoints. The following lemma gives the probability of {xCw}\{x\in C_{w}\} and the joint probability of {xCw,yCw}.\{x\in C_{w},y\in C_{w}\}.

Lemma 7.

We have

(xCw)=1pxD(x1),\displaystyle\mathbb{P}(x\in C_{w})=\frac{1}{p_{x}D(x-1)}, (65)
(xCw,yCw)=1pxD(x1,y)11qyD(x1,y1)D(x1,y)1D(y1).\displaystyle\mathbb{P}(x\in C_{w},y\in C_{w})=\frac{1}{p_{x}D(x-1,y)}\frac{1}{1-\frac{q_{y}D(x-1,y-1)}{D(x-1,y)}}\frac{1}{D(y-1)}. (66)

Proof.  To begin with, we prove (65). Notice that the event {xCw}\{x\in C_{w}\} occurs if and only if, after hitting xx for the first time, the walk forms a number of (possibly 0) backward loops BLxBL_{x} at xx one after another and then never visits x1.x-1. Thus, on account of (41), we get

(xCw)=k=0((BLx))kPx(x1,,+)=k=0qxkD(x1)=1pxD(x1),\displaystyle\mathbb{P}(x\in C_{w})=\sum_{k=0}^{\infty}(\mathbb{P}(BL_{x}))^{k}P_{x}(x-1,\infty,+)=\sum_{k=0}^{\infty}\frac{q_{x}^{k}}{D(x-1)}=\frac{1}{p_{x}D(x-1)},

which proves (65).

Next, we show (66). Notice that the event {xCw,yCw}\{x\in C_{w},y\in C_{w}\} occurs if and only if the following events occur consecutively: after hitting xx for the first time, the walk forms a number of (possibly 0) backward loops BLxBL_{x} at xx one after another; then, restarting from x,x, it hits yy before it hits x1;x-1; restarting from y,y, it forms a number of (possibly 0) backward loops BLy(x1)BL_{y(x-1)} at yy avoiding x1x-1 one after another; finally, restarting from y,y, it never hits y1.y-1.

Therefore, it follows from the Markov property and (14)-(16) and (41)-(42) that

\displaystyle\mathbb{P} (xCw,yCw)\displaystyle(x\in C_{w},y\in C_{w})
=k=0j=0((BLx))kPx(x1,y,+)(P(BLy(x1)))jPy(y1,,+)\displaystyle=\sum_{k=0}^{\infty}\sum_{j=0}^{\infty}(\mathbb{P}(BL_{x}))^{k}P_{x}(x-1,y,+)\left(P(BL_{y(x-1)})\right)^{j}P_{y}(y-1,\infty,+)
=k=0j=0qxk1D(x1,y)(qyD(x1,y1)D(x1,y))j1D(y1)\displaystyle=\sum_{k=0}^{\infty}\sum_{j=0}^{\infty}q_{x}^{k}\frac{1}{D(x-1,y)}\left(\frac{q_{y}D(x-1,y-1)}{D(x-1,y)}\right)^{j}\frac{1}{D(y-1)}
=1pxD(x1,y)11qyD(x1,y1)D(x1,y)1D(y1),\displaystyle=\frac{1}{p_{x}D(x-1,y)}\frac{1}{1-\frac{q_{y}D(x-1,y-1)}{D(x-1,y)}}\frac{1}{D(y-1)},

which finishes the proof of (66). \Box
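Formula (65) can also be probed numerically. In the homogeneous sketch below (px0.7p_{x}\equiv 0.7 is our assumption, with D(x1)=p/(pq)=1.75D(x-1)=p/(p-q)=1.75), (65) predicts (xCw)=1/(p1.75)0.8163;\mathbb{P}(x\in C_{w})=1/(p\cdot 1.75)\approx 0.8163; the walk is stopped once it reaches x+80,x+80, where a later return to x1x-1 has negligible probability.

```python
import random

def is_weak_cutpoint(p, x, cap=80, rng=None):
    # Run the walk from 0 (p_0 = 1) until it reaches x + cap and report
    # whether x is a weak cutpoint: no visit to x - 1 after the first
    # upcrossing x -> x + 1.  Stopping at x + cap is an approximation.
    pos, upcrossed = 0, False
    while pos < x + cap:
        up = rng.random() < (1.0 if pos == 0 else p)
        if pos == x and up:
            upcrossed = True
        pos += 1 if up else -1
        if upcrossed and pos == x - 1:
            return False
    return True

p, x = 0.7, 5
D_prev = p / (2 * p - 1)  # D(x - 1) for the homogeneous walk (assumed form)
rng = random.Random(3)
trials = 20000
freq = sum(is_weak_cutpoint(p, x, rng=rng) for _ in range(trials)) / trials
print(freq)  # (65) predicts 1/(p_x D(x-1)) ~ 0.8163 here
```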

Next, we give the proof of Proposition 4 for C=Cw.C=C_{w}.

Proof of Proposition 4 for C=Cw.C=C_{w}. Let C=Cw.C=C_{w}. Suppose ρk1\rho_{k}\rightarrow 1 as k.k\rightarrow\infty. Then we have px1/2,p_{x}\rightarrow 1/2, D(x)D(x)\rightarrow\infty as xx\rightarrow\infty and pkpk1<c,p_{k}\vee p_{k}^{-1}<c, ρkρk1c,\rho_{k}\vee\rho_{k}^{-1}\leq c, for all k1.k\geq 1. Thus, it follows from (65) that

(xCw)D(x)=D(x)pxD(x1)=1px(ρx+1D(x))2, as x,\displaystyle\mathbb{P}(x\in C_{w})D(x)=\frac{D(x)}{p_{x}D(x-1)}=\frac{1}{p_{x}\left(\rho_{x}+\frac{1}{D(x)}\right)}\rightarrow 2,\text{ as }x\rightarrow\infty, (67)

which proves (58).

Next, taking (65) and (66) together, for yx1,y\geq x\geq 1, we get

(y\displaystyle\mathbb{P}(y Cw|xCw)=D(x1)D(x1,y)D(y1)11qyD(x1,y1)D(x1,y)\displaystyle\in C_{w}|x\in C_{w})=\frac{D(x-1)}{D(x-1,y)D(y-1)}\frac{1}{1-\frac{q_{y}D(x-1,y-1)}{D(x-1,y)}}
=D(x)D(x,y)D(y)D(x1)D(x)D(x,y)D(x1,y)D(y)D(y1)11qyD(x1,y1)D(x1,y)\displaystyle=\frac{D(x)}{D(x,y)D(y)}\frac{D(x-1)}{D(x)}\frac{D(x,y)}{D(x-1,y)}\frac{D(y)}{D(y-1)}\frac{1}{1-\frac{q_{y}D(x-1,y-1)}{D(x-1,y)}}
=D(x)D(x,y)D(y)(ρx+1D(x))1ρx+1D(x,y)1ρy+1D(y)11qyD(x1,y1)D(x1,y),\displaystyle=\frac{D(x)}{D(x,y)D(y)}\left(\rho_{x}+\frac{1}{D(x)}\right)\frac{1}{\rho_{x}+\frac{1}{D(x,y)}}\frac{1}{\rho_{y}+\frac{1}{D(y)}}\frac{1}{1-\frac{q_{y}D(x-1,y-1)}{D(x-1,y)}}, (68)

and

(x\displaystyle\mathbb{P}(x Cw|yCw)=1pxD(x1,y)py1qyD(x1,y1)D(x1,y)\displaystyle\in C_{w}|y\in C_{w})=\frac{1}{p_{x}D(x-1,y)}\frac{p_{y}}{1-q_{y}\frac{D(x-1,y-1)}{D(x-1,y)}}
pypxD(x1,y)=pypxD(x,y)1ρx+1D(x,y)\displaystyle\geq\frac{p_{y}}{p_{x}D(x-1,y)}=\frac{p_{y}}{p_{x}D(x,y)}\frac{1}{\rho_{x}+\frac{1}{D(x,y)}}
pypxD(x,y)11+ρx=pyD(x,y)cD(x,y).\displaystyle\geq\frac{p_{y}}{p_{x}D(x,y)}\frac{1}{1+\rho_{x}}=\frac{p_{y}}{D(x,y)}\geq\frac{c}{D(x,y)}.

We thus get (60). Since D(m)D(m,n)1D(m)\geq D(m,n)\geq 1 for n>m0n>m\geq 0 and D(x1,y)>D(x1,y1)D(x-1,y)>D(x-1,y-1) for yx1,y\geq x\geq 1, from (68) we obtain

(yCw|xCw)D(x)D(x,y)D(y)ρx+1ρxρy+1ρy\displaystyle\mathbb{P}(y\in C_{w}|x\in C_{w})\leq\frac{D(x)}{D(x,y)D(y)}\frac{\rho_{x}+1}{\rho_{x}}\frac{\rho_{y}+1}{\rho_{y}}

from which we get (59). Notice that by (67) we have (xCw)cD(x).\mathbb{P}(x\in C_{w})\leq\frac{c}{D(x)}. Thus, due to (49) and (52), (61) is a direct consequence of (59).

Now fix η>0.\eta>0. By (67), there exists a number M1>0M_{1}>0 such that

(2η)1D(x)\displaystyle(2-\eta)\frac{1}{D(x)} (xCw)(2+η)1D(x),xM1.\displaystyle\leq\mathbb{P}(x\in C_{w})\leq(2+\eta)\frac{1}{D(x)},\ x\geq M_{1}. (69)

Using again (65) and (66), we get

(xCw,yCw)(xCw)(yCw)=D(x1)D(x1,y)py1qyD(x1,y1)D(x1,y)\displaystyle\frac{\mathbb{P}(x\in C_{w},y\in C_{w})}{\mathbb{P}(x\in C_{w})\mathbb{P}(y\in C_{w})}=\frac{D(x-1)}{D(x-1,y)}\frac{p_{y}}{1-\frac{q_{y}D(x-1,y-1)}{D(x-1,y)}}
=D(x)D(x,y)D(x1)D(x)D(x,y)D(x1,y)py1qyD(x1,y1)D(x1,y).\displaystyle=\frac{D(x)}{D(x,y)}\frac{D(x-1)}{D(x)}\frac{D(x,y)}{D(x-1,y)}\frac{p_{y}}{1-\frac{q_{y}D(x-1,y-1)}{D(x-1,y)}}.

Applying Lemma 3, we can find a number M2>0M_{2}>0 such that

1ηD(x1)D(x)D(x,y)D(x1,y)py1qyD(x1,y1)D(x1,y)1+η,\displaystyle 1-\eta\leq\frac{D(x-1)}{D(x)}\frac{D(x,y)}{D(x-1,y)}\frac{p_{y}}{1-\frac{q_{y}D(x-1,y-1)}{D(x-1,y)}}\leq 1+\eta,

for all x>M2,y>x+M2.x>M_{2},\ y>x+M_{2}. Therefore, we get

1ηD(x,y)D(x)(xCw,yCw)(xCw)(yCw)1+η,x>M2,y>x+M2.\displaystyle 1-\eta\leq\frac{D(x,y)}{D(x)}\frac{\mathbb{P}(x\in C_{w},y\in C_{w})}{\mathbb{P}(x\in C_{w})\mathbb{P}(y\in C_{w})}\leq 1+\eta,\ x>M_{2},\ y>x+M_{2}. (70)

Taking (69) and (70) together, we infer that

(2η)(1η)D(x)D(x,y)D(y)(yC|xC)(2+η)(1+η)D(x)D(x,y)D(y),\displaystyle\frac{(2-\eta)(1-\eta)D(x)}{D(x,y)D(y)}\leq\mathbb{P}(y\in C|x\in C)\leq\frac{(2+\eta)(1+\eta)D(x)}{D(x,y)D(y)}, (71)

for x>M1M2,y>x+M1M2.x>M_{1}\vee M_{2},\ y>x+M_{1}\vee M_{2}. Choosing η\eta small enough such that (2+η)(1+η)<2+ε(2+\eta)(1+\eta)<2+\varepsilon and (2η)(1η)>2ε(2-\eta)(1-\eta)>2-\varepsilon and letting N=M1M2,N=M_{1}\vee M_{2}, from (70) and (71), we get (62) and (63), respectively. Finally, putting (69) and (71) together, we get (64). Proposition 4 is proved for C=Cw.C=C_{w}. \Box

5.2 Local times and upcrossing times: Proof of Proposition 4 for C=C(a,b)C=C(a,b) or C(,a)C(*,a)

Compared with the case C=Cw,C=C_{w}, it is much more involved to prove Proposition 4 for C=C(a,b)C=C(a,b) or C(,a).C(*,a). The main difficulty is to compute the joint probability of {xC,yC}\{x\in C,y\in C\} for C=C(a,b)C=C(a,b) or C(,a).C(*,a).

5.2.1 Probabilities and joint probabilities

To begin with, we compute the probabilities of {xC(a,b)}\{x\in C(a,b)\} and {xC(,b)}.\{x\in C(*,b)\}.

Lemma 8.

For x1x\geq 1 and ab1,a\geq b\geq 1, we have

(ξ(x)=a,ξ(x,)=b)=(a1b1)pxbqxabD(x)(11D(x))b1,\displaystyle\mathbb{P}(\xi(x)=a,\xi(x,\uparrow)=b)=\binom{a-1}{b-1}\frac{p_{x}^{b}q_{x}^{a-b}}{D(x)}\left(1-\frac{1}{D(x)}\right)^{b-1}, (72)
(ξ(x,)=b)=1D(x)(11D(x))b1.\displaystyle\mathbb{P}(\xi(x,\uparrow)=b)=\frac{1}{D(x)}\left(1-\frac{1}{D(x)}\right)^{b-1}. (73)

Proof.  For x1x\geq 1 and ab1,a\geq b\geq 1, the event {ξ(x)=a,ξ(x,)=b}\{\xi(x)=a,\xi(x,\uparrow)=b\} occurs if and only if the following events occur consecutively: starting from 0,0, the walk hits xx for the first time (with probability 1); restarting from x,x, the walk forms aba-b backward loops BLxBL_{x} at x,x, and b1b-1 forward loops FLxFL_{x} at x;x; finally, the walk forms an escape FxF_{x} from xx to .\infty.

We next determine the possible orders of those aba-b backward loops BLx,BL_{x}, b1b-1 forward loops FLxFL_{x} and one escape Fx.F_{x}. To this end, notice that there are aa jumps of the walk from xx to x+1x+1 or x1.x-1. When standing at xx for the last time, the walk must jump from xx to x+1,x+1, which begins the escape FxF_{x} at x.x. There are (a1b1)\binom{a-1}{b-1} ways to choose, from the other a1a-1 jumps, the b1b-1 jumps from xx to x+1x+1 that begin those b1b-1 forward loops FLx.FL_{x}.

To sum up the above discussion, owing to (37), (40) and (41), we get

(ξ\displaystyle\mathbb{P}(\xi (x)=a,ξ(x,)=b)=(a1b1)((BLx))ab((FLx))b1(Fx)\displaystyle(x)=a,\xi(x,\uparrow)=b)=\binom{a-1}{b-1}(\mathbb{P}(BL_{x}))^{a-b}(\mathbb{P}(FL_{x}))^{b-1}\mathbb{P}(F_{x})
=(a1b1)qxab(pxPx+1(x,,))b1pxPx+1(x,,+)\displaystyle=\binom{a-1}{b-1}q_{x}^{a-b}\left(p_{x}P_{x+1}(x,\infty,-)\right)^{b-1}p_{x}P_{x+1}(x,\infty,+)
=(a1b1)pxbqxabD(x)(11D(x))b1.\displaystyle=\binom{a-1}{b-1}\frac{p_{x}^{b}q_{x}^{a-b}}{D(x)}\left(1-\frac{1}{D(x)}\right)^{b-1}.

Finally, summing both sides of (72) over aa from bb to ,\infty, we obtain (73). The lemma is proved. \Box
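Formula (73) says that ξ(x,)\xi(x,\uparrow) is geometric with success probability 1/D(x).1/D(x). The Monte Carlo sketch below (again with the illustrative assumption px0.7,p_{x}\equiv 0.7, so D(x)=1.75D(x)=1.75) checks the mean D(x)D(x) and the atom (ξ(x,)=1)=1/D(x).\mathbb{P}(\xi(x,\uparrow)=1)=1/D(x).

```python
import random

def upcrossings(p, x, cap=80, rng=None):
    # Count upcrossings x -> x + 1 of the walk started at 0 (p_0 = 1),
    # stopped on reaching x + cap; a return to x from that height has
    # probability (q/p)^cap and is neglected.
    pos, ups = 0, 0
    while pos < x + cap:
        up = rng.random() < (1.0 if pos == 0 else p)
        if pos == x and up:
            ups += 1
        pos += 1 if up else -1
    return ups

p, x = 0.7, 5
D = p / (2 * p - 1)  # D(x) for the homogeneous walk (assumed form)
rng = random.Random(4)
samples = [upcrossings(p, x, rng=rng) for _ in range(20000)]
mean = sum(samples) / len(samples)
p_one = samples.count(1) / len(samples)
print(mean)   # (73): E xi(x, up) = D(x) = 1.75
print(p_one)  # (73): P(xi(x, up) = 1) = 1/D(x) ~ 0.5714
```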

The lemma below gives the joint probability of {xC(a,b),yC(n,m)},\{x\in C(a,b),y\in C(n,m)\}, or {xC(,b),yC(,m)}.\{x\in C(*,b),y\in C(*,m)\}.

Lemma 9.

For integers 1ba,1\leq b\leq a, 1mn1\leq m\leq n and 1x<y,1\leq x<y, we have

\displaystyle\mathbb{P} (ξ(x)=a,ξ(x,)=b,ξ(y)=n,ξ(y,)=m)\displaystyle(\xi(x)=a,\xi(x,\uparrow)=b,\xi(y)=n,\xi(y,\uparrow)=m)
=(a1b1)i=1b(nm+1)(b1i1)(nim1)(n1i1)qxabpyD(y)\displaystyle=\binom{a-1}{b-1}\sum_{i=1}^{b\wedge(n-m+1)}\binom{b-1}{i-1}\binom{n-i}{m-1}\binom{n-1}{i-1}q_{x}^{a-b}\frac{p_{y}}{D(y)}
×(px(11D(x,y)))bi(pxD(x,y))i(py(11D(y)))m1\displaystyle\times\left(p_{x}\left(1-\frac{1}{D(x,y)}\right)\right)^{b-i}\left(\frac{p_{x}}{D(x,y)}\right)^{i}\left(p_{y}\left(1-\frac{1}{D(y)}\right)\right)^{m-1}
×(qy(D(x,y1)D(x,y)))nm(i1)(qy(1D(x,y1)D(x,y)))i1\displaystyle\times\left(q_{y}\left(\frac{D(x,y-1)}{D(x,y)}\right)\right)^{n-m-(i-1)}\left(q_{y}\left(1-\frac{D(x,y-1)}{D(x,y)}\right)\right)^{i-1} (74)

and

\displaystyle\mathbb{P} (ξ(x,)=b,ξ(y,)=m)\displaystyle(\xi(x,\uparrow)=b,\xi(y,\uparrow)=m)
=i=1b(b1i1)(m+i2i1)((11D(x,y)))bi(1D(x,y))i\displaystyle=\sum_{i=1}^{b}\binom{b-1}{i-1}\binom{m+i-2}{i-1}\left(\left(1-\frac{1}{D(x,y)}\right)\right)^{b-i}\left(\frac{1}{D(x,y)}\right)^{i}
×(py(11D(y))1qyD(x,y1)D(x,y))m1(qy(1D(x,y1)D(x,y))1qyD(x,y1)D(x,y))i1pyD(y)1qyD(x,y1)D(x,y).\displaystyle\times\left(\frac{p_{y}\left(1-\frac{1}{D(y)}\right)}{1-q_{y}\frac{D(x,y-1)}{D(x,y)}}\right)^{m-1}\left(\frac{q_{y}\left(1-\frac{D(x,y-1)}{D(x,y)}\right)}{1-q_{y}\frac{D(x,y-1)}{D(x,y)}}\right)^{i-1}\frac{\frac{p_{y}}{D(y)}}{1-q_{y}\frac{D(x,y-1)}{D(x,y)}}. (75)
Remark 6.

Setting n=an=a and taking summation on both sides of (9) over bb and mm from 11 to a,a, by some tedious manipulation, we get

(ξ\displaystyle\mathbb{P}(\xi (x)=a,ξ(y)=a)\displaystyle(x)=a,\xi(y)=a)
=i=1a(a1i1)2(pxD(x,y))i(1pxD(x,y))ai(qy(1D(x,y1)D(x,y)))i1\displaystyle=\sum_{i=1}^{a}\binom{a-1}{i-1}^{2}\left(\frac{p_{x}}{D(x,y)}\right)^{i}\left(1-\frac{p_{x}}{D(x,y)}\right)^{a-i}\left(q_{y}\left(1-\frac{D(x,y-1)}{D(x,y)}\right)\right)^{i-1}
×(qyD(x,y1)D(x,y)+py(11D(y)))aipyD(y).\displaystyle\quad\quad\times\left(q_{y}\frac{D(x,y-1)}{D(x,y)}+p_{y}\left(1-\frac{1}{D(y)}\right)\right)^{a-i}\frac{p_{y}}{D(y)}.

But we do not need such a formula because C(a,)C(a,*) is finite if and only if C(a,b)C(a,b) is finite for all b{1,,a},b\in\{1,...,a\}, while C(a,)C(a,*) is infinite if and only if C(a,b)C(a,b) is infinite for some b{1,,a}.b\in\{1,...,a\}.

The formula in (75) looks cumbersome. As a consistency check, if we take summation on both sides of (75) over mm from 11 to ,\infty, then by some careful computation, we get (ξ(x,)=b)=(11D(x))b11D(x),b1,\mathbb{P}(\xi(x,\uparrow)=b)=\left(1-\frac{1}{D(x)}\right)^{b-1}\frac{1}{D(x)},b\geq 1, which coincides with (73).

We give two proofs of Lemma 9 from different perspectives: a combinatorial one and a path-decomposition one.

Proof of Lemma 9 (Combinatorial approach). For simplicity, write

𝒜={ξ(x)=a,ξ(x,)=b,ξ(y)=n,ξ(y,)=m}.\displaystyle\mathcal{A}=\{\xi(x)=a,\xi(x,\uparrow)=b,\xi(y)=n,\xi(y,\uparrow)=m\}.

Let FLxy,FL_{xy}, Fxy,F_{xy}, FLy,FL_{y}, FyF_{y} and BLx,BL_{x}, BLyx,BL_{yx}, ByxB_{yx} be the loops, moves and escapes defined in Definition 3 whose probabilities are given in (37)-(43).

Notice that the event 𝒜\mathcal{A} occurs if and only if for some i,i, there are aba-b backward loops BLxBL_{x} at x,x, bib-i forward loops FLxyFL_{xy} at xx avoiding y,y, ii forward moves FxyF_{xy} from xx to yy, m1m-1 forward loops FLyFL_{y} at y,y, nm(i1)n-m-(i-1) backward loops BLyxBL_{yx} at yy avoiding x,x, i1i-1 backward moves ByxB_{yx} from yy to x,x, and finally an escape FyF_{y} from yy to .\infty. See Figure 3 for the case a=9,a=9, n=8,n=8, b=5,b=5, m=4m=4 and i=3.i=3.

Refer to caption
Figure 3: Suppose a=9,a=9, n=8,n=8, b=5b=5 and m=4.m=4. On the event 𝒜,\mathcal{A}, if i=3,i=3, then, there must be 44 backward loops BLxBL_{x} at x,x, 22 forward loops FLxyFL_{xy} at xx avoiding y,y, 33 forward moves FxyF_{xy} from xx to yy, 33 forward loops FLyFL_{y} at y,y, 22 backward loops BLyxBL_{yx} at yy avoiding x,x, 22 backward moves ByxB_{yx} from yy to x,x, and finally an escape FyF_{y} from yy to .\infty. This figure is also helpful to understand the proof of (78).

Clearly,

1i(nm+1)b.\displaystyle 1\leq i\leq(n-m+1)\wedge b. (76)

Once the order of those events is fixed, in view of (37)-(43) and the Markov property, the probability of these events occurring one by one is

qxab(px(11D(x,y)))bi(pxD(x,y))i(py(11D(y)))m1\displaystyle q_{x}^{a-b}\left(p_{x}\left(1-\frac{1}{D(x,y)}\right)\right)^{b-i}\left(\frac{p_{x}}{D(x,y)}\right)^{i}\left(p_{y}\left(1-\frac{1}{D(y)}\right)\right)^{m-1}
×(qy(D(x,y1)D(x,y)))n(m1)i(qy(1D(x,y1)D(x,y)))i1pyD(y)\displaystyle\times\left(q_{y}\left(\frac{D(x,y-1)}{D(x,y)}\right)\right)^{n-(m-1)-i}\left(q_{y}\left(1-\frac{D(x,y-1)}{D(x,y)}\right)\right)^{i-1}\frac{p_{y}}{D(y)} (77)

which does not depend on the order of these events.

Next, fix such a number i.i. We claim that

the possible number of the orders of these moves, loops,and the escape is(a1b1)(b1i1)(n1i1)(nim1).\begin{split}&\text{the possible number of the orders of these moves, loops,}\\ &\text{and the escape is}\binom{a-1}{b-1}\binom{b-1}{i-1}\binom{n-1}{i-1}\binom{n-i}{m-1}.\end{split} (78)

Putting (76), (5.2.1) and (78) together, we get (9).

What is left for us to do is to prove (78). Indeed, fix 1ib(nm+1).1\leq i\leq b\wedge(n-m+1). On one hand, since the local time at xx is a,a, the walk visits xx exactly aa times; in other words, it jumps away from xx to x+1x+1 or x1x-1 exactly aa times. On the other hand, since the upcrossing time at xx is b,b, the walk jumps from xx to x+1x+1 exactly bb times and from xx to x1x-1 exactly aba-b times. Those jumps from xx to x1x-1 form the beginnings of those aba-b backward loops BLx,BL_{x}, while those bb jumps from xx to x+1x+1 form the beginnings of those ii forward moves FxyF_{xy} and bib-i forward loops FLxy.FL_{xy}. We know that when standing at xx for the last time, the walk jumps from xx to x+1x+1 and never returns to x.x. Except for that jump, we do not know which aba-b of the other a1a-1 jumps are from xx to x1.x-1. There are (a1b1)\binom{a-1}{b-1} ways to choose aba-b jumps from those a1a-1 jumps to be the jumps from xx to x1x-1 forming the beginnings of those aba-b backward loops BLx.BL_{x}.

Up to now, we already know which bb jumps (out of the aa jumps) of the walk are from xx to x+1.x+1. They form the beginnings of those bib-i forward loops FLxyFL_{xy} and ii forward moves Fxy.F_{xy}. As mentioned above, the last of those bb jumps is from xx to x+1,x+1, which forms the beginning of a forward move Fxy.F_{xy}. There are (b1i1)\binom{b-1}{i-1} ways to choose bib-i jumps from the remaining b1b-1 jumps to form the beginnings of those bib-i forward loops FLxyFL_{xy} and the other i1i-1 jumps to form the beginnings of the other i1i-1 forward moves Fxy.F_{xy}.

We now know there are bb jumps that the walk takes from xx to x+1,x+1, and which bib-i of them form the beginnings of the forward loops FLxyFL_{xy} (which end with a jump from x+1x+1 to xx) and which ii of them form the beginnings of the forward moves Fxy.F_{xy}. Since the upcrossing time at xx is b,b, except for the last time, every time the walk jumps forward from xx to x+1,x+1, after a number of steps, it must return to x.x. Thus, except for the last forward move FxyF_{xy} from xx to y,y, each of the other i1i-1 forward moves FxyF_{xy} from xx to yy must be followed by a number (possibly 0) of forward loops FLyFL_{y} and/or a number (possibly 0) of backward loops BLyx,BL_{yx}, and then, with probability 1,1, a backward move ByxB_{yx} from yy to x.x. The walk jumps away from yy to y+1y+1 or y1y-1 exactly nn times. We also know that when standing at yy for the last time, the walk jumps from yy to y+1y+1 and never returns to y.y. There are (n1i1)\binom{n-1}{i-1} possible ways to choose i1i-1 jumps from the other n1n-1 jumps to form the beginnings of those i1i-1 backward moves ByxB_{yx} from yy to x.x.

So far, we have fixed i1i-1 jumps from yy to y1,y-1, which are the beginnings of those i1i-1 backward moves ByxB_{yx} from yy to x,x, and one jump from yy to y+1,y+1, which is the beginning of the escape FyF_{y} from yy to .\infty. But the walk leaves yy exactly nn times. Therefore, except for the above already fixed ii jumps, the other nin-i jumps away from yy should be the beginnings of those m1m-1 forward loops FLyFL_{y} and ni(m1)n-i-(m-1) backward loops BLyxBL_{yx} at y.y. There are (nim1)\binom{n-i}{m-1} ways to choose m1m-1 jumps from those nin-i jumps to serve as the beginnings of those forward loops FLyFL_{y} at y.y. The orders of the escape and all the loops and moves are now completely determined.

Summing up the above discussion, we get (78). Thus, (9) is proved.

Finally, taking summation on both sides of (9) over aa from bb to \infty and over nn from mm to ,\infty, respectively, and using the formula j=0(k+jj)tj=1(1t)k+1,\sum_{j=0}^{\infty}\binom{k+j}{j}t^{j}=\frac{1}{(1-t)^{k+1}}, by some tedious but elementary computation, we get (75). The lemma is proved. \Box
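The series identity invoked here is the negative binomial expansion; it can be sanity-checked numerically (the values of k and t below are arbitrary samples with |t| &lt; 1):

```python
from math import comb

# Check sum_{j>=0} C(k+j, j) t^j = 1/(1-t)^(k+1) for sample k and |t| < 1.
k, t = 4, 0.3
partial = sum(comb(k + j, j) * t**j for j in range(200))  # truncated series
closed = 1 / (1 - t) ** (k + 1)
assert abs(partial - closed) < 1e-12
print(partial, closed)
```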

Remark 7.

The above combinatorial proof of Lemma 9, phrased in terms of loops, moves and escapes, is relatively short and easy to understand. The idea of such a proof comes from an anonymous referee of an early unpublished version of our manuscript [13], which contains only a criterion for the finiteness of C(a,)C(a,*) and the limit distribution of C(1,1).C(1,1). Since some readers may doubt the rigor of such a combinatorial argument, we also provide a path-decomposition proof, which is much longer.

Proof of Lemma 9 (Path-decomposition approach). In this proof, for a jj-dimensional vector 𝐯,\vec{\mathbf{v}}, we stipulate that its kkth coordinate is vk,v_{k}, 1kj.1\leq k\leq j. For two jj-dimensional vectors 𝐰\vec{\mathbf{w}} and 𝐮,\vec{\mathbf{u}}, we write 𝐰𝐮\vec{\mathbf{w}}\prec\vec{\mathbf{u}} if wj<ujw_{j}<u_{j} and wkuk,w_{k}\leq u_{k}, 1kj1.1\leq k\leq j-1.

We show only (9), since (75) can be derived from (9). Denote again 𝒜={ξ(x)=a,ξ(x,)=b,ξ(y)=n,ξ(y,)=m}.\mathcal{A}=\{\xi(x)=a,\xi(x,\uparrow)=b,\xi(y)=n,\xi(y,\uparrow)=m\}. Let τ0=0,\tau_{0}=0, and for k1,k\geq 1, let

τk=inf{n>τk1:Xn1=x,Xn=x+1}.\tau_{k}=\inf\{n>\tau_{k-1}:X_{n-1}=x,X_{n}=x+1\}.

Then we have

0=τ0τ1τ2τbτb+1,0=\tau_{0}\leq\tau_{1}\leq\tau_{2}\leq...\leq\tau_{b}\leq\tau_{b+1}\leq\infty,

where for k1,k\geq 1, τk<τk+1\tau_{k}<\tau_{k+1} if τk+1<.\tau_{k+1}<\infty. Notice that on 𝒜,\mathcal{A}, τb<τb+1=.\tau_{b}<\tau_{b+1}=\infty. For 1kb,1\leq k\leq b, let

Vk\displaystyle V_{k} =#{τk1n<τk:Xn=x},\displaystyle=\#\{\tau_{k-1}\leq n<\tau_{k}:X_{n}=x\}, (79)
Uk\displaystyle U_{k} =#{τkn<τk+1:Xn=y}.\displaystyle=\#\{\tau_{k}\leq n<\tau_{k+1}:X_{n}=y\}. (80)

Furthermore, let

Wb+1=#{τbn<τb+1:Xn1=y,Xn=y+1}\displaystyle W_{b}+1=\#\{\tau_{b}\leq n<\tau_{b+1}:X_{n-1}=y,X_{n}=y+1\} (81)

and for 1kb1,1\leq k\leq b-1, set

Wk=#{τkn<τk+1:Xn1=y,Xn=y+1}.\displaystyle W_{k}=\#\{\tau_{k}\leq n<\tau_{k+1}:X_{n-1}=y,X_{n}=y+1\}. (82)

Clearly, 𝒜\mathcal{A} occurs if and only if

Vk1,k=1,,b,V1++Vb=a,\displaystyle V_{k}\geq 1,\ k=1,...,b,\ V_{1}+...+V_{b}=a,
Ub1,Uk0,k=1,,b1,U1++Ub=n,\displaystyle U_{b}\geq 1,\ U_{k}\geq 0,\ k=1,...,b-1,\ U_{1}+...+U_{b}=n,
0Wb<Ub, 0WkUk,k=1,,b1,W1++Wb=m1,\displaystyle 0\leq W_{b}<U_{b},\ 0\leq W_{k}\leq U_{k},\ k=1,...,b-1,\ W_{1}+...+W_{b}=m-1,

or equivalently,

𝐕S(a,b),𝐔S~(n,b),𝐖S(m1,b) and 𝐖𝐔.\vec{\mathbf{V}}\in S(a,b),\ \vec{\mathbf{U}}\in\tilde{S}(n,b),\ \vec{\mathbf{W}}\in S(m-1,b)\text{ and }\vec{\mathbf{W}}\prec\vec{\mathbf{U}}.

Now fix 𝐯S(a,b),\vec{\mathbf{v}}\in S(a,b), 𝐮S~(n,b)\vec{\mathbf{u}}\in\tilde{S}(n,b) and 𝐰S(m1,b).\vec{\mathbf{w}}\in S(m-1,b). Denote

𝒜(𝐯,𝐮,𝐰)={𝐕=𝐯,𝐔=𝐮,𝐖=𝐰,𝐰𝐮}.\displaystyle\mathcal{A}(\vec{\mathbf{v}},\vec{\mathbf{u}},\vec{\mathbf{w}})=\{\vec{\mathbf{V}}=\vec{\mathbf{v}},\vec{\mathbf{U}}=\vec{\mathbf{u}},\vec{\mathbf{W}}=\vec{\mathbf{w}},\vec{\mathbf{w}}\prec\vec{\mathbf{u}}\}. (83)

In order to compute the probability of the event 𝒜(𝐯,𝐮,𝐰),\mathcal{A}(\vec{\mathbf{v}},\vec{\mathbf{u}},\vec{\mathbf{w}}), for v,u,w0,v,u,w\geq 0, we define

(x,v)\displaystyle\mathcal{B}(x,v) ={starting from x, before it hits x+1, the walk visits xv times and then hits x+1};\displaystyle=\left\{\begin{array}[]{l}\text{starting from }x,\text{ before it hits }x+1,\text{ the }\\ \text{walk visits }x\ v\text{ times and then hits }x+1\end{array}\right\};
𝒞(x,y,u,w)\displaystyle\mathcal{C}(x,y,u,w) ={starting from x+1, before it hits x, the walk visits yu times, jumps from y to y+1w timesand then, after a number of steps, it hits x};\displaystyle=\left\{\begin{array}[]{l}\text{starting from }x+1,\text{ before it hits }x,\text{ the walk }\\ \text{visits }y\ u\text{ times, jumps from }y\text{ to }y+1\ w\text{ times}\\ \text{and then, after a number of steps, it hits }x\end{array}\right\};
𝒞~(x,y,u,w)\displaystyle\tilde{\mathcal{C}}(x,y,u,w) ={starting from x+1, the walk never returns back to xand visits yu times, jumps from y to y+1w times}.\displaystyle=\left\{\begin{array}[]{l}\text{starting from }x+1,\text{ the walk never returns back}\text{ to }x\\ \text{and visits }y\ u\text{ times, jumps from }y\text{ to }y+1\ w\text{ times}\end{array}\right\}.

It is easily seen that

((x,v))=qxv1px,v1.\displaystyle\mathbb{P}(\mathcal{B}(x,v))=q_{x}^{v-1}p_{x},\ v\geq 1. (84)

Furthermore, we claim that

(𝒞(x,y,0,0))\displaystyle\mathbb{P}(\mathcal{C}(x,y,0,0)) =Px+1(x,y,)\displaystyle=P_{x+1}(x,y,-) (85)

and

(𝒞(x,y,u,w))\displaystyle\mathbb{P}(\mathcal{C}(x,y,u,w)) =(u1w)Px+1(x,y,+)(qyPy1(x,y,+))uw1\displaystyle=\binom{u-1}{w}P_{x+1}(x,y,+)(q_{y}P_{y-1}(x,y,+))^{u-w-1}
×(pyPy+1(y,,))wqyPy1(x,y,) for 0w<u.\displaystyle\times(p_{y}P_{y+1}(y,\infty,-))^{w}q_{y}P_{y-1}(x,y,-)\text{ for }0\leq w<u. (86)

Indeed, the event 𝒞(x,y,0,0)\mathcal{C}(x,y,0,0) occurs if and only if starting from x+1,x+1, the walk hits xx before it hits y,y, which occurs with probability Px+1(x,y,).P_{x+1}(x,y,-). Thus, we get (85). Next, fixing u>w0,u>w\geq 0, we show (5.2.1). Notice that the event 𝒞(x,y,u,w)\mathcal{C}(x,y,u,w) occurs if and only if the following events occur:

(i) starting from x+1,x+1, the walk hits yy before it hits x;x;

(ii) restarting from y,y, before it hits x,x, it has uu jumps away from y,y, of which the last one is from yy to y1,y-1, ww jumps are from yy to y+1,y+1, and the other uw1u-w-1 jumps are from yy to y1.y-1.

Clearly, the probability of the first event equals Px+1(x,y,+).P_{x+1}(x,y,+). To see the probability of the second event, notice that standing at yy for the last time, the walk jumps to y1,y-1, and restarting from y1,y-1, it hits xx before it hits y,y, with probability qyPy1(x,y,).q_{y}P_{y-1}(x,y,-). Next, we see what happens after the other u1u-1 times that the walk stands at y.y. For ww of those times, it moves from yy to y+1y+1 and returns to y,y, with probability pyPy+1(y,,).p_{y}P_{y+1}(y,\infty,-). And for the other uw1u-w-1 times, it moves from yy to y1,y-1, and restarting from y1,y-1, it returns to yy before it hits x,x, with probability qyPy1(x,y,+).q_{y}P_{y-1}(x,y,+). But there are (u1w)\binom{u-1}{w} possible ways to choose ww jumps from those u1u-1 jumps to move forward from yy to y+1.y+1. Consequently, the probability of the second event equals

(u1w)(qyPy1(x,y,+))uw1(pyPy+1(y,,))wqyPy1(x,y,).\displaystyle\binom{u-1}{w}(q_{y}P_{y-1}(x,y,+))^{u-w-1}(p_{y}P_{y+1}(y,\infty,-))^{w}q_{y}P_{y-1}(x,y,-).

Therefore, (5.2.1) is true.

Putting (85) and (5.2.1) together, we get

(𝒞(x,y,u,w))\displaystyle\mathbb{P}(\mathcal{C}(x,y,u,w)) =1{u=w=0}Px+1(x,y,)\displaystyle=1_{\{u=w=0\}}P_{x+1}(x,y,-)
+1{0w<u}(u1w)Px+1(x,y,+)(qyPy1(x,y,+))uw1\displaystyle\quad\quad+1_{\{0\leq w<u\}}\binom{u-1}{w}P_{x+1}(x,y,+)(q_{y}P_{y-1}(x,y,+))^{u-w-1}
×(pyPy+1(y,,))wqyPy1(x,y,)\displaystyle\quad\quad\quad\quad\quad\quad\quad\quad\quad\times(p_{y}P_{y+1}(y,\infty,-))^{w}q_{y}P_{y-1}(x,y,-) (87)

Notice that if 𝒞~(x,y,u,w)\tilde{\mathcal{C}}(x,y,u,w) occurs, then, when standing at yy for the last time, the walk moves forward from yy to y+1y+1 and never returns to y,y, with probability pyPy+1(y,,+).p_{y}P_{y+1}(y,\infty,+). Thus, similarly to (5.2.1), we obtain

(𝒞~(x,y,u,w))=\displaystyle\mathbb{P}(\tilde{\mathcal{C}}(x,y,u,w))= (u1w)Px+1(x,y,+)(qyPy1(x,y,+))uw1\displaystyle\binom{u-1}{w}P_{x+1}(x,y,+)(q_{y}P_{y-1}(x,y,+))^{u-w-1}
×(pyPy+1(y,,))wpyPy+1(y,,+) for 0w<u.\displaystyle\times(p_{y}P_{y+1}(y,\infty,-))^{w}p_{y}P_{y+1}(y,\infty,+)\text{ for }0\leq w<u. (88)

We are now ready to compute the probability of the event 𝒜(𝐯,𝐮,𝐰)\mathcal{A}(\vec{\mathbf{v}},\vec{\mathbf{u}},\vec{\mathbf{w}}) defined in (83). Notice that 𝐰𝐮.\vec{\mathbf{w}}\prec\vec{\mathbf{u}}. Thus, if ui=0,u_{i}=0, then wi=0,w_{i}=0, for 1ib1.1\leq i\leq b-1.

In view of (79)-(82), 𝒜(𝐯,𝐮,𝐰)\mathcal{A}(\vec{\mathbf{v}},\vec{\mathbf{u}},\vec{\mathbf{w}}) occurs if and only if the event

k=1b1(x,vk)𝒞(x,y,uk,wk)(x,vb)𝒞~(x,y,ub,wb)\bigcap_{k=1}^{b-1}\mathcal{B}(x,v_{k})\mathcal{C}(x,y,u_{k},w_{k})\mathcal{B}(x,v_{b})\tilde{\mathcal{C}}(x,y,u_{b},w_{b})

occurs. Thus, by the Markov property, on account of (84), (5.2.1) and (5.2.1), we have

(𝒜\displaystyle\mathbb{P}(\mathcal{A} (𝐯,𝐮,𝐰))=(k=1b1(x,vk)𝒞(x,y,uk,wk)(x,vb)𝒞~(x,y,ub,wb))\displaystyle(\vec{\mathbf{v}},\vec{\mathbf{u}},\vec{\mathbf{w}}))=\mathbb{P}\left(\bigcap_{k=1}^{b-1}\mathcal{B}(x,v_{k})\mathcal{C}(x,y,u_{k},w_{k})\mathcal{B}(x,v_{b})\tilde{\mathcal{C}}(x,y,u_{b},w_{b})\right)
=(k=1b1((x,vk))(𝒞(x,y,uk,wk)))((x,vb))(𝒞~(x,y,ub,wb))\displaystyle=\left(\prod_{k=1}^{b-1}\mathbb{P}(\mathcal{B}(x,v_{k}))\mathbb{P}(\mathcal{C}(x,y,u_{k},w_{k}))\right)\mathbb{P}(\mathcal{B}(x,v_{b}))\mathbb{P}(\tilde{\mathcal{C}}(x,y,u_{b},w_{b}))
=(k=1b1qxvk1px(1{uk=0}Px+1(x,y,)\displaystyle=\bigg{(}\prod_{k=1}^{b-1}q_{x}^{v_{k}-1}p_{x}\bigg{(}1_{\{u_{k}=0\}}P_{x+1}(x,y,-)
+1{uk0}(uk1wk)Px+1(x,y,+)(pyPy+1(y,,))wk\displaystyle\quad\quad\quad\quad\quad\quad+1_{\{u_{k}\neq 0\}}\binom{u_{k}-1}{w_{k}}P_{x+1}(x,y,+)(p_{y}P_{y+1}(y,\infty,-))^{w_{k}}
×(qyPy1(x,y,+))ukwk1qyPy1(x,y,)))\displaystyle\quad\quad\quad\quad\quad\quad\quad\quad\times(q_{y}P_{y-1}(x,y,+))^{u_{k}-w_{k}-1}q_{y}P_{y-1}(x,y,-)\bigg{)}\bigg{)}
×(qxvb1px(ub1wb)Px+1(x,y,+)\displaystyle\quad\quad\times\bigg{(}q_{x}^{v_{b}-1}p_{x}\binom{u_{b}-1}{w_{b}}P_{x+1}(x,y,+)
×(pyPy+1(y,,))wb(qyPy1(x,y,+))ubwb1pyPy+1(y,,+))\displaystyle\quad\quad\times(p_{y}P_{y+1}(y,\infty,-))^{w_{b}}(q_{y}P_{y-1}(x,y,+))^{u_{b}-w_{b}-1}p_{y}P_{y+1}(y,\infty,+)\bigg{)}
=qxabpxb(k=1b(uk1wk))(Px+1(x,y,))k=1b11uk=0\displaystyle=q_{x}^{a-b}p_{x}^{b}\left(\prod_{k=1}^{b}\binom{u_{k}-1}{w_{k}}\right)(P_{x+1}(x,y,-))^{\sum_{k=1}^{b-1}1_{u_{k}=0}}
×(Px+1(x,y,+))k=1b1uk0(pyPy+1(y,,))k=1b1wk1uk0\displaystyle\quad\quad\times(P_{x+1}(x,y,+))^{\sum_{k=1}^{b}1_{u_{k}\neq 0}}\left(p_{y}P_{y+1}(y,\infty,-)\right)^{\sum_{k=1}^{b-1}w_{k}1_{u_{k}\neq 0}}
×(qyPy1(x,y,+))k=1b1(ukwk1)1uk0(qyPy1(x,y,))k=1b11uk0\displaystyle\quad\quad\times\left(q_{y}P_{y-1}(x,y,+)\right)^{\sum_{k=1}^{b-1}(u_{k}-w_{k}-1)1_{u_{k}\neq 0}}(q_{y}P_{y-1}(x,y,-))^{\sum_{k=1}^{b-1}1_{u_{k}\neq 0}}
×(pyPy+1(y,,))wb(qyPy1(x,y,+))ubwb1pyPy+1(y,,+),\displaystyle\quad\quad\times(p_{y}P_{y+1}(y,\infty,-))^{w_{b}}(q_{y}P_{y-1}(x,y,+))^{u_{b}-w_{b}-1}p_{y}P_{y+1}(y,\infty,+),

where and in what follows, we use the convention that (uk1wk)=1\binom{u_{k}-1}{w_{k}}=1 if uk=wk=0,u_{k}=w_{k}=0, k=1,,b.k=1,...,b.

Since 𝐯S(a,b),\vec{\mathbf{v}}\in S(a,b), 𝐮S~(n,b),\vec{\mathbf{u}}\in\tilde{S}(n,b), 𝐰S(m1,b),\vec{\mathbf{w}}\in S(m-1,b), and wkukw_{k}\leq u_{k} for all 1kb,1\leq k\leq b, we have ub1,u_{b}\geq 1, wb+k=1b1wk1uk0=k=1bwk=m1w_{b}+\sum_{k=1}^{b-1}w_{k}1_{u_{k}\neq 0}=\sum_{k=1}^{b}w_{k}=m-1 and ub+k=1b1uk1uk0=n.u_{b}+\sum_{k=1}^{b-1}u_{k}1_{u_{k}\neq 0}=n. Therefore, we get

(𝒜\displaystyle\mathbb{P}(\mathcal{A} (𝐯,𝐮,𝐰))\displaystyle(\vec{\mathbf{v}},\vec{\mathbf{u}},\vec{\mathbf{w}}))
=qxabpxb(k=1b(uk1wk))(Px+1(x,y,))k=1b1uk=0\displaystyle=q_{x}^{a-b}p_{x}^{b}\left(\prod_{k=1}^{b}\binom{u_{k}-1}{w_{k}}\right)(P_{x+1}(x,y,-))^{\sum_{k=1}^{b}1_{u_{k}=0}}
×(Px+1(x,y,+))k=1b1uk0(pyPy+1(y,,))m1\displaystyle\quad\quad\times(P_{x+1}(x,y,+))^{\sum_{k=1}^{b}1_{u_{k}\neq 0}}\left(p_{y}P_{y+1}(y,\infty,-)\right)^{m-1}
×(qyPy1(x,y,+))n(m1)k=1b1uk0(qyPy1(x,y,))k=1b1uk01\displaystyle\quad\quad\times\left(q_{y}P_{y-1}(x,y,+)\right)^{n-(m-1)-\sum_{k=1}^{b}1_{u_{k}\neq 0}}(q_{y}P_{y-1}(x,y,-))^{\sum_{k=1}^{b}1_{u_{k}\neq 0}-1}
×pyPy+1(y,,+).\displaystyle\quad\quad\times p_{y}P_{y+1}(y,\infty,+).

Consequently, taking (16) and Lemma 4 into account, we conclude that

(ξ\displaystyle\mathbb{P}(\xi (x)=a,ξ(x,)=b,ξ(y)=n,ξ(y,)=m)\displaystyle(x)=a,\xi(x,\uparrow)=b,\xi(y)=n,\xi(y,\uparrow)=m)
=𝐯S(a,b),𝐮S~(n,b),𝐰S(m1,b),𝐰𝐮(𝒜(𝐯,𝐮,𝐰))\displaystyle=\sum_{\vec{\mathbf{v}}\in S(a,b),\vec{\mathbf{u}}\in\tilde{S}(n,b),\vec{\mathbf{w}}\in S(m-1,b),\vec{\mathbf{w}}\prec\vec{\mathbf{u}}}\mathbb{P}(\mathcal{A}(\vec{\mathbf{v}},\vec{\mathbf{u}},\vec{\mathbf{w}}))
=𝐯S(a,b),𝐮S~(n,b),𝐰S(m1,b),𝐰𝐮qxabpxb(k=1b(uk1wk))pyD(y)\displaystyle=\sum_{\vec{\mathbf{v}}\in S(a,b),\vec{\mathbf{u}}\in\tilde{S}(n,b),\vec{\mathbf{w}}\in S(m-1,b),\vec{\mathbf{w}}\prec\vec{\mathbf{u}}}q_{x}^{a-b}p_{x}^{b}\left(\prod_{k=1}^{b}\binom{u_{k}-1}{w_{k}}\right)\frac{p_{y}}{D(y)}
×(11D(x,y))k=1b1uk=0(1D(x,y))k=1b1uk0(py(11D(y)))m1\displaystyle\times\left(1-\frac{1}{D(x,y)}\right)^{\sum_{k=1}^{b}1_{u_{k}=0}}\left(\frac{1}{D(x,y)}\right)^{\sum_{k=1}^{b}1_{u_{k}\neq 0}}\left(p_{y}\left(1-\frac{1}{D(y)}\right)\right)^{m-1}
×(qyD(x,y1)D(x,y))n(m1)k=1b1uk0(qy(1D(x,y1)D(x,y)))k=1b1uk01\displaystyle\times\left(q_{y}\frac{D(x,y-1)}{D(x,y)}\right)^{n-(m-1)-\sum_{k=1}^{b}1_{u_{k}\neq 0}}\left(q_{y}\left(1-\frac{D(x,y-1)}{D(x,y)}\right)\right)^{\sum_{k=1}^{b}1_{u_{k}\neq 0}-1}
=i=1b(nm+1)(a1b1)𝐮S~i(n,b),𝐰S(m1,b),𝐰𝐮qxabpxb(k=1b(uk1wk))pyD(y)\displaystyle=\sum_{i=1}^{b\wedge(n-m+1)}\binom{a-1}{b-1}\sum_{\vec{\mathbf{u}}\in\tilde{S}_{i}(n,b),\vec{\mathbf{w}}\in S(m-1,b),\vec{\mathbf{w}}\prec\vec{\mathbf{u}}}q_{x}^{a-b}p_{x}^{b}\left(\prod_{k=1}^{b}\binom{u_{k}-1}{w_{k}}\right)\frac{p_{y}}{D(y)}
×(11D(x,y))bi(1D(x,y))i(py(11D(y)))m1\displaystyle\times\left(1-\frac{1}{D(x,y)}\right)^{b-i}\left(\frac{1}{D(x,y)}\right)^{i}\left(p_{y}\left(1-\frac{1}{D(y)}\right)\right)^{m-1}
×(qyD(x,y1)D(x,y))n(m1)i(qy(1D(x,y1)D(x,y)))i1\displaystyle\times\left(q_{y}\frac{D(x,y-1)}{D(x,y)}\right)^{n-(m-1)-i}\left(q_{y}\left(1-\frac{D(x,y-1)}{D(x,y)}\right)\right)^{i-1}
=i=1b(nm+1)(a1b1)(b1i1)𝐮S(n,i),𝐰S(m1,i),𝐰𝐮qxabpxb(k=1i(uk1wk))\displaystyle=\sum_{i=1}^{b\wedge(n-m+1)}\binom{a-1}{b-1}\binom{b-1}{i-1}\sum_{\vec{\mathbf{u}}\in S(n,i),\vec{\mathbf{w}}\in S(m-1,i),\vec{\mathbf{w}}\prec\vec{\mathbf{u}}}q_{x}^{a-b}p_{x}^{b}\left(\prod_{k=1}^{i}\binom{u_{k}-1}{w_{k}}\right)
×pyD(y)(11D(x,y))bi(1D(x,y))i(py(11D(y)))m1\displaystyle\times\frac{p_{y}}{D(y)}\left(1-\frac{1}{D(x,y)}\right)^{b-i}\left(\frac{1}{D(x,y)}\right)^{i}\left(p_{y}\left(1-\frac{1}{D(y)}\right)\right)^{m-1}
×(qyD(x,y1)D(x,y))n(m1)i(qy(1D(x,y1)D(x,y)))i1\displaystyle\times\left(q_{y}\frac{D(x,y-1)}{D(x,y)}\right)^{n-(m-1)-i}\left(q_{y}\left(1-\frac{D(x,y-1)}{D(x,y)}\right)\right)^{i-1}
=(a1b1)i=1b(nm+1)(b1i1)(n1i1)(nim1)qxabpxbpyD(y)\displaystyle=\binom{a-1}{b-1}\sum_{i=1}^{b\wedge(n-m+1)}\binom{b-1}{i-1}\binom{n-1}{i-1}\binom{n-i}{m-1}q_{x}^{a-b}p_{x}^{b}\frac{p_{y}}{D(y)}
×(11D(x,y))bi(1D(x,y))i(py(11D(y)))m1\displaystyle\times\left(1-\frac{1}{D(x,y)}\right)^{b-i}\left(\frac{1}{D(x,y)}\right)^{i}\left(p_{y}\left(1-\frac{1}{D(y)}\right)\right)^{m-1}
×(qyD(x,y1)D(x,y))n(m1)i(qy(1D(x,y1)D(x,y)))i1,\displaystyle\times\left(q_{y}\frac{D(x,y-1)}{D(x,y)}\right)^{n-(m-1)-i}\left(q_{y}\left(1-\frac{D(x,y-1)}{D(x,y)}\right)\right)^{i-1},

where for the last equality we use the fact that

𝐰S(m1,i),𝐰𝐮k=1i(uk1wk)=(nim1), for each 𝐮S(n,i).\sum_{\vec{\mathbf{w}}\in S(m-1,i),\vec{\bf w}\prec\vec{\bf u}}\prod_{k=1}^{i}\binom{u_{k}-1}{w_{k}}=\binom{n-i}{m-1},\text{ for each }\vec{\bf u}\in S(n,i).

The lemma is proved. \Box
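The counting identity used in the last equality can be verified by brute force. The sketch below assumes, as in the proof, that S(n,i) denotes the compositions of n into i positive parts, S(m-1,i) the compositions of m-1 into i nonnegative parts, and w ≺ u the relation defined at the beginning of the path-decomposition proof:

```python
from itertools import product
from math import comb

# Brute-force check (under the stated assumptions on S and the relation <)
# of: sum over w in S(m-1, i) with w < u of prod_k C(u_k - 1, w_k)
#     = C(n - i, m - 1), for every u in S(n, i).
def check(n, i, m):
    # enumerate compositions of n into i positive parts
    for u in product(range(1, n + 1), repeat=i):
        if sum(u) != n:
            continue
        total = 0
        # enumerate compositions of m-1 into i nonnegative parts with w < u
        for w in product(range(0, m), repeat=i):
            if sum(w) != m - 1:
                continue
            if w[-1] >= u[-1] or any(w[k] > u[k] for k in range(i - 1)):
                continue
            term = 1
            for uk, wk in zip(u, w):
                term *= comb(uk - 1, wk)  # vanishes when wk > uk - 1
            total += term
        assert total == comb(n - i, m - 1), (u, total)

check(8, 3, 4)  # the parameters of Figure 3: n = 8, i = 3, m = 4
print("identity verified for n=8, i=3, m=4")
```

The constraint w ≺ u turns out to be absorbed automatically, since each product vanishes unless w_k ≤ u_k - 1 for every k; the identity is then the Vandermonde convolution applied to \sum_k (u_k - 1) = n - i.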

5.2.2 Proof of Proposition 4 for C=C(a,b)C=C(a,b) or C(,a)C(*,a)

Proof.  Fix ab1.a\geq b\geq 1. Suppose ρk1\rho_{k}\rightarrow 1 as k.k\rightarrow\infty. Then we have px1/2,p_{x}\rightarrow 1/2, qx1/2,q_{x}\rightarrow 1/2, D(x)D(x)\rightarrow\infty as xx\rightarrow\infty and pkpk1qkqk1<c,p_{k}\vee p_{k}^{-1}\vee q_{k}\vee q_{k}^{-1}<c, ρkρk1c,\rho_{k}\vee\rho_{k}^{-1}\leq c, for all k1.k\geq 1. First, it follows directly from (72) and (73) that

(xC(a,b))<(a1b1)1D(x),(xC(,a))<1D(x)\displaystyle\mathbb{P}(x\in C(a,b))<\binom{a-1}{b-1}\frac{1}{D(x)},\ \mathbb{P}(x\in C(*,a))<\frac{1}{D(x)} (89)

and

limx(xC(a,b))D(x)=12a(a1b1),limx(xC(,a))D(x)=1.\displaystyle\lim_{x\rightarrow\infty}\mathbb{P}(x\in C(a,b))D(x)=\frac{1}{2^{a}}\binom{a-1}{b-1},\ \lim_{x\rightarrow\infty}\mathbb{P}(x\in C(*,a))D(x)=1.

We thus get (58) for C=C(a,b)C=C(a,b) or C(,a).C(*,a).
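As an aside (not part of the proof), evaluating formula (72) with p_x = q_x = 1/2 and increasingly large values of D(x) illustrates the limit λ_{C(a,b)} = 2^{-a}\binom{a-1}{b-1} displayed above; the parameter values below are arbitrary:

```python
from math import comb

# Plug p_x = q_x = 1/2 into (72) and let D(x) grow: the scaled probability
# P(x in C(a,b)) * D(x) approaches 2^{-a} * C(a-1, b-1).
a, b = 5, 2
limit = comb(a - 1, b - 1) / 2**a

scaled = {}
for D in (10.0, 1e3, 1e6):
    p_ab = comb(a - 1, b - 1) * 0.5**a / D * (1 - 1 / D) ** (b - 1)  # (72)
    scaled[D] = p_ab * D
    print(D, scaled[D])

assert abs(scaled[1e6] - limit) < 1e-5
```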

Furthermore, it is straightforward to see from (72), (9) and (73), (75) that

(yC|xC)<cD(x)D(x,y)D(y), for yx0,C=C(a,b) or C(,a),\displaystyle\mathbb{P}(y\in C|x\in C)<\frac{cD(x)}{D(x,y)D(y)},\text{ for }y\geq x\geq 0,\ C=C(a,b)\text{ or }C(*,a),

which proves (59). Putting (59) and (89) together, on account of (47) and (50)-(51), we get (61).

Next, we show (62). For this purpose, on one hand, from (72) and (9), we get

(xC(a,b),yC(a,b))(xC(a,b))(yC(a,b))\displaystyle\frac{\mathbb{P}(x\in C(a,b),y\in C(a,b))}{\mathbb{P}(x\in C(a,b))\mathbb{P}(y\in C(a,b))}
=D(x)D(x,y)(11D(x,y))b1(D(x,y1)D(x,y))ab(11D(x))1b\displaystyle=\frac{D(x)}{D(x,y)}\left(1-\frac{1}{D(x,y)}\right)^{b-1}\left(\frac{D(x,y-1)}{D(x,y)}\right)^{a-b}\left(1-\frac{1}{D(x)}\right)^{1-b}
+D(x)D(x,y)(a1b1)1i=2b(ab+1)(b1i1)(aib1)(a1i1)\displaystyle+\frac{D(x)}{D(x,y)}\binom{a-1}{b-1}^{-1}\sum_{i=2}^{b\wedge(a-b+1)}\binom{b-1}{i-1}\binom{a-i}{b-1}\binom{a-1}{i-1}
×(1D(x,y))i1(11D(x,y))bi(D(x,y1)D(x,y))ab+1i\displaystyle\quad\quad\times\left(\frac{1}{D(x,y)}\right)^{i-1}\left(1-\frac{1}{D(x,y)}\right)^{b-i}\left(\frac{D(x,y-1)}{D(x,y)}\right)^{a-b+1-i}
×(1D(x,y1)D(x,y))i1(11D(x))1b\displaystyle\quad\quad\times\left(1-\frac{D(x,y-1)}{D(x,y)}\right)^{i-1}\left(1-\frac{1}{D(x)}\right)^{1-b}
=:(A)+(B).\displaystyle=:(A)+(B). (90)

On the other hand, taking (73) and (75) together, we infer that

(xC(,a),yC(,a))(xC(,a))(yC(,a))\displaystyle\frac{\mathbb{P}(x\in C(*,a),y\in C(*,a))}{\mathbb{P}(x\in C(*,a))\mathbb{P}(y\in C(*,a))}
=D(x)D(x,y)(11D(x,y))a1(py1qyD(x,y1)D(x,y))a(11D(x))1a\displaystyle=\frac{D(x)}{D(x,y)}\left(1-\frac{1}{D(x,y)}\right)^{a-1}\left(\frac{p_{y}}{1-q_{y}\frac{D(x,y-1)}{D(x,y)}}\right)^{a}\left(1-\frac{1}{D(x)}\right)^{1-a}
+D(x)D(x,y)i=2a(a1i1)(a+i2i1)(11D(x,y))ai(1D(x,y))i1\displaystyle+\frac{D(x)}{D(x,y)}\sum_{i=2}^{a}\binom{a-1}{i-1}\binom{a+i-2}{i-1}\left(1-\frac{1}{D(x,y)}\right)^{a-i}\left(\frac{1}{D(x,y)}\right)^{i-1}
×(py1qyD(x,y1)D(x,y))a(qy(1D(x,y1)D(x,y))1qyD(x,y1)D(x,y))i1(11D(x))1a\displaystyle\quad\quad\times\left(\frac{p_{y}}{1-q_{y}\frac{D(x,y-1)}{D(x,y)}}\right)^{a}\left(\frac{q_{y}\left(1-\frac{D(x,y-1)}{D(x,y)}\right)}{1-q_{y}\frac{D(x,y-1)}{D(x,y)}}\right)^{i-1}\left(1-\frac{1}{D(x)}\right)^{1-a}
=:(D)+(E).\displaystyle=:(D)+(E). (91)

Fix ε>0\varepsilon>0 and δ>0\delta>0. We claim that there exists M1>0M_{1}>0 such that (A)(A), (B)(B) in (90) and (D),(D), (E)(E) in (91) satisfy

D(x,y)D(x)×max{(B),(E)}δ/2,x>M1,y>x+M1,\displaystyle\frac{D(x,y)}{D(x)}\times\max\{(B),(E)\}\leq\delta/2,\ x>M_{1},\ y>x+M_{1}, (92)
1δ/2D(x,y)D(x)×(A)1+δ/2,x>M1,y>x+M1,\displaystyle 1-\delta/2\leq\frac{D(x,y)}{D(x)}\times(A)\leq 1+\delta/2,\ x>M_{1},\ y>x+M_{1}, (93)

and

1δ/2D(x,y)D(x)×(D)1+δ/2,x>M1,y>x+M1.\displaystyle 1-\delta/2\leq\frac{D(x,y)}{D(x)}\times(D)\leq 1+\delta/2,\ x>M_{1},\ y>x+M_{1}. (94)

Indeed, choose η>0\eta>0 such that

(a1b1)1i=2b(ab+1)(b1i1)(aib1)(a1i1)η(1+η)b1<δ/2,\displaystyle\binom{a-1}{b-1}^{-1}\sum_{i=2}^{b\wedge(a-b+1)}\binom{b-1}{i-1}\binom{a-i}{b-1}\binom{a-1}{i-1}\eta(1+\eta)^{b-1}<\delta/2, (95)
i=2a(a1i1)(a+i2i1)η(1+η)2a2<δ/2,\displaystyle\sum_{i=2}^{a}\binom{a-1}{i-1}\binom{a+i-2}{i-1}\eta\left(1+\eta\right)^{2a-2}<\delta/2, (96)
(1+η)a1<1+δ/2,(1η)2a1>1δ/2.\displaystyle(1+\eta)^{a-1}<1+\delta/2,\ (1-\eta)^{2a-1}>1-\delta/2. (97)

Since limkρk=1,\lim_{k\rightarrow\infty}\rho_{k}=1, by Lemma 3, there exists M1>0M_{1}>0 such that

(11D(x))1<1+η,(11D(y))1<1+η,\displaystyle\left(1-\frac{1}{D(x)}\right)^{-1}<1+\eta,\ \left(1-\frac{1}{D(y)}\right)^{-1}<1+\eta, (98)
1D(x,y)<η, 1D(x,y1)D(x,y)<η,qy1qyD(x,y1)D(x,y)<1+η,\displaystyle\frac{1}{D(x,y)}<\eta,\ 1-\frac{D(x,y-1)}{D(x,y)}<\eta,\ \frac{q_{y}}{1-q_{y}\frac{D(x,y-1)}{D(x,y)}}<1+\eta, (99)
py1qyD(x,y1)D(x,y)>1η,x>M1,y>x+M1.\displaystyle\frac{p_{y}}{1-q_{y}\frac{D(x,y-1)}{D(x,y)}}>1-\eta,\ x>M_{1},\ y>x+M_{1}. (100)

Thus, on account of (95) and (96), we have

D(x,y)D(x)×(B)\displaystyle\frac{D(x,y)}{D(x)}\times(B)
(a1b1)1i=2b(ab+1)(b1i1)(aib1)(a1i1)η2i2(1+η)b1<δ2,\displaystyle\quad\quad\leq\binom{a-1}{b-1}^{-1}\sum_{i=2}^{b\wedge(a-b+1)}\binom{b-1}{i-1}\binom{a-i}{b-1}\binom{a-1}{i-1}\eta^{2i-2}(1+\eta)^{b-1}<\frac{\delta}{2},
D(x,y)D(x)×(E)i=2a(a1i1)(a+i2i1)η2i2(1+η)2a2<δ2,\displaystyle\frac{D(x,y)}{D(x)}\times(E)\leq\sum_{i=2}^{a}\binom{a-1}{i-1}\binom{a+i-2}{i-1}\eta^{2i-2}\left(1+\eta\right)^{2a-2}<\frac{\delta}{2},

for all x>M1,y>x+M1.x>M_{1},y>x+M_{1}. We thus get (92). To prove (93) and (94), it is easy to see from (98)-(100) that

(1η)a1D(x,y)D(x)×(A)(1+η)b1,x>M1,y>x+M1,\displaystyle(1-\eta)^{a-1}\leq\frac{D(x,y)}{D(x)}\times(A)\leq(1+\eta)^{b-1},\ x>M_{1},\ y>x+M_{1}, (101)
(1η)2a1D(x,y)D(x)×(D)(1+η)a1,x>M1,y>x+M1.\displaystyle(1-\eta)^{2a-1}\leq\frac{D(x,y)}{D(x)}\times(D)\leq(1+\eta)^{a-1},\ x>M_{1},\ y>x+M_{1}. (102)

Consequently, putting (97), (101) and (102) together, we get (93) and (94). Finally, substituting (92)-(94) into (90) and (91), we get

1δD(x,y)D(x)(xC,yC)(xC)(yC)1+δ,\displaystyle 1-\delta\leq\frac{D(x,y)}{D(x)}\frac{\mathbb{P}(x\in C,y\in C)}{\mathbb{P}(x\in C)\mathbb{P}(y\in C)}\leq 1+\delta, (103)

for x>M1,y>x+M1,x>M_{1},y>x+M_{1}, C=C(a,b)C=C(a,b) or C(,a).C(*,a). But by (58), there exists a number M2>0M_{2}>0 such that

(λCδ)/D(k)(kC)(λC+δ)/D(k),\displaystyle(\lambda_{C}-\delta)/D(k)\leq\mathbb{P}(k\in C)\leq(\lambda_{C}+\delta)/D(k), (104)

for k>M2,k>M_{2}, C=C(a,b)C=C(a,b) or C(,a).C(*,a). Therefore, letting N=M1M2,N=M_{1}\vee M_{2}, from (103), we obtain

(λCδ)(1δ)D(x)D(x,y)D(y)(yC|xC)(λC+δ)(1+δ)D(x)D(x,y)D(y),\displaystyle\frac{(\lambda_{C}-\delta)(1-\delta)D(x)}{D(x,y)D(y)}\leq\mathbb{P}(y\in C|x\in C)\leq\frac{(\lambda_{C}+\delta)(1+\delta)D(x)}{D(x,y)D(y)}, (105)

for x>N,x>N, y>x+N,y>x+N, C=C(a,b)C=C(a,b) or C(,a).C(*,a). Taking (104) and (105) together, thanks to (47) and (50)-(51), we conclude that

((λCδ)(1δ))kD(jk)i=1k1D(ji,ji+1)\displaystyle\frac{((\lambda_{C}-\delta)(1-\delta))^{k}}{D(j_{k})\prod_{i=1}^{k-1}D(j_{i},j_{i+1})} (j1C,,jkC)((λC+δ)(1+δ))kD(jk)i=1k1D(ji,ji+1),\displaystyle\leq\mathbb{P}(j_{1}\in C,...,j_{k}\in C)\leq\frac{((\lambda_{C}+\delta)(1+\delta))^{k}}{D(j_{k})\prod_{i=1}^{k-1}D(j_{i},j_{i+1})}, (106)

for 0=j0<j1<<jk,jiji1>N,i=1,,k,0=j_{0}<j_{1}<...<j_{k},\ j_{i}-j_{i-1}>N,\ i=1,...,k, C=C(a,b)C=C(a,b) or C(,a).C(*,a). Choosing δ\delta small enough such that (λCδ)(1δ)>λCε(\lambda_{C}-\delta)(1-\delta)>\lambda_{C}-\varepsilon and (λC+δ)(1+δ)<λC+ε,(\lambda_{C}+\delta)(1+\delta)<\lambda_{C}+\varepsilon, from (103), (105) and (106), we get (62), (63) and (64).

Finally, multiplying both sides of (90) or (91) by (xC),\mathbb{P}(x\in C), C=C(a,b)C=C(a,b) or C(,a)C(*,a) respectively, and discarding the second term on the right-hand side, owing to Lemma 8, we have

(x\displaystyle\mathbb{P}(x C(a,b)|yC(a,b))(A)×(xC(a,b))\displaystyle\in C(a,b)|y\in C(a,b))\geq(A)\times\mathbb{P}(x\in C(a,b))
=(a1b1)pxbqxab1D(x,y)(11D(x,y))b1(D(x,y1)D(x,y))ab\displaystyle=\binom{a-1}{b-1}p_{x}^{b}q_{x}^{a-b}\frac{1}{D(x,y)}\left(1-\frac{1}{D(x,y)}\right)^{b-1}\left(\frac{D(x,y-1)}{D(x,y)}\right)^{a-b}
(a1b1)cD(x,y)(111+ρx+1)b1(11+ρy1)ab\displaystyle\geq\binom{a-1}{b-1}\frac{c}{D(x,y)}\left(1-\frac{1}{1+\rho_{x+1}}\right)^{b-1}\left(\frac{1}{1+\rho_{y-1}}\right)^{a-b}
cD(x,y),yx0,\displaystyle\geq\frac{c}{D(x,y)},\ y\geq x\geq 0,

and

(x\displaystyle\mathbb{P}(x C(,a)|yC(,a))(D)×(xC(,a))\displaystyle\in C(*,a)|y\in C(*,a))\geq(D)\times\mathbb{P}(x\in C(*,a))
=1D(x,y)(11D(x,y))a1(py1qyD(x,y1)D(x,y))a\displaystyle=\frac{1}{D(x,y)}\left(1-\frac{1}{D(x,y)}\right)^{a-1}\left(\frac{p_{y}}{1-q_{y}\frac{D(x,y-1)}{D(x,y)}}\right)^{a}
1D(x,y)(111+ρx+1)a1pyacD(x,y),yx0.\displaystyle\geq\frac{1}{D(x,y)}\left(1-\frac{1}{1+\rho_{x+1}}\right)^{a-1}p_{y}^{a}\geq\frac{c}{D(x,y)},\ y\geq x\geq 0.

That is, (60) is true for C=C(a,b)C=C(a,b) or C(,a).C(*,a). We thus finish the proof of Proposition 4 for C=C(a,b)C=C(a,b) or C(,a).C(*,a). \Box

6 Proof of Theorem 1

Based on Lemma 6 and Proposition 4, we give the proof of Theorem 1 in this section.

Proof.  Let Γn={nC(a,b)},\Gamma_{n}=\{n\in C(a,b)\}, or {nCw},\{n\in C_{w}\}, or {nC(,a)},n1.\{n\in C(*,a)\},n\geq 1. Let D(m)D(m) and D(m,n)D(m,n) be as in (4) and (5). Suppose that ρk\rho_{k} is increasing in k>N0k>N_{0} for some N0>0N_{0}>0 and ρk1\rho_{k}\rightarrow 1 as k.k\rightarrow\infty. Then clearly, we have

D(n)D(m),n>m>N0,\displaystyle D(n)\geq D(m),\ n>m>N_{0}, (107)
D(m,n)c(nm),n>m0,\displaystyle D(m,n)\leq c(n-m),\ n>m\geq 0, (108)

and from (58), we see that (27) holds.

Suppose now that \sum_{n=2}^{\infty}\frac{1}{D(n)\log n}<\infty. If D(0)=\infty, then by Proposition 1, the chain X is recurrent, and therefore |C(a,b)|=|C(*,a)|=|C_{w}|=0 almost surely for a\geq b\geq 1. Assume next that D(0)<\infty. From Lemma 6, we see that (28) is fulfilled. Putting (60) and (108) together, we deduce that (29) is fulfilled. Therefore, applying part (i) of Theorem 3, we conclude that

|C(a,b)||C(,a)||Cw|<,ab1,\displaystyle|C(a,b)|\vee|C(*,a)|\vee|C_{w}|<\infty,\ a\geq b\geq 1, (109)

almost surely. Notice that for each A\subseteq\{a,a+1,...\} and B\subseteq\{1,2,...,a\}, we must have |C(A,a)|\leq|C(*,a)| and |C(a,B)|=\sum_{b\in B}|C(a,b)|. Consequently, we infer from (109) that |C(A,a)|<\infty and |C(a,B)|<\infty almost surely for each a\in\{1,2,...\}, A\subseteq\{a,a+1,...\}, and B\subseteq\{1,2,...,a\}. The convergent part of Theorem 1 is proved.

Next, we prove the divergent part. To this end, suppose that there exist n0>0n_{0}>0 and δ>0\delta>0 such that D(n)δnlognD(n)\leq\delta n\log n for all nn0,n\geq n_{0}, and n=21D(n)logn=.\sum_{n=2}^{\infty}\frac{1}{D(n)\log n}=\infty. From (17), we see that (30) is fulfilled. Moreover, from (107), we know that D(n)D(n) is increasing in n>N0.n>N_{0}. Finally, from (62), we infer that (31) holds. Thus, it follows from part (ii) of Theorem 3 that

|C(a,b)|=,|Cw|=, for ab1,\displaystyle|C(a,b)|=\infty,\ |C_{w}|=\infty,\text{ for }a\geq b\geq 1, (110)

almost surely. Now, consider a1,a\geq 1, ϕA{a,a+1,}\phi\neq A\subseteq\{a,a+1,...\} and ϕB{1,2,,a}.\phi\neq B\subseteq\{1,2,...,a\}. Obviously, from (110), we obtain |C(A,a)|=xA|C(x,a)|=|C(A,a)|=\sum_{x\in A}|C(x,a)|=\infty and |C(a,B)|=bB|C(a,b)|=,|C(a,B)|=\sum_{b\in B}|C(a,b)|=\infty, almost surely. The divergent part of Theorem 1 is proved. \Box

7 Expectations: Proof of Proposition 2

In this section, letting the perturbations rn,n1r_{n},n\geq 1 be as in (9), we estimate the expectations of the cardinalities of the sets C(a,b)[1,n]C(a,b)\cap[1,n] and Cw[1,n].C_{w}\cap[1,n]. For this purpose, we first give the proof of Lemma 1.

7.1 Proof of Lemma 1

Proof.  It is shown in Csáki et al. [4] (see page 637 therein) that c1n(loglogn)β<D(n)<c2n(loglogn)βc_{1}n(\log\log n)^{\beta}<D(n)<c_{2}n(\log\log n)^{\beta} for some constants 0<c1<c2<.0<c_{1}<c_{2}<\infty. The proof of Lemma 1 is a refinement of such a result. We need the following lemma.

Lemma 10.

Suppose that f(x), x\geq 0, is a nonnegative function, that f(x) is decreasing in x>M for some M>0, and that \sum_{k=1}^{\infty}(f(k))^{2}<\infty. Suppose further that \rho_{n}=1-f(n)+O((f(n))^{2}) as n\rightarrow\infty. Then,

ρ1ρncexp{1nf(x)𝑑x} as n.\displaystyle\rho_{1}\cdots\rho_{n}\sim c\exp\left\{-\int_{1}^{n}f(x)dx\right\}\text{ as }n\rightarrow\infty. (111)

Proof.  Under the given conditions, expanding \log\rho_{j}=-f(j)+O((f(j))^{2}) and comparing f(j) with \int_{j-1}^{j}f(x)dx via the monotonicity of f, one checks that the series

j=1(logρj+j1jf(x)𝑑x)\sum_{j=1}^{\infty}\left(\log\rho_{j}+\int_{j-1}^{j}f(x)dx\right)

is convergent. Thus, (111) is true. \Box
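Lemma 10 is easy to test numerically. The sketch below is an illustration only, not part of the proof: it uses the drift of Lemma 11 below, for which \rho_{1}=1/3 and \rho_{n}=(n-1)/(n+1) for n\geq 2, so that f(x)=2/x, \exp\{-\int_{1}^{n}f(x)dx\}=n^{-2}, and the product telescopes, making the limiting constant in (111) computable in closed form.

```python
from math import exp, log

# Drift from Lemma 11: p_1 = 3/4 and p_n = 1/2 + 1/(2n) for n >= 2,
# so rho_n = q_n/p_n equals 1/3 for n = 1 and (n-1)/(n+1) for n >= 2.
def rho(n):
    return 1.0 / 3.0 if n == 1 else (n - 1.0) / (n + 1.0)

# Here rho_n = 1 - f(n) + O((f(n))^2) with f(x) = 2/x, and
# exp(-int_1^n f(x) dx) = n^(-2).  Lemma 10 predicts that
# (rho_1 ... rho_n) / n^(-2) tends to a positive constant c.
def ratio(n):
    prod = 1.0
    for k in range(1, n + 1):
        prod *= rho(k)
    return prod / n ** -2

# The product telescopes: rho_1 ... rho_n = 2 / (3 n (n+1)), so the
# ratio equals 2n / (3(n+1)) and the limiting constant is c = 2/3.
for n in (10, 100, 1000):
    print(n, ratio(n))
```

The printed ratios increase toward c=2/3, in line with (111).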

Now we are ready to finish the proof of Lemma 1. Notice that \rho_{n}=1-4r_{n}+O(r_{n}^{2}) as n\rightarrow\infty. Letting f(x)=\frac{1}{x}+\frac{1}{x(\log\log x)^{\beta}},\ x\geq 3, we thus have \rho_{n}=1-f(n)+O((f(n))^{2}) as n\rightarrow\infty. Applying Lemma 10, we get

ρ1ρncnexp{3n1x(loglogx)β𝑑x} as n.\displaystyle\rho_{1}\cdots\rho_{n}\sim\frac{c}{n}\exp\left\{-\int_{3}^{n}\frac{1}{x(\log\log x)^{\beta}}dx\right\}\text{ as }n\rightarrow\infty. (112)

Set B_{n}:=\sum_{i=n}^{\infty}\rho_{1}\cdots\rho_{i} for n\geq 1. Fix \varepsilon>0. By (112), there exists M_{1}>0 such that

(cε)\displaystyle(c-\varepsilon) 1nexp{3n1x(loglogx)β𝑑x}\displaystyle\frac{1}{n}\exp\left\{-\int_{3}^{n}\frac{1}{x(\log\log x)^{\beta}}dx\right\}
ρ1ρn(c+ε)1nexp{3n1x(loglogx)β𝑑x},n>M1.\displaystyle\leq\rho_{1}\cdots\rho_{n}\leq(c+\varepsilon)\frac{1}{n}\exp\left\{-\int_{3}^{n}\frac{1}{x(\log\log x)^{\beta}}dx\right\},n>M_{1}. (113)

Thus, we have

(cε)\displaystyle(c-\varepsilon) i=n1iexp{3i1x(loglogx)β𝑑x}\displaystyle\sum_{i=n}^{\infty}\frac{1}{i}\exp\left\{-\int_{3}^{i}\frac{1}{x(\log\log x)^{\beta}}dx\right\}
Bn(c+ε)i=n1iexp{3i1x(loglogx)β𝑑x},n>M1.\displaystyle\leq B_{n}\leq(c+\varepsilon)\sum_{i=n}^{\infty}\frac{1}{i}\exp\left\{-\int_{3}^{i}\frac{1}{x(\log\log x)^{\beta}}dx\right\},n>M_{1}. (114)

Let g(y)=\frac{1}{y}\exp\left\{-\int_{3}^{y}\frac{1}{x(\log\log x)^{\beta}}dx\right\},\ y\geq 3. Since g(y) is decreasing in y\geq 3, we have 0<g(l)\leq\sum_{j=n}^{l}g(j)-\int_{n}^{l}g(y)dy\leq g(n) for l>n\geq 3. Thus, we deduce that

1j=ng(j)ng(y)𝑑y1+g(n)ng(y)𝑑y,n3.\displaystyle 1\leq\frac{\sum_{j=n}^{\infty}g(j)}{\int_{n}^{\infty}g(y)dy}\leq 1+\frac{g(n)}{\int_{n}^{\infty}g(y)dy},\ n\geq 3.

But by L’Hospital’s rule, limng(n)ng(y)𝑑y=0.\lim_{n\rightarrow\infty}\frac{g(n)}{\int_{n}^{\infty}g(y)dy}=0. Therefore, there exists a number M2>0M_{2}>0 such that

1j=ng(j)ng(y)𝑑y1+ε,n>M2.\displaystyle 1\leq\frac{\sum_{j=n}^{\infty}g(j)}{\int_{n}^{\infty}g(y)dy}\leq 1+\varepsilon,\ n>M_{2}. (115)

Substituting (115) into (114), we infer that for n>M_{1}\vee M_{2},

(cε)\displaystyle(c-\varepsilon) n1yexp{3y1x(loglogx)β𝑑x}𝑑y\displaystyle\int_{n}^{\infty}\frac{1}{y}\exp\left\{-\int_{3}^{y}\frac{1}{x(\log\log x)^{\beta}}dx\right\}dy
Bn(c+ε)(1+ε)n1yexp{3y1x(loglogx)β𝑑x}𝑑y.\displaystyle\leq B_{n}\leq(c+\varepsilon)(1+\varepsilon)\int_{n}^{\infty}\frac{1}{y}\exp\left\{-\int_{3}^{y}\frac{1}{x(\log\log x)^{\beta}}dx\right\}dy.

Since

\int_{n}^{\infty}\frac{1}{y}\exp\left\{-\int_{3}^{y}\frac{1}{x(\log\log x)^{\beta}}dx\right\}dy\sim(\log\log n)^{\beta}\exp\left\{-\int_{3}^{n}\frac{1}{x(\log\log x)^{\beta}}dx\right\}

as nn\rightarrow\infty (see Csáki et al. [4], page 637), there is M3>M1M2M_{3}>M_{1}\vee M_{2} such that for n>M3n>M_{3}

(c-\varepsilon)(1-\varepsilon)(\log\log n)^{\beta}\exp\left\{-\int_{3}^{n}\frac{1}{x(\log\log x)^{\beta}}dx\right\}
\leq B_{n}\leq(c+\varepsilon)(1+\varepsilon)^{2}(\log\log n)^{\beta}\exp\left\{-\int_{3}^{n}\frac{1}{x(\log\log x)^{\beta}}dx\right\}. (116)

Noticing that D(n)=\frac{B_{n}}{\rho_{1}\cdots\rho_{n}}, from (113) and (116) we get

(cε)(1ε)c+εn(loglogn)β<D(n)<(c+ε)(1+ε)2cεn(loglogn)β,n>M3.\displaystyle\frac{(c-\varepsilon)(1-\varepsilon)}{c+\varepsilon}n(\log\log n)^{\beta}<D(n)<\frac{(c+\varepsilon)(1+\varepsilon)^{2}}{c-\varepsilon}n(\log\log n)^{\beta},\ n>M_{3}.

We thus come to the conclusion that

D(n)n(loglogn)β as n.\displaystyle D(n)\sim n(\log\log n)^{\beta}\text{ as }n\rightarrow\infty.

Lemma 1 is proved. \Box

7.2 Proof of Proposition 2

Proof.  Fix integers ab1.a\geq b\geq 1. For k1,k\geq 1, we set ηk=1{kC(a,b)},\eta_{k}=1_{\{k\in C(a,b)\}}, φk=1{kCw}.\varphi_{k}=1_{\{k\in C_{w}\}}. Then we have

|C(a,b)[1,n]|=k=1nηk and |Cw[1,n]|=k=1nφk.|C(a,b)\cap[1,n]|=\sum_{k=1}^{n}\eta_{k}\text{ and }|C_{w}\cap[1,n]|=\sum_{k=1}^{n}\varphi_{k}.

It follows from Lemma 8 that

(ηk=1)\displaystyle\mathbb{P}(\eta_{k}=1) =(kC(a,b))=(ξ(k)=a,ξ(k,)=b)\displaystyle=\mathbb{P}(k\in C(a,b))=\mathbb{P}(\xi(k)=a,\xi(k,\uparrow)=b)
=(a1b1)pkbqkabD(k)(11D(k))b1,\displaystyle=\binom{a-1}{b-1}\frac{p_{k}^{b}q_{k}^{a-b}}{D(k)}\left(1-\frac{1}{D(k)}\right)^{b-1},
(φk=1)\displaystyle\mathbb{P}(\varphi_{k}=1) =(kCw)=1pkD(k1).\displaystyle=\mathbb{P}(k\in C_{w})=\frac{1}{p_{k}D(k-1)}.

Noting that p_{k}\rightarrow 1/2 as k\rightarrow\infty and applying Lemma 1, we see that

(ηk=1)(a1b1)12ak(loglogk)β as k,\displaystyle\mathbb{P}(\eta_{k}=1)\sim\binom{a-1}{b-1}\frac{1}{2^{a}k(\log\log k)^{\beta}}\text{ as }k\rightarrow\infty,
(φk=1)2k(loglogk)β as k.\displaystyle\mathbb{P}(\varphi_{k}=1)\sim\frac{2}{k(\log\log k)^{\beta}}\text{ as }k\rightarrow\infty.

Consequently, for ε>0,\varepsilon>0, there exists a number M1>0M_{1}>0 such that

(a1b1)1ε2ak(loglogk)β\displaystyle\binom{a-1}{b-1}\frac{1-\varepsilon}{2^{a}k(\log\log k)^{\beta}}\leq (ηk=1)(a1b1)1+ε2ak(loglogk)β,kM1,\displaystyle\mathbb{P}(\eta_{k}=1)\leq\binom{a-1}{b-1}\frac{1+\varepsilon}{2^{a}k(\log\log k)^{\beta}},\ k\geq M_{1},
2(1ε)k(loglogk)β\displaystyle\frac{2(1-\varepsilon)}{k(\log\log k)^{\beta}}\leq (φk=1)2(1+ε)k(loglogk)β,kM1.\displaystyle\mathbb{P}(\varphi_{k}=1)\leq\frac{2(1+\varepsilon)}{k(\log\log k)^{\beta}},\ k\geq M_{1}.

But

k=M1n\displaystyle\sum_{k=M_{1}}^{n} 1k(loglogk)βM1ndxx(loglogx)βlogn(loglogn)β,n.\displaystyle\frac{1}{k(\log\log k)^{\beta}}\sim\int_{M_{1}}^{n}\frac{dx}{x(\log\log x)^{\beta}}\sim\frac{\log n}{(\log\log n)^{\beta}},\ n\rightarrow\infty.

Therefore, there exists a number M2>M1M_{2}>M_{1} such that for n>M2,n>M_{2},

(a1b1)(1ε)2logn2a(loglogn)β\displaystyle\binom{a-1}{b-1}\frac{(1-\varepsilon)^{2}\log n}{2^{a}(\log\log n)^{\beta}}\leq k=M2n(ηk=1)(a1b1)(1+ε)2logn2a(loglogn)β,\displaystyle\sum_{k=M_{2}}^{n}\mathbb{P}(\eta_{k}=1)\leq\binom{a-1}{b-1}\frac{(1+\varepsilon)^{2}\log n}{2^{a}(\log\log n)^{\beta}}, (117)
2(1ε)2logn(loglogn)β\displaystyle\frac{2(1-\varepsilon)^{2}\log n}{(\log\log n)^{\beta}}\leq k=M2n(φk=1)2(1+ε)2logn(loglogn)β.\displaystyle\sum_{k=M_{2}}^{n}\mathbb{P}(\varphi_{k}=1)\leq\frac{2(1+\varepsilon)^{2}\log n}{(\log\log n)^{\beta}}. (118)

As a result, we deduce from (117) that

(1ε)2\displaystyle(1-\varepsilon)^{2} lim¯n2a(loglogn)β(a1b1)logn𝔼|C(a,b)[1,n]|\displaystyle\leq\varliminf_{n\rightarrow\infty}\frac{2^{a}(\log\log n)^{\beta}}{\binom{a-1}{b-1}\log n}\mathbb{E}|C(a,b)\cap[1,n]|
lim¯n2a(loglogn)β(a1b1)logn𝔼|C(a,b)[1,n]|(1+ε)2.\displaystyle\leq\varlimsup_{n\rightarrow\infty}\frac{2^{a}(\log\log n)^{\beta}}{\binom{a-1}{b-1}\log n}\mathbb{E}|C(a,b)\cap[1,n]|\leq(1+\varepsilon)^{2}.

Since ε\varepsilon is arbitrary, letting ε0,\varepsilon\rightarrow 0, we get (10). Similarly, from (118) we get (11). Proposition 2 is proved. \Box

8 Limit distributions: Proof of Theorem 2

This section is devoted to proving Theorem 2. The proof is based on the moment method. To start with, we give an auxiliary lemma to estimate D(i)D(i) and D(i,j).D(i,j).

8.1 An auxiliary lemma

Lemma 11.

Set r_{1}=1/4, and for n\geq 2, let r_{n}=\frac{1}{2n}. Suppose that p_{n}=1/2+r_{n},\ n\geq 1. Then for any \varepsilon\in(0,1), there exists a number i_{0}>0 such that

(1ε)i(ji)jD(i\displaystyle(1-\varepsilon)\frac{i(j-i)}{j}\leq D(i ,j)(1+ε)i(ji)j,j>ii0,\displaystyle,j)\leq(1+\varepsilon)\frac{i(j-i)}{j},\ j>i\geq i_{0}, (119)
(1ε)iD(\displaystyle(1-\varepsilon)i\leq D( i)(1+ε)i,ii0.\displaystyle i)\leq(1+\varepsilon)i,\ i\geq i_{0}. (120)

Proof.  Notice that

\rho_{n}=1-4r_{n}+O(r_{n}^{2})=1-\frac{2}{n}+O(n^{-2}),\ n\rightarrow\infty.

Then for j>i1,j>i\geq 1, choosing θn(0(ρn1),0(ρn1)),inj\theta_{n}\in(0\wedge(\rho_{n}-1),0\vee(\rho_{n}-1)),\ i\leq n\leq j properly, we have

\rho_{i}\cdots\rho_{j}=e^{\sum_{n=i}^{j}\log\rho_{n}}=e^{-\sum_{n=i}^{j}\frac{(\rho_{n}-1)^{2}}{2(1+\theta_{n})^{2}}}e^{\sum_{n=i}^{j}(\rho_{n}-1)}
\geq e^{-\sum_{n=i}^{\infty}\frac{cn^{-2}}{2(1+\theta_{n})^{2}}}e^{-2\sum_{n=i}^{j}\frac{1}{n}}e^{-\sum_{n=i}^{\infty}cn^{-2}}
\geq e^{-\sum_{n=i}^{\infty}\frac{cn^{-2}}{2(1+\theta_{n})^{2}}}e^{-2\int_{i-1}^{j}\frac{1}{x}dx}e^{-\sum_{n=i}^{\infty}cn^{-2}}
=\left(\frac{i-1}{j}\right)^{2}e^{-\sum_{n=i}^{\infty}\frac{cn^{-2}}{2(1+\theta_{n})^{2}}}e^{-\sum_{n=i}^{\infty}cn^{-2}},

where for the second inequality we use \frac{1}{n}\leq\int_{n-1}^{n}\frac{1}{x}dx, valid for i\geq 2. Fix \varepsilon>0. We can find a number i_{1}\geq 2 such that e^{-\sum_{n=i}^{\infty}\frac{cn^{-2}}{2(1+\theta_{n})^{2}}}e^{-\sum_{n=i}^{\infty}cn^{-2}}>(1-\varepsilon)^{1/2} for all i\geq i_{1}. Thus we have

ρiρj(1ε)1/2(i1j)2, for all jii1.\displaystyle\rho_{i}\cdots\rho_{j}\geq(1-\varepsilon)^{1/2}\left(\frac{i-1}{j}\right)^{2},\text{ for all }j\geq i\geq i_{1}.

Similarly, we can find a number i2>i1i_{2}>i_{1} such that

ρiρj(1+ε)1/2(i1j)2, for all jii2.\displaystyle\rho_{i}\cdots\rho_{j}\leq(1+\varepsilon)^{1/2}\left(\frac{i-1}{j}\right)^{2},\text{ for all }j\geq i\geq i_{2}.

Consequently, we get

(1ε)1/2(i1j)2ρiρj(1+ε)1/2(i1j)2, for all jii2.\displaystyle(1-\varepsilon)^{1/2}\left(\frac{i-1}{j}\right)^{2}\leq\rho_{i}\cdots\rho_{j}\leq(1+\varepsilon)^{1/2}\left(\frac{i-1}{j}\right)^{2},\text{ for all }j\geq i\geq i_{2}.

As a result, we obtain

(1ε)1/2n=ij1(in)2D(i,j)(1+ε)1/2n=ij1(in)2,j>ii2.\displaystyle(1-\varepsilon)^{1/2}\sum_{n=i}^{j-1}\left(\frac{i}{n}\right)^{2}\leq D(i,j)\leq(1+\varepsilon)^{1/2}\sum_{n=i}^{j-1}\left(\frac{i}{n}\right)^{2},\ j>i\geq i_{2}.

Notice that for some number i0>i2,i_{0}>i_{2}, we have

(1ε)1/2i(ji)jn=ij1(in)2(1+ε)1/2i(ji)j,j>ii0.\displaystyle(1-\varepsilon)^{1/2}\frac{i(j-i)}{j}\leq\sum_{n=i}^{j-1}\left(\frac{i}{n}\right)^{2}\leq(1+\varepsilon)^{1/2}\frac{i(j-i)}{j},\ j>i\geq i_{0}.

Therefore, we get

(1ε)i(ji)jD(i,j)(1+ε)i(ji)j,j>ii0\displaystyle(1-\varepsilon)\frac{i(j-i)}{j}\leq D(i,j)\leq(1+\varepsilon)\frac{i(j-i)}{j},\ j>i\geq i_{0}

which proves (119). Letting jj\rightarrow\infty in (119), we get (120). Lemma 11 is proved. \Box
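For this drift the quantities in Lemma 11 are in fact explicit, which gives a quick numerical check of (119). The sketch below is an illustration only; it assumes that the definition (5), stated earlier in the paper, reduces to D(i,j)=\sum_{n=i}^{j-1}\rho_{i+1}\cdots\rho_{n} with the empty product at n=i equal to 1.

```python
# Numerical sanity check of Lemma 11 (illustration only).  We assume
# D(i, j) = sum_{n=i}^{j-1} rho_{i+1} ... rho_n, the empty product
# at n = i being 1; (4)-(5) themselves are stated earlier in the paper.
def rho(n):
    # p_1 = 3/4 and p_n = 1/2 + 1/(2n) for n >= 2
    return 1.0 / 3.0 if n == 1 else (n - 1.0) / (n + 1.0)

def D(i, j):
    total, prod = 0.0, 1.0
    for n in range(i, j):
        total += prod          # prod = rho_{i+1} ... rho_n
        prod *= rho(n + 1)
    return total

# For n > i the product telescopes to i(i+1)/(n(n+1)), so
# D(i, j) = i(i+1)(1/i - 1/j) = (i+1)(j-i)/j exactly, which is within
# a factor (i+1)/i of the comparison quantity i(j-i)/j in (119).
i, j = 200, 400
print(D(i, j), (i + 1) * (j - i) / j, i * (j - i) / j)
```

For i=200, j=400 the exact value is 100.5 against the comparison value 100, a relative gap of 1/i=0.5\%, consistent with (119) for any fixed \varepsilon once i_{0} is large.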

8.2 Convergence of the moments

An important step to prove Theorem 2 is to show the convergence of 𝔼(|C[1,n]|k)(logn)k,\frac{\mathbb{E}(|C\cap[1,n]|^{k})}{(\log n)^{k}}, for k1,k\geq 1, C=C(a,b),C=C(a,b), C(,a)C(*,a) or Cw,C_{w}, as n.n\rightarrow\infty. We have the following lemma.

Lemma 12.

Fix integers ab1.a\geq b\geq 1. Let r1=1/4r_{1}=1/4 and rn=12nr_{n}=\frac{1}{2n} for n2.n\geq 2. Set pn=12+rnp_{n}=\frac{1}{2}+r_{n} for n1.n\geq 1. Then we have

limn𝔼(|C[1,n]|k)(logn)k={k!2ak(a1b1)kif C=C(a,b),k!if C=C(,a),2kk!if C=Cw, for k1.\displaystyle\lim_{n\rightarrow\infty}\frac{\mathbb{E}(|C\cap[1,n]|^{k})}{(\log n)^{k}}=\left\{\begin{array}[]{cl}\frac{k!}{2^{ak}}\binom{a-1}{b-1}^{k}&\text{if }\ C=C(a,b),\\ k!&\text{if }\ C=C(*,a),\\ 2^{k}k!&\text{if }\ C=C_{w},\end{array}\right.\text{ for }\ k\geq 1.

Proof.  Consider C=C(a,b), C_{w} or C(*,a). Let \lambda_{C} be as in (58). For k\geq 1, set \eta_{k}=1_{\{k\in C\}}; clearly, we have |C\cap[1,n]|=\sum_{k=1}^{n}\eta_{k}.

Let S(k,m) be as defined in (18). Then

\mathbb{E}|C\cap[1,n]|^{k}=\mathbb{E}\left(\left(\sum_{j=1}^{n}\eta_{j}\right)^{k}\right)=\sum_{0<j_{1},j_{2},...,j_{k}\leq n}\mathbb{E}\left(\eta_{j_{1}}\eta_{j_{2}}\cdots\eta_{j_{k}}\right)
=m=1kl1++lm=k,li1,i=1,,mm!(km)0<j1<<jmn𝔼(ηj1l1ηjmlm)\displaystyle=\sum_{m=1}^{k}\sum_{\begin{subarray}{c}l_{1}+\dots+l_{m}=k,\\ l_{i}\geq 1,i=1,...,m\end{subarray}}m!\binom{k}{m}\sum_{0<j_{1}<...<j_{m}\leq n}\mathbb{E}\left(\eta_{j_{1}}^{l_{1}}\cdots\eta_{j_{m}}^{l_{m}}\right)
=m=1k(l1,,lm)S(k,m)m!(km)0<j1<<jmn𝔼(ηj1l1ηjmlm).\displaystyle=\sum_{m=1}^{k}\sum_{(l_{1},...,l_{m})\in S(k,m)}m!\binom{k}{m}\sum_{0<j_{1}<...<j_{m}\leq n}\mathbb{E}\left(\eta_{j_{1}}^{l_{1}}\cdots\eta_{j_{m}}^{l_{m}}\right).

Since each \eta_{j}, j=1,...,n, takes values in \{0,1\}, we have \eta_{j}^{l}=\eta_{j} for all l\geq 1; thus, taking Lemma 4 into account, we have

𝔼|C[1,n]|k=m=1k(k1m1)m!(km)0<j1<<jmn𝔼(ηj1ηjm)\displaystyle\mathbb{E}|C\cap[1,n]|^{k}=\sum_{m=1}^{k}\binom{k-1}{m-1}m!\binom{k}{m}\sum_{0<j_{1}<...<j_{m}\leq n}\mathbb{E}(\eta_{j_{1}}\cdots\eta_{j_{m}})
=m=1k(k1m1)m!(km)0<j1<<jmn(j1C,..,jmC)\displaystyle=\sum_{m=1}^{k}\binom{k-1}{m-1}m!\binom{k}{m}\sum_{0<j_{1}<...<j_{m}\leq n}\mathbb{P}(j_{1}\in C,..,j_{m}\in C)
=:m=1k(k1m1)m!(km)G(n,m).\displaystyle=:\sum_{m=1}^{k}\binom{k-1}{m-1}m!\binom{k}{m}G(n,m). (121)

Fix ε>0.\varepsilon>0. Let NN and i0i_{0} be as in Proposition 4 and Lemma 11, respectively. Set n2=Ni0n_{2}=N\vee i_{0} and denote temporarily

A=\{(j_{1},...,j_{m})\mid 0<j_{1}<...<j_{m}\leq n,\ j_{i+1}-j_{i}>n_{2},\ \forall\,0\leq i\leq m-1\},
B=\{(j_{1},...,j_{m})\mid 0<j_{1}<...<j_{m}\leq n,\ (j_{1},...,j_{m})\notin A\},
B_{i}=\{(j_{1},...,j_{m})\mid j_{1}<...<j_{i}\leq n_{2}<j_{i+1}<...<j_{m}\leq n\},\ i=1,...,m-1,
B_{m}=\{(j_{1},...,j_{m})\mid n_{2}<j_{1}<...<j_{m}\leq n,\ \{1\leq s\leq m-1\mid j_{s+1}-j_{s}\leq n_{2}\}\neq\emptyset\},

where and in the remainder of this proof we set j0=0.j_{0}=0. Obviously, we have B=i=1mBi.B=\bigcup_{i=1}^{m}B_{i}. Therefore, we can write

G(n,m)\displaystyle G(n,m) =(j1,,jm)A+i=1m(j1,,jm)Bi(j1C,..,jmC)\displaystyle=\sum_{(j_{1},...,j_{m})\in A}+\sum_{i=1}^{m}\sum_{(j_{1},...,j_{m})\in B_{i}}\mathbb{P}(j_{1}\in C,..,j_{m}\in C)
=:GA(n,m)+i=1mGBi(n,m).\displaystyle=:G_{A}(n,m)+\sum_{i=1}^{m}G_{B_{i}}(n,m). (122)

Consider first the term GA(n,m).G_{A}(n,m). From (64), (119) and (120), we get

GA(n,m)\displaystyle G_{A}(n,m) =(j1,,jm)A(j1C,,jmC)\displaystyle=\sum_{(j_{1},...,j_{m})\in A}\mathbb{P}(j_{1}\in C,...,j_{m}\in C)
(λC+ε)m(1+ε)m(j1,,jm)A1(i=1m1D(ji,ji+1))D(jm)\displaystyle\leq(\lambda_{C}+\varepsilon)^{m}(1+\varepsilon)^{m}\sum_{(j_{1},...,j_{m})\in A}\frac{1}{\left(\prod_{i=1}^{m-1}D(j_{i},j_{i+1})\right)D(j_{m})}
(λC+ε)m(1+ε)m(j1,,jm)A1j1(j2j1)(jmjm1).\displaystyle\leq(\lambda_{C}+\varepsilon)^{m}(1+\varepsilon)^{m}\sum_{(j_{1},...,j_{m})\in A}\frac{1}{j_{1}(j_{2}-j_{1})\cdots(j_{m}-j_{m-1})}.

Applying Lemma 5, we obtain

lim¯nGA(n,m)(logn)m(λC+ε)m(1+ε)m.\displaystyle\varlimsup_{n\rightarrow\infty}\frac{G_{A}(n,m)}{(\log n)^{m}}\leq(\lambda_{C}+\varepsilon)^{m}(1+\varepsilon)^{m}. (123)

Similarly, we have

lim¯nGA(n,m)(logn)m(λCε)m(1ε)m.\displaystyle\varliminf_{n\rightarrow\infty}\frac{G_{A}(n,m)}{(\log n)^{m}}\geq(\lambda_{C}-\varepsilon)^{m}(1-\varepsilon)^{m}. (124)

Next, we claim that

limnGBi(n,m)(logn)m=0,i=1,,m.\displaystyle\lim_{n\rightarrow\infty}\frac{G_{B_{i}}(n,m)}{(\log n)^{m}}=0,i=1,...,m. (125)

Indeed, for each 1im,1\leq i\leq m, from (61), we see that

G_{B_{i}}(n,m)\leq\sum_{(j_{1},...,j_{m})\in B_{i}}\frac{c}{\left(\prod_{s=1}^{m-1}D(j_{s},j_{s+1})\right)D(j_{m})}. (126)

Then, in view of (26), (119) and (120), we have

lim¯nGBm(n,m)(logn)mlim¯nc(logn)m(j1,,jm)Bm1j1(j2j1)(jmjm1)=0.\displaystyle\varlimsup_{n\rightarrow\infty}\frac{G_{B_{m}}(n,m)}{(\log n)^{m}}\leq\varlimsup_{n\rightarrow\infty}\frac{c}{(\log n)^{m}}\sum_{(j_{1},...,j_{m})\in B_{m}}\frac{1}{j_{1}(j_{2}-j_{1})\cdots(j_{m}-j_{m-1})}=0.

For 1im1,1\leq i\leq m-1, from (119), (120) and (126), we deduce that

G_{B_{i}}(n,m)\leq c\sum_{\begin{subarray}{c}0<j_{1}<...<j_{i}\leq n_{2},\\ n\geq j_{m}>...>j_{i+1}>n_{2}\end{subarray}}\frac{1}{\left(\prod_{s=1}^{m-1}D(j_{s},j_{s+1})\right)D(j_{m})}
\leq c\sum_{0<j_{1}<...<j_{i}\leq n_{2}}\frac{1}{\prod_{s=1}^{i}D(j_{s},j_{s+1})}
\quad\quad\quad\quad\times\sum_{n\geq j_{m}>...>j_{i+1}>n_{2}}\frac{1}{j_{i+1}\prod_{s=i+1}^{m-1}(j_{s+1}-j_{s})}
\leq c\binom{n_{2}}{i}\max_{0<j_{1}<...<j_{i}\leq n_{2}}\frac{1}{\prod_{s=1}^{i}D(j_{s},j_{s+1})}
\quad\quad\quad\quad\times\sum_{n\geq j_{m-i}>...>j_{1}>n_{2}}\frac{1}{j_{1}\prod_{s=1}^{m-i-1}(j_{s+1}-j_{s})}.

But it follows from Lemma 5 that

\lim_{n\rightarrow\infty}\frac{1}{(\log n)^{m-i}}\sum_{n\geq j_{m-i}>...>j_{1}>n_{2}}\frac{1}{j_{1}\prod_{s=1}^{m-i-1}(j_{s+1}-j_{s})}=1.

Therefore, we obtain

lim¯nGBi(n,m)(logn)m=0,1im1.\displaystyle\varlimsup_{n\rightarrow\infty}\frac{G_{B_{i}}(n,m)}{(\log n)^{m}}=0,1\leq i\leq m-1.

We thus finish the proof of (125).
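As a numerical aside, the normalization behind (123)-(125) can be illustrated in the simplest case m=2. Assuming that Lemma 5 (stated earlier in the paper and not reproduced here) asserts \sum_{0<j_{1}<j_{2}\leq n}\frac{1}{j_{1}(j_{2}-j_{1})}\sim(\log n)^{2}, the sketch below shows the ratio approaching 1, slowly and from above, with corrections of order 1/\log n.

```python
from math import log

# Illustration only: for m = 2, the Lemma-5-type sum
#   S(n) = sum_{0 < j1 < j2 <= n} 1 / (j1 (j2 - j1))
# should satisfy S(n) / (log n)^2 -> 1 (very slow convergence).
def ratio(n):
    # Precompute harmonic numbers H_0, ..., H_n, since summing over
    # j2 for fixed j1 gives the harmonic number H_{n - j1}.
    H = [0.0] * (n + 1)
    for d in range(1, n + 1):
        H[d] = H[d - 1] + 1.0 / d
    s = sum(H[n - j1] / j1 for j1 in range(1, n))
    return s / log(n) ** 2

for n in (200, 2000):
    print(n, ratio(n))
```

The ratio decreases from roughly 1.17 at n=200 toward 1, which is consistent with the use of Lemma 5 in (123)-(125) where only the (\log n)^{m} order matters.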

Now, putting (123), (124) and (125) together, we infer from (122) that

(λCε)m(1ε)mlim¯G(n,m)(logn)mlim¯G(n,m)(logn)m(λC+ε)m(1+ε)m.\displaystyle(\lambda_{C}-\varepsilon)^{m}(1-\varepsilon)^{m}\leq\varliminf\frac{G(n,m)}{(\log n)^{m}}\leq\varlimsup\frac{G(n,m)}{(\log n)^{m}}\leq(\lambda_{C}+\varepsilon)^{m}(1+\varepsilon)^{m}.

Since ε\varepsilon is arbitrary, letting ε0,\varepsilon\rightarrow 0, we get

limnG(n,m)(logn)m=λCm.\displaystyle\lim_{n\rightarrow\infty}\frac{G(n,m)}{(\log n)^{m}}=\lambda_{C}^{m}.

Consequently, dividing both sides of (121) by (\log n)^{k} and letting n\rightarrow\infty (only the term m=k contributes in the limit, since G(n,m)\sim\lambda_{C}^{m}(\log n)^{m}=o((\log n)^{k}) for m<k), we conclude that

limn𝔼|C[1,n]|k(logn)k=k!λCk.\displaystyle\lim_{n\rightarrow\infty}\frac{\mathbb{E}|C\cap[1,n]|^{k}}{(\log n)^{k}}=k!\lambda_{C}^{k}.

Lemma 12 is proved. \Box

8.3 Proof of Theorem 2

Proof.  We have shown in Lemma 12 that for k\geq 1, \mathbb{E}\left(\frac{|C(a,b)\cap[1,n]|}{2^{-a}\binom{a-1}{b-1}\log n}\right)^{k}\rightarrow k!, \mathbb{E}\left(\frac{|C(*,a)\cap[1,n]|}{\log n}\right)^{k}\rightarrow k! and \mathbb{E}\left(\frac{|C_{w}\cap[1,n]|}{2\log n}\right)^{k}\rightarrow k! as n\rightarrow\infty. Since ((2n)!)^{-\frac{1}{2n}}\sim\frac{e}{2n} as n\rightarrow\infty, we have

k=1((2k)!)12k=.\displaystyle\sum_{k=1}^{\infty}((2k)!)^{-\frac{1}{2k}}=\infty.

Consequently, noting that k! is the k-th moment of the standard exponential distribution, using Carleman’s test for the uniqueness of the moment problem (see e.g., [12, Chap. II, §12]) together with the method of moments, we conclude that

|C(a,b)[1,n]|2a(a1b1)logn𝑑S,|C(,a)[1,n]|logn𝑑S and |Cw[1,n]|2logn𝑑S\displaystyle\frac{\left|C(a,b)\cap[1,n]\right|}{2^{-a}\binom{a-1}{b-1}\log n}\overset{d}{\rightarrow}S,\ \frac{|C(*,a)\cap[1,n]|}{\log n}\overset{d}{\rightarrow}S\text{ and }\frac{\left|C_{w}\cap[1,n]\right|}{2\log n}\overset{d}{\rightarrow}S

as n,n\rightarrow\infty, where SS is an exponential random variable with (S>t)=et,t>0.\mathbb{P}(S>t)=e^{-t},t>0. Theorem 2 is proved. \Box
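The divergence invoked above is easy to verify directly: by Stirling's formula ((2k)!)^{1/(2k)}\sim 2k/e, so the terms ((2k)!)^{-\frac{1}{2k}} behave like e/(2k) and the series diverges together with the harmonic series. The sketch below (an illustration only) checks the term asymptotics via the log-gamma function.

```python
from math import e, exp, lgamma

# k-th Carleman term for the moment sequence m_k = k!:
# ((2k)!)^(-1/(2k)), computed via lgamma(2k + 1) = log((2k)!).
def carleman_term(k):
    return exp(-lgamma(2 * k + 1) / (2 * k))

# By Stirling, ((2k)!)^(1/(2k)) ~ 2k/e, so carleman_term(k) ~ e/(2k);
# the series therefore diverges like the harmonic series, which is
# Carleman's condition for k! to determine the exponential law.
for k in (10, 100, 1000):
    print(k, carleman_term(k) * 2 * k / e)
```

The printed ratios increase toward 1, confirming that the terms are eventually comparable to e/(2k).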

Acknowledgements: The authors are indebted to two anonymous referees who carefully read an earlier unpublished version of our manuscript [13], provided us with very useful suggestions and comments, and especially helped us to improve the proofs of Lemma 5 and Lemma 9 in the current manuscript, respectively. We also thank Professor A. Földes for confirming to us that an additional monotonicity condition on D(n) is required for [4, Theorem 1.1]. This project is partially supported by the National Natural Science Foundation of China (Grant No. 11501008) and the Natural Science Foundation of the Anhui Educational Committee (Grant No. 2023AH040025).

References

  • [1] K.L. Chung, Markov chains with stationary transition probabilities, 2nd ed. Springer, New York, 1967.
  • [2] E. Csáki, A. Földes, P. Révész, Transient nearest neighbor random walk on the line, J. Theoret. Probab. 22 (1) (2009) 100-122.
  • [3] E. Csáki, A. Földes, P. Révész, Transient nearest neighbor random walk and Bessel process, J. Theoret. Probab. 22 (4) (2009) 992-1009.
  • [4] E. Csáki, A. Földes, P. Révész, On the number of cutpoints of transient nearest neighbor random walk on the line, J. Theoret. Probab. 23 (2) (2010) 624-638.
  • [5] M. Dwass, Branching processes in simple random walk, Proc. Amer. Math. Soc. 51 (2) (1975) 270-274.
  • [6] N. James, R. Lyons, Y. Peres, A transient Markov chain with finitely many cutpoints, In: IMS Collections Probability and Statistics: Essays in Honor of David A. Freedman, 2 (2008) 24-29, Institute of Mathematical Statistics.
  • [7] F.B. Knight, Random walks and a sojourn density process of Brownian motion, Trans. Amer. Math. Soc. 109 (1963) 56-86.
  • [8] J. Lamperti, Criteria for the recurrence or transience of stochastic processes. I, J. Math. Anal. Appl. 1 (3-4) (1960) 314-330.
  • [9] J. Lamperti, A new class of probability limit theorems, J. Math. Mech. 11 (5) (1962) 749-772.
  • [10] J. Lamperti, Criteria for stochastic processes II: Passage-time moments, J. Math. Anal. Appl. 7 (1) (1963) 127-145.
  • [11] V.V. Petrov, A generalization of the Borel-Cantelli lemma, Statist. Probab. Lett. 67 (3) (2004) 233-239.
  • [12] A.N. Shiryaev, Probability, 2nd ed. Springer-Verlag, Berlin/Heidelberg/New York, 1996.
  • [13] H.-M. Wang, Local time of transient random walk on the lattice of positive half line. Preprint, (2023) arXiv: 2306.14376.