
Multi-level reflecting Brownian motion on the half line and its stationary distribution

Masakiyo Miyazawa*
Abstract

A semi-martingale reflecting Brownian motion (SRBM) is a popular process for diffusion approximations of queueing models and their networks. In this paper, we are concerned with the case where it lives on the nonnegative half-line, but the drift and variance of its Brownian component change discontinuously at finitely many states. This reflecting diffusion process naturally arises from a state-dependent single server queue, studied by the author [12]. Our main interest is in its stationary distribution, which is important for applications. We define this reflecting diffusion process as the solution of a stochastic integral equation, and show that it uniquely exists in the weak sense. This result is also proved in a different way by Atar et al. [1]. In this paper, we consider its Harris irreducibility and stability, that is, positive recurrence, and derive its stationary distribution under this stability condition. The stationary distribution has a simple analytic expression, likely extendable to a more general state-dependent SRBM. Our proofs rely on the generalized Ito formula for a convex function and on local time.

Keywords: reflecting Brownian motion, multi level, discontinuous diffusion coefficients, stationary distribution, stochastic integral equation, generalized Ito formula, Tanaka formula, Harris irreducibility.

MSC Classification: 60H20, 60J25, 60J55, 60H30, 60K25

*Department of Information Sciences, Tokyo University of Science, Noda, Chiba, Japan

1 Introduction

We are concerned with a semi-martingale reflecting Brownian motion (SRBM for short) on the nonnegative half-line in which the drift and variance of its Brownian component change discontinuously at finitely many states. The intervals into which these states partition the state space are called levels. This reflecting SRBM is denoted by $Z(\cdot)\equiv\{Z(t);t\geq 0\}$, and will be called a one-dimensional multi-level SRBM (see Definition 2.1). In particular, if the number of levels is $k$, then it is called a one-dimensional $k$-level SRBM. Note that the one-dimensional 1-level SRBM is just the standard SRBM on the half-line.

Let $Z(\cdot)$ be a one-dimensional $k$-level SRBM. This reflecting process for $k=2$ arises in the recent study of Miyazawa [12] on the asymptotic analysis of a state-dependent single server queue, called the 2-level $GI/G/1$ queue, in heavy traffic. This queueing model was motivated by an energy saving problem for internet servers.

In [12], it is conjectured that the reflecting process $Z(\cdot)$ for $k=2$ is the weak solution of a stochastic integral equation (see (2.1) in Section 2) and that, if its stationary distribution exists, this distribution agrees with the limit of the scaled stationary distribution of the 2-level $GI/G/1$ queue under heavy traffic, which is obtained under some extra conditions in Theorem 3.1 of [12]. While writing this paper, we learned that the weak existence of the solution is shown by Atar et al. [1] for a more general model than the one studied here, and its uniqueness is proved in [2, Lemma 4.1] under one of the conditions of this paper.

We refer to these results of Atar et al. [1, 2] as Lemma 2.1. However, we here prove a slightly different lemma, Lemma 2.2, which is more restrictive for the existence but less restrictive for the uniqueness. Lemma 2.2 includes some further results which will be used later. Furthermore, its proof is sample-path based and different from that of Lemma 2.1. We then show in Lemma 2.3 that $Z(\cdot)$ is Harris irreducible, and give a necessary and sufficient condition for it to be positive recurrent in Lemma 2.4. These three lemmas are the basis of our stochastic analysis.

The main results of this paper are Theorem 3.2 and Corollary 3.1 for the $k$-level SRBM, which derive the stationary distribution of $Z(\cdot)$, without any extra condition, under the stability condition obtained in Lemma 2.4. However, we first focus on the case $k=2$ in Theorem 3.1, then consider the case of general $k$ in Theorem 3.2. This is because the presentation and proof for general $k$ are notationally complicated, while the proof for $k=2$ can be used with minor modifications for general $k$. The stationary distribution for $k=2$ is rather simple: it is composed of two mutually singular measures, one truncated exponential or uniform on the interval $[0,\ell_{1}]$, and the other exponential on $[\ell_{1},\infty)$, where $\ell_{1}$ is the right endpoint of the first level, at which the variance of the Brownian component and the drift of $Z(\cdot)$ change discontinuously. One may easily guess these measures, but it is not easy to compute the weights by which the stationary distribution is uniquely determined (see [12]). We resolve this computation problem using the local time of the semi-martingale $Z(\cdot)$ at $\ell_{1}$.

The key tools for the proofs of the lemmas and theorems are the generalized Ito formula for a convex function and local time, due to Tanaka [13]. We also use the notion of a weak solution of a stochastic integral equation. These formulas and notions are standard in stochastic analysis nowadays (e.g., see [4, 5, 6, 7, 8]), but they are briefly reviewed in the appendix because they play major roles in our proofs.

This paper consists of five sections. In Section 2, we formally introduce a one-dimensional reflecting SRBM with a state-dependent Brownian component, which includes the one-dimensional multi-level SRBM as a special case, and present preliminary results including Lemmas 2.2, 2.3 and 2.4, which are proved in Section 4. Theorems 3.1 and 3.2 are presented and proved in Section 3. Finally, related problems and a generalization of Theorem 3.2 are discussed in Section 5. In the appendix, the definitions of a weak solution of a stochastic integral equation and of the local time of a semi-martingale are briefly discussed in Sections A.1 and A.2, respectively.

2 Problem and preliminary lemmas

Let $\sigma(x)$ and $b(x)$ be measurable positive and real-valued functions, respectively, of $x\in\mathbb{R}$, where $\mathbb{R}$ is the set of all real numbers. We are interested in the solution $Z(\cdot)\equiv\{Z(t);t\geq 0\}$ of the following stochastic integral equation, SIE for short.

Z(t)=Z(0)+\int_{0}^{t}\sigma(Z(u))\,dW(u)+\int_{0}^{t}b(Z(u))\,du+Y(t)\geq 0,\qquad t\geq 0, \qquad (2.1)

where $W(\cdot)$ is the standard Brownian motion, and $Y(\cdot)\equiv\{Y(t);t\geq 0\}$ is a non-decreasing process satisfying $\int_{0}^{t}1(Z(u)>0)\,dY(u)=0$ for $t\geq 0$. We refer to this $Y(\cdot)$ as a regulator. The state space of $Z(\cdot)$ is denoted by $S\equiv\mathbb{R}_{+}$, where $\mathbb{R}_{+}=\{x\in\mathbb{R};x\geq 0\}$.

As usual, we assume that all continuous-time processes are defined on a stochastic basis $(\Omega,\mathcal{F},\mathbb{F},\mathbb{P})$, and are right-continuous with left limits and $\mathbb{F}$-adapted, where $\mathbb{F}\equiv\{\mathcal{F}_{t};t\geq 0\}$ is a right-continuous filtration. Note that there are two kinds of solutions, strong and weak, of the SIE (2.1). See Appendix A.1 for their definitions. In this paper, we simply call a weak solution a solution unless stated otherwise.

If the functions $\sigma(x)$ and $b(x)$ are Lipschitz continuous and their squares are bounded by $K(1+x^{2})$ for some constant $K>0$, then the SIE (2.1) has a unique solution, even for the multidimensional SRBM living on a convex region (see [14, Theorem 4.1]). However, we are interested in the case where $\sigma(x)$ and $b(x)$ change discontinuously. In this case, a solution $Z(\cdot)$ need not exist in general, so we need some condition. As discussed in Section 1, we are particularly interested in the case where they satisfy the following conditions. Let $\overline{\mathbb{R}}=\mathbb{R}\cup\{-\infty,+\infty\}$.
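For intuition, the dynamics of (2.1) can be mimicked numerically by an Euler-Maruyama scheme in which the regulator increment is whatever is needed to keep the state nonnegative. The following sketch is ours, not part of the paper; the function name and discretization are illustrative assumptions.

```python
import numpy as np

def simulate_reflected_sde(sigma, b, z0=0.0, T=10.0, n=10_000, rng=None):
    """Euler-Maruyama sketch of the SIE (2.1): the regulator Y(t) is
    realized by projecting each increment back onto [0, infinity), so it
    increases only when the free increment would push Z below zero.
    `sigma` and `b` are the state-dependent coefficient functions."""
    rng = np.random.default_rng(rng)
    dt = T / n
    z = np.empty(n + 1)
    y = np.empty(n + 1)  # cumulative regulator
    z[0], y[0] = z0, 0.0
    for i in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))
        free = z[i] + sigma(z[i]) * dw + b(z[i]) * dt  # unreflected step
        z[i + 1] = max(free, 0.0)
        y[i + 1] = y[i] + max(-free, 0.0)
    return z, y
```

With constant coefficients $\sigma\equiv 1$, $b\equiv -1$ this reproduces the standard 1-level SRBM on the half-line: the path stays nonnegative and the regulator is non-decreasing.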

Condition 2.1.

There are an integer $k\geq 2$ and a strictly increasing sequence $\{\ell_{j}\in\overline{\mathbb{R}};j=0,1,\ldots,k\}$ satisfying $\ell_{0}=-\infty$, $\ell_{j}>0$ for $j=1,2,\ldots,k-1$ and $\ell_{k}=\infty$ such that the functions $\sigma(x)>0$ and $b(x)\in\mathbb{R}$ for $x\in\mathbb{R}$ are given by

\sigma(x)=\sum_{j=1}^{k}\sigma_{j}1(\ell_{j-1}\leq x<\ell_{j}),\qquad b(x)=\sum_{j=1}^{k}b_{j}1(\ell_{j-1}\leq x<\ell_{j}), \qquad (2.2)

where $1(\cdot)$ is the indicator function of the proposition "$\cdot$", and $\sigma_{j}>0$, $b_{j}\in\mathbb{R}$ for $j=1,2,\ldots,k$ are constants.

Since $Z(t)$ of (2.1) is nonnegative, $\sigma(x)$ and $b(x)$ are only used for $x\geq 0$ in (2.1). Taking this into account, we partition the state space $S\equiv\mathbb{R}_{+}$ of $Z(\cdot)$ by $\ell_{1},\ell_{2},\ldots,\ell_{k-1}$ under Condition 2.1 as follows.

S_{1}=[0,\ell_{1}),\quad S_{j}=[\ell_{j-1},\ell_{j}),\;\;j=2,3,\ldots,k-1,\quad S_{k}=[\ell_{k-1},+\infty). \qquad (2.3)

We call these $S_{j}$'s levels. Note that $\sigma(x)$ and $b(x)$ are constant in $x$ on each level, and they may change discontinuously at the states $\ell_{j}$ for $j=1,2,\ldots,k-1$ under Condition 2.1.
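For concreteness, the piecewise-constant coefficients of (2.2) and the level partition (2.3) can be encoded as follows. This is a small sketch with hypothetical helper names, not code from the paper.

```python
import bisect

def make_k_level_coeffs(ells, sigmas, bs):
    """Build sigma(x) and b(x) of (2.2) from the interior thresholds
    ell_1 < ... < ell_{k-1} and the per-level constants sigma_j, b_j.
    Level j (1-based) is S_j of (2.3): level 1 is [0, ell_1),
    level k is [ell_{k-1}, infinity)."""
    assert len(sigmas) == len(bs) == len(ells) + 1
    assert all(s > 0 for s in sigmas)

    def level(x):
        # number of thresholds <= x gives the 0-based level index,
        # so the coefficients are right-continuous at each ell_j
        return bisect.bisect_right(ells, x)

    def sigma(x):
        return sigmas[level(x)]

    def b(x):
        return bs[level(x)]

    return sigma, b
```

For $k=2$ with a single threshold $\ell_{1}$, `make_k_level_coeffs([ell1], [s1, s2], [b1, b2])` returns the step functions used throughout Section 3.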

We start with the existence of the solution $Z(\cdot)$ of (2.1), which will be implied by the existence of the solution $X(\cdot)\equiv\{X(t);t\geq 0\}$ of the following stochastic integral equation in the weak sense.

X(t)=X(0)+\int_{0}^{t}\sigma(X(u))\,dW(u)+\int_{0}^{t}b(X(u))\,du,\qquad t\geq 0. \qquad (2.4)

See Remark 4.1. Taking this into account, we will also consider the condition below for the existence of the weak solution $(X(\cdot),W(\cdot))$.

Condition 2.2.

The functions $\sigma(x)$ and $b(x)$ are measurable and satisfy

\inf_{x\in\mathbb{R}}\sigma(x)>0,\qquad\sup_{x\in\mathbb{R}}|b(x)|<\infty. \qquad (2.5)

For the unique existence of the weak solution $(X(\cdot),W(\cdot))$, Condition 2.2 is further weakened to Condition 5.1 by Theorem 5.15 of [8] (see Section 5.2 for details). However, the latter condition is quite complicated. So, we take the simpler condition (2.5), which is sufficient for our arguments.

Definition 2.1.

The solution $Z(\cdot)$ of (2.1) under Condition 2.1 is called a one-dimensional multi-level SRBM; in particular, it is called a one-dimensional $k$-level SRBM if it has $k$ levels, namely, if the total number of partitions in (2.3) is $k$. Under Condition 2.2, it is called a one-dimensional state-dependent SRBM with bounded drifts.

Using the weak solution $(X(\cdot),W(\cdot))$ of (2.4), Atar et al. [1, 2] prove:

Lemma 2.1 (Lemma 4.3 of [1] and Lemma 4.1 of [2]).

(i) Under Condition 2.2, the stochastic integral equation (2.1) has a weak solution such that $Y(t)$ is continuous in $t\geq 0$. (ii) Under Condition 2.1, the solution is weakly unique.

The proof of (i) is easy (see Remark 4.1) while the proof of (ii) is quite technical. Instead of this lemma, we will use the following lemma, in which (i) and (ii) of Lemma 2.1 are proved under more restrictive and less restrictive conditions, respectively.

Lemma 2.2.

Under Condition 2.2, if there are constants $\sigma_{1},\ell_{1}>0$ and $b_{1}\in\mathbb{R}$ such that

\sigma(x)=\sigma_{1}>0,\quad b(x)=b_{1},\qquad\forall x<\ell_{1}, \qquad (2.6)

then the stochastic integral equation (2.1) has a unique weak solution such that $Y(t)$ is continuous in $t\geq 0$ and $Z(\cdot)$ is a strong Markov process.

We prove this lemma in Section 4.1; our proof is different from that of Lemma 2.1 by Atar et al. [1, 2].

The main interest of this paper is to derive the stationary distribution of $Z(\cdot)$ for the one-dimensional multi-level SRBM under an appropriate stability condition. Since this reflecting diffusion process satisfies the conditions of Lemma 2.2, $Z(\cdot)$ is a strong Markov process. Hence, our first task in deriving its stationary distribution is to consider its irreducibility and positive recurrence. To this end, we introduce Harris irreducibility and recurrence following [11]. Let $\mathcal{B}(\mathbb{R}_{+})$ be the Borel field, that is, the minimal $\sigma$-algebra on $\mathbb{R}_{+}$ which contains all open sets of $\mathbb{R}_{+}$. Then, a real-valued process $X(\cdot)$ which is right-continuous with left limits is called Harris irreducible if there is a non-trivial $\sigma$-finite measure $\psi$ on $(\mathbb{R}_{+},\mathcal{B}(\mathbb{R}_{+}))$ such that, for $B\in\mathcal{B}(\mathbb{R}_{+})$, $\psi(B)>0$ implies

\mathbb{E}_{x}\left[\int_{0}^{\infty}1(X(u)\in B)\,du\right]>0,\qquad\forall x\in\mathbb{R}_{+}, \qquad (2.7)

while it is called Harris recurrent if (2.7) can be replaced by

\mathbb{P}_{x}\left[\int_{0}^{\infty}1(X(u)\in B)\,du=\infty\right]=1,\qquad\forall x\in\mathbb{R}_{+}, \qquad (2.8)

where $\mathbb{P}_{x}(A)=\mathbb{P}(A|X(0)=x)$ for $A\in\mathcal{F}$, and $\mathbb{E}_{x}[H]=\mathbb{E}[H|X(0)=x]$ for a random variable $H$.

The Harris conditions (2.7) and (2.8) are related to hitting times. Define the hitting time of a subset of the state space $S$ as

\tau_{B}=\inf\{t\geq 0;X(t)\in B\},\qquad B\in\mathcal{B}(\mathbb{R}_{+}), \qquad (2.9)

where $\tau_{B}=\infty$ if $X(t)\not\in B$ for all $t\geq 0$. We denote $\tau_{B}$ simply by $\tau_{a}$ for $B=\{a\}$. Then, it is known that the Harris recurrence condition (2.8) is equivalent to

\mathbb{P}_{x}[\tau_{B}<\infty]=1,\qquad\forall x\geq 0. \qquad (2.10)

See Theorem 1 of [9] (see also [11]) for the proof of this equivalence. However, the Harris irreducibility condition (2.7) may not be equivalent to $\mathbb{P}_{x}[\tau_{B}<\infty]>0$. In what follows, $\tau_{B}$ is defined for the process under discussion unless stated otherwise.

Using these notions and notations, we present the following basic facts for the one-dimensional state-dependent SRBM $Z(\cdot)$ with bounded drifts, where $Z(\cdot)$ is strong Markov by Lemma 2.2.

Lemma 2.3.

For the one-dimensional state-dependent SRBM with bounded drifts, if the condition (2.6) of Lemma 2.2 is satisfied, then (i) it is Harris irreducible, and (ii) $\mathbb{E}_{x}[\tau_{a}]<\infty$ for $0\leq x<a$.

Remark 2.1.

(ii) is not surprising because, intuitively, the process is pushed upward by the reflection at the origin and by the positive variances.

Lemma 2.4.

For the one-dimensional state-dependent SRBM with bounded drifts, if the condition (2.6) of Lemma 2.2 is satisfied and if there are constants $\ell_{*},b_{*}$ and $\sigma_{*}$ such that

\sigma(x)=\sigma_{*}>0,\quad b(x)=b_{*},\qquad\forall x\geq\ell_{*}>\ell_{1}, \qquad (2.11)

then $Z(\cdot)$ has a stationary distribution if and only if $b_{*}<0$. In particular, the one-dimensional $k$-level SRBM has a stationary distribution if and only if $b_{k}<0$.

These lemmas may be intuitively clear, but their proofs may be of independent interest because they are not immediate and show how nicely the Ito formula works. We prove Lemmas 2.3 and 2.4 in Sections 4.2 and 4.3, respectively. We are now ready to study the stationary distribution of $Z(\cdot)$.

3 Stationary distribution of multi-level SRBM

We are concerned with the multi-level SRBM. Denote the number of its levels by $k$. We first introduce basic notation. Let $\mathcal{N}_{k}=\{1,2,\ldots,k\}$, and define

\beta_{j}=2b_{j}/\sigma_{j}^{2},\qquad j\in\mathcal{N}_{k}.

In this section, we derive the stationary distribution of the one-dimensional $k$-level SRBM for arbitrary $k\geq 2$. We first focus on the case $k=2$ because it is the simplest case, but its proof contains all the ideas that will be used for general $k$.

3.1 Stationary distribution for $k=2$

Throughout Section 3.1, we assume that $k=2$.

Theorem 3.1 (The case $k=2$).

The one-dimensional 2-level SRBM $Z(\cdot)$ has a stationary distribution if and only if $b_{2}<0$, equivalently, $\beta_{2}<0$. Assume that $b_{2}<0$, and let $\nu$ be the stationary distribution of $Z(t)$; then $\nu$ is unique and has a probability density function $h$ which is given below.
(i) If $b_{1}\not=0$, then

h(x)=d_{11}h_{11}(x)+d_{12}h_{2}(x),\qquad x\geq 0, \qquad (3.1)

where $h_{11}$ and $h_{2}$ are probability density functions defined as

h_{11}(x)=\frac{\beta_{1}e^{\beta_{1}(x-\ell_{1})}}{1-e^{-\beta_{1}\ell_{1}}}1(0\leq x<\ell_{1}),\qquad h_{2}(x)=-\beta_{2}e^{\beta_{2}(x-\ell_{1})}1(x\geq\ell_{1}), \qquad (3.2)

and $d_{1j}$ for $j=1,2$ are positive constants defined by

d_{11}=\frac{b_{2}(e^{-\beta_{1}\ell_{1}}-1)}{b_{1}+b_{2}(e^{-\beta_{1}\ell_{1}}-1)},\qquad d_{12}=1-d_{11}=\frac{b_{1}}{b_{1}+b_{2}(e^{-\beta_{1}\ell_{1}}-1)}. \qquad (3.3)

(ii) If $b_{1}=0$, then

h(x)=d_{01}h_{01}(x)+d_{02}h_{2}(x), \qquad (3.4)

where $h_{2}$ is defined in (3.2), and

h_{01}(x)=\frac{1}{\ell_{1}}1(0\leq x<\ell_{1}), \qquad (3.5)

d_{01}=\frac{-2b_{2}\ell_{1}}{\sigma_{1}^{2}-2b_{2}\ell_{1}},\qquad d_{02}=1-d_{01}=\frac{\sigma_{1}^{2}}{\sigma_{1}^{2}-2b_{2}\ell_{1}}. \qquad (3.6)
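To make the weights concrete, the following sketch (our own code, not from the paper) evaluates $h$ and its weights, writing $h_{11}$ in the normalized form $\beta_{1}e^{\beta_{1}(x-\ell_{1})}/(1-e^{-\beta_{1}\ell_{1}})$ so that it integrates to one on $[0,\ell_{1})$; the function names are illustrative assumptions.

```python
import math

def two_level_weights(b1, b2, sigma1, sigma2, ell1):
    """Weights of the two singular parts: (d11, d12) of (3.3) when b1 != 0,
    and (d01, d02) of (3.6) when b1 = 0.  Requires the stability b2 < 0."""
    assert b2 < 0 and sigma1 > 0 and sigma2 > 0 and ell1 > 0
    if b1 != 0:
        beta1 = 2.0 * b1 / sigma1**2
        w = b2 * (math.exp(-beta1 * ell1) - 1.0)
        return w / (b1 + w), b1 / (b1 + w)
    return (-2.0 * b2 * ell1 / (sigma1**2 - 2.0 * b2 * ell1),
            sigma1**2 / (sigma1**2 - 2.0 * b2 * ell1))

def stationary_density(x, b1, b2, sigma1, sigma2, ell1):
    """h(x) of (3.1) for b1 != 0, or of (3.4) for b1 = 0."""
    d_low, d_high = two_level_weights(b1, b2, sigma1, sigma2, ell1)
    beta2 = 2.0 * b2 / sigma2**2
    if x >= ell1:  # exponential part h_2 on [ell1, infinity)
        return d_high * (-beta2) * math.exp(beta2 * (x - ell1))
    if b1 == 0:    # uniform part h_01 on [0, ell1)
        return d_low / ell1
    beta1 = 2.0 * b1 / sigma1**2  # truncated exponential part h_11
    return d_low * beta1 * math.exp(beta1 * (x - ell1)) / (1.0 - math.exp(-beta1 * ell1))
```

A quick numerical integration of `stationary_density` over $[0,\ell_{1})$ recovers the weight of the first level, consistent with $d_{11}+d_{12}=1$ and $d_{01}+d_{02}=1$.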
Remark 3.1.

(a) (3.5) and (3.6) are obtained from (3.2) and (3.3) by letting $b_{1}\to 0$.
(b) Assume that $Z(\cdot)$ is a stationary process, and define the moment generating functions (mgf for short):

\varphi(\theta)=\mathbb{E}[e^{\theta Z(1)}],
\varphi_{1}(\theta)=\mathbb{E}[e^{\theta Z(1)}1(0\leq Z(1)<\ell_{1})],\qquad\varphi_{2}(\theta)=\mathbb{E}[e^{\theta Z(1)}1(Z(1)\geq\ell_{1})].

Here, $\varphi(\theta)$ and $\varphi_{2}(\theta)$ are finite for $\theta\leq 0$, while $\varphi_{1}(\theta)$ is finite for all $\theta\in\mathbb{R}$. However, all of them are uniquely identified for $\theta\leq 0$ as Laplace transforms. So, in what follows, we always assume that $\theta\leq 0$ unless stated otherwise.

For $i=0,1$, let $\widehat{h}_{i1}$ and $\widehat{h}_{2}$ be the moment generating functions of $h_{i1}$ and $h_{2}$, respectively; then

\widehat{h}_{i1}(\theta)=\begin{cases}\dfrac{e^{\theta\ell_{1}}-e^{-\beta_{1}\ell_{1}}}{\beta_{1}+\theta}\,\dfrac{\beta_{1}}{1-e^{-\beta_{1}\ell_{1}}}\,1(\theta\not=-\beta_{1}),\quad&i=1,\\ \dfrac{e^{\theta\ell_{1}}-1}{\ell_{1}\theta}\,1(\theta\not=0),&i=0,\end{cases} \qquad (3.7)

\widehat{h}_{2}(\theta)=e^{\theta\ell_{1}}\frac{\beta_{2}}{\beta_{2}+\theta},\qquad\theta\leq 0, \qquad (3.8)

where the singular points $\theta=-\beta_{1},0$ in (3.7) are negligible for determining $h_{i1}$, so we take the convention that $\widehat{h}_{i1}(\theta)$ exists for these $\theta$.

Hence, (3.1) for $b_{1}\not=0$ and (3.4) for $b_{1}=0$ are equivalent to

\varphi_{1}(\theta)/\varphi_{1}(0)=\begin{cases}\widehat{h}_{11}(\theta),&b_{1}\not=0,\\ \widehat{h}_{01}(\theta),&b_{1}=0,\end{cases}\qquad\varphi_{2}(\theta)/\varphi_{2}(0)=\widehat{h}_{2}(\theta), \qquad (3.9)
\varphi_{1}(0)=d_{11},\quad\varphi_{2}(0)=d_{12},\quad\mbox{for }b_{1}\not=0, \qquad (3.10)
\varphi_{1}(0)=d_{01},\quad\varphi_{2}(0)=d_{02},\quad\mbox{for }b_{1}=0. \qquad (3.11)

Thus, Theorem 3.1 is proved by showing these equalities.
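As a sanity check on the closed forms (3.7) and (3.8), one can compare them with direct numerical integration of the corresponding densities. The sketch below is ours, with illustrative parameter values; the helper names are assumptions.

```python
import math

def h11_mgf(theta, beta1, ell1):
    """Closed form of hhat_{11} in (3.7), valid for theta != -beta1."""
    return ((math.exp(theta * ell1) - math.exp(-beta1 * ell1)) / (beta1 + theta)
            * beta1 / (1.0 - math.exp(-beta1 * ell1)))

def h2_mgf(theta, beta2, ell1):
    """Closed form of hhat_2 in (3.8)."""
    return math.exp(theta * ell1) * beta2 / (beta2 + theta)

def mgf_numeric(density, theta, lo, hi, n=100_000):
    """Midpoint-rule approximation of int_lo^hi e^{theta x} density(x) dx."""
    dx = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * dx
        total += math.exp(theta * x) * density(x)
    return total * dx
```

Integrating $h_{11}$ over $[0,\ell_{1})$ and $h_{2}$ over $[\ell_{1},\infty)$ (truncated far in the tail, where the exponential factor is negligible) reproduces the closed forms to high accuracy.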

Remark 3.2.

Miyazawa [12] conjectures that the diffusion-scaled process limit of the queue length of the 2-level $GI/G/1$ queue in heavy traffic is the solution of the stochastic integral equation (5.2) in [12]. This stochastic equation corresponds to (2.1), but $\ell_{1}$, $b_{i}$ and $\sigma_{i}$ of the present paper need to be replaced by $\ell_{0}$, $-b_{i}$, $\sqrt{c_{i}}\sigma_{i}$ for $i=1,2$, respectively. Under these replacements, $\beta_{i}$ also needs to be replaced by $-2b_{i}/(c_{i}\sigma_{i}^{2})$. Then, it follows from (3.3), (3.6), (3.10) and (3.11) that, under the setting of Miyazawa [12], for $b_{1}\not=0$,

\varphi_{1}(0)=d_{11}=\frac{c_{1}b_{2}(e^{\beta_{1}\ell_{1}}-1)}{c_{2}b_{1}+c_{1}b_{2}(e^{\beta_{1}\ell_{1}}-1)},\quad\varphi_{2}(0)=d_{12}=\frac{c_{2}b_{1}}{c_{2}b_{1}+c_{1}b_{2}(e^{\beta_{1}\ell_{1}}-1)},

and, for $b_{1}=0$,

\varphi_{1}(0)=d_{01}=\frac{2b_{2}\ell_{1}}{c_{2}\sigma_{1}^{2}+2b_{2}\ell_{1}},\quad\varphi_{2}(0)=d_{02}=\frac{c_{2}\sigma_{1}^{2}}{c_{2}\sigma_{1}^{2}+2b_{2}\ell_{1}}.

Hence, the limiting distributions in (ii) of Theorem 3.1 of [12] are identical to the stationary distributions in Theorem 3.1 here. Note that the limiting distributions in [12] are obtained under some extra conditions, which are not needed for Theorem 3.1.

3.2 Proof of Theorem 3.1

By Remark 3.1, it is sufficient to show (3.9), (3.10) and (3.11) for the proof of Theorem 3.1. We will do it in three steps.

3.2.1 1st step of the proof

In this subsection, we derive two stochastic equations from (2.1). For this, we use the generalized Ito formula for a continuous semi-martingale $X(\cdot)$ with finite quadratic variation $[X]_{t}$ for all $t\geq 0$. For a convex test function $f$, this Ito formula is given by

f(X(t))=f(X(0))+\int_{0}^{t}f^{\prime}(X(u)-)\,dX(u)+\frac{1}{2}\int_{0}^{\infty}L_{x}(t)\,\mu_{f}(dx),\qquad t\geq 0, \qquad (3.12)

where $L_{x}(t)$ is the local time of $X(\cdot)$, which is right-continuous in $x\in\mathbb{R}$, and $\mu_{f}$ is a measure on $(\mathbb{R},\mathcal{B}(\mathbb{R}))$ defined by

\mu_{f}([x,y))=f^{\prime}(y-)-f^{\prime}(x-),\qquad x<y\mbox{ in }\mathbb{R}, \qquad (3.13)

where $f^{\prime}(x-)$ is the left derivative of $f$ at $x$. See Appendix A.2 for the definition of local time and more about its connection to the generalized Ito formula (3.12).

Furthermore, if $f(x)$ is twice differentiable, then (3.12) can be written as

f(X(t))=f(X(0))+\int_{0}^{t}f^{\prime}(X(u))\,dX(u)+\frac{1}{2}\int_{0}^{t}f^{\prime\prime}(X(u))\,d[X]_{u},\qquad t\geq 0, \qquad (3.14)

which is the well-known Ito formula.

In our application of the generalized Ito formula, we first take the following convex function $f$ with parameter $\theta\leq 0$ as a test function.

f(x)=e^{\theta x}1(x<\ell_{1})+e^{\theta\ell_{1}}1(x\geq\ell_{1}),\qquad x\in\mathbb{R}. \qquad (3.15)

Since $f^{\prime}(\ell_{1}+)=0$ and $f^{\prime}(\ell_{1}-)=\theta e^{\theta\ell_{1}}$, it follows from (3.13) that

\mu_{f}(\{\ell_{1}\})=\lim_{\varepsilon\downarrow 0}f^{\prime}((\ell_{1}+\varepsilon)-)-f^{\prime}(\ell_{1}-)=f^{\prime}(\ell_{1}+)-f^{\prime}(\ell_{1}-)=-\theta e^{\theta\ell_{1}}.

On the other hand, $f^{\prime\prime}(x)=\theta^{2}e^{\theta x}$ for $x<\ell_{1}$. Hence,

\int_{0}^{\infty}L_{x}(t)\,\mu_{f}(dx)=\int_{0}^{\infty}L_{x}(t)f^{\prime\prime}(x-)1(x<\ell_{1})\,dx+L_{\ell_{1}}(t)\,\mu_{f}(\{\ell_{1}\}).

Then, applying the local time characterization (A.1) to this formula, we have

\int_{0}^{\infty}L_{x}(t)\,\mu_{f}(dx)=\theta^{2}\int_{0}^{t}e^{\theta Z(u)}1(0\leq Z(u)<\ell_{1})\,d[Z]_{u}-\theta e^{\theta\ell_{1}}L_{\ell_{1}}(t). \qquad (3.16)

We next compute the quadratic variation $[Z]_{t}$ of $Z(\cdot)$. Define $M(\cdot)\equiv\{M(t);t\geq 0\}$ by

M(t)\equiv\int_{0}^{t}\sigma(Z(u))\,dW(u),\qquad t\geq 0, \qquad (3.17)

then $M(\cdot)$ is a martingale. Denote the quadratic variations of $Z(\cdot)$ and $M(\cdot)$ by $[Z]_{t}$ and $[M]_{t}$, respectively. Since $Z(t)$ and $Y(t)$ are continuous in $t$, it follows from (2.1) that

[Z]_{t}=[M]_{t}=\int_{0}^{t}\sigma^{2}(Z(u))\,du=\int_{0}^{t}\big[\sigma_{1}^{2}1(0\leq Z(u)<\ell_{1})+\sigma_{2}^{2}1(Z(u)\geq\ell_{1})\big]\,du,\qquad t\geq 0. \qquad (3.18)

Hence, from $f^{\prime}(x)=\theta e^{\theta x}1(x<\ell_{1})$, (3.16) and (3.17), the generalized Ito formula (3.12) becomes

f(Z(t))=f(Z(0))+\int_{0}^{t}\theta e^{\theta Z(u)}1(0\leq Z(u)<\ell_{1})\sigma_{1}\,dW(u)+\int_{0}^{t}\Big(b_{1}\theta+\frac{1}{2}\sigma^{2}_{1}\theta^{2}\Big)e^{\theta Z(u)}1(0\leq Z(u)<\ell_{1})\,du+\theta Y(t)-\frac{1}{2}\theta e^{\theta\ell_{1}}L_{\ell_{1}}(t),\qquad t\geq 0,\;\theta\leq 0. \qquad (3.19)

We next apply the Ito formula with the test function $f(x)=e^{\theta x}$ to (2.1). In this case, we use the Ito formula (3.14) because $f(x)$ is twice continuously differentiable. Then, we have, for $\theta\leq 0$,

f(Z(t))=f(Z(0))+\int_{0}^{t}\theta e^{\theta Z(u)}\sigma(Z(u))\,dW(u)+\int_{0}^{t}\Big(b(Z(u))\theta+\frac{1}{2}\sigma^{2}(Z(u))\theta^{2}\Big)e^{\theta Z(u)}\,du+\theta Y(t). \qquad (3.20)

3.2.2 2nd step of the proof

The first statement of Theorem 3.1 is immediate from Lemma 2.4. Hence, under $b_{2}<0$, we can assume that $Z(\cdot)$ is a stationary process by taking its stationary distribution as the distribution of $Z(0)$. In what follows, this is always assumed.

Recall the moment generating functions $\varphi,\varphi_{1}$ and $\varphi_{2}$, which are defined in Remark 3.1. We first consider the stochastic equation (3.19) to compute $\varphi_{1}$. Since $\mathbb{E}[L_{\ell_{1}}(1)]$ is finite by Lemma A.1, taking the expectation of (3.19) for $t=1$ and $\theta\leq 0$ yields

\frac{1}{2}\sigma_{1}^{2}(\beta_{1}+\theta)\varphi_{1}(\theta)-\frac{1}{2}e^{\theta\ell_{1}}\mathbb{E}[L_{\ell_{1}}(1)]+\mathbb{E}[Y(1)]=0, \qquad (3.21)

because $\beta_{1}\sigma_{1}^{2}=2b_{1}$. Note that this equation implies that $\mathbb{E}[Y(1)]$ is also finite.

Using (3.21), we consider $\varphi_{1}(\theta)$ separately for $b_{1}\not=0$ and $b_{1}=0$. First, assume that $b_{1}\not=0$. Then, from (3.21) and $\beta_{1}\not=0$, we have

\varphi_{1}(\theta)=\frac{e^{\theta\ell_{1}}\mathbb{E}[L_{\ell_{1}}(1)]-2\mathbb{E}[Y(1)]}{\sigma_{1}^{2}\left(\beta_{1}+\theta\right)},\qquad\theta\not=-\beta_{1}. \qquad (3.22)

This equation can be written as

\varphi_{1}(\theta)=\frac{e^{-\beta_{1}\ell_{1}}\mathbb{E}[L_{\ell_{1}}(1)]-2\mathbb{E}[Y(1)]}{\sigma_{1}^{2}\left(\beta_{1}+\theta\right)}+\frac{(e^{\theta\ell_{1}}-e^{-\beta_{1}\ell_{1}})\mathbb{E}[L_{\ell_{1}}(1)]}{\sigma_{1}^{2}(\beta_{1}+\theta)}. \qquad (3.23)

Observe that the first term on the right-hand side of (3.23) is proportional to the moment generating function (mgf) of a signed measure on $[0,\infty)$ whose density function is exponential, while the second term is the mgf of a measure on $[0,\ell_{1}]$; but the left-hand side of (3.23) is the mgf of a probability measure on $[0,\ell_{1})$. Hence, we must have

2\mathbb{E}[Y(1)]=e^{-\beta_{1}\ell_{1}}\mathbb{E}[L_{\ell_{1}}(1)], \qquad (3.24)

and therefore (3.23) yields

\varphi_{1}(\theta)=\widehat{h}_{11}(\theta)\frac{(1-e^{-\beta_{1}\ell_{1}})\mathbb{E}[L_{\ell_{1}}(1)]}{\beta_{1}\sigma_{1}^{2}},\qquad\theta\leq 0, \qquad (3.25)

where $\widehat{h}_{11}(\theta)$ is defined in (3.7), and also exists for $\theta=-\beta_{1}$ by our convention.

We next assume that $b_{1}=0$. In this case, $\beta_{1}=0$, and it follows from (3.22) that

\varphi_{1}(\theta)=\frac{e^{\theta\ell_{1}}-1}{\ell_{1}\theta}\times\frac{\ell_{1}\mathbb{E}[L_{\ell_{1}}(1)]}{\sigma_{1}^{2}}+\frac{\mathbb{E}[L_{\ell_{1}}(1)]-2\mathbb{E}[Y(1)]}{\sigma_{1}^{2}\theta},\qquad\theta\leq 0. \qquad (3.26)

Since $\frac{e^{\theta\ell_{1}}-1}{\ell_{1}\theta}$ is the mgf of the uniform distribution on $[0,\ell_{1})$, by the same reasoning as in the case $b_{1}\not=0$, we must have

2\mathbb{E}[Y(1)]=\mathbb{E}[L_{\ell_{1}}(1)].

Note that this equation is identical to (3.24) for $b_{1}=0$. Furthermore, $\lim_{\theta\uparrow 0}\frac{e^{\theta\ell_{1}}-1}{\ell_{1}\theta}=1$ and $\lim_{\theta\uparrow 0}\varphi_{1}(\theta)=\varphi_{1}(0)$. Hence, (3.26) implies that

\varphi_{1}(\theta)=\widehat{h}_{01}(\theta)\frac{\ell_{1}\mathbb{E}[L_{\ell_{1}}(1)]}{\sigma_{1}^{2}},\qquad\theta\leq 0, \qquad (3.27)

by our convention for $\widehat{h}_{01}(0)$, similar to $\widehat{h}_{11}(-\beta_{1})$. Thus, we have the following lemma.

Lemma 3.1.

The mgf $\varphi_{1}$ is obtained as

\varphi_{1}(\theta)=\begin{cases}\widehat{h}_{11}(\theta)\dfrac{(1-e^{-\beta_{1}\ell_{1}})\mathbb{E}[L_{\ell_{1}}(1)]}{\beta_{1}\sigma_{1}^{2}},\quad&b_{1}\not=0,\\ \widehat{h}_{01}(\theta)\dfrac{\ell_{1}\mathbb{E}[L_{\ell_{1}}(1)]}{\sigma_{1}^{2}},\quad&b_{1}=0,\end{cases}\qquad\theta\leq 0. \qquad (3.28)

We next consider the stochastic equation (3.20) to derive $\varphi_{2}(\theta)$. Note that $\varphi_{1}(\theta)$ and $\varphi_{2}(\theta)$ are finite for $\theta\leq 0$. Hence, taking the expectations of both sides of (3.20) for $t=1$ and $\theta\leq 0$ yields

\frac{1}{2}\sum_{i=1,2}\sigma_{i}^{2}\left(\beta_{i}+\theta\right)\varphi_{i}(\theta)+\mathbb{E}[Y(1)]=0,\qquad\theta\leq 0. \qquad (3.29)

Substituting $\frac{1}{2}\sigma_{1}^{2}\left(\beta_{1}+\theta\right)\varphi_{1}(\theta)$ of (3.21) and $\mathbb{E}[Y(1)]$ of (3.24) into this equation, we have

\sigma_{2}^{2}\left(\beta_{2}+\theta\right)\varphi_{2}(\theta)+e^{\theta\ell_{1}}\mathbb{E}[L_{\ell_{1}}(1)]=0,\qquad\theta\leq 0. \qquad (3.30)

The following lemma is immediate from this equation since $\beta_{2}<0$.

Lemma 3.2.

The mgf $\varphi_{2}$ is obtained as

\varphi_{2}(\theta)=\widehat{h}_{2}(\theta)\frac{\mathbb{E}[L_{\ell_{1}}(1)]}{-\beta_{2}\sigma_{2}^{2}},\qquad\theta\leq 0, \qquad (3.31)

where $\widehat{h}_{2}$ is defined by (3.8).

3.2.3 3rd step of the proof

We now prove (3.9), (3.10) and (3.11). Since $\widehat{h}_{11}(0)=\widehat{h}_{01}(0)=\widehat{h}_{2}(0)=1$, (3.9) is immediate from Lemmas 3.1 and 3.2. To prove (3.10), assume that $b_{1}\not=0$. In this case, from (3.25) and (3.31), we have

\varphi_{1}(0)=\frac{1-e^{-\beta_{1}\ell_{1}}}{\beta_{1}\sigma_{1}^{2}}\mathbb{E}[L_{\ell_{1}}(1)],\qquad\varphi_{2}(0)=\frac{\mathbb{E}[L_{\ell_{1}}(1)]}{-\beta_{2}\sigma_{2}^{2}}.

Taking the ratios of both sides, we have

φ1(0)φ2(0)=β2σ22(1eβ11)β1σ12.\displaystyle\frac{\varphi_{1}(0)}{\varphi_{2}(0)}=\frac{-\beta_{2}\sigma_{2}^{2}(1-e^{-\beta_{1}\ell_{1}})}{\beta_{1}\sigma_{1}^{2}}.

Since φ1(0)+φ2(0)=1\varphi_{1}(0)+\varphi_{2}(0)=1, this and βiσi2=2bi\beta_{i}\sigma_{i}^{2}=2b_{i} yield

φ1(0)=β2σ22(eβ111)β1σ12+β2σ22(eβ111)=b2(eβ111)b1+b2(eβ111)=d11,\displaystyle\varphi_{1}(0)=\frac{\beta_{2}\sigma_{2}^{2}(e^{-\beta_{1}\ell_{1}}-1)}{\beta_{1}\sigma_{1}^{2}+\beta_{2}\sigma_{2}^{2}(e^{-\beta_{1}\ell_{1}}-1)}=\frac{b_{2}(e^{-\beta_{1}\ell_{1}}-1)}{b_{1}+b_{2}(e^{-\beta_{1}\ell_{1}}-1)}=d_{11},
φ2(0)=β1σ12β1σ12+β2σ22(eβ111)=b1b1+b2(eβ111)=d12.\displaystyle\varphi_{2}(0)=\frac{\beta_{1}\sigma_{1}^{2}}{\beta_{1}\sigma_{1}^{2}+\beta_{2}\sigma_{2}^{2}(e^{-\beta_{1}\ell_{1}}-1)}=\frac{b_{1}}{b_{1}+b_{2}(e^{-\beta_{1}\ell_{1}}-1)}=d_{12}.

This proves (3.10). We next assume that b1=0b_{1}=0; then it follows from (3.27) and (3.31) that

φ1(0)φ2(0)=β2σ221σ12.\displaystyle\frac{\varphi_{1}(0)}{\varphi_{2}(0)}=\frac{-\beta_{2}\sigma_{2}^{2}\ell_{1}}{\sigma_{1}^{2}}.

Similarly to the case for b10b_{1}\not=0, this yields

φ1(0)=β2σ221σ12β2σ221=2b21σ122b21=d01,\displaystyle\varphi_{1}(0)=\frac{-\beta_{2}\sigma_{2}^{2}\ell_{1}}{\sigma_{1}^{2}-\beta_{2}\sigma_{2}^{2}\ell_{1}}=\frac{-2b_{2}\ell_{1}}{\sigma_{1}^{2}-2b_{2}\ell_{1}}=d_{01}, (3.32)
φ2(0)=σ12σ12β2σ221=σ12σ122b21=d02,\displaystyle\varphi_{2}(0)=\frac{\sigma_{1}^{2}}{\sigma_{1}^{2}-\beta_{2}\sigma_{2}^{2}\ell_{1}}=\frac{\sigma_{1}^{2}}{\sigma_{1}^{2}-2b_{2}\ell_{1}}=d_{02}, (3.33)

This proves (3.11). Thus, the proof of Theorem 3.1 is completed.
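As a quick numerical sanity check of the weights in (3.10) and (3.32)-(3.33), they can be evaluated directly; the parameter values below are arbitrary choices of ours, not taken from the paper.

```python
import math

# Arbitrary 2-level parameters with b2 < 0 (stability) and level boundary l1.
b1, b2 = 0.5, -1.0
s1, s2 = 1.0, 2.0        # sigma_1^2, sigma_2^2
l1 = 1.5
beta1, beta2 = 2*b1/s1, 2*b2/s2

# Case b1 != 0: d11, d12 as in (3.10).
den = b1 + b2*(math.exp(-beta1*l1) - 1)
d11 = b2*(math.exp(-beta1*l1) - 1)/den
d12 = b1/den

# Case b1 = 0: d01, d02 as in (3.32)-(3.33).
d01 = -2*b2*l1 / (s1 - 2*b2*l1)
d02 = s1 / (s1 - 2*b2*l1)

print(d11 + d12, d01 + d02)  # each pair should sum to 1
```

Both pairs are positive and sum to one, as required of φ1(0) and φ2(0).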

3.3 Stationary distribution for general kk

We now derive the stationary distribution of the one-dimensional kk-level SRBM for a general positive integer kk. Recall the definition of βj\beta_{j}, and define ηj\eta_{j} as

βj=2bj/σj2,j𝒩k,η0=1,ηj=i=1jeβi(ii1+),j𝒩k1,ηk=0,\displaystyle\beta_{j}=2b_{j}/\sigma_{j}^{2},\;j\in{\mathcal{N}}_{k},\qquad\eta_{0}=1,\;\eta_{j}=\prod_{i=1}^{j}e^{\beta_{i}(\ell_{i}-\ell_{i-1}^{+})},\;j\in{\mathcal{N}}_{k-1},\;\eta_{k}=0,

where x+=0xmax(0,x)x^{+}=0\vee x\equiv\max(0,x) for xx\in\mathbb{R}. Also recall that the state space SS is partitioned into the sets SjS_{j} defined in (2.3) for j𝒩kj\in{\mathcal{N}}_{k}.

Theorem 3.2 (The case for general k2k\geq 2).

The Z()Z(\cdot) of the one-dimensional kk-level SRBM has a stationary distribution if and only if bk<0b_{k}<0, equivalently, βk<0\beta_{k}<0. Let J={i𝒩k;bi=0}J=\{i\in{\mathcal{N}}_{k};b_{i}=0\}, and assume that bk<0b_{k}<0. Denote the stationary distribution of Z(t)Z(t) by ν\nu; then ν\nu is unique and has a probability density function hJh^{J}, which is given below.
(i) If J=J=\emptyset, that is, bj0b_{j}\not=0 for all j𝒩kj\in{\mathcal{N}}_{k}, then

h(x)=h(x)j=1kdjhj(x),x0,\displaystyle h^{\emptyset}(x)=h(x)\equiv\sum_{j=1}^{k}d_{j}h_{j}(x),\qquad x\geq 0, (3.34)

where hjh_{j} for j𝒩kj\in{\mathcal{N}}_{k} are probability density functions defined as

hj(x)={βjeβj(xj1+)eβj(jj1+)11(xSj),j𝒩k1,βkeβk(xk1)1(xSk),j=k,\displaystyle h_{j}(x)=\begin{cases}\displaystyle\frac{\beta_{j}e^{\beta_{j}(x-\ell_{j-1}^{+})}}{e^{\beta_{j}(\ell_{j}-\ell_{j-1}^{+})}-1}1(x\in S_{j}),\qquad&j\in{\mathcal{N}}_{k-1},\\ -\beta_{k}e^{\beta_{k}(x-\ell_{k-1})}1(x\in S_{k}),&j=k,\end{cases} (3.35)

and djd_{j} for j𝒩kj\in{\mathcal{N}}_{k} are positive constants defined as

dj=bj1(ηjηj1)i=1k1bi1(ηiηi1)bk1ηk1,j𝒩k.\displaystyle d_{j}=\frac{b_{j}^{-1}(\eta_{j}-\eta_{j-1})}{\sum_{i=1}^{k-1}b_{i}^{-1}(\eta_{i}-\eta_{i-1})-b_{k}^{-1}\eta_{k-1}},\qquad j\in{\mathcal{N}}_{k}. (3.36)

(ii) If JJ\not=\emptyset, that is, bi=0b_{i}=0 for some iJi\in J, then

hJ(x)=j=1kdjJhjJ(x),x0,\displaystyle h^{J}(x)=\sum_{j=1}^{k}d^{J}_{j}h^{J}_{j}(x),\qquad x\geq 0, (3.37)

where hjJ(x)=limbi0,iJhj(x)\displaystyle h^{J}_{j}(x)=\lim_{b_{i}\to 0,i\in J}h_{j}(x) and djJ=limbi0,iJdj\displaystyle d^{J}_{j}=\lim_{b_{i}\to 0,i\in J}d_{j} for j𝒩kj\in{\mathcal{N}}_{k}.
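To make (3.35)-(3.36) concrete, the following sketch builds the density h for a 3-level example and checks numerically that it integrates to one; the level boundaries, drifts and variances are arbitrary choices of ours, not values from the paper.

```python
import math

# Arbitrary 3-level example: levels l1 < l2, drifts b_j and variances
# s_j = sigma_j^2 on S1=[0,l1), S2=[l1,l2), S3=[l2,inf), with b3 < 0.
l = [0.0, 1.0, 2.0]                # l[0] = l_0^+ = 0, then l1, l2
b = [0.3, -0.2, -1.0]
s = [1.0, 2.0, 1.0]
k = 3
beta = [2*b[j]/s[j] for j in range(k)]

# eta_0 = 1, eta_j = prod_{i<=j} exp(beta_i (l_i - l_{i-1}^+)), eta_k = 0.
eta = [1.0]
for j in range(1, k):
    eta.append(eta[-1]*math.exp(beta[j-1]*(l[j] - l[j-1])))
eta.append(0.0)

# Normalizing sum and the weights d_j of (3.36).
C = sum((eta[j] - eta[j-1])/b[j-1] for j in range(1, k+1))
d = [(eta[j] - eta[j-1])/b[j-1]/C for j in range(1, k+1)]

def h(x):
    """Piecewise stationary density (3.34)-(3.36)."""
    for j in range(1, k):
        if l[j-1] <= x < l[j]:
            return d[j-1]*beta[j-1]*math.exp(beta[j-1]*(x - l[j-1])) \
                   / (math.exp(beta[j-1]*(l[j] - l[j-1])) - 1)
    return d[k-1]*(-beta[k-1])*math.exp(beta[k-1]*(x - l[k-1]))

# Crude trapezoid check that h integrates to (nearly) 1 over [0, 30].
n, T = 60000, 30.0
dx = T/n
I = sum(h(i*dx) for i in range(1, n))*dx + 0.5*(h(0.0) + h(T))*dx
print(round(I, 4))
```

The weights sum to one by construction (the denominator of (3.36) is exactly the sum of the numerators, since ηk=0), and each dj is positive.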

Before proving this theorem in Section 3.4, we note that the density hh^{\emptyset} has a simple expression, which is further discussed in Section 5.2.

Corollary 3.1.

Under the assumptions of Theorem 3.2, the probability density function hh^{\emptyset} of the stationary distribution of the kk-level SRBM on +\mathbb{R}_{+} when b(x)0b(x)\not=0 for all x0x\geq 0 is given by

h(x)=1Ckσ2(x)exp(0x2b(y)σ2(y)𝑑y),x0.\displaystyle h^{\emptyset}(x)=\frac{1}{C_{k}\sigma^{2}(x)}\exp\left(\int_{0}^{x}\frac{2b(y)}{\sigma^{2}(y)}dy\right),\qquad x\geq 0. (3.38)

where

Ck=01σ2(x)exp(0x2b(y)σ2(y)𝑑y)𝑑x.\displaystyle C_{k}=\int_{0}^{\infty}\frac{1}{\sigma^{2}(x)}\exp\left(\int_{0}^{x}\frac{2b(y)}{\sigma^{2}(y)}dy\right)dx. (3.39)
Proof.

Let C=i=1kbi1(ηiηi1)C=\sum_{i=1}^{k}b_{i}^{-1}(\eta_{i}-\eta_{i-1}), which is finite, and write ηj\eta_{j} for j𝒩k1j\in{\mathcal{N}}_{k-1} as

ηj=exp(i=1jβi(ii1+))=exp(0j2b(y)σ2(y)𝑑y),\displaystyle\eta_{j}=\exp\left(\sum_{i=1}^{j}\beta_{i}(\ell_{i}-\ell^{+}_{i-1})\right)=\exp\left(\int_{0}^{\ell_{j}}\frac{2b(y)}{\sigma^{2}(y)}dy\right),

Then, from (i) of Theorem 3.2, we have, for x[j1+,j)x\in[\ell_{j-1}^{+},\ell_{j}) with jk1j\leq k-1,

djhj(x)\displaystyle d_{j}h_{j}(x) =1Cbj(ηjηj1)βjeβj(xj1+)eβj(jj1+)1\displaystyle=\frac{1}{Cb_{j}}(\eta_{j}-\eta_{j-1})\frac{\beta_{j}e^{\beta_{j}(x-\ell_{j-1}^{+})}}{e^{\beta_{j}(\ell_{j}-\ell_{j-1}^{+})}-1}
=1Cbj(ηjηj1)eβj(xj1+)ηj1ηjηj1βj=2Cσ2(x)exp(0x2b(y)σ2(y)𝑑y),\displaystyle=\frac{1}{Cb_{j}}(\eta_{j}-\eta_{j-1})\frac{e^{\beta_{j}(x-\ell_{j-1}^{+})}\eta_{j-1}}{\eta_{j}-\eta_{j-1}}\beta_{j}=\frac{2}{C\sigma^{2}(x)}\exp\left(\int_{0}^{x}\frac{2b(y)}{\sigma^{2}(y)}dy\right), (3.40)

because βj/bj=2/σj2=2/σ2(x)\beta_{j}/b_{j}=2/\sigma_{j}^{2}=2/\sigma^{2}(x) for x[j1,j)x\in[\ell_{j-1},\ell_{j}). Similarly, for xk1x\geq\ell_{k-1},

dkhk(x)=ηk1Cbk(βk)eβk(xk1)=2Cσ2(x)exp(0x2b(y)σ2(y)𝑑y).\displaystyle d_{k}h_{k}(x)=\frac{\eta_{k-1}}{Cb_{k}}(-\beta_{k})e^{\beta_{k}(x-\ell_{k-1})}=\frac{2}{C\sigma^{2}(x)}\exp\left(\int_{0}^{x}\frac{2b(y)}{\sigma^{2}(y)}dy\right). (3.41)

Hence, putting Ck=C/2C_{k}=C/2, we have (3.38). ∎
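The equality just proved between the piecewise form (3.34)-(3.36) and the closed form (3.38)-(3.39) can also be observed numerically: the two expressions agree pointwise to machine precision. The 3-level parameters below are arbitrary choices of ours.

```python
import math

# Arbitrary 3-level parameters (not from the paper), b3 < 0 for stability.
l1, l2 = 1.0, 2.0
b = [0.3, -0.2, -1.0]
s = [1.0, 2.0, 1.0]
beta = [2*b[j]/s[j] for j in range(3)]

def region(x):
    return 0 if x < l1 else (1 if x < l2 else 2)

def B(x):
    """B(x) = int_0^x 2 b(y)/sigma^2(y) dy, piecewise linear in x."""
    pts = [0.0, l1, l2]
    r = region(x)
    v = sum(beta[i]*(pts[i+1] - pts[i]) for i in range(r))
    return v + beta[r]*(x - pts[r])

# eta_j = exp(B(l_j)); C = sum_j (eta_j - eta_{j-1})/b_j with eta_3 = 0.
eta = [1.0, math.exp(B(l1)), math.exp(B(l2)), 0.0]
C = sum((eta[j] - eta[j-1])/b[j-1] for j in range(1, 4))
Ck = C/2

def h_closed(x):      # Corollary 3.1, (3.38)
    return math.exp(B(x))/(Ck*s[region(x)])

def h_piece(x):       # Theorem 3.2 (i), (3.34)-(3.36)
    d = [(eta[j] - eta[j-1])/b[j-1]/C for j in range(1, 4)]
    pts = [0.0, l1, l2]
    j = region(x)
    if j < 2:
        return d[j]*beta[j]*math.exp(beta[j]*(x - pts[j])) \
               / (math.exp(beta[j]*(pts[j+1] - pts[j])) - 1)
    return d[2]*(-beta[2])*math.exp(beta[2]*(x - l2))

xs = [0.2, 0.9, 1.3, 1.9, 2.5, 4.0]
print(max(abs(h_closed(x) - h_piece(x)) for x in xs))
```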

Remark 3.3.

The djd_{j} defined by (3.36) must be positive, which is easily checked. Nevertheless, it is interesting that their positivity is directly visible through (3.40) and (3.41) of this corollary.

3.4 Proof of Theorem 3.2

Similar to the proof of Theorem 3.1, the first statement is immediate from Lemma 2.4, and we can assume that Z()Z(\cdot) is a stationary process since bk<0b_{k}<0. We also always assume that θ0\theta\leq 0. Define moment generating functions (mgf):

φj(θ)=𝔼[eθZ(0)1(Z(0)Sj)],j𝒩k,\displaystyle\varphi_{j}(\theta)=\mathbb{E}[e^{\theta Z(0)}1(Z(0)\in S_{j})],\qquad j\in{\mathcal{N}}_{k},

which are obviously finite because θ0\theta\leq 0. Then, the mgf φ(θ)\varphi(\theta) of Z(0)Z(0) is expressed as

φ(θ)=j=1kφj(θ),θ0,\displaystyle\varphi(\theta)=\sum_{j=1}^{k}\varphi_{j}(\theta),\qquad\theta\leq 0,

and dj=φj(0)d_{j}=\varphi_{j}(0) for j𝒩kj\in{\mathcal{N}}_{k}.

We first prove (i). In this case, let h^j\widehat{h}_{j} be the mgf of hjh_{j} for j𝒩kj\in{\mathcal{N}}_{k}, then

h^j(θ)={βjβj+θe(θ+βj)(jj1+)1eβj(jj1+)1eθj1+,j𝒩k1,βkβk+θeθk1,j=k.\displaystyle\widehat{h}_{j}(\theta)=\begin{cases}\displaystyle\frac{\beta_{j}}{\beta_{j}+\theta}\frac{e^{(\theta+\beta_{j})(\ell_{j}-\ell_{j-1}^{+})}-1}{e^{\beta_{j}(\ell_{j}-\ell_{j-1}^{+})}-1}e^{\theta\ell_{j-1}^{+}},\quad&j\in{\mathcal{N}}_{k-1},\\ \displaystyle\frac{\beta_{k}}{\beta_{k}+\theta}e^{\theta\ell_{k-1}},&j=k.\end{cases} (3.42)

Hence, (3.34) is obtained if we show that, for j𝒩kj\in{\mathcal{N}}_{k},

φj(θ)/φj(0)=h^j(θ),\displaystyle\varphi_{j}(\theta)/\varphi_{j}(0)=\widehat{h}_{j}(\theta), (3.43)
φj(0)=dj.\displaystyle\varphi_{j}(0)=d_{j}. (3.44)

To prove (3.43) and (3.44), we use the following convex function fjf_{j} with parameter θ0\theta\leq 0 as a test function for the generalized Ito formula similar to (3.15).

fj(x)=eθx1(xj)+eθj1(x>j),x,j𝒩k1.\displaystyle f_{j}(x)=e^{\theta x}1(x\leq\ell_{j})+e^{\theta\ell_{j}}1(x>\ell_{j}),\qquad x\in\mathbb{R},\;j\in{\mathcal{N}}_{k-1}. (3.45)

Since fj(j)=θeθjf_{j}^{\prime}(\ell_{j}-)=\theta e^{\theta\ell_{j}} and fj(j+)=0f_{j}^{\prime}(\ell_{j}+)=0, it follows from (3.13) that

μf({j})=limε0f((j+ε))f(j)=f(j+)f(j)=θeθj,\displaystyle\mu_{f}(\{\ell_{j}\})=\lim_{\varepsilon\downarrow 0}f^{\prime}((\ell_{j}+\varepsilon)-)-f^{\prime}(\ell_{j}-)=f^{\prime}(\ell_{j}+)-f^{\prime}(\ell_{j}-)=-\theta e^{\theta\ell_{j}},

and f′′(x)=θ2eθxf^{\prime\prime}(x)=\theta^{2}e^{\theta x} for x<jx<\ell_{j}. Hence, similarly to (3.2.1), the generalized Ito formula (3.12) for f=fjf=f_{j} becomes

fj(Z(t))=fj(Z(0))+0tθeθZ(u)1(0Z(u)<j)σ(Z(u))𝑑W(u)\displaystyle f_{j}(Z(t))=f_{j}(Z(0))+\int_{0}^{t}\theta e^{\theta Z(u)}1(0\leq Z(u)<\ell_{j})\sigma(Z(u))dW(u)
+0t(b(Z(u))θ+12σ2(Z(u))θ2)eθZ(u)1(0Z(u)<j)𝑑u\displaystyle\quad+\int_{0}^{t}\Big{(}b(Z(u))\theta+\frac{1}{2}\sigma^{2}(Z(u))\theta^{2}\Big{)}e^{\theta Z(u)}1(0\leq Z(u)<\ell_{j})du
12θeθjLj(t)+θY(t),t0,θ0,j𝒩k1.\displaystyle\quad-\frac{1}{2}\theta e^{\theta\ell_{j}}L_{\ell_{j}}(t)+\theta Y(t),\qquad t\geq 0,\;\theta\leq 0,\;j\in{\mathcal{N}}_{k-1}. (3.46)

Similarly to the proof of Theorem 3.1, we next apply the Ito formula with test function f(x)=eθxf(x)=e^{\theta x} to (2.1); then we have (3.2.1) for the b(x)b(x) and σ(x)\sigma(x) defined by (2.2). From (3.46) and (3.2.1), we will compute the stationary distribution of Z(t)Z(t).

We first consider this equation for j=1j=1. In this case, (3.46) becomes

f1(Z(t))=f1(Z(0))+0tθeθZ(u)1(0Z(u)<1)σ1𝑑W(u)\displaystyle f_{1}(Z(t))=f_{1}(Z(0))+\int_{0}^{t}\theta e^{\theta Z(u)}1(0\leq Z(u)<\ell_{1})\sigma_{1}dW(u)
+0t(b1θ+12σ12θ2)eθZ(u)1(0Z(u)<1)𝑑u\displaystyle\quad+\int_{0}^{t}\Big{(}b_{1}\theta+\frac{1}{2}\sigma_{1}^{2}\theta^{2}\Big{)}e^{\theta Z(u)}1(0\leq Z(u)<\ell_{1})du
12θeθ1L1(t)+θY(t),t0,θ0.\displaystyle\quad-\frac{1}{2}\theta e^{\theta\ell_{1}}L_{\ell_{1}}(t)+\theta Y(t),\qquad t\geq 0,\;\theta\leq 0. (3.47)

Then, by the same arguments in the proof of Theorem 3.1, we have (3.21) and (3.24), which imply

φ1(θ)=eθ1eβ11β1+θ1σ12𝔼[L1(1)],φ1(0)=1eβ11σ12β1𝔼[L1(1)].\displaystyle\varphi_{1}(\theta)=\frac{e^{\theta\ell_{1}}-e^{-\beta_{1}\ell_{1}}}{\beta_{1}+\theta}\frac{1}{\sigma_{1}^{2}}\mathbb{E}[L_{\ell_{1}}(1)],\qquad\varphi_{1}(0)=\frac{1-e^{-\beta_{1}\ell_{1}}}{\sigma_{1}^{2}\beta_{1}}\mathbb{E}[L_{\ell_{1}}(1)]. (3.48)

Hence, we have

φ1(θ)/φ1(0)=eθ1eβ11β1+θβ11eβ11=h^1(θ),θ0.\displaystyle\varphi_{1}(\theta)/\varphi_{1}(0)=\frac{e^{\theta\ell_{1}}-e^{-\beta_{1}\ell_{1}}}{\beta_{1}+\theta}\frac{\beta_{1}}{1-e^{-\beta_{1}\ell_{1}}}=\widehat{h}_{1}(\theta),\qquad\theta\leq 0. (3.49)

Thus, (3.43) is proved for j=1j=1. We prove (3.44) after (3.43) is proved for all j𝒩kj\in{\mathcal{N}}_{k}.
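As a numerical cross-check of (3.49), the closed-form mgf can be compared with direct trapezoidal integration of e^{θx}h1(x) over [0,ℓ1]; the parameter values below are arbitrary choices of ours, chosen so that no θ used hits the pole at θ = −β1.

```python
import math

# Arbitrary parameters (not from the paper) for the first level.
b1, s1, l1 = 0.5, 1.0, 1.5
beta1 = 2*b1/s1            # = 1.0, so we avoid theta = -1 below

def h1(x):                 # density (3.35) for j = 1 (l_0^+ = 0)
    return beta1*math.exp(beta1*x)/(math.exp(beta1*l1) - 1)

def mgf_numeric(theta, n=20000):
    """Trapezoid rule for int_0^{l1} e^{theta x} h1(x) dx."""
    dx = l1/n
    f = lambda x: math.exp(theta*x)*h1(x)
    return (0.5*(f(0.0) + f(l1)) + sum(f(i*dx) for i in range(1, n)))*dx

def mgf_closed(theta):     # right-hand side of (3.49)
    return (math.exp(theta*l1) - math.exp(-beta1*l1))/(beta1 + theta) \
           * beta1/(1 - math.exp(-beta1*l1))

errs = [abs(mgf_numeric(t) - mgf_closed(t)) for t in (-2.0, -0.5, -0.25)]
print(max(errs))
```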

We next prove (3.43) for j{2,3,,k1}j\in\{2,3,\ldots,k-1\}. In this case, we use fj(Z(1))f_{j}(Z(1)) of (3.46). Take the difference fj(Z(1))fj1(Z(1))f_{j}(Z(1))-f_{j-1}(Z(1)) for each fixed jj and take the expectation under the stationary distribution of Z()Z(\cdot); then we have

σj2(βj+θ)φj(θ)eθj𝔼[Lj(1)]+eθj1𝔼[Lj1(1)]=0,θ0,\displaystyle\sigma_{j}^{2}(\beta_{j}+\theta)\varphi_{j}(\theta)-e^{\theta\ell_{j}}\mathbb{E}[L_{\ell_{j}}(1)]+e^{\theta\ell_{j-1}}\mathbb{E}[L_{\ell_{j-1}}(1)]=0,\qquad\theta\leq 0,

because βj=2bj/σj2\beta_{j}=2b_{j}/\sigma_{j}^{2}. This yields

φj(θ)=\displaystyle\varphi_{j}(\theta)= 1σj2(βj+θ)(e(θ+βj)je(θ+βj)j1)eβjj𝔼[Lj(1)]\displaystyle\frac{1}{\sigma_{j}^{2}(\beta_{j}+\theta)}\left(e^{(\theta+\beta_{j})\ell_{j}}-e^{(\theta+\beta_{j})\ell_{j-1}}\right)e^{-\beta_{j}\ell_{j}}\mathbb{E}[L_{\ell_{j}}(1)]
+eθj1σj2(βj+θ)(eβj(jj1)𝔼[Lj(1)]𝔼[Lj1(1)]).\displaystyle\quad+\frac{e^{\theta\ell_{j-1}}}{\sigma_{j}^{2}(\beta_{j}+\theta)}\left(e^{-\beta_{j}(\ell_{j}-\ell_{j-1})}\mathbb{E}[L_{\ell_{j}}(1)]-\mathbb{E}[L_{\ell_{j-1}}(1)]\right). (3.50)

Since φj\varphi_{j} is the mgf of a measure on [j1,j)[\ell_{j-1},\ell_{j}), we must have

𝔼[Lj1(1)]=eβj(jj1)𝔼[Lj(1)],2jk1.\displaystyle\mathbb{E}[L_{\ell_{j-1}}(1)]=e^{-\beta_{j}(\ell_{j}-\ell_{j-1})}\mathbb{E}[L_{\ell_{j}}(1)],\qquad 2\leq j\leq k-1. (3.51)

Hence, (3.50) becomes, for j{2,3,,k1}j\in\{2,3,\ldots,k-1\},

φj(θ)=e(θ+βj)je(θ+βj)j1σj2(βj+θ)eβjj𝔼[Lj(1)],\displaystyle\varphi_{j}(\theta)=\frac{e^{(\theta+\beta_{j})\ell_{j}}-e^{(\theta+\beta_{j})\ell_{j-1}}}{\sigma_{j}^{2}(\beta_{j}+\theta)}e^{-\beta_{j}\ell_{j}}\mathbb{E}[L_{\ell_{j}}(1)], (3.52)
φj(0)=1eβj(jj1)σj2βj𝔼[Lj(1)].\displaystyle\varphi_{j}(0)=\frac{1-e^{-\beta_{j}(\ell_{j}-\ell_{{j-1}})}}{\sigma_{j}^{2}\beta_{j}}\mathbb{E}[L_{\ell_{j}}(1)]. (3.53)

Hence, we have (3.43) for j=2,3,,k1j=2,3,\ldots,k-1.

We finally prove (3.43) for j=kj=k. Similarly to the case for k=2k=2 in the proof of Theorem 3.1, it follows from (3.2.1) that

12i=1kσi2(βi+θ)φi(θ)+𝔼[Y(1)]=0,θ0.\displaystyle\frac{1}{2}\sum_{i=1}^{k}\sigma_{i}^{2}\left(\beta_{i}+\theta\right)\varphi_{i}(\theta)+\mathbb{E}[Y(1)]=0,\qquad\theta\leq 0. (3.54)

Similarly, from (3.46) for j=k1j=k-1, we have

12i=1k1σi2(βi+θ)φi(θ)+𝔼[Y(1)]12eθk1𝔼[Lk1(1)]=0,θ0.\displaystyle\frac{1}{2}\sum_{i=1}^{k-1}\sigma_{i}^{2}\left(\beta_{i}+\theta\right)\varphi_{i}(\theta)+\mathbb{E}[Y(1)]-\frac{1}{2}e^{\theta\ell_{k-1}}\mathbb{E}[L_{\ell_{k-1}}(1)]=0,\qquad\theta\leq 0. (3.55)

Taking the difference of (3.54) and (3.55), we have

σk2(βk+θ)φk(θ)=eθk1𝔼[Lk1(1)],\displaystyle\sigma_{k}^{2}\left(\beta_{k}+\theta\right)\varphi_{k}(\theta)=-e^{\theta\ell_{k-1}}\mathbb{E}[L_{\ell_{k-1}}(1)],

which yields

φk(θ)=1σk2(βk+θ)eθk1𝔼[Lk1(1)],φk(0)=1σk2βk𝔼[Lk1(1)].\displaystyle\varphi_{k}(\theta)=\frac{-1}{\sigma_{k}^{2}\left(\beta_{k}+\theta\right)}e^{\theta\ell_{k-1}}\mathbb{E}[L_{\ell_{k-1}}(1)],\qquad\varphi_{k}(0)=\frac{-1}{\sigma_{k}^{2}\beta_{k}}\mathbb{E}[L_{\ell_{k-1}}(1)]. (3.56)

Hence, we have (3.43) for j=kj=k. Namely,

φk(θ)/φk(0)=βkβk+θeθk1=h^k(θ).\displaystyle\varphi_{k}(\theta)/\varphi_{k}(0)=\frac{\beta_{k}}{\beta_{k}+\theta}e^{\theta\ell_{k-1}}=\widehat{h}_{k}(\theta).

It remains to prove (3.44) for j𝒩kj\in{\mathcal{N}}_{k}. For this, we note that (3.24) is still valid, which is

2𝔼[Y(1)]=eβ11𝔼[L1(1)]=eβ1(10+)𝔼[L1(1)].\displaystyle 2\mathbb{E}[Y(1)]=e^{-\beta_{1}\ell_{1}}\mathbb{E}[L_{\ell_{1}}(1)]=e^{-\beta_{1}(\ell_{1}-\ell_{0}^{+})}\mathbb{E}[L_{\ell_{1}}(1)].

Hence, recalling that ηj=i=1jeβi(ii1+)\eta_{j}=\prod_{i=1}^{j}e^{\beta_{i}(\ell_{i}-\ell_{i-1}^{+})}, (3.51) yields

𝔼[Lj(1)]=eβj(jj1+)𝔼[Lj1(1)]=2𝔼[Y(1)]ηj,j𝒩k1.\displaystyle\mathbb{E}[L_{\ell_{j}}(1)]=e^{\beta_{j}(\ell_{j}-\ell_{j-1}^{+})}\mathbb{E}[L_{\ell_{j-1}}(1)]=2\mathbb{E}[Y(1)]\eta_{j},\qquad j\in{\mathcal{N}}_{k-1}. (3.57)

From (3.53), (3.56), (3.57) and the fact that (eβj(jj1+)1)ηj1=ηjηj1(e^{\beta_{j}(\ell_{j}-\ell_{j-1}^{+})}-1)\eta_{j-1}=\eta_{j}-\eta_{j-1}, we have

φj(0)={2𝔼[Y(1)]ηjηj1σj2βj,j𝒩k1,2𝔼[Y(1)]1σk2βkηk1,j=k.\displaystyle\varphi_{j}(0)=\begin{cases}\displaystyle 2\mathbb{E}[Y(1)]\frac{\eta_{j}-\eta_{j-1}}{\sigma_{j}^{2}\beta_{j}},\quad&j\in{\mathcal{N}}_{k-1},\vskip 6.0pt plus 2.0pt minus 2.0pt\\ \displaystyle 2\mathbb{E}[Y(1)]\frac{-1}{\sigma_{k}^{2}\beta_{k}}\eta_{k-1},&j=k.\end{cases} (3.58)

Since j=1kφj(0)=1\sum_{j=1}^{k}\varphi_{j}(0)=1, it follows from (3.58) that

12𝔼[Y(1)]\displaystyle\frac{1}{2\mathbb{E}[Y(1)]} =i=1kηiηi1σi2βi,\displaystyle=\sum_{i=1}^{k}\frac{\eta_{i}-\eta_{i-1}}{\sigma_{i}^{2}\beta_{i}}, (3.59)

because ηk=0\eta_{k}=0. Substituting this into (3.58) and using σi2βi=2bi\sigma_{i}^{2}\beta_{i}=2b_{i}, we have (3.44) for j𝒩kj\in{\mathcal{N}}_{k} because djd_{j} is defined by (3.36).

(ii) is proved for k=2k=2 from (i) and (a) of Remark 3.1. It is not hard to see that this observation (a) is also valid for any bjb_{j} for j𝒩kj\in{\mathcal{N}}_{k}. Hence, (ii) can be proved for general k2k\geq 2 from (i).

4 Proofs of preliminary lemmas

Refer to caption
Figure 1: Up and down level-crossing periods

4.1 Proof of Lemma 2.2

Recall that Lemma 2.2 assumes the conditions of (2.6) and of the one-dimensional state-dependent SRBM with bounded drifts. Since 1>0\ell_{1}>0, there are constants c,d>0c,d>0 such that 0<c<d<10<c<d<\ell_{1}. Using these constants, we construct the weak solution (Z(),W())(Z(\cdot),W(\cdot)) of (2.1). The basic idea is to construct the sample path of Z()Z(\cdot) separately on disjoint time intervals. In the first interval, if Z(0)<dZ(0)<d, then Z()Z(\cdot) runs until it hits dd, while, if Z(0)dZ(0)\geq d, it runs until it hits cc. Each subsequent interval is either one in which Z()Z(\cdot) starts at cc and runs until it hits d>cd>c, which is called an up-crossing period, or one in which Z()Z(\cdot) starts at dd and runs until it hits c<dc<d, which is called a down-crossing period. Namely, except for the first interval, an up-crossing period always starts at cc, and a down-crossing period always starts at dd (see Figure 1). In this construction, we also construct the filtration to which (Z(),W())(Z(\cdot),W(\cdot)) is adapted.
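Although the proof relies on an exact pathwise construction, the dynamics being constructed can be illustrated with a crude Euler-Maruyama discretization of (2.1), applying the reflection at the origin step by step as Z ↦ max(Z + increment, 0). This is only a simulation sketch with arbitrary 2-level coefficients of ours, not the up/down-crossing construction used in the proof.

```python
import math, random

# Euler-Maruyama sketch of a 2-level SRBM reflected at 0 (illustration only).
random.seed(1)

l1 = 1.0
def b(x):   return 0.5 if x < l1 else -1.0   # arbitrary 2-level drift
def sig(x): return 1.0 if x < l1 else 1.5    # arbitrary 2-level sigma

def simulate(z0=0.0, T=50.0, n=50000):
    dt = T/n
    z, path = z0, [z0]
    for _ in range(n):
        dw = random.gauss(0.0, math.sqrt(dt))
        # drift/variance switch at l1; reflection keeps the path nonnegative
        z = max(z + sig(z)*dw + b(z)*dt, 0.0)
        path.append(z)
    return path

path = simulate()
print(min(path) >= 0.0, len(path))
```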

Define X1(){X1(t);t0}X_{1}(\cdot)\equiv\{X_{1}(t);t\geq 0\} as

X1(t)=X1(0)+0t(σ1dW1(u)+b1du),t0,\displaystyle X_{1}(t)=X_{1}(0)+\int_{0}^{t}\left(\sigma_{1}dW_{1}(u)+b_{1}du\right),\qquad t\geq 0, (4.1)

and let X2(){X2(t);t0}X_{2}(\cdot)\equiv\{X_{2}(t);t\geq 0\} be the solution of the following stochastic integral equation:

X2(t)=X2(0)\displaystyle X_{2}(t)=X_{2}(0) +0tσ(X2(u))𝑑W2(u)+0tb(X2(u))𝑑u,t0.\displaystyle+\int_{0}^{t}\sigma(X_{2}(u))dW_{2}(u)+\int_{0}^{t}b(X_{2}(u))du,\qquad t\geq 0. (4.2)

Note that the SIE (2.4) is stochastically identical with the SIE (4.2). Hence, as we discussed below (2.4), the SIE (4.2) has a unique weak solution under Condition 2.2. Thus, the solution X2()X_{2}(\cdot) weakly exists because the assumptions of Lemma 2.2 imply Condition 2.2. For this weak solution, we use the same notations for X2()X_{2}(\cdot), W2()W_{2}(\cdot) and the stochastic basis (Ω,,𝔽,)(\Omega,{\mathcal{F}},\mathbb{F},\mathbb{P}) for convenience, where 𝔽={t;t0}\mathbb{F}=\{{\mathcal{F}}_{t};t\geq 0\}. Without loss of generality, we expand this stochastic basis so that it accommodates X1()X_{1}(\cdot) and countably many independent copies of Wi()W_{i}(\cdot) and Xi()X_{i}(\cdot) for i=1,2i=1,2, which are denoted by Wn,i(){Wn,i(t);t0}W_{n,i}(\cdot)\equiv\{W_{n,i}(t);t\geq 0\} and Xn,i(){Xn,i(t);t0}X_{n,i}(\cdot)\equiv\{X_{n,i}(t);t\geq 0\} for n=1,2,n=1,2,\ldots.

We first construct the weak solution Z()Z(\cdot) of (2.1) when Z(0)=x<dZ(0)=x<d, using Wn,i()W_{n,i}(\cdot) and Xn,i()X_{n,i}(\cdot). For this construction, we introduce up and down crossing times for a given real-valued semi-martingale V(){V(t);t0}V(\cdot)\equiv\{V(t);t\geq 0\}. Denote the nn-th up-crossing time at dd from below by τd,n(+)(V)\tau^{(+)}_{d,n}(V), and denote the nn-th down-crossing time at c(<d)c\;(<d) from above by τc,n()(V)\tau^{(-)}_{c,n}(V). Namely, for n1n\geq 1 and 0<c<d<10<c<d<\ell_{1},

τd,n(+)(V)=inf{u>τc,n1()(V);V(u)d},τc,n()(V)=inf{u>τd,n(+)(V);V(u)<c},\displaystyle\tau^{(+)}_{d,n}(V)=\inf\{u>\tau^{(-)}_{c,n-1}(V);V(u)\geq d\},\quad\tau^{(-)}_{c,n}(V)=\inf\{u>\tau^{(+)}_{d,n}(V);V(u)<c\},

where τc,0()(V)=0\tau^{(-)}_{c,0}(V)=0. Note that τd,n(+)(Z)\tau^{(+)}_{d,n}(Z) and τc,n()(Z)\tau^{(-)}_{c,n}(Z) may be infinite with positive probabilities. In this case, there is no further splitting, which causes no problem in constructing the sample path of Z()Z(\cdot) because such a sample path is already defined for all t0t\geq 0. After the weak solution is obtained, we will see that y[τd,n(+)(Z)<]=1\mathbb{P}_{y}[\tau^{(+)}_{d,n}(Z)<\infty]=1 for y[0,d)y\in[0,d) by Lemma 2.3, but τc,n()(Z)\tau^{(-)}_{c,n}(Z) may be infinite with a positive probability.
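These alternating crossing times have a straightforward discrete-path analogue, which may help fix the bookkeeping: scan the path, alternately recording the first index at or above d and the first later index strictly below c. The function below is an illustration of ours, not part of the proof.

```python
# Discrete-path analogue of tau^{(+)}_{d,n} and tau^{(-)}_{c,n}.
def crossing_times(path, c, d):
    ups, downs, t, mode = [], [], 0, "up"
    while t < len(path):
        if mode == "up" and path[t] >= d:
            ups.append(t); mode = "down"      # up-crossing at d
        elif mode == "down" and path[t] < c:
            downs.append(t); mode = "up"      # down-crossing at c
        t += 1
    return ups, downs

# zigzag test path: reaches d=3 at indices 3 and 9, falls below c=1 at index 6
zig = [0, 1, 2, 3, 2, 1, 0, 2, 2.5, 3.5, 3]
ups, downs = crossing_times(zig, c=1, d=3)
print(ups, downs)  # -> [3, 9] [6]
```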

We now inductively construct Zn(){Zn(t);t0}Z_{n}(\cdot)\equiv\{Z_{n}(t);t\geq 0\} for n=1,2,n=1,2,\ldots, where the construction below is stopped when τc,n()(Zn)\tau^{(-)}_{c,n}(Z_{n}) diverges. For n=1n=1, we denote the independent copy of X1()X_{1}(\cdot) with X1(0)=x<dX_{1}(0)=x<d by X11(){X11(t);t0}X_{11}(\cdot)\equiv\{X_{11}(t);t\geq 0\}, and define Z11(t)Z_{11}(t) as

Z11(t)=X11(t)+supu[0,t](X11(u))+,t0,\displaystyle Z_{11}(t)=X_{11}(t)+\sup_{u\in[0,t]}(-X_{11}(u))^{+},\qquad t\geq 0, (4.3)

then it is well known that Z11()Z_{11}(\cdot) is the unique solution of the stochastic integral equation:

Z11(t)=Z11(0)+0t𝑑X11(u)+Y11(t),t0,\displaystyle Z_{11}(t)=Z_{11}(0)+\int_{0}^{t}dX_{11}(u)+Y_{11}(t),\qquad t\geq 0, (4.4)

where Y11(t)Y_{11}(t) is nondecreasing and 0t1(Z11(u)>0)𝑑Y11(u)=0\int_{0}^{t}1(Z_{11}(u)>0)dY_{11}(u)=0 for t0t\geq 0. Furthermore, for X11(0)=Z11(0)X_{11}(0)=Z_{11}(0), Y11(t)=supu[0,t](X11(u))+Y_{11}(t)=\sup_{u\in[0,t]}(-X_{11}(u))^{+} (e.g., see [10]). Since Z11(0)=X11(0)=x<dZ_{11}(0)=X_{11}(0)=x<d and X11(t)Z11(t)d<1X_{11}(t)\leq Z_{11}(t)\leq d<\ell_{1} for t[0,τd,1(+)(Z11)]t\in[0,\tau^{(+)}_{d,1}(Z_{11})], (4.4) can be written as

Z11(t)=Z11(0)\displaystyle Z_{11}(t)=Z_{11}(0)
+0t(σ(Z11(u))dW11(u)+b(Z11(u))du)+Y11(t),t[0,τd,1(+)(Z11)).\displaystyle\quad+\int_{0}^{t}\left(\sigma(Z_{11}(u))dW_{11}(u)+b(Z_{11}(u))du\right)+Y_{11}(t),\qquad t\in[0,\tau^{(+)}_{d,1}(Z_{11})). (4.5)
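The one-sided reflection (Skorokhod) map of (4.3) is easy to exercise on a discrete path; the following sketch (ours, not part of the proof) computes both Z and the regulator Y(t)=sup_{u≤t}(−X(u))^+, and the example shows Y increasing exactly while Z sits at the origin.

```python
# Discrete analogue of the Skorokhod map (4.3): Z(t) = X(t) + sup_{u<=t}(-X(u))^+.
def reflect(xs):
    zs, ys, run = [], [], 0.0
    for x in xs:
        run = max(run, -x, 0.0)   # running sup of (-X)^+, the regulator Y
        zs.append(x + run)
        ys.append(run)
    return zs, ys

zs, ys = reflect([0.0, -1.0, -2.0, -1.0, 1.0])
print(zs, ys)  # -> [0.0, 0.0, 0.0, 1.0, 3.0] [0.0, 1.0, 2.0, 2.0, 2.0]
```

Note that ys is nondecreasing and increases only at steps where zs equals zero, matching the condition below (4.4).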

We next denote the independent copy of X2()X_{2}(\cdot) with X2(0)=dX_{2}(0)=d by X12(){X12(t);t0}X_{12}(\cdot)\equiv\{X_{12}(t);t\geq 0\}, and define

Z12(t)=X12(tτd,1(+)(Z11)),tτd,1(+)(Z11),\displaystyle Z_{12}(t)=X_{12}\left(t-\tau^{(+)}_{d,1}(Z_{11})\right),\qquad t\geq\tau^{(+)}_{d,1}(Z_{11}), (4.6)

then we have, for t[τd,1(+)(Z11),τc,1()(Z12))t\in[\tau^{(+)}_{d,1}(Z_{11}),\tau^{(-)}_{c,1}(Z_{12})),

Z12(t)=d+τd,1(+)(Z11)t(σ(Z12(u))dW12(u)+b(Z12(u))du),\displaystyle Z_{12}(t)=d+\int_{\tau^{(+)}_{d,1}(Z_{11})}^{t}\left(\sigma(Z_{12}(u))dW_{12}(u)+b(Z_{12}(u))du\right), (4.7)

where recall that X12(0)=dX_{12}(0)=d. Define

Z1(t)\displaystyle Z_{1}(t) =Z11(τd,1(+)(Z11)t)+(Z12(t)d)1(τd,1(+)(Z11)<t),t0.\displaystyle=Z_{11}(\tau^{(+)}_{d,1}(Z_{11})\wedge t)+(Z_{12}(t)-d)1(\tau^{(+)}_{d,1}(Z_{11})<t),\qquad t\geq 0.

then Z11(t)Z_{11}(t) is stochastically identical with Z12(t)Z_{12}(t) for t[τd,1(+)(Z11),τ1,1(+)(Z11)τ1,1(+)(Z12))t\in[\tau^{(+)}_{d,1}(Z_{11}),\tau^{(+)}_{\ell_{1},1}(Z_{11})\wedge\tau^{(+)}_{\ell_{1},1}(Z_{12})). Hence, it follows from (4.5) and (4.7) that, for t[0,τc,1()(Z1))t\in[0,\tau^{(-)}_{c,1}(Z_{1})),

Z1(t)=Z1(0)+0t(σ(Z1(u))dW~1(u)+b(Z1(u))du)+Y1(t),\displaystyle Z_{1}(t)=Z_{1}(0)+\int_{0}^{t}\left(\sigma(Z_{1}(u))d\widetilde{W}_{1}(u)+b(Z_{1}(u))du\right)+Y_{1}(t), (4.8)

where Y1(t)=Y11(τd,1(+)(Z11)t)Y_{1}(t)=Y_{11}(\tau^{(+)}_{d,1}(Z_{11})\wedge t) and

W~1(t)=W11(t)1(t<τd,1(+)(Z11))+W12(t)1(tτd,1(+)(Z11)).\displaystyle\widetilde{W}_{1}(t)=W_{11}(t)1(t<\tau^{(+)}_{d,1}(Z_{11}))+W_{12}(t)1(t\geq\tau^{(+)}_{d,1}(Z_{11})). (4.9)

We repeat the same procedure to inductively define Xn,i(t)X_{n,i}(t) with Xn,i(0)=c 1(i=1)+d 1(i=2)X_{n,i}(0)=c\,1(i=1)+d\,1(i=2) and Zn,i(t)Z_{n,i}(t) for i=1,2i=1,2 together with τd,n(+)(Zn1)\tau^{(+)}_{d,n}(Z_{n1}) and τc,n()(Zn2)\tau^{(-)}_{c,n}(Z_{n2}) for n2n\geq 2 by

Xn1(t)=c+τc,n1()(Zn1)t(σ1dWn1(u)+b1du),tτc,n1()(Zn1)\displaystyle X_{n1}(t)=c+\int_{\tau^{(-)}_{c,n-1}(Z_{n-1})}^{t}\left(\sigma_{1}dW_{n1}(u)+b_{1}du\right),\qquad t\geq\tau^{(-)}_{c,n-1}(Z_{n-1})
Zn1(t)=Xn1(t)+supu[τc,n1()(Zn1),t](Xn1(u))+,tτc,n1()(Zn1)\displaystyle Z_{n1}(t)=X_{n1}(t)+\sup_{u\in[\tau^{(-)}_{c,n-1}(Z_{n-1}),t]}(-X_{n1}(u))^{+},\qquad t\geq\tau^{(-)}_{c,n-1}(Z_{n-1})
Zn2(t)=Xn2(tτd,n(+)(Zn1)),t>τd,n(+)(Zn1),Xn2(0)=d,\displaystyle Z_{n2}(t)=X_{n2}\left(t-\tau^{(+)}_{d,n}(Z_{n1})\right),\qquad t>\tau^{(+)}_{d,n}(Z_{n1}),\;X_{n2}(0)=d,

as long as τc,n1()(Zn1)<\tau^{(-)}_{c,n-1}(Z_{n-1})<\infty, and define

Zn(t)\displaystyle Z_{n}(t) =Zn1(τc,n1()(Zn1)t)+(Zn1(τd,n(+)(Zn1)t)c)1(τc,n1()(Zn1)<t)\displaystyle=Z_{n-1}(\tau^{(-)}_{c,n-1}(Z_{n-1})\wedge t)+(Z_{n1}(\tau^{(+)}_{d,n}(Z_{n1})\wedge t)-c)1(\tau^{(-)}_{c,n-1}(Z_{n-1})<t)
+(Zn2(t)d)1(τd,n(+)(Zn1)<t),\displaystyle\quad+(Z_{n2}(t)-d)1(\tau^{(+)}_{d,n}(Z_{n1})<t),

then we have, for t[0,τc,n()(Zn2))t\in[0,\tau^{(-)}_{c,n}(Z_{n2})),

Zn(t)=Zn(0)+0t(σ(Zn(u))dW~n(u)+b(Zn(u))du)+Yn(t),\displaystyle Z_{n}(t)=Z_{n}(0)+\int_{0}^{t}\left(\sigma(Z_{n}(u))d\widetilde{W}_{n}(u)+b(Z_{n}(u))du\right)+Y_{n}(t), (4.10)

where

Yn(t)=Yn1(τc,n1()(Zn1)t)+supu[τc,n1()(Zn1),τd,n(+)(Zn1)t](Xn(u))+,\displaystyle Y_{n}(t)=Y_{n-1}(\tau^{(-)}_{c,n-1}(Z_{n-1})\wedge t)+\sup_{u\in[\tau^{(-)}_{c,n-1}(Z_{n-1}),\tau^{(+)}_{d,n}(Z_{n1})\wedge t]}(-X_{n}(u))^{+},
W~n(t)=W~n1(t)1(tτd,n1(+)(Z(n1)1))+W~(n1)2(t)1(t>τd,n1(+)(Z(n1)1)).\displaystyle\widetilde{W}_{n}(t)=\widetilde{W}_{n-1}(t)1(t\leq\tau^{(+)}_{d,n-1}(Z_{(n-1)1}))+\widetilde{W}_{(n-1)2}(t)1(t>\tau^{(+)}_{d,n-1}(Z_{(n-1)1})).

From (4.10), we can see that Zn(){Zn(t);0t<τc,n()(Zn2)}Z_{n}(\cdot)\equiv\{Z_{n}(t);0\leq t<\tau^{(-)}_{c,n}(Z_{n2})\} is the solution of (2.1) for t<τc,n()(Zn2)t<\tau^{(-)}_{c,n}(Z_{n2}). Furthermore, Zn(t)=Zn+1(t)Z_{n}(t)=Z_{n+1}(t) for 0t<τc,n()(Zn2)0\leq t<\tau^{(-)}_{c,n}(Z_{n2}). From this observation, we define Z()Z(\cdot) by Z(0)=xZ(0)=x and

Z(t)=Z(0)+n=1Zn(t)1(τc,n1()(Zn1)t<τc,n()(Zn)),\displaystyle Z(t)=Z(0)+\sum_{n=1}^{\infty}Z_{n}(t)1(\tau^{(-)}_{c,n-1}(Z_{n-1})\leq t<\tau^{(-)}_{c,n}(Z_{n})), (4.11)
Y(t)=n=1Yn(t)1(τc,n1()(Zn1)t<τc,n()(Zn)),\displaystyle Y(t)=\sum_{n=1}^{\infty}Y_{n}(t)1(\tau^{(-)}_{c,n-1}(Z_{n-1})\leq t<\tau^{(-)}_{c,n}(Z_{n})), (4.12)
W(t)=n=1W~n(t)1(τc,n1()(Zn1)t<τc,n()(Zn)),t0,\displaystyle W(t)=\sum_{n=1}^{\infty}\widetilde{W}_{n}(t)1(\tau^{(-)}_{c,n-1}(Z_{n-1})\leq t<\tau^{(-)}_{c,n}(Z_{n})),\qquad t\geq 0, (4.13)

where τc,0()(Z0)=0\tau^{(-)}_{c,0}(Z_{0})=0, then Z()Z(\cdot) is the solution of (2.1) for t<τc,n()(Zn2)t<\tau^{(-)}_{c,n}(Z_{n2}) if τc,n1()(Zn1)<\tau^{(-)}_{c,n-1}(Z_{n-1})<\infty. Otherwise, if τc,n1()(Zn1)=\tau^{(-)}_{c,n-1}(Z_{n-1})=\infty and τc,m()(Zn1)<\tau^{(-)}_{c,m}(Z_{n-1})<\infty for m=1,2,,n2m=1,2,\ldots,n-2, then we stop the procedure by the (n1)(n-1)-th step.

Up to now, we have assumed that Z(0)=Z1(0)=Z11(0)=x<dZ(0)=Z_{1}(0)=Z_{11}(0)=x<d. If this xx is not less than dd, then we start with Z12()Z_{12}(\cdot) of (4.6) with Z12(0)=X12(0)=xdZ_{12}(0)=X_{12}(0)=x\geq d, and replace Z11()Z_{11}(\cdot) of (4.3) by

Z11(t)=X11(t)+supτc,1()(Z12)<ut(X11(u))+,t0.\displaystyle Z_{11}(t)=X_{11}(t)+\sup_{\tau^{(-)}_{c,1}(Z_{12})<u\leq t}(-X_{11}(u))^{+},\qquad t\geq 0.

Then, define Z1()Z_{1}(\cdot) as

Z1(t)\displaystyle Z_{1}(t) =Z12(τc,1()(Z12)t)+(Z11(t)c)1(τc,1()(Z12)t),t0,\displaystyle=Z_{12}(\tau^{(-)}_{c,1}(Z_{12})\wedge t)+(Z_{11}(t)-c)1(\tau^{(-)}_{c,1}(Z_{12})\leq t),\qquad t\geq 0,

where τc,1()(Z12)<τd,1(+)(Z11)\tau^{(-)}_{c,1}(Z_{12})<\tau^{(+)}_{d,1}(Z_{11}) because the order of Z11()Z_{11}(\cdot) and Z12()Z_{12}(\cdot) is swapped. Similarly to the previous case that x<dx<d, we repeat this procedure to inductively define Zn()Z_{n}(\cdot) for n2n\geq 2; then we can define Z()Z(\cdot) and Y()Y(\cdot) similarly to (4.11) and (4.12).

Hence, Z()Z(\cdot) of (4.11) is the solution of (2.1) if we show that there is some n1n\geq 1 for each t>0t>0 such that t<τc,n()(Z)t<\tau^{(-)}_{c,n}(Z). This condition is equivalent to supn1τc,n()(Z)=\sup_{n\geq 1}\tau^{(-)}_{c,n}(Z)=\infty almost surely. To see this, assume that τc,n()(Z)<\tau^{(-)}_{c,n}(Z)<\infty for all n1n\geq 1, and let Jn=τc,n()(Z)τc,n1()(Z)J_{n}=\tau^{(-)}_{c,n}(Z)-\tau^{(-)}_{c,n-1}(Z) for n1n\geq 1; then {Jn;n2}\{J_{n};n\geq 2\} is a sequence of i.i.d. positive random variables. Hence, we have

limnτc,n()(Z)limnm=2nJm=,a.s.,\displaystyle\lim_{n\to\infty}\tau^{(-)}_{c,n}(Z)\geq\lim_{n\to\infty}\sum_{m=2}^{n}J_{m}=\infty,\qquad a.s.,

and therefore Z(t)Z(t) is well defined for all t0t\geq 0. Otherwise, if τc,n()(Z)=\tau^{(-)}_{c,n}(Z)=\infty for some n1n\geq 1, then we stop the procedure by the nn-th step.

Thus, we have constructed the solution Z()Z(\cdot) of (2.1). Note that the probability distribution of this solution does not depend on the choice of c,dc,d as long as 0<c<d<10<c<d<\ell_{1} because of the independent increment property of the Brownian motion. Furthermore, this Z()Z(\cdot) is a strong Markov process because Zn,1()Z_{n,1}(\cdot) and Zn,2()Z_{n,2}(\cdot) are strong Markov processes (e.g. see (8.12) of [4], Theorem 21.11 of [7], Theorem 17.23 and Remark 17.2.4 of [5]) and Z()Z(\cdot) is obtained by continuously connecting their sample paths using stopping times. Thus, the Z()Z(\cdot) is the weak solution of (2.1) which is strong Markov.

It remains to prove the weak uniqueness of the solution Z()Z(\cdot). This is immediate from the construction of Z()Z(\cdot). Namely, suppose that Z~()\widetilde{Z}(\cdot) is the solution of (2.1) with Z~(0)=x\widetilde{Z}(0)=x for given x0x\geq 0. Assume that x<dx<d, then the process {Z~(t);0t<τd,1(+)(Z~)}\{\widetilde{Z}(t);0\leq t<\tau^{(+)}_{d,1}(\widetilde{Z})\} with Z~(0)=x<d<1\widetilde{Z}(0)=x<d<\ell_{1} is stochastically identical with {Z11(t);0t<τd,1(+)(Z)}\{Z_{11}(t);0\leq t<\tau^{(+)}_{d,1}(Z)\} with Z11(0)=xZ_{11}(0)=x, which is the unique solution of (4.4), while the process {Z~(t);τd,1(+)(Z~)<tτc,1()(Z~)}\{\widetilde{Z}(t);\tau^{(+)}_{d,1}(\widetilde{Z})<t\leq\tau^{(-)}_{c,1}(\widetilde{Z})\} must be stochastically identical with {X12(t);0t<τc,1()(Z)}\{X_{12}(t);0\leq t<\tau^{(-)}_{c,1}(Z)\} with X12(0)=dX_{12}(0)=d, which is the unique weak solution of (4.2). Similarly, we can see such stochastic equivalences in the subsequent periods for Z~(0)=x<d\widetilde{Z}(0)=x<d. On the other hand, if Z~(0)=xd\widetilde{Z}(0)=x\geq d, then similar equivalences are obtained. Hence, Z~()\widetilde{Z}(\cdot) and Z()Z(\cdot) have the same distribution for each fixed initial state x0x\geq 0. Thus, the Z()Z(\cdot) is a unique weak solution, and the proof of Lemma 2.2 is completed.

Remark 4.1.

From an analogy to the reflecting Brownian motion on the half line [0,)[0,\infty), it may be questioned whether the solution Z()Z(\cdot) of (2.1) can be directly obtained from the weak solution X()X(\cdot) of (2.4) by its absolute value, that is by |X|(){|X(t)|;t0}|X|(\cdot)\equiv\{|X(t)|;t\geq 0\}. This question is affirmatively answered under Condition 2.2 by Atar et al. [1]. It may be interesting to see how they prove (i) of Lemma 2.1, so we explain it below.

Recall that the solution X()X(\cdot) of the SIE (2.4) weakly exists under Condition 2.2. If |X|()|X|(\cdot) is the solution Z()Z(\cdot) of the stochastic integral equation (2.1), then we must have

|X|(t)=|X|(0)+0t(σ(|X|(u))dW(u)+b(|X|(u))du)+Y(t),t0.\displaystyle|X|(t)=|X|(0)+\int_{0}^{t}(\sigma(|X|(u))dW(u)+b(|X|(u))du)+Y(t),\qquad t\geq 0. (4.14)

On the other hand, from Tanaka formula (A.6) for a=0a=0, we have

|X|(t)|X|(0)=0tsgn(X(u))𝑑X(u)+L0(t)\displaystyle|X|(t)-|X|(0)=\int_{0}^{t}{\rm sgn}(X(u))dX(u)+L_{0}(t)
=0tsgn(X(u))(σ(X(u))dW(u)+b(X(u))du)+L0(t),t0.\displaystyle\quad=\int_{0}^{t}{\rm sgn}(X(u))\left(\sigma(X(u))dW(u)+b(X(u))du\right)+L_{0}(t),\qquad t\geq 0. (4.15)

Hence, letting Y()=L0()Y(\cdot)=L_{0}(\cdot), (4.14) is stochastically identical with (4.15) if

σ(x)=σ(|x|),b(x)=sgn(x)b(|x|),x,\displaystyle\sigma(x)=\sigma(|x|),\qquad b(x)={\rm sgn}(x)b(|x|),\qquad x\in\mathbb{R}, (4.16)

and if W()W(\cdot) is replaced by W~(){sgn(X(t))W(t);t0}\widetilde{W}(\cdot)\equiv\{{\rm sgn}(X(t))W(t);t\geq 0\}. Since the stochastic integral in (2.1) does not depend on σ(x)\sigma(x) and b(x)b(x) for x<0x<0, (4.16) does not cause any problem for (2.1).
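For concreteness, the even/odd coefficient extension (4.16) can be written out directly: σ is extended as an even function of x and b as an odd function. The 2-level coefficients below are arbitrary choices of ours.

```python
# Sketch of the coefficient extension (4.16): sigma extended evenly,
# b extended oddly, from [0, inf) to the whole real line.
l1 = 1.0
def b_half(x):   return 0.5 if x < l1 else -1.0    # b on [0, inf), arbitrary
def sig_half(x): return 1.0 if x < l1 else 1.5     # sigma on [0, inf), arbitrary

def sgn(x): return (x > 0) - (x < 0)

def b_ext(x):   return sgn(x)*b_half(abs(x))   # b(x) = sgn(x) b(|x|), odd
def sig_ext(x): return sig_half(abs(x))        # sigma(x) = sigma(|x|), even

xs = [0.3, 0.7, 1.2, 5.0]
print([(b_ext(-x) == -b_ext(x)) and (sig_ext(-x) == sig_ext(x)) for x in xs])
```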

4.2 Proof of Lemma 2.3

Recall the definition of τa=τB\tau_{a}=\tau_{B} for B={a}B=\{a\} (see (2.9)). We first prove that

𝔼x[τa]<,0x<a,\displaystyle\mathbb{E}_{x}[\tau_{a}]<\infty,\qquad 0\leq x<a, (4.17)
x[τa<]>0,0a<x.\displaystyle\mathbb{P}_{x}[\tau_{a}<\infty]>0,\qquad 0\leq a<x. (4.18)

Since Z(τat)aZ(\tau_{a}\wedge t)\leq a, 𝔼x[eθZ(τat)]<\mathbb{E}_{x}[e^{\theta Z(\tau_{a}\wedge t)}]<\infty for θ\theta\in\mathbb{R}. Hence, substituting the stopping time τat\tau_{a}\wedge t into tt of the generalized Ito formula (3.2.1) for test function f(x)=eθxf(x)=e^{\theta x} and taking the expectation under x\mathbb{P}_{x}, we have, for x<ax<a and θ\theta\in\mathbb{R},

𝔼x[eθZ(τat)]=eθx+𝔼x[0τatγ(Z(u),θ)eθZ(u)𝑑u]+θ𝔼x[Y(τat)],\displaystyle\mathbb{E}_{x}[e^{\theta Z(\tau_{a}\wedge t)}]=e^{\theta x}+\mathbb{E}_{x}\left[\int_{0}^{\tau_{a}\wedge t}\gamma(Z(u),\theta)e^{\theta Z(u)}du\right]+\theta\mathbb{E}_{x}[Y(\tau_{a}\wedge t)], (4.19)

where \gamma(x,\theta)=b(x)\theta+\frac{1}{2}\sigma^{2}(x)\theta^{2}. Note that, for each \varepsilon>0, \gamma(x,\theta)\geq\varepsilon if

θb(x)σ2(x)+(b(x)σ2(x))2+2εσ2(x) or θb(x)σ2(x)(b(x)σ2(x))2+2εσ2(x).\displaystyle\theta\geq\frac{-b(x)}{\sigma^{2}(x)}+\sqrt{\left(\frac{b(x)}{\sigma^{2}(x)}\right)^{2}+\frac{2\varepsilon}{\sigma^{2}(x)}}\;\mbox{ or }\;\theta\leq\frac{-b(x)}{\sigma^{2}(x)}-\sqrt{\left(\frac{b(x)}{\sigma^{2}(x)}\right)^{2}+\frac{2\varepsilon}{\sigma^{2}(x)}}. (4.20)

Recall that \beta_{i}=2b_{i}/\sigma^{2}_{i}, and introduce the following notation.

|β|max=maxi𝒩k|βi|,|β|min=mini𝒩k|βi|,σmax2=maxi𝒩kσi2,σmin2=mini𝒩kσi2.\displaystyle|\beta|_{\max}=\max_{i\in{\mathcal{N}}_{k}}|\beta_{i}|,\quad|\beta|_{\min}=\min_{i\in{\mathcal{N}}_{k}}|\beta_{i}|,\quad\sigma^{2}_{\max}=\max_{i\in{\mathcal{N}}_{k}}\sigma^{2}_{i},\quad\sigma^{2}_{\min}=\min_{i\in{\mathcal{N}}_{k}}\sigma^{2}_{i}.

Then, |β|max<|\beta|_{\max}<\infty, σmax2<\sigma^{2}_{\max}<\infty and σmin2>0\sigma^{2}_{\min}>0 by Condition 2.2, which is assumed in Lemma 2.3. Hence, for each ε>0\varepsilon>0, γ(x,θ)ε\gamma(x,\theta)\geq\varepsilon for θ12(|β|max+|β|max2+8ε/σmin2)\theta\geq\frac{1}{2}\left(|\beta|_{\max}+\sqrt{|\beta|^{2}_{\max}+8\varepsilon/\sigma^{2}_{\min}}\right) and x0x\geq 0. For this θ\theta, it follows from (4.19) that

eθaeθxε𝔼x[0τateθZ(u)𝑑u]εeθx𝔼x[τat],t0,\displaystyle e^{\theta a}-e^{\theta x}\geq\varepsilon\mathbb{E}_{x}\left[\int_{0}^{\tau_{a}\wedge t}e^{\theta Z(u)}du\right]\geq\varepsilon e^{\theta x}\mathbb{E}_{x}[\tau_{a}\wedge t],\qquad t\geq 0,

because \theta>0 and e^{\theta Z(u)}\geq 1 for u\in[0,\tau_{a}\wedge t]. This proves (4.17) because we have

\displaystyle\mathbb{E}_{x}[\tau_{a}\wedge t]\leq(e^{\theta a}-e^{\theta x})/\varepsilon<\infty,\qquad x<a, (4.21)

whose right-hand side does not depend on t, so letting t\to\infty and applying the monotone convergence theorem yield \mathbb{E}_{x}[\tau_{a}]<\infty.
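The choice of \theta above is easy to check numerically. The following sketch (illustrative, not from the paper; the level coefficients b_i and \sigma_i^2 are assumed values) verifies that \theta=\frac{1}{2}(|\beta|_{\max}+\sqrt{|\beta|^{2}_{\max}+8\varepsilon/\sigma^{2}_{\min}}) gives \gamma(x,\theta)\geq\varepsilon on every level:

```python
import math

# Numerical check (illustrative, not from the paper): with piecewise-constant
# coefficients b_i and sigma_i^2, the choice
#   theta = (|beta|_max + sqrt(|beta|_max^2 + 8 eps / sigma_min^2)) / 2
# makes gamma(x, theta) = b(x) theta + sigma^2(x) theta^2 / 2 >= eps on
# every level.  The three levels below are hypothetical.

b = [0.7, -1.3, 0.4]      # assumed level drifts b_i
sig2 = [1.0, 2.5, 0.5]    # assumed level variances sigma_i^2
eps = 0.1

beta_max = max(abs(2 * bi / s2i) for bi, s2i in zip(b, sig2))
sig2_min = min(sig2)
theta = 0.5 * (beta_max + math.sqrt(beta_max**2 + 8 * eps / sig2_min))

gammas = [bi * theta + 0.5 * s2i * theta**2 for bi, s2i in zip(b, sig2)]
print(all(g >= eps for g in gammas))  # True
```

The same computation with \theta replaced by its negative covers the case \theta<0 used for (4.22).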

We next consider the case for x>a>0x>a>0. Similarly to the previous case but for θ<0\theta<0, from (4.20), we have γ(x,θ)>ε\gamma(x,\theta)>\varepsilon for x>ax>a and ε>0\varepsilon>0 if θ\theta satisfies

θ12(|β|max+|β|max2+8ε/σmin2)<0.\displaystyle\theta\leq-\frac{1}{2}\left(|\beta|_{\max}+\sqrt{|\beta|^{2}_{\max}+8\varepsilon/\sigma^{2}_{\min}}\right)<0. (4.22)

Since Y(t)=0Y(t)=0 for tτat\leq\tau_{a} because Z(0)=x>aZ(0)=x>a, we have, from (4.19), for θ\theta satisfying (4.22),

𝔼x[eθZ(τat)]eθx+ε0t𝔼x[eθZ(u)1(uτa)]𝑑u,t0.\displaystyle\quad\mathbb{E}_{x}[e^{\theta Z(\tau_{a}\wedge t)}]\geq e^{\theta x}+\varepsilon\int_{0}^{t}\mathbb{E}_{x}[e^{\theta Z(u)}1(u\leq\tau_{a})]du,\qquad t\geq 0. (4.23)

Assume that \mathbb{P}_{x}(\tau_{a}=\infty)=1; then \mathbb{P}_{x}[t>\tau_{a}]=0, so we have, from (4.23),

𝔼x[eθZ(t)1(tτa)]=𝔼x[eθZ(τat)]eθx+ε0t𝔼x[eθZ(u)1(uτa)]𝑑u,t0.\displaystyle\mathbb{E}_{x}[e^{\theta Z(t)}1(t\leq\tau_{a})]=\mathbb{E}_{x}[e^{\theta Z(\tau_{a}\wedge t)}]\geq e^{\theta x}+\varepsilon\int_{0}^{t}\mathbb{E}_{x}[e^{\theta Z(u)}1(u\leq\tau_{a})]du,\qquad t\geq 0.

Denote 𝔼x[eθZ(u)1(uτa)]\mathbb{E}_{x}[e^{\theta Z(u)}1(u\leq\tau_{a})] by g(u)g(u). Then, after elementary manipulation, this yields

ddt(eεt0tg(u)𝑑u)eθxeεt,\displaystyle\frac{d}{dt}\left(e^{-\varepsilon t}\int_{0}^{t}g(u)du\right)\geq e^{\theta x}e^{-\varepsilon t},

and therefore, by integrating both sides of this inequality, we have

eθa1t0tg(u)𝑑ueθxeεt1εt,\displaystyle e^{\theta a}\geq\frac{1}{t}\int_{0}^{t}g(u)du\geq e^{\theta x}\frac{e^{\varepsilon t}-1}{\varepsilon t},

because g(u)=\mathbb{E}_{x}[e^{\theta Z(u)}1(u\leq\tau_{a})]\leq e^{\theta a} for \theta<0, since Z(u)\geq a on \{u\leq\tau_{a}\}. Letting t\to\infty in this inequality yields a contradiction because its right-hand side diverges while its left-hand side is constant. Hence, we have (4.18). We finally consider the case 0=a<x. If \mathbb{P}_{x}[Y(\tau_{0})=0]=1, then (4.23) holds, and the arguments below it work, which proves (4.18). Otherwise, if \mathbb{P}_{x}(Y(\tau_{0})=0)<1, that is, \mathbb{P}_{x}(Y(\tau_{0})>0)>0, then \mathbb{P}_{x}[\tau_{0}<\infty]>0 because Y(\cdot) increases only when Z(\cdot) is at the origin. Hence, we again have (4.18) for a=0.

We finally check the Harris irreducibility condition (see (2.7)). For this, let \tau=\tau_{0}\wedge\tau_{\ell_{1}}; then \{Z(t);t\in(0,\tau)\} is stochastically identical with \{X(t);t\in(0,\tau)\}, where X(t)\equiv X(0)+b_{1}t+\sigma_{1}W(t). Then, from Tanaka's formula (A.5) for Z(\cdot), if Z(0)=y\in(x,\ell_{1}),

12Lx(τt)\displaystyle\frac{1}{2}L_{x}(\tau\wedge t) =(Z(τt)x)\displaystyle=(Z(\tau\wedge t)-x)^{-}
+b10τt1(Z(u)x)𝑑u+σ10τt1(Z(u)x)𝑑W(u).\displaystyle\quad+b_{1}\int_{0}^{\tau\wedge t}1(Z(u)\leq x)du+\sigma_{1}\int_{0}^{\tau\wedge t}1(Z(u)\leq x)dW(u).

Hence, if b10b_{1}\geq 0, then

\displaystyle\mathbb{E}_{y}[L_{x}(t)]\geq 2\mathbb{E}_{y}[(x-Z(\tau\wedge t))1(x>Z(\tau\wedge t))]>0,\qquad t>0. (4.24)

Similarly, from (A.4) for X()=Z()X(\cdot)=Z(\cdot), if b1<0b_{1}<0, then, for y(0,x)(0,1)y\in(0,x)\subset(0,\ell_{1}),

𝔼y[Lx(t)]\displaystyle\mathbb{E}_{y}[L_{x}(t)] 𝔼y[Lx(τt)]\displaystyle\geq\mathbb{E}_{y}[L_{x}(\tau\wedge t)]
2𝔼y[(Z(τt)x)+]2b1𝔼y[0τt1(Z(u)>x)𝑑u]>0.\displaystyle\geq 2\mathbb{E}_{y}[(Z(\tau\wedge t)-x)^{+}]-2b_{1}\mathbb{E}_{y}\left[\int_{0}^{\tau\wedge t}1(Z(u)>x)du\right]>0. (4.25)

Assume that b_{1}\geq 0; then we choose y\in(x,\ell_{1}) and take the Lebesgue measure on [0,y] for \psi. Then, it follows from (4.24) and (A.1) with g=1_{B} that \psi(B)>0 for B\in{\mathcal{B}}(\mathbb{R}_{+}) implies

𝔼y[0t1B(Z(u))σ12𝑑u]𝔼y[01B(x)Lx(t)ψ(dx)]>0.\displaystyle\mathbb{E}_{y}\left[\int_{0}^{t}1_{B}(Z(u))\sigma_{1}^{2}du\right]\geq\mathbb{E}_{y}\left[\int_{0}^{\infty}1_{B}(x)L_{x}(t)\psi(dx)\right]>0.

Since Z(\cdot) hits state y\in(0,\ell_{1}) from any state in S with positive probability, this inequality implies the Harris irreducibility condition (2.7). Similarly, this condition is proved for b_{1}<0 using (4.25) and the Lebesgue measure on [y,\ell_{1}] for \psi. Thus, the proof of Lemma 2.3 is complete.

4.3 Proof of Lemma 2.4

Obviously, b_{*}<0 is necessary for Z(\cdot) to have a stationary distribution because Z(t) diverges a.s. if b_{*}>0 by the strong law of large numbers and Lemma 2.3, while Z(\cdot) is null recurrent if b_{*}=0.

Conversely, assume that b_{*}<0. We note the following fact, which is in part a counterpart of (4.17).

Lemma 4.1.

If b<0b_{*}<0 , then

𝔼x[τa]<,a<x.\displaystyle\mathbb{E}_{x}[\tau_{a}]<\infty,\qquad\ell_{*}\leq a<x. (4.26)
Proof.

Assume that \ell_{*}\leq a<x, and let X(t)\equiv x+b_{*}t+\sigma_{*}W(t) for t\geq 0. Since \ell_{*}<x, \{Z(t);0\leq t\leq\tau_{\ell_{*}}\} under \mathbb{P}_{x} has the same distribution as \{X(t);0\leq t\leq\tau^{X}_{\ell_{*}}\}, where \tau^{X}_{y}=\inf\{u\geq 0;X(u)=y\} for y\geq\ell_{*}. Hence, applying the optional sampling theorem to the martingale X(t)-x-b_{*}t for the stopping time \tau^{X}_{a}\wedge t, we have

𝔼x[X(τaXt)xb(τaXt)]=0,t0.\displaystyle\mathbb{E}_{x}[X(\tau^{X}_{a}\wedge t)-x-b_{*}(\tau^{X}_{a}\wedge t)]=0,\qquad t\geq 0.

Since X(t)\to-\infty as t\to\infty w.p.1 by the strong law of large numbers, letting t\to\infty in this equation yields b_{*}\mathbb{E}_{x}[\tau^{X}_{a}]=a-x. Hence, we have

\displaystyle\mathbb{E}_{x}[\tau_{a}]=\mathbb{E}_{x}[\tau^{X}_{a}]=\frac{-1}{b_{*}}(x-a),\qquad\ell_{*}\leq a<x. (4.27)

This proves (4.26). ∎
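Formula (4.27) is also easy to check by simulation. The sketch below (illustrative, not from the paper; all parameters are assumptions) estimates \mathbb{E}_{x}[\tau_{a}] for a Brownian motion with drift b_{*}<0 from discretely monitored paths and compares it with (x-a)/(-b_{*}); discrete monitoring biases the estimate slightly upward, so a loose tolerance is used:

```python
import numpy as np

# Monte Carlo sketch (illustrative, not from the paper): above ell_* the
# process is a Brownian motion with drift b_* < 0, so by (4.27)
# E_x[tau_a] = (x - a)/(-b_*).  Parameters below are assumed values.

def mean_hitting_time(x=2.0, a=1.0, b_star=-1.0, sigma_star=1.0,
                      n_paths=2000, dt=0.01, t_cap=200.0, seed=1):
    rng = np.random.default_rng(seed)
    n_steps = int(round(t_cap / dt))
    times = np.empty(n_paths)
    for p in range(n_paths):
        z, t = x, 0.0
        for _ in range(n_steps):
            z += b_star * dt + sigma_star * np.sqrt(dt) * rng.standard_normal()
            t += dt
            if z <= a:          # discrete monitoring of the hitting time tau_a
                break
        times[p] = t
    return times.mean()

est = mean_hitting_time()
exact = (2.0 - 1.0) / 1.0       # (x - a)/(-b_*) = 1
print(abs(est - exact) < 0.2)
```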

We return to the proof of Lemma 2.4. For n1n\geq 1 and x,y>x,y>\ell_{*} such that x<yx<y, inductively define Sn,TnS_{n},T_{n} as

Sn=inf{t>Tn1;Z(t)=y},Tn=inf{t>Sn;Z(t)=x},\displaystyle S_{n}=\inf\{t>T_{n-1};Z(t)=y\},\qquad T_{n}=\inf\{t>S_{n};Z(t)=x\},

where T0=0T_{0}=0. Because <x<y\ell_{*}<x<y, we have, from (4.17) and (4.26),

0<𝔼x[τy]<𝔼x[T1]𝔼x[τy]+𝔼y[τx]<.\displaystyle 0<\mathbb{E}_{x}[\tau_{y}]<\mathbb{E}_{x}[T_{1}]\leq\mathbb{E}_{x}[\tau_{y}]+\mathbb{E}_{y}[\tau_{x}]<\infty.

Hence, Z(\cdot) is a regenerative process with regeneration cycles \{T_{n};n\geq 1\} because the sequence \{Z(t);T_{n-1}\leq t<T_{n}\}, n\geq 1, is i.i.d. by the strong Markov property. Consequently, Z(\cdot) has the stationary probability measure \pi given by

π(B)=1𝔼x(T1)𝔼x[0T11(Z(u)B)𝑑u],B(+).\displaystyle\pi(B)=\frac{1}{\mathbb{E}_{x}(T_{1})}\mathbb{E}_{x}\left[\int_{0}^{T_{1}}1(Z(u)\in B)du\right],\qquad B\in{\mathcal{B}}(\mathbb{R}_{+}). (4.28)

Thus, Z()Z(\cdot) is positive recurrent.
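The representation (4.28) is consistent with the known exponential stationary distribution of the 1-level SRBM, which has rate 2|b|/\sigma^{2} when b<0. The following sketch (illustrative, not from the paper; parameters are assumptions) simulates a reflected path by applying the one-sided Skorokhod map to a discretized free Brownian motion and compares the time-average occupation of [0,0.5] with the exponential CDF:

```python
import numpy as np

# Numerical sketch (illustrative, not from the paper): for the 1-level SRBM
# (standard SRBM on the half line) with b < 0, the stationary distribution
# is exponential with rate 2|b|/sigma^2.  We check the time-average
# occupation of [0, 0.5] using the one-sided Skorokhod reflection map.

rng = np.random.default_rng(2)
b, sigma, dt, n = -1.0, 1.0, 1e-3, 2_000_000   # horizon T = n*dt = 2000

dX = b * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
X = np.concatenate(([0.0], np.cumsum(dX)))         # free path, X(0) = 0
# Skorokhod map: Z = X + Y with Y(t) = -min(0, min_{s<=t} X(s))
Z = X - np.minimum.accumulate(np.minimum(X, 0.0))

est = np.mean(Z <= 0.5)                            # time-average occupation
exact = 1.0 - np.exp(-2.0 * abs(b) / sigma**2 * 0.5)   # exponential CDF
print(abs(est - exact) < 0.05)
```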

5 Concluding remarks

We discuss two topics here.

5.1 Process limit

It is conjectured in [12] that the process limit of the diffusion-scaled queue length in the 2-level GI/G/1 queue in heavy traffic is the solution Z(\cdot) of the stochastic integral equation (2.1) for the 2-level SRBM. As discussed in Remark 3.2, the stationary distribution of Z(\cdot) is identical with the limit of the stationary distributions of the scaled queue lengths in the 2-level GI/G/1 queue in heavy traffic, obtained in [12]. This strongly supports the conjecture on the process limit.

We believe that the conjecture is true. However, the standard proof of diffusion approximation based on the functional central limit theorem may not work because of the state-dependent arrivals and service speed in the 2-level GI/G/1 queue. We are now working on this problem by formulating the queue length process of the 2-level GI/G/1 queue as a semi-martingale. However, we have not yet completed the proof, so this remains an open problem.

5.2 Stationary distribution under weaker conditions

In this paper, we derived the stationary distribution of a one-dimensional multi-level SRBM under the stability condition. In view of Corollary 3.1, it is natural to ask whether a similar stationary distribution can be obtained under conditions more general than Condition 2.1.

To consider this problem, we first need the existence of the solution Z()Z(\cdot) of (2.1), for which the existence of the solution X()X(\cdot) of (2.4) is sufficient as discussed in Remark 4.1. For the latter existence, Condition 2.2 is weaker than Condition 2.1, but Theorem 5.15 of [8] and Theorem 23.1 of [7] show that it can be further weakened to

Condition 5.1.
σ2(x)>0,x,\displaystyle\sigma^{2}(x)>0,\qquad\forall x\in\mathbb{R}, (5.1)
x1x21σ2(y)𝑑y<,(x1,x2)2 satisfying x1<x2,\displaystyle\int_{x_{1}}^{x_{2}}\frac{1}{\sigma^{2}(y)}dy<\infty,\qquad\forall(x_{1},x_{2})\in\mathbb{R}^{2}\mbox{ satisfying }x_{1}<x_{2}, (5.2)
xεx+ε|b(y)|σ2(y)𝑑y<,x,ε>0.\displaystyle\int_{x-\varepsilon}^{x+\varepsilon}\frac{|b(y)|}{\sigma^{2}(y)}dy<\infty,\qquad\forall x\in\mathbb{R},\exists\varepsilon>0. (5.3)

It is easy to see that Condition 5.1 is indeed implied by Condition 2.2. Note that the local integrability condition (5.2) implies that

limε0xεx+ε1σ2(y)𝑑y<,x,\displaystyle\lim_{\varepsilon\downarrow 0}\int_{x-\varepsilon}^{x+\varepsilon}\frac{1}{\sigma^{2}(y)}dy<\infty,\qquad\forall x\in\mathbb{R},

which is equivalent to Sσ=S_{\sigma}=\emptyset, where

Sσ={x;limε0xεx+ε1σ2(y)𝑑y=}.\displaystyle S_{\sigma}=\left\{x\in\mathbb{R};\lim_{\varepsilon\downarrow 0}\int_{x-\varepsilon}^{x+\varepsilon}\frac{1}{\sigma^{2}(y)}dy=\infty\right\}.

This condition Sσ=S_{\sigma}=\emptyset is needed for X(t)X(t) to exist for all t0t\geq 0 in the weak sense as shown by Theorem 23.1 of [7] and its subsequent discussions.

Assume Condition 5.1 for general \sigma(x) and b(x). If these functions are well approximated by simple functions (e.g., if \sigma(x) and b(x) have finitely many discontinuity points in each finite interval) and if b(x)\not=0 for all x\geq 0, then Corollary 3.1 suggests that the stationary density is given by (3.38) under the condition that

01σ2(x)exp(0x2b(y)σ2(y)𝑑y)𝑑x<.\displaystyle\int_{0}^{\infty}\frac{1}{\sigma^{2}(x)}\exp\left(\int_{0}^{x}\frac{2b(y)}{\sigma^{2}(y)}dy\right)dx<\infty. (5.4)

To legitimize this suggestion, we need to examine the approximation carefully, which we have not yet done. So, we leave it as a conjecture.
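For piecewise-constant coefficients, the integrand of (5.4) is piecewise exponential, so both (5.4) and the normalization of the conjectured density (3.38) can be evaluated in closed form. The sketch below (illustrative, not from the paper; the 2-level parameters are assumptions) checks a numerical evaluation of (5.4) against its closed-form value for such an example:

```python
import numpy as np

# Numerical sketch (illustrative, not from the paper): for piecewise-constant
# coefficients, the integrand of (5.4) is piecewise exponential.  The 2-level
# parameters below are assumptions; here beta_2 = 2*b2/s2sq = -1 < 0, so the
# integral (5.4) is finite and the conjectured density (3.38) is normalizable.
ell1, b1, s1sq, b2, s2sq = 1.0, 0.5, 1.0, -1.0, 2.0

x = np.linspace(0.0, 40.0, 400_001)          # truncate the half line at x = 40
b = np.where(x < ell1, b1, b2)
s2 = np.where(x < ell1, s1sq, s2sq)

# cumulative trapezoidal integral of 2 b(y) / sigma^2(y) over [0, x]
ratio = 2.0 * b / s2
dx = np.diff(x)
cum = np.concatenate(([0.0], np.cumsum(0.5 * (ratio[1:] + ratio[:-1]) * dx)))

h = np.exp(cum) / s2                         # unnormalized density from (3.38)
mass = np.sum(0.5 * (h[1:] + h[:-1]) * dx)   # trapezoidal value of (5.4)

# closed form for this example:
#   int_0^1 e^x dx + int_1^inf (e/2) e^{-(x-1)} dx = (e - 1) + e/2
exact = (np.e - 1.0) + np.e / 2.0
print(abs(mass - exact) < 1e-2)
```

Dividing h by mass then gives the normalized piecewise-exponential density suggested by (3.38) for this example.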

Appendix

A.1 Weak solution of a stochastic integral equation

There are two kinds of solutions for a stochastic integral equation such as (2.1). We consider them here only for the SIE (2.1). Recall that this equation is defined on the stochastic basis (\Omega,{\mathcal{F}},\mathbb{F},\mathbb{P}). If (2.1) holds almost surely on this stochastic basis, then the SIE (2.1) is said to have a strong solution; in this case, the standard Brownian motion W(\cdot) is defined on (\Omega,{\mathcal{F}},\mathbb{F},\mathbb{P}). On the other hand, the SIE (2.1) is said to have a weak solution if there are some stochastic basis (\widetilde{\Omega},\widetilde{{\mathcal{F}}},\widetilde{\mathbb{F}},\widetilde{\mathbb{P}}) and some \widetilde{\mathbb{F}}-adapted process (Z(\cdot),W(\cdot),Y(\cdot)) on it such that (2.1) holds almost surely and W(\cdot) is a standard Brownian motion under \widetilde{\mathbb{P}}_{x} for each x\geq 0, where \widetilde{\mathbb{P}}_{x} is the conditional distribution of \widetilde{\mathbb{P}} given Z(0)=x (e.g., see [8, Section 5.3]).

It may be better to use a different notation for the weak solution, e.g., (\widetilde{Z}(\cdot),\widetilde{W}(\cdot),\widetilde{Y}(\cdot)). However, we have used the same notation not only for this process but also for the stochastic basis, for notational convenience. Thus, when we discuss the weak solution, the stochastic basis (\Omega,{\mathcal{F}},\mathbb{F},\mathbb{P}) is understood to be appropriately replaced.

A.2 Local time and generalized Ito formula

We briefly discuss local time and the generalized Ito formula (3.12). This Ito formula is also called the Ito-Meyer-Tanaka formula (e.g., see Theorem 6.22 of [8] and Theorem 22.5 of [7]). Let X(\cdot) be a continuous semi-martingale with finite quadratic variation [X]_{t} for all t\geq 0. For this X(\cdot), the local time L_{x}(t) for x\in\mathbb{R} and t\geq 0 is defined through

\displaystyle\int_{-\infty}^{\infty}L_{x}(t)g(x)dx=\int_{0}^{t}g(X(u))d[X]_{u}\quad\mbox{ for any nonnegative measurable function $g$}. (A.1)

See Theorem 7.1 of [8] for details about the definition of local time. Note that the local time of [8] is half of the local time in this paper. Applying g(y)=1(xε,x+ε)(y)g(y)=1_{(x-\varepsilon,x+\varepsilon)}(y) for ε>0\varepsilon>0 to (A.1), we can see that

\displaystyle L_{x}(t)=\lim_{\varepsilon\downarrow 0}\frac{1}{2\varepsilon}\int_{0}^{t}1_{(x-\varepsilon,x+\varepsilon)}(X(u))d[X]_{u},\quad a.s.,\quad x\in\mathbb{R},\ t\geq 0. (A.2)

This can be used as the definition of the local time.
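The occupation-time characterization (A.2) can be illustrated numerically. For a standard Brownian motion, d[X]_{u}=du and, by the Tanaka formula (A.6) with a=0, \mathbb{E}[L_{0}(1)]=\mathbb{E}|W(1)|=\sqrt{2/\pi}. The sketch below (not from the paper; the sample sizes and \varepsilon are arbitrary choices) estimates \mathbb{E}[L_{0}(1)] via (A.2):

```python
import numpy as np

# Numerical sketch (illustrative, not from the paper): the occupation-time
# approximation (A.2) of the local time of standard Brownian motion at x = 0.
# Here d[X]_u = du, and by the Tanaka formula (A.6) with a = 0,
# E[L_0(1)] = E|W(1)| = sqrt(2/pi) ~ 0.798.

rng = np.random.default_rng(3)
n_paths, n_steps, eps = 2000, 5000, 0.05
dt = 1.0 / n_steps

total = 0.0
for _ in range(n_paths):
    W = np.cumsum(np.sqrt(dt) * rng.standard_normal(n_steps))  # BM path on [0,1]
    total += (np.abs(W) < eps).sum() * dt / (2.0 * eps)        # estimator (A.2)
est = total / n_paths

print(abs(est - np.sqrt(2.0 / np.pi)) < 0.08)
```

The estimator is slightly biased for fixed \varepsilon, which is why a loose tolerance is used.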

There are two versions of the local time since Lx(t)L_{x}(t) is continuous in tt, but may not be continuous in xx. So, usually, the local time Lx(t)L_{x}(t) is assumed to be right-continuous for the generalized Ito formula (3.12). However, if the finite variation component of X()X(\cdot) is not atomic, then Lx(t)L_{x}(t) is continuous in xx (see Theorem 22.4 of [7]). In particular, the finite variation component of Z()Z(\cdot) is continuous by Lemma 2.2, so we have the following lemma.

Lemma A.1.

For Z(\cdot) of a one-dimensional state-dependent SRBM with bounded drifts, its local time L_{x}(t) is continuous in x for each t\geq 0. Furthermore, \mathbb{E}[L_{x}(t)] is finite by (A.1) for X(\cdot)=Z(\cdot).

Let f be a concave test function from \mathbb{R} to \mathbb{R}; then -f is a convex function, where (-f)(x)=-f(x), so the generalized Ito formula (3.12) becomes

\displaystyle f(X(t))=f(X(0))+\int_{0}^{t}f^{\prime}(X(u)-)dX(u)-\frac{1}{2}\int_{0}^{\infty}L_{x}(t)\mu_{-f}(dx),\qquad t\geq 0. (A.3)

For constant aa\in\mathbb{R}, let f(x)=(xa)+max(0,xa)f(x)=(x-a)^{+}\equiv\max(0,x-a) for (3.12), then f(x)=1(x>a)f^{\prime}(x-)=1(x>a) and μf(B)=1(aB)\mu_{f}(B)=1(a\in B). Hence, it follows from (3.12) that

(X(t)a)+=(X(0)a)++0t1(X(u)>a)𝑑X(u)+12La(t).\displaystyle(X(t)-a)^{+}=(X(0)-a)^{+}+\int_{0}^{t}1(X(u)>a)dX(u)+\frac{1}{2}L_{a}(t). (A.4)

Similarly, applying f(x)=(xa)max(0,(xa))f(x)=(x-a)^{-}\equiv\max(0,-(x-a)) and f(x)=|xa|f(x)=|x-a|, we have, by (A.3) and (3.12),

(X(t)a)=(X(0)a)0t1(X(u)a)𝑑X(u)+12La(t),\displaystyle(X(t)-a)^{-}=(X(0)-a)^{-}-\int_{0}^{t}1(X(u)\leq a)dX(u)+\frac{1}{2}L_{a}(t), (A.5)
|X(t)a|=|X(0)a|+0tsgn(X(u)a)𝑑X(u)+La(t),t0,\displaystyle|X(t)-a|=|X(0)-a|+\int_{0}^{t}{\rm sgn}(X(u)-a)dX(u)+L_{a}(t),\qquad t\geq 0, (A.6)

where {\rm sgn}(x)=1(x>0)-1(x\leq 0). Note that any one of these three formulas can be used to define the local time L_{a}(t). In particular, (A.6) is called the Tanaka formula because it was originally obtained for Brownian motion by Tanaka [13].

Acknowledgement

This study was originally motivated by the BAR approach, coined by Jim Dai (e.g., see [3]). I am grateful to him for his continuous support of my work. I am also grateful to Rami Atar for his helpful comments on the solution of the stochastic integral equation (2.1). I also benefited from personal communications with Rajeev Bhaskaran. Last but not least, I sincerely thank Krishnamoorthy for encouraging me to present a talk at the International Conference on Advances in Applied Probability and Stochastic Processes, held in Thrissur, India, in January 2024. This paper was written as a follow-up to that talk and my paper [12].

References

  • Atar et al. [2022] Atar, R., Castiel, E. and Reiman, M. (2022). Parallel server systems under an extended heavy traffic condition: A lower bound. Tech. rep. Preprint, URL https://arxiv.org/pdf/2201.07855.
  • Atar et al. [2023] Atar, R., Castiel, E. and Reiman, M. (2023). Asymptotic optimality of switched control policies in a simple parallel server system under an extended heavy traffic condition. Tech. rep. Submitted for publication, URL https://arxiv.org/pdf/2207.08010.
  • Braverman et al. [2024] Braverman, A., Dai, J. and Miyazawa, M. (2024). The BAR-approach for multiclass queueing networks with SBP service policies. Stochastic Systems. Published online in Articles in Advance, URL https://doi.org/10.1287/stsy.2023.0011.
  • Chung and Williams [1990] Chung, K. L. and Williams, R. J. (1990). Introduction to Stochastic Integration. 2nd ed. Birkhäuser, Boston.
  • Cohen and Elliott [2015] Cohen, S. N. and Elliott, R. J. (2015). Stochastic Calculus and Applications. 2nd ed. Birkhäuser, Springer, New York.
  • Harrison [2013] Harrison, J. M. (2013). Brownian Models of Performance and Control. Cambridge University Press, New York.
  • Kallenberg [2001] Kallenberg, O. (2001). Foundations of Modern Probability. 2nd ed. Springer Series in Statistics, Probability and its applications, Springer, New York.
  • Karatzas and Shreve [1998] Karatzas, I. and Shreve, S. E. (1998). Brownian motion and stochastic calculus, vol. 113 of Graduate text in mathematics. 2nd ed. Springer, New York.
  • Kaspi and Mandelbaum [1994] Kaspi, H. and Mandelbaum, A. (1994). On Harris recurrence in continuous time. Mathematics of Operations Research, 19 211–222.
  • Kruk et al. [2007] Kruk, L., Lehoczky, J. and Ramanan, K. (2007). An explicit formula for the Skorokhod map on (0,a]. The Annals of Probability, 35 1740–1768.
  • Meyn and Tweedie [1993] Meyn, S. P. and Tweedie, R. L. (1993). Stability of Markovian processes II: Continuous time processes and sampled chains. Advances in Applied Probability, 25 487–517.
  • Miyazawa [2024] Miyazawa, M. (2024). Diffusion approximation of the stationary distribution of a two-level single server queue. Tech. rep. Submitted for publication, URL https://arxiv.org/abs/2312.11284.
  • Tanaka [1963] Tanaka, H. (1963). Note on continuous additive functionals of the 1-dimensional Brownian path. Z. Wahrscheinlichkeitstheorie verw. Gebiete, 1 251–257.
  • Tanaka [1979] Tanaka, H. (1979). Stochastic differential equations with reflecting boundary condition in convex regions. Hiroshima Math. J., 9 163–177.