Multi-level reflecting Brownian motion on the half line and its stationary distribution
Abstract
A semi-martingale reflecting Brownian motion is a popular process for diffusion approximations of queueing models and their networks. In this paper, we are concerned with the case in which it lives on the nonnegative half-line, but the drift and variance of its Brownian component change discontinuously at finitely many states. This reflecting diffusion process naturally arises from a state-dependent single server queue, studied by the author [12]. Our main interest is in its stationary distribution, which is important in applications. We define this reflecting diffusion process as the solution of a stochastic integral equation, and show that it uniquely exists in the weak sense. This result is also proved in a different way by Atar et al. [1]. In this paper, we consider its Harris irreducibility and stability, that is, positive recurrence, and derive its stationary distribution under this stability condition. The stationary distribution has a simple analytic expression, which is likely extendable to a more general state-dependent SRBM. Our proofs rely on the generalized Ito formula for a convex function and on local time.
Keywords: reflecting Brownian motion, multi-level, discontinuous diffusion coefficients, stationary distribution, stochastic integral equation, generalized Ito formula, Tanaka formula, Harris irreducibility.
MSC Classification: 60H20, 60J25, 60J55, 60H30, 60K25
1 Introduction
We are concerned with a semi-martingale reflecting Brownian motion (SRBM for short) on the nonnegative half-line in which the drift and variance of its Brownian component change discontinuously at finitely many states. The subintervals of the state space separated by these states are called levels. This reflecting SRBM is denoted by , and will be called a one-dimensional multi-level SRBM (see Definition 2.1). In particular, if the number of levels is , then it is called a one-dimensional -level SRBM. Note that the one-dimensional 1-level SRBM is just the standard SRBM on the half line.
Let be a one-dimensional -level SRBM. This reflecting process for arises in the recent study of Miyazawa [12] on the asymptotic analysis of a state-dependent single server queue, called the -level queue, in heavy traffic. This queueing model was motivated by an energy saving problem for internet servers.
In [12], it is conjectured that the reflecting process for is the weak solution of a stochastic integral equation (see (2.1) in Section 2) and, if its stationary distribution exists, then this distribution agrees with the limit of the scaled stationary distribution of the -level queue under heavy traffic, which is obtained under some extra conditions in Theorem 3.1 of [12]. While writing this paper, we learned that the weak existence of the solution has been shown by Atar et al. [1] for a more general model than the one studied here, and that its uniqueness is proved in [2, Lemma 4.1] under one of the conditions of this paper.
We refer to these results of Atar et al. [1, 2] as Lemma 2.1. However, we here prove a slightly different lemma, Lemma 2.2, which is more restrictive for the existence but less restrictive for the uniqueness. Lemma 2.2 includes some further results which will be used later. Furthermore, its proof is sample-path based and different from that of Lemma 2.1. We then show in Lemma 2.3 that is Harris irreducible, and give a necessary and sufficient condition for it to be positive recurrent in Lemma 2.4. These three lemmas are the basis of our stochastic analysis.
The main results of this paper are Theorem 3.2 and Corollary 3.1 for the -level SRBM, which derive the stationary distribution of without any extra condition under the stability condition obtained in Lemma 2.4. However, we first focus on the case for in Theorem 3.1, and then consider the case for general in Theorem 3.2. This is because the presentation and proof for general are notationally complicated, while the proof for can be used with minor modifications for general . The stationary distribution for is rather simple: it is composed of two mutually singular measures, one of which is truncated exponential or uniform on the interval , while the other is exponential on , where is the right endpoint of the first level, at which the variance and drift of the Brownian component of change discontinuously. One may easily guess these measures, but it is not easy to compute the weights by which the stationary distribution is uniquely determined (see [12]). We resolve this computation problem using the local time of the semi-martingale at .
The key tools for the proofs of the lemmas and theorems are the generalized Ito formula for a convex function and the local time due to Tanaka [13]. We also use the notion of a weak solution of a stochastic integral equation. These formulas and notions are standard in stochastic analysis nowadays (e.g., see [4, 5, 6, 7, 8]), but they are briefly reviewed in the appendix because they play major roles in our proofs.
This paper consists of five sections. In Section 2, we formally introduce a one-dimensional reflecting SRBM with state-dependent Brownian component, which includes the one-dimensional multi-level SRBM as a special case, and present preliminary results including Lemmas 2.2, 2.3 and 2.4, which are proved in Section 4. Theorems 3.1 and 3.2 are presented and proved in Section 3. Finally, related problems and a generalization of Theorem 3.2 are discussed in Section 5. In the appendix, the definitions of a weak solution of a stochastic integral equation and of the local time of a semi-martingale are briefly discussed in Sections A.1 and A.2, respectively.
2 Problem and preliminary lemmas
Let and be measurable positive and real-valued functions, respectively, of , where is the set of all real numbers. We are interested in the solution of the following stochastic integral equation, SIE for short.
(2.1) |
where is the standard Brownian motion, and is a non-decreasing process satisfying that for . We refer to this as a regulator. The state space of is denoted by , where .
As usual, we assume that all continuous-time processes are defined on a stochastic basis , and are right-continuous with left limits and -adapted, where is a right-continuous filtration. Note that there are two kinds of solutions, strong and weak, of the SIE (2.1). See Appendix A.1 for their definitions. In this paper, we refer to a weak solution simply as a solution unless stated otherwise.
If the functions and are Lipschitz continuous and their squares are bounded by for some constant , then the SIE (2.1) has a unique solution even for the multidimensional SRBM living on a convex region (see [14, Theorem 4.1]). However, we are interested in the case where and change discontinuously. In this case, a solution may not exist in general, so we need some condition. As discussed in Section 1, we are particularly interested in the case where they satisfy the following conditions. Let .
Condition 2.1.
There are an integer and a strictly increasing sequence satisfying , for and such that functions and for are given by
(2.2) |
where is the indicator function of proposition “”, and , for are constants.
Since of (2.1) is nonnegative, and are only used for in (2.1). Taking this into account, we partition the state space of by under Condition 2.1 as follows.
(2.3) |
We call these ’s levels. Note that and are constant in on each level, and they may change discontinuously at state for under Condition 2.1.
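Since the display (2.2) is not reproduced above, the following is a minimal sketch of the piecewise-constant form that Condition 2.1 describes; the symbols $\sigma_i$, $\mu_i$ and the level boundaries $\ell_i$ are illustrative choices, not necessarily the notation of the original display.
\[
% Piecewise-constant coefficients on the levels (illustrative notation),
% with \ell_0 = 0 < \ell_1 < \dots < \ell_{K-1} < \ell_K = \infty:
\sigma^2(x) = \sum_{i=1}^{K} \sigma_i^2 \, 1\bigl(\ell_{i-1} \le x < \ell_i\bigr),
\qquad
\mu(x) = \sum_{i=1}^{K} \mu_i \, 1\bigl(\ell_{i-1} \le x < \ell_i\bigr),
\qquad x \ge 0 .
\]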
We start with the existence of the solution of (2.1), which will be implied by the existence of the solution of the following stochastic integral equation in the weak sense.
(2.4) |
See Remark 4.1. Taking this into account, we will also consider the condition below for the existence of the weak solution .
Condition 2.2.
The functions and are measurable functions satisfying that
(2.5) |
For the unique existence of the weak solution , Condition 2.2 is further weakened to Condition 5.1 by Theorem 5.15 of [8] (see Section 5.2 for its details). However, the latter condition is quite complicated. So, we take the simpler condition (2.5), which is sufficient for our arguments.
Definition 2.1.
The solution of (2.1) under Condition 2.1 is called a one-dimensional multi-level SRBM; in particular, it is called a one-dimensional -level SRBM if it has levels, namely, the total number of partitions in (2.3) is , while the solution under Condition 2.2 is called a one-dimensional state-dependent SRBM with bounded drifts.
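For readers who wish to experiment numerically, the following is a minimal simulation sketch of a multi-level SRBM: an Euler scheme for (2.1) in which the regulator is realized step by step by reflection at the origin. It is not part of the paper's analysis, and the names simulate_multilevel_srbm, boundaries, sigma_levels and mu_levels, as well as the example parameters, are illustrative assumptions.

import numpy as np

def simulate_multilevel_srbm(z0, boundaries, sigma_levels, mu_levels,
                             T=100.0, dt=1e-3, rng=None):
    """Euler sketch for the SIE (2.1) with piecewise-constant coefficients
    (Condition 2.1); reflection at 0 is applied at every step."""
    rng = np.random.default_rng() if rng is None else rng
    n = int(T / dt)
    z = np.empty(n + 1)
    z[0] = z0
    for k in range(n):
        # level index of the current state (coefficients are constant on each level)
        i = int(np.searchsorted(boundaries, z[k], side='right'))
        dw = rng.normal(0.0, np.sqrt(dt))
        x = z[k] + mu_levels[i] * dt + sigma_levels[i] * dw
        z[k + 1] = max(x, 0.0)  # one-step reflection at the origin
    return z

# Example: a 2-level SRBM with level boundary 1.0; the negative drift on the
# upper level corresponds to the stability condition (illustrative values only).
path = simulate_multilevel_srbm(z0=0.5, boundaries=[1.0],
                                sigma_levels=[1.0, 2.0], mu_levels=[0.5, -1.0])
print(path[-5:])

The long-run histogram of such a path can be compared with the two-piece density of Theorem 3.1.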
The proof of (i) of Lemma 2.1 is easy (see Remark 4.1), while the proof of (ii) is quite technical. Instead of this lemma, we will use the following lemma, in which (i) and (ii) of Lemma 2.1 are proved under more restrictive and less restrictive conditions, respectively.
Lemma 2.2.
We prove this lemma in Section 4.1; the proof is different from that of Lemma 2.1 by Atar et al. [1, 2].
The main interest of this paper is to derive the stationary distribution of the for the one-dimensional multi-level SRBM under an appropriate stability condition. Since this reflecting diffusion process satisfies the conditions of Lemma 2.2, is a strong Markov process. Hence, our first task for deriving its stationary distribution is to consider its irreducibility and positive recurrence. To this end, we introduce Harris irreducibility and recurrence following [11]. Let be the Borel field, that is, the minimal -algebra on which contains all open sets of . Then, a real-valued process which is right-continuous with left limits is called Harris irreducible if there is a non-trivial -finite measure on such that, for , implies
(2.7) |
while it is called Harris recurrent if (2.7) can be replaced by
(2.8) |
where for , and for a random variable .
Harris conditions (2.7) and (2.8) are related to hitting times. Define the hitting time at a subset of the state space as
(2.9) |
where if for all . We denote simply by for . Then, it is known that the Harris recurrence condition (2.8) is equivalent to
(2.10) |
See Theorem 1 of [9] (see also [11]) for a proof of this equivalence. However, the Harris irreducibility condition (2.7) may not be equivalent to . In what follows, is defined for the process under discussion unless stated otherwise.
Using those notions and notations, we present the following basic facts for the one-dimensional state-dependent SRBM with bounded drifts, where is strong Markov by Lemma 2.2.
Lemma 2.3.
Remark 2.1.
(ii) is not surprising because we can intuitively see that the process is pushed upward by the reflection at the origin and by the positive variances.
Lemma 2.4.
For the one-dimensional state-dependent SRBM with bounded drifts, if the condition (2.6) of Lemma 2.2 is satisfied and if there are constants and such that
(2.11) |
then has a stationary distribution if and only if . In particular, the one-dimensional -level SRBM has a stationary distribution if and only if .
3 Stationary distribution of multi-level SRBM
We are concerned with the multi-level SRBM. Denote the number of its levels by . We first introduce basic notation. Let , and define
In this section, we derive the stationary distribution of the one-dimensional -level SRBM for arbitrary . We first focus on the case for because this is the simplest case, but its proof contains all the ideas that will be used for general .
3.1 Stationary distribution for
Throughout Section 3.1, we assume that .
Theorem 3.1 (The case for ).
The of the one-dimensional -level SRBM has a stationary distribution if and only if , equivalently, . Assume that , and let be the stationary distribution of ; then is unique and has a probability density function , which is given below.
(i) If , then
(3.1) |
where and are probability density functions defined as
(3.2) |
and for are positive constants defined by
(3.3) |
(ii) If , then
(3.4) |
where is defined in (3.2), and
(3.5) | ||||
(3.6) |
Remark 3.1.
(a) (3.5) and (3.6) are obtained from (3.2) and (3.3) by letting .
(b) Assume that is a stationary process, and define the moment generating functions (mgf for short):
Here, and are finite for , and so is for . However, all of them are uniquely identified for as Laplace transforms. So, in what follows, we always assume that unless stated otherwise.
For , let and be the moment generating functions of and , respectively, then
(3.7) | ||||
(3.8) |
where the singular points in (3.7) are immaterial for determining , so we adopt the convention that exists for these .
Remark 3.2.
Miyazawa [12] conjectures that the diffusion-scaled process limit of the queue length of the 2-level queue in heavy traffic is the solution of the stochastic integral equation (5.2) of [12]. This stochastic equation corresponds to (2.1), but , and of the present paper need to be replaced by , , for , respectively. Under these replacements, also needs to be replaced by . Then, it follows from (3.3), (3.6), (3.10) and (3.11) that, under the setting of Miyazawa [12], for ,
and, for ,
Hence, the limiting distributions in (ii) of Theorem 3.1 of [12] are identical with the stationary distributions in Theorem 3.1 here. Note that the limiting distributions in [12] are obtained under some extra conditions, which are not needed for Theorem 3.1.
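As a point of comparison with Theorem 3.1, we recall a classical fact that is not part of the theorem itself: the 1-level SRBM on the half line with constant drift $\mu < 0$ and variance $\sigma^2$ has the exponential stationary density
\[
h(x) = \frac{2|\mu|}{\sigma^{2}} \, e^{-2|\mu| x/\sigma^{2}}, \qquad x \ge 0 .
\]
The density in Theorem 3.1 patches together densities of this exponential (or, when the drift on the first level vanishes, uniform) type on the two levels, with weights determined by the local time at the boundary between them.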
3.2 Proof of Theorem 3.1
By Remark 3.1, it is sufficient to show (3.9), (3.10) and (3.11) for the proof of Theorem 3.1. We will do it in three steps.
3.2.1 1st step of the proof
In this subsection, we derive two stochastic equations from (2.1). For this, we use the generalized Ito formulas for a continuous semi-martingale with finite quadratic variations for all . For a convex test function , this Ito formula is given by
(3.12) |
where is the local time of which is right-continuous in , and on is a measure on , defined by
(3.13) |
where is the left derivative of at . See Appendix A.2 for the definition of local time and more about its connection to the generalized Ito formula (3.12).
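Since the display (3.12) is not reproduced above, we record the standard form of the generalized Ito (Ito-Tanaka-Meyer) formula under the occupation-density normalization of local time used in this paper (see Appendix A.2). The symbol $Y$ for the continuous semi-martingale is chosen here for illustration; the formula itself is a standard fact.
\[
f(Y(t)) = f(Y(0)) + \int_0^t f'_-(Y(s))\, dY(s)
+ \frac{1}{2} \int_{\mathbb{R}} L_t^a \, \mu_f(da),
\]
where $f'_-$ is the left derivative of the convex function $f$, $L_t^a$ is the local time of $Y$ at level $a$, and $\mu_f$ is the second-derivative measure of $f$ as in (3.13).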
Furthermore, if is twice differentiable, then (3.12) can be written as
(3.14) |
which is the well-known Ito formula.
In our application of the generalized Ito formula, we first take the following convex function with parameter as a test function.
(3.15) |
Since and , it follows from (3.13) that
On the other hand, for . Hence,
Then, applying local time characterization (A.1) to this formula, we have
(3.16) |
We next compute the quadratic variation of . Define by
(3.17) |
then is a martingale. Denote the quadratic variations of and , respectively, by and . Since and are continuous in , it follows from (2.1) that
(3.18) |
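For reference, because the drift term and the regulator in (2.1) are continuous processes of finite variation, only the stochastic integral contributes to the quadratic variation of the solution. A sketch of this standard identity, written in generic symbols $Z$, $W$ and $\sigma$ for the solution, the driving Brownian motion and the dispersion function of (2.1), is:
\[
\langle Z \rangle_t = \Bigl\langle \int_0^{\cdot} \sigma(Z(s))\, dW(s) \Bigr\rangle_t
= \int_0^t \sigma^2(Z(s))\, ds .
\]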
3.2.2 2nd step of the proof
The first statement of Theorem 3.1 is immediate from Lemma 2.4. Hence, under , we can assume that is a stationary process by taking its stationary distribution for the distribution of . In what follows, this is always assumed.
Recall the moment generating functions and , which are defined in Remark 3.1. We first consider the stochastic integral equation (3.2.1) to compute . Since is finite by Lemma A.1, taking the expectation of (3.2.1) for and yields
(3.21) |
because . Note that this equation implies that is also finite.
Using (3.21), we consider separately for and . First, assume that . Then, from (3.21) and , we have
(3.22) |
This equation can be written as
(3.23) |
Observe that the first term on the right-hand side of (3.23) is proportional to the moment generating function (mgf) of the signed measure on whose density function is exponential, while its second term is the mgf of a measure on , but the left-hand side of (3.23) is the mgf of a probability measure on . Hence, we must have
(3.24) |
and therefore (3.23) yields
(3.25) |
where is defined in (3.7), but also exists for by our convention.
We next assume that . In this case, , and it follows from (3.22) that
(3.26) |
Since is the mgf of the uniform distribution on , for the same reason as in the case for , we must have
Note that this equation is identical with (3.24) for . Furthermore, and . Hence, (3.26) implies that
(3.27) |
by our convention for similar to . Thus, we have the following lemma.
Lemma 3.1.
The mgf is obtained as
(3.28) |
We next consider the stochastic integral equation (3.2.1) to derive . In this case, we use (3.2.1). Note that and are finite for . Hence, taking the expectations of both sides of (3.2.1) for and yields
(3.29) |
Substituting of (3.21) and of (3.24) into this equation, we have
(3.30) |
The following lemma is immediate from this equation since .
Lemma 3.2.
3.2.3 3rd step of the proof
We now prove (3.9), (3.10) and (3.11). Since , (3.9) is immediate from Lemmas 3.1 and 3.2. To prove (3.10), assume that . In this case, from (3.25) and (3.31), we have
Taking the ratios of both sides, we have
Since , this and yield
This proves (3.10). We next assume that , then it follows from (3.27) and (3.31) that
Similarly to the case for , this yields
(3.32) | ||||
(3.33) |
This proves (3.11). Thus, the proof of Theorem 3.1 is completed.
3.3 Stationary distribution for general
We now derive the stationary distribution of the one-dimensional -level SRBM for a general positive integer . Recall the definition of , and define as
where for . Also recall that the state space is partitioned into defined in (2.3) for .
Theorem 3.2 (The case for general ).
The of the one-dimensional -level SRBM has a stationary distribution if and only if , equivalently, . Let , and assume that . Denote the stationary distribution of by ; then is unique and has a probability density function for , which is given below.
(i) If , that is, for all , then
(3.34) |
where for are probability density functions defined as
(3.35) |
and for are positive constants defined as
(3.36) |
(ii) If , that is, for some , then
(3.37) |
where and for .
Before proving this theorem in Section 3.4, we note that the density has a simple expression, which is further discussed in Section 5.2.
Corollary 3.1.
Under the assumptions of Theorem 3.2, the probability density function of the stationary distribution of the -level on when for all is given by
(3.38) |
where
(3.39) |
Proof.
3.4 Proof of Theorem 3.2
Similarly to the proof of Theorem 3.1, the first statement is immediate from Lemma 2.4, and we can assume that is a stationary process since . We also always assume that . Define the moment generating functions (mgf):
which are obviously finite because . Then, the mgf of is expressed as
and for .
We first prove (i). In this case, let be the mgf of for , then
(3.42) |
Hence, (3.34) is obtained if we show that, for ,
(3.43) | ||||
(3.44) |
To prove (3.43) and (3.44), we use the following convex function with parameter as a test function for the generalized Ito formula similar to (3.15).
(3.45) |
Since and , it follows from (3.13) that
and, for . Hence, similarly to (3.2.1), the generalized Ito formula (3.12) for becomes
(3.46) |
Similarly to the proof of Theorem 3.1, we next apply the Ito formula with test function to (2.1); then we have (3.2.1) for and , which are defined by (2.2). From (3.4) and (3.2.1), we will compute the stationary distribution of .
We first consider this equation for . In this case, (3.4) becomes
(3.47) |
Then, by the same arguments as in the proof of Theorem 3.1, we have (3.21) and (3.24), which imply
(3.48) |
Hence, we have
(3.49) |
Thus, (3.43) is proved for . We prove (3.44) after (3.43) is proved for all .
We next prove (3.43) for . In this case, we use of (3.4). Take the difference for each fixed and take the expectation under which is stationary, then we have
because . This yields
(3.50) |
Since is the mgf of a measure on , we must have
(3.51) |
Hence, (3.4) becomes, for ,
(3.52) | ||||
(3.53) |
Hence, we have (3.43) for .
We finally prove (3.43) for . Similarly to the case for in the proof of Theorem 3.1, it follows from (3.2.1) that
(3.54) |
Similarly, from (3.4) for , we have
(3.55) |
Taking the difference of (3.54) and (3.55), we have
which yields
(3.56) |
Hence, we have (3.43) for . Namely,
It remains to prove (3.44) for . For this, we note that (3.24) is still valid, which is
Hence, recalling that , (3.51) yields
(3.57) |
From (3.53), (3.56), (3.57) and the fact that , we have
(3.58) |
Since , it follows from (3.58) that
(3.59) |
because . Substituting this into (3.58) and using , we have (3.44) for because is defined by (3.36).
(ii) is proved for from (i) and (a) of Remark 3.1. It is not hard to see that observation (a) is also valid for any for . Hence, (ii) can also be proved for from (i).
4 Proofs of preliminary lemmas

4.1 Proof of Lemma 2.2
Recall that Lemma 2.2 assumes the condition (2.6) and the conditions of the one-dimensional state-dependent SRBM with bounded drifts. Since , there are constants such that . Using these constants, we construct the weak solution of (2.1). The basic idea is to construct the sample path of separately on disjoint time intervals, where, for the first interval, if , then stays there until it hits or, if , then it stays there until it hits , and, for the subsequent intervals, either starts below until it hits , which is called an up-crossing period, or starts at or above it until it hits , which is called a down-crossing period. Namely, except for the first interval, an up-crossing period always starts at , and a down-crossing period always starts at (see Figure 1). In this construction, we also construct the filtration to which is adapted.
Define as
(4.1) |
and let be the solution of the following stochastic integral equation:
(4.2) |
Note that the SIE (2.4) is stochastically identical with the SIE (4.2). Hence, as we discussed below (2.4), the SIE (4.2) has a unique weak solution under Condition 2.2. Thus, the solution weakly exists because the assumptions of Lemma 2.2 imply Condition 2.2. For this weak solution, we use the same notation for , and the stochastic basis for convenience, where . Without loss of generality, we expand this stochastic basis so that it accommodates , and countably many independent copies of and for , which are denoted by and for .
We first construct the weak solution of (2.1) when , using and . For this construction, we introduce up and down crossing times for a given real-valued semi-martingale . Denote the -th up-crossing time at from below by , and denote the down-crossing time at from above by . Namely, for and ,
where . Note that and may be infinite with positive probabilities. In this case, there is no further splitting, which causes no problem in constructing the sample path of because such a sample path is already defined for all . After the weak solution is obtained, we will see that for by Lemma 2.3, but may be infinite with a positive probability.
We now inductively construct for , where the construction below is stopped when diverges. For , we denote the independent copy of with by , and define as
(4.3) |
then it is well known that is the unique solution of the stochastic integral equation:
(4.4) |
where is nondecreasing and for . Furthermore, for , (e.g., see [10]). Since and for , (4.4) can be written as
(4.5) |
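For orientation, the classical one-sided Skorokhod reflection at the origin, which underlies (4.3) and (4.4), can be written explicitly; this is a known fact (see, e.g., [10]), stated here with the illustrative symbols $X$ for the free path and $(Z, Y)$ for the reflected pair.
\[
Y(t) = \sup_{0 \le s \le t} \bigl( -X(s) \bigr)^{+},
\qquad
Z(t) = X(t) + Y(t) \ge 0 ,
\]
where $Y$ is nondecreasing and increases only when $Z = 0$; these properties determine the pair $(Z, Y)$ uniquely whenever $X(0) \ge 0$.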
We next denote the independent copy of with by , and define
(4.6) |
then we have, for ,
(4.7) |
where recall that . Define
then is stochastically identical with for . Hence, it follows from (4.1) and (4.7) that, for ,
(4.8) |
where and
(4.9) |
We repeat the same procedure to inductively define with and for together with and for by
as long as , and define
then we have, for ,
(4.10) |
where
From (4.10), we can see that is the solution of (2.1) for . Furthermore, for . From this observation, we define by and
(4.11) | ||||
(4.12) | ||||
(4.13) |
where , then is the solution of (2.1) for if . Otherwise, if and for , then we stop the procedure by the -th step.
Up to now, we have assumed that . If this is not less than , then we start with of (4.6) with , and replace of (4.3) by
Then, define as
where because the order of and is swapped. Similarly to the previous case where , we repeat this procedure to inductively define for , and then we can define and similarly to (4.11) and (4.12).
Hence, of (4.11) is the solution of (2.1) if we show that there is some for each such that . This condition is equivalent to almost surely. To see this, assume that for all , and let for ; then is a sequence of positive-valued random variables. Hence, we have
and therefore is well defined for all . Otherwise, if for some , then we stop the procedure by the -th step.
Thus, we have constructed the solution of (2.1). Note that the probability distribution of this solution does not depend on the choice of as long as because of the independent increment property of the Brownian motion. Furthermore, this is a strong Markov process because and are strong Markov processes (e.g. see (8.12) of [4], Theorem 21.11 of [7], Theorem 17.23 and Remark 17.2.4 of [5]) and is obtained by continuously connecting their sample paths using stopping times. Thus, the is the weak solution of (2.1) which is strong Markov.
It remains to prove the weak uniqueness of the solution . This is immediate from the construction of . Namely, suppose that is the solution of (2.1) with for a given . Assume that ; then the process with is stochastically identical with with , which is the unique solution of (4.4), while the process must be stochastically identical with with , which is the unique weak solution of (4.2). Similarly, we can see such stochastic equivalences in the subsequent periods for . On the other hand, if , then similar equivalences are obtained. Hence, and have the same distribution for each fixed initial state . Thus, the is the unique weak solution, and the proof of Lemma 2.2 is completed.
Remark 4.1.
By analogy with the reflecting Brownian motion on the half line , one may ask whether the solution of (2.1) can be obtained directly from the weak solution of (2.4) as its absolute value, that is, by . This question is answered affirmatively under Condition 2.2 by Atar et al. [1]. It may be interesting to see how they prove (i) of Lemma 2.1, so we explain it below.
Recall that the solution of the SIE (2.4) weakly exists under Condition 2.2. If is the solution of the stochastic integral equation (2.1), then we must have
(4.14) |
On the other hand, from Tanaka formula (A.6) for , we have
(4.15) |
Hence, letting , (4.14) is stochastically identical with (4.1) if
(4.16) |
and if is replaced by . Since the stochastic integral in (2.1) does not depend on and for , (4.16) does not cause any problem for (2.1).
4.2 Proof of Lemma 2.3
Recall the definition of for (see (2.9)). We first prove that
(4.17) | ||||
(4.18) |
Since , for . Hence, substituting the stopping time into of the generalized Ito formula (3.2.1) for test function and taking the expectation under , we have, for and ,
(4.19) |
where . Note that, for each , if
(4.20) |
Recall that , and introduce the following notations.
Then, , and by Condition 2.2, which is assumed in Lemma 2.3. Hence, for each , for and . For this , it follows from (4.19) that
because and for . This proves (4.17) because we have
(4.21) |
We next consider the case for . Similarly to the previous case but for , from (4.20), we have for and if satisfies
(4.22) |
Since for because , we have, from (4.19), for satisfying (4.22),
(4.23) |
Assume that , then , so we have, from (4.23),
Denote by . Then, after elementary manipulation, this yields
and therefore, by integrating both sides of this inequality, we have
because for . Letting in this inequality, we have a contradiction because its right-hand side diverges. Hence, we have (4.18). We finally consider the case . If , then (4.23) holds, and the argument below it works, which proves (4.18). Otherwise, if , that is, , then because of the definition of . Hence, we again have (4.18) for .
We finally check the Harris irreducibility condition (see (2.7)). For this, let ; then is stochastically identical with , where . Then, from Tanaka’s formula (A.5) for , if ,
Hence, if , then
(4.24) |
Similarly, from (A.4) for , if , then, for ,
(4.25) |
Assume that , then we choose and the Lebesgue measure on for . Then, it follows from (4.24) and (A.1) with that for implies
Since hits state from any state in with positive probability, this inequality implies the Harris irreducibility condition (2.7). Similarly, this condition is proved for using (4.2) and the Lebesgue measure on for . Thus, the proof of Lemma 2.3 is completed.
4.3 Proof of Lemma 2.4
Obviously, is necessary for to have a stationary distribution because diverges if by the strong law of large numbers and Lemma 2.3 while is null recurrent if .
Conversely, assume that . We note the following fact, which is partially a counterpart of (4.17).
Lemma 4.1.
If , then
(4.26) |
Proof.
Assume that , and let for . Since , under has the same distribution as , where for . Hence, applying the optional sampling theorem to the martingale for stopping time , we have
Since as w.p.1 by the strong law of large numbers, letting in this equation yields . Hence, we have
(4.27) |
This proves (4.26). ∎
We return to the proof of Lemma 2.4. For and such that , inductively define as
where . Because , we have, from (4.17) and (4.26),
Hence, is a regenerative process with regeneration cycles because the sequence of for is by its strong Markov property. Hence, has the stationary probability measure given by
(4.28) |
Thus, is positive recurrent.
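The stationary distribution of a positive recurrent regenerative process has the following standard cycle-average form, which is what a formula such as (4.28) expresses; the symbols $\tau$ for the regeneration cycle length and $Z$ for the process are illustrative.
\[
\pi(B) = \frac{\mathbb{E}\bigl[\int_0^{\tau} 1\bigl(Z(s) \in B\bigr)\, ds\bigr]}{\mathbb{E}[\tau]},
\qquad B \in \mathcal{B}(\mathbb{R}_+).
\]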
5 Concluding remarks
We discuss two topics here.
5.1 Process limit
It is conjectured in [12] that a process limit of the diffusion-scaled queue length in the 2-level queue in heavy traffic is the solution of the stochastic integral equation (2.1) for the 2-level SRBM. As we discussed in Remark 3.2, the stationary distribution of is identical with the limit of the stationary distribution of the scaled queue length in the 2-level queue in heavy traffic, obtained in [12]. This strongly supports the conjecture on the process limit.
We believe that the conjecture is true. However, the standard proof of a diffusion approximation based on the functional central limit theorem may not work because of the state-dependent arrivals and service speed in the 2-level queue. We are now working on this problem by formulating the queue length process of the 2-level queue as a semi-martingale. However, we have not yet completed the proof, so this is an open problem.
5.2 Stationary distribution under weaker conditions
In this paper, we derived the stationary distribution of a one-dimensional multi-level SRBM under the stability condition. In view of Corollary 3.1, it is natural to ask whether a similar stationary distribution can be obtained under conditions more general than Condition 2.1.
To consider this problem, we first need the existence of the solution of (2.1), for which the existence of the solution of (2.4) is sufficient as discussed in Remark 4.1. For the latter existence, Condition 2.2 is weaker than Condition 2.1, but Theorem 5.15 of [8] and Theorem 23.1 of [7] show that it can be further weakened to
Condition 5.1.
(5.1) | ||||
(5.2) | ||||
(5.3) |
It is easy to see that Condition 5.1 is indeed implied by Condition 2.2. Note that the local integrability condition (5.2) implies that
which is equivalent to , where
This condition is needed for to exist for all in the weak sense as shown by Theorem 23.1 of [7] and its subsequent discussions.
Assume Condition 5.1 for general and . If these functions are well approximated by simple functions (e.g., the set of discontinuity points of and is finite in each finite interval) and if for all , then Corollary 3.1 suggests that the stationary density is given by (3.38) under the condition that
(5.4) |
To justify this suggestion, we need to examine the approximation carefully, which we have not yet done. So, we leave it as a conjecture.
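For comparison, the classical stationary density of a one-dimensional reflected diffusion on the half line with sufficiently regular coefficients is recalled below. It is stated here only as a plausible analogue of (3.38), which is not reproduced above, with $\sigma$ and $\mu$ the coefficient functions of (2.1) and $c > 0$ a normalizing constant.
\[
h(x) = \frac{c}{\sigma^{2}(x)} \exp\Bigl( \int_0^x \frac{2\mu(y)}{\sigma^{2}(y)}\, dy \Bigr),
\qquad x \ge 0 .
\]
In the 1-level case this reduces to the exponential density recalled after Theorem 3.1, and for piecewise-constant coefficients it produces a piecewise-exponential density whose pieces are matched through the factor $1/\sigma^{2}(x)$ at the level boundaries.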
Appendix
A.1 Weak solution of a stochastic integral equation
There are two kinds of solutions of a stochastic integral equation such as (2.1). We here only consider them for the SIE (2.1). Recall that this equation is defined on a stochastic basis . If (2.1) holds almost surely on this stochastic basis, then the SIE (2.1) is said to have a strong solution. In this case, the standard Brownian motion is defined on . On the other hand, the SIE (2.1) is said to have a weak solution if there are some stochastic basis and some -adapted process on it such that (2.1) holds almost surely and is the standard Brownian motion under for each , where is the conditional distribution of given (e.g., see [8, Section 5.3]).
It may be better to use a different notation for the weak solution, e.g., . However, we have used the same notation not only for this process but also for the stochastic basis for notational convenience. Thus, when we discuss the weak solution, the stochastic basis is understood to be appropriately replaced.
A.2 Local time and generalized Ito formula
We briefly discuss the local time appearing in the generalized Ito formula (3.12). This Ito formula is also called the Ito-Meyer-Tanaka formula (e.g., see Theorem 6.22 of [8] and Theorem 22.5 of [7]). Let be a continuous semi-martingale with finite quadratic variations for all . For this , the local time for and is defined through
(A.1) |
See Theorem 7.1 of [8] for details about the definition of local time. Note that the local time of [8] is half of the local time in this paper. Applying for to (A.1), we can see that
(A.2) |
This can be used as the definition of the local time.
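Under this normalization, the occupation-density characterization of local time takes the following standard form, which is the content behind displays such as (A.1) and (A.2); the symbol $Y$ for the continuous semi-martingale is illustrative.
\[
\int_0^t g(Y(s))\, d\langle Y \rangle_s = \int_{\mathbb{R}} g(a)\, L_t^a \, da
\quad \text{for all bounded measurable } g,
\qquad
L_t^a = \lim_{\varepsilon \downarrow 0} \frac{1}{\varepsilon}
\int_0^t 1\bigl(a \le Y(s) < a + \varepsilon\bigr)\, d\langle Y \rangle_s .
\]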
There are two versions of the local time since is continuous in , but may not be continuous in . So, usually, the local time is assumed to be right-continuous for the generalized Ito formula (3.12). However, if the finite variation component of is not atomic, then is continuous in (see Theorem 22.4 of [7]). In particular, the finite variation component of is continuous by Lemma 2.2, so we have the following lemma.
Lemma A.1.
For the of a one-dimensional state-dependent SRBM with bounded drifts, its local time is continuous in for each . Furthermore, is finite by (A.1) for .
Let be a concave test function from to , then is a convex function, where , so the generalized Ito formula (3.12) becomes
(A.3) |
For constant , let for (3.12), then and . Hence, it follows from (3.12) that
(A.4) |
Similarly, applying and , we have, by (A.3) and (3.12),
(A.5) | ||||
(A.6) |
where . Note that any one of these three formulas can be used to define the local time . In particular, (A.6) is called the Tanaka formula because it was originally studied for Brownian motion by Tanaka [13].
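For concreteness, the Tanaka formula referred to in (A.6) has the following standard form under the present normalization of local time; the symbol $Y$ is illustrative.
\[
|Y(t) - a| = |Y(0) - a| + \int_0^t \operatorname{sgn}\bigl(Y(s) - a\bigr)\, dY(s) + L_t^a ,
\]
where $\operatorname{sgn}(x) = 1$ for $x > 0$ and $-1$ for $x \le 0$.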
Acknowledgement
This study was originally motivated by the BAR approach, coined by Jim Dai (e.g., see [3]). I am grateful to him for his continuous support of my work. I am also grateful to Rami Atar for his helpful comments on the solution of the stochastic integral equation (2.1). I also benefited from personal communications with Rajeev Bhaskaran. Last but not least, I sincerely thank Krishnamoorthy for encouraging me to present a talk at the International Conference on Advances in Applied Probability and Stochastic Processes, held in Thrissur, India, in January 2024. This paper is written as a follow-up to that talk and my paper [12].
References
- Atar et al. [2022] Atar, R., Castiel, E. and Reiman, M. (2022). Parallel server systems under an extended heavy traffic condition: A lower bound. Tech. rep. Preprint, URL https://arxiv.org/pdf/2201.07855.
- Atar et al. [2023] Atar, R., Castiel, E. and Reiman, M. (2023). Asymptotic optimality of switched control policies in a simple parallel server system under an extended heavy traffic condition. Tech. rep. Submitted for publication, URL https://arxiv.org/pdf/2207.08010.
- Braverman et al. [2024] Braverman, A., Dai, J. and Miyazawa, M. (2024). The BAR-approach for multiclass queueing networks with SBP service policies. Stochastic Systems. Published online in Articles in Advance, URL https://doi.org/10.1287/stsy.2023.0011.
- Chung and Williams [1990] Chung, K. L. and Williams, R. J. (1990). Introduction to Stochastic Integration. 2nd ed. Birkhäuser, Boston.
- Cohen and Elliott [2015] Cohen, S. N. and Elliott, R. J. (2015). Stochastic Calculus and Applications. 2nd ed. Birkhäuser, Springer, New York.
- Harrison [2013] Harrison, J. M. (2013). Brownian Models of Performance and Control. Cambridge University Press, New York.
- Kallenberg [2001] Kallenberg, O. (2001). Foundations of Modern Probability. 2nd ed. Springer Series in Statistics, Probability and its applications, Springer, New York.
- Karatzas and Shreve [1998] Karatzas, I. and Shreve, S. E. (1998). Brownian Motion and Stochastic Calculus, vol. 113 of Graduate Texts in Mathematics. 2nd ed. Springer, New York.
- Kaspi and Mandelbaum [1994] Kaspi, H. and Mandelbaum, A. (1994). On Harris recurrence in continuous time. Mathematics of Operations Research, 19 211–222.
- Kruk et al. [2007] Kruk, L., Lehoczky, J. and Ramanan, K. (2007). An explicit formula for the Skorokhod map on (0,a]. The Annals of Probability, 35 1740–1768.
- Meyn and Tweedie [1993] Meyn, S. P. and Tweedie, R. L. (1993). Stability of Markovian processes II: Continuous time processes and sampled chains. Advances in Applied Probability, 25 487–517.
- Miyazawa [2024] Miyazawa, M. (2024). Diffusion approximation of the stationary distribution of a two-level single server queue. Tech. rep. Submitted for publication, URL https://arxiv.org/abs/2312.11284.
- Tanaka [1963] Tanaka, H. (1963). Note on continuous additive functionals of the 1-dimensional Brownian path. Z. Wahrscheinlichkeitstheorie verw. Gebiete, 1 251–257.
- Tanaka [1979] Tanaka, H. (1979). Stochastic differential equations with reflecting boundary condition in convex regions. Hiroshima Math. J., 9 163–177.