
Asymptotic property of the occupation measures in a two-dimensional skip-free Markov modulated random walk

Toshihisa Ozawa
Faculty of Business Administration, Komazawa University
1-23-1 Komazawa, Setagaya-ku, Tokyo 154-8525, Japan
E-mail: [email protected]
Abstract

We consider a discrete-time two-dimensional process $\{(X_{1,n},X_{2,n})\}$ on $\mathbb{Z}^2$ with a background process $\{J_n\}$ on a finite set $S_0$, where the individual processes $\{X_{1,n}\}$ and $\{X_{2,n}\}$ are both skip free. We assume that the joint process $\{\boldsymbol{Y}_n\}=\{(X_{1,n},X_{2,n},J_n)\}$ is Markovian and that the transition probabilities of the two-dimensional process $\{(X_{1,n},X_{2,n})\}$ vary according to the state of the background process $\{J_n\}$. This modulation is assumed to be space homogeneous. We refer to this process as a two-dimensional skip-free Markov modulated random walk. For $\boldsymbol{y},\boldsymbol{y}'\in\mathbb{Z}_+^2\times S_0$, consider the process $\{\boldsymbol{Y}_n\}_{n\ge 0}$ starting from the state $\boldsymbol{y}$ and let $\tilde{q}_{\boldsymbol{y},\boldsymbol{y}'}$ be the expected number of visits to the state $\boldsymbol{y}'$ before the process leaves the nonnegative area $\mathbb{Z}_+^2\times S_0$ for the first time. For $\boldsymbol{y}=(x_1,x_2,j)\in\mathbb{Z}_+^2\times S_0$, the measure $(\tilde{q}_{\boldsymbol{y},\boldsymbol{y}'};\boldsymbol{y}'=(x_1',x_2',j')\in\mathbb{Z}_+^2\times S_0)$ is called an occupation measure. Our main aim is to obtain the asymptotic decay rate of the occupation measure as the values of $x_1'$ and $x_2'$ go to infinity in a given direction. We also obtain the convergence domain of the matrix moment generating function of the occupation measures.

Keywords: Markov modulated random walk, Markov additive process, occupation measure, asymptotic decay rate, moment generating function, convergence domain

Mathematics Subject Classification: 60J10, 60K25

1 Introduction

We consider a discrete-time two-dimensional process $\{(X_{1,n},X_{2,n})\}$ on $\mathbb{Z}^2$, where $\mathbb{Z}$ is the set of all integers, and a background process $\{J_n\}$ on a finite set $S_0=\{1,2,...,s_0\}$, where $s_0$ is the cardinality of $S_0$. We assume that the individual processes $\{X_{1,n}\}$ and $\{X_{2,n}\}$ are both skip free, which means that their increments take values in $\{-1,0,1\}$. Furthermore, we assume that the joint process $\{\boldsymbol{Y}_n\}=\{(X_{1,n},X_{2,n},J_n)\}$ is Markovian and that the transition probabilities of the two-dimensional process $\{(X_{1,n},X_{2,n})\}$ vary according to the state of the background process $\{J_n\}$. This modulation is assumed to be space homogeneous. We refer to this process as a two-dimensional skip-free Markov modulated random walk (2d-MMRW for short). The state space of the 2d-MMRW is given by $\mathbb{S}=\mathbb{Z}^2\times S_0$. The 2d-MMRW is a two-dimensional Markov additive process (2d-MA-process for short) [6], where $(X_{1,n},X_{2,n})$ is the additive part and $J_n$ the background state. A discrete-time two-dimensional quasi-birth-and-death process [9] (2d-QBD process for short) is a 2d-MMRW with reflecting boundaries on the $x_1$- and $x_2$-axes, where the process $(X_{1,n},X_{2,n})$ is the level and $J_n$ the phase. Stochastic models arising from various Markovian two-queue models and two-node queueing networks, such as two-queue polling models and generalized two-node Jackson networks with Markovian arrival processes and phase-type service processes, can be represented as continuous-time 2d-QBD processes (see, for example, [7] and [9, 10, 11]) and, by using the uniformization technique, they can be reduced to discrete-time 2d-QBD processes. In that sense, (discrete-time) 2d-QBD processes are more versatile than two-dimensional skip-free reflecting random walks (2d-RRWs for short), which are 2d-QBD processes without a phase process and are called double QBD processes in [4]. It is well known that, in general, the stationary distribution of a Markov chain can be represented in terms of its stationary probabilities on some boundary faces and its occupation measures. In the case of a 2d-QBD process, such occupation measures are given as those of the corresponding 2d-MMRW. For this reason, we focus on 2d-MMRWs and study their occupation measures, especially the asymptotic properties of the occupation measures. Here we briefly explain that the skip-free assumption is not so restrictive. For a given $k>1$, assume that the increments of $X_{1,n}$ and $X_{2,n}$ take values in $\{-k,-(k-1),...,0,1,...,k\}$. For $i\in\{1,2\}$, let ${}^k\!X_{i,n}$ and ${}^k\!M_{i,n}$ be the quotient and remainder of $X_{i,n}$ divided by $k$, respectively, where ${}^k\!X_{i,n}\in\mathbb{Z}$ and $0\le{}^k\!M_{i,n}\le k-1$. Then, the process $\{({}^k\!X_{1,n},{}^k\!X_{2,n},({}^k\!M_{1,n},{}^k\!M_{2,n},J_n))\}$ becomes a 2d-MMRW with skip-free jumps, where $({}^k\!X_{1,n},{}^k\!X_{2,n})$ is the level and $({}^k\!M_{1,n},{}^k\!M_{2,n},J_n)$ the background state. Hence, any 2d-MMRW with bounded jumps can be reduced to a 2d-MMRW with skip-free jumps.
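To make the reduction concrete, here is a minimal Python sketch of the quotient/remainder decomposition (the jump bound $k=2$ and the sample path are assumed examples, not from the paper): since $|X_{i,n+1}-X_{i,n}|\le k$, the quotient can change by at most one, so it is skip free, while the remainder stays in $\{0,\dots,k-1\}$.

```python
# Sketch of the skip-free reduction: split a bounded-jump coordinate into
# its quotient (the new skip-free level) and remainder (an extra background
# component). The jump bound k = 2 and the sample path are assumed examples.

def decompose(x, k):
    """Return (q, m) with x = k*q + m and 0 <= m <= k - 1."""
    return divmod(x, k)  # Python's divmod already satisfies 0 <= m < k

k = 2
path = [0, 2, 1, -1, -3, -2]  # one coordinate with increments in {-2,...,2}
quotients = [decompose(x, k)[0] for x in path]
for n in range(1, len(path)):
    assert quotients[n] - quotients[n - 1] in (-1, 0, 1)  # skip free
print([decompose(x, k) for x in path])
```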

Let $P=\left(p_{(x_1,x_2,j),(x_1',x_2',j')};(x_1,x_2,j),(x_1',x_2',j')\in\mathbb{S}\right)$ be the transition probability matrix of the 2d-MMRW $\{\boldsymbol{Y}_n\}$, where $p_{(x_1,x_2,j),(x_1',x_2',j')}=\mathbb{P}(\boldsymbol{Y}_{n+1}=(x_1',x_2',j')\,|\,\boldsymbol{Y}_n=(x_1,x_2,j))$. By the skip-free property, each element of $P$, say $p_{(x_1,x_2,j),(x_1',x_2',j')}$, is nonzero only if $x_1'-x_1\in\{-1,0,1\}$ and $x_2'-x_2\in\{-1,0,1\}$. By the property of space homogeneity, for $(x_1,x_2),(x_1',x_2')\in\mathbb{Z}^2$, $k,l\in\{-1,0,1\}$ and $j,j'\in S_0$, we have $p_{(x_1,x_2,j),(x_1+k,x_2+l,j')}=p_{(x_1',x_2',j),(x_1'+k,x_2'+l,j')}$. Hence, the transition probability matrix $P$ can be represented as a block matrix in terms of only the following $s_0\times s_0$ blocks:

\[
A_{k,l}=\left(p_{(0,0,j),(k,l,j')};\,j,j'\in S_0\right),\quad k,l\in\{-1,0,1\},
\]

i.e., for $(x_1,x_2),(x_1',x_2')\in\mathbb{Z}^2$, the block $P_{(x_1,x_2),(x_1',x_2')}=(p_{(x_1,x_2,j),(x_1',x_2',j')};\,j,j'\in S_0)$ is given as

\[
P_{(x_1,x_2),(x_1',x_2')}=\begin{cases}A_{x_1'-x_1,\,x_2'-x_2}, & \text{if }x_1'-x_1,\,x_2'-x_2\in\{-1,0,1\},\\ O, & \text{otherwise},\end{cases} \tag{1.1}
\]

where $O$ is a matrix of 0's whose dimension is determined in context. Define a set $\mathbb{S}_+$ as $\mathbb{S}_+=\mathbb{Z}_+^2\times S_0$, where $\mathbb{Z}_+$ is the set of all nonnegative integers, and let $\tau$ be the stopping time at which the 2d-MMRW $\{\boldsymbol{Y}_n\}$ enters $\mathbb{S}\setminus\mathbb{S}_+$ for the first time, i.e.,

\[
\tau=\inf\{n\ge 0;\boldsymbol{Y}_n\in\mathbb{S}\setminus\mathbb{S}_+\}.
\]

For $\boldsymbol{y}=(x_1,x_2,j),\boldsymbol{y}'=(x_1',x_2',j')\in\mathbb{S}_+$, let $\tilde{q}_{\boldsymbol{y},\boldsymbol{y}'}$ be the expected number of visits to the state $\boldsymbol{y}'$ before the process $\{\boldsymbol{Y}_n\}$ starting from the state $\boldsymbol{y}$ enters $\mathbb{S}\setminus\mathbb{S}_+$ for the first time, i.e.,

\[
\tilde{q}_{\boldsymbol{y},\boldsymbol{y}'}=\mathbb{E}\bigg(\sum_{n=0}^{\tau-1}1\big(\boldsymbol{Y}_n=\boldsymbol{y}'\big)\,\Big|\,\boldsymbol{Y}_0=\boldsymbol{y}\bigg), \tag{1.2}
\]

where $1(\cdot)$ is an indicator function. For $\boldsymbol{y}\in\mathbb{S}_+$, the measure $(\tilde{q}_{\boldsymbol{y},\boldsymbol{y}'};\boldsymbol{y}'=(x_1',x_2',j')\in\mathbb{S}_+)$ is called an occupation measure. Note that $\tilde{q}_{\boldsymbol{y},\boldsymbol{y}'}$ is the $(\boldsymbol{y},\boldsymbol{y}')$-element of the fundamental matrix of the truncated substochastic matrix $P_+$ given as $P_+=\left(p_{\boldsymbol{y},\boldsymbol{y}'};\boldsymbol{y},\boldsymbol{y}'\in\mathbb{S}_+\right)$, i.e., $\tilde{q}_{\boldsymbol{y},\boldsymbol{y}'}=[\tilde{P}_+]_{\boldsymbol{y},\boldsymbol{y}'}$ and

\[
\tilde{P}_+=\sum_{k=0}^{\infty}P_+^k,
\]

where, for example, $P_+^2=\left(p^{(2)}_{\boldsymbol{y},\boldsymbol{y}'}\right)$ is defined by $p^{(2)}_{\boldsymbol{y},\boldsymbol{y}'}=\sum_{\boldsymbol{y}''\in\mathbb{S}_+}p_{\boldsymbol{y},\boldsymbol{y}''}\,p_{\boldsymbol{y}'',\boldsymbol{y}'}$. $P_+$ governs the transitions of $\{\boldsymbol{Y}_n\}$ on the positive quarter-plane. Our main aim is to obtain the asymptotic decay rate of the occupation measure $(\tilde{q}_{\boldsymbol{y},\boldsymbol{y}'};\boldsymbol{y}'=(x_1',x_2',j')\in\mathbb{S}_+)$ as the values of $x_1'$ and $x_2'$ go to infinity in a given direction. This asymptotic decay rate gives a lower bound for the asymptotic decay rate of the stationary distribution of the corresponding 2d-QBD process in the same direction. Such lower bounds have been obtained for some kinds of multidimensional reflected process without background states; for example, for two-dimensional 0-partially homogeneous chains in [1]; see also the comments on Conjecture 5.1 in [6]. With respect to multidimensional reflected processes with background states, such asymptotic decay rates of the stationary tail distributions in two-dimensional reflected processes have been discussed in [6, 7] by using Markov additive processes and large deviations, but some results seem to be halfway and, in addition, the methods used in those papers are different from ours; we use matrix analytic methods and complex analytic methods. Note that the asymptotic decay rates of the stationary distribution of a 2d-QBD process in the coordinate directions have been obtained in [9, 10].
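For intuition, the fundamental matrix can be approximated numerically by truncating the quarter-plane to a finite box and inverting $I-P_+$; the following is a minimal sketch (the jump distribution, the box size $L$, and $s_0=1$ are illustrative assumptions, not the paper's model):

```python
import numpy as np

# Sketch: approximating an occupation measure by truncating the quarter-plane
# to a box {0,...,L}^2 and inverting I - P_+. Assumptions: s0 = 1 (no
# background phase) and an artificial skip-free jump distribution with
# negative drift, so the occupation measure is finite.
L = 30
p = {(-1, 0): 0.3, (1, 0): 0.1, (0, -1): 0.3, (0, 1): 0.1, (0, 0): 0.2}

n = (L + 1) ** 2
idx = lambda x1, x2: x1 * (L + 1) + x2
P_plus = np.zeros((n, n))
for x1 in range(L + 1):
    for x2 in range(L + 1):
        for (k, l), pr in p.items():
            y1, y2 = x1 + k, x2 + l
            if 0 <= y1 <= L and 0 <= y2 <= L:
                P_plus[idx(x1, x2), idx(y1, y2)] = pr
# transitions leaving the box are dropped, i.e., the walk is killed there

# fundamental matrix: sum_k P_+^k = (I - P_+)^{-1} for transient P_+
N = np.linalg.solve(np.eye(n) - P_plus, np.eye(n))
print(N[idx(0, 0), idx(5, 5)])  # ~ occupation measure from (0,0) at (5,5)
```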

As mentioned above, the 2d-MMRW $\{\boldsymbol{Y}_n\}=\{(X_{1,n},X_{2,n},J_n)\}$ is a 2d-MA-process, where the set of blocks, $\{A_{i,j};i,j\in\{-1,0,1\}\}$, corresponds to the kernel of the 2d-MA-process. Let $A_{*,*}(\theta_1,\theta_2)$ be the matrix moment generating function of the one-step transition probabilities, defined as

\[
A_{*,*}(\theta_1,\theta_2)=\sum_{i,j\in\{-1,0,1\}}e^{i\theta_1+j\theta_2}A_{i,j}.
\]

$A_{*,*}(\theta_1,\theta_2)$ is the Feynman-Kac operator [8] for the 2d-MA-process. For $\boldsymbol{x}=(x_1,x_2),\boldsymbol{x}'=(x_1',x_2')\in\mathbb{Z}_+^2$, define an $s_0\times s_0$ matrix $N_{\boldsymbol{x},\boldsymbol{x}'}$ as $N_{\boldsymbol{x},\boldsymbol{x}'}=(\tilde{q}_{(\boldsymbol{x},j),(\boldsymbol{x}',j')};j,j'\in S_0)$ and $N_{\boldsymbol{x}}$ as $N_{\boldsymbol{x}}=(N_{\boldsymbol{x},\boldsymbol{x}'};\boldsymbol{x}'\in\mathbb{Z}_+^2)$. In terms of $N_{\boldsymbol{x},\boldsymbol{x}'}$, $\tilde{P}_+$ is represented as $\tilde{P}_+=(N_{\boldsymbol{x},\boldsymbol{x}'};\boldsymbol{x},\boldsymbol{x}'\in\mathbb{Z}_+^2)$. For $\boldsymbol{x}=(x_1,x_2)\in\mathbb{Z}_+^2$, let $\Phi_{\boldsymbol{x}}(\theta_1,\theta_2)$ be the matrix moment generating function of the occupation measures, defined as

\[
\Phi_{\boldsymbol{x}}(\theta_1,\theta_2)=\sum_{k_1=0}^{\infty}\sum_{k_2=0}^{\infty}e^{k_1\theta_1+k_2\theta_2}N_{\boldsymbol{x},(k_1,k_2)}.
\]

For $\boldsymbol{x}\in\mathbb{Z}_+^2$, define the convergence domain of $\Phi_{\boldsymbol{x}}(\theta_1,\theta_2)$ as

\[
\mathcal{D}_{\boldsymbol{x}}=\text{the interior of }\{(\theta_1,\theta_2)\in\mathbb{R}^2:\Phi_{\boldsymbol{x}}(\theta_1,\theta_2)<\infty\}.
\]

Define point sets $\Gamma$ and $\mathcal{D}$ as

\[
\begin{aligned}
&\Gamma=\left\{(\theta_1,\theta_2)\in\mathbb{R}^2;\,\mathrm{spr}(A_{*,*}(\theta_1,\theta_2))<1\right\},\\
&\mathcal{D}=\left\{(\theta_1,\theta_2)\in\mathbb{R}^2;\,\text{there exists }(\theta_1',\theta_2')\in\Gamma\text{ such that }(\theta_1,\theta_2)<(\theta_1',\theta_2')\right\},
\end{aligned}
\]

where $\mathrm{spr}(A_{*,*}(\theta_1,\theta_2))$ is the spectral radius of $A_{*,*}(\theta_1,\theta_2)$. In the following sections, we will prove that, for any vector $\boldsymbol{c}=(c_1,c_2)$ of positive integers and for every $j,j'\in S_0$,

\[
\lim_{k\to\infty}\frac{1}{k}\log\tilde{q}_{(0,0,j),(c_1k,c_2k,j')}=-\sup_{\boldsymbol{\theta}\in\Gamma}\,\langle\boldsymbol{c},\boldsymbol{\theta}\rangle,
\]

where $\langle\boldsymbol{c},\boldsymbol{\theta}\rangle$ is the inner product of the vectors $\boldsymbol{c}$ and $\boldsymbol{\theta}$. Furthermore, using this asymptotic property, we will also demonstrate that, for any $\boldsymbol{x}\in\mathbb{Z}_+^2$, $\mathcal{D}_{\boldsymbol{x}}$ is given by $\mathcal{D}$.
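As a numerical illustration of this decay rate, the following grid-search sketch evaluates $\mathrm{spr}(A_{*,*}(\theta_1,\theta_2))$ and maximizes $\langle\boldsymbol{c},\boldsymbol{\theta}\rangle$ over $\Gamma$ (the blocks $A_{i,j}=p(i,j)B$ with $s_0=2$ are assumed toy data, not the paper's model):

```python
import numpy as np

# Sketch: grid estimate of the decay rate -sup_{theta in Gamma} <c, theta>.
# Assumed example: A_{i,j} = p(i,j) * B, where B is a background transition
# matrix and p a skip-free jump law with negative drift.
B = np.array([[0.7, 0.3], [0.4, 0.6]])
p = {(-1, 0): 0.3, (1, 0): 0.1, (0, -1): 0.3, (0, 1): 0.1, (0, 0): 0.2}

def spr(t1, t2):  # spectral radius of A_{*,*}(theta1, theta2)
    M = sum(pr * np.exp(i * t1 + j * t2) * B for (i, j), pr in p.items())
    return max(abs(np.linalg.eigvals(M)))

c = np.array([1.0, 1.0])  # direction of interest
grid = np.linspace(-3.0, 3.0, 301)
best = max((c @ (t1, t2) for t1 in grid for t2 in grid if spr(t1, t2) < 1),
           default=-np.inf)
print("asymptotic decay rate estimate:", -best)
```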

The rest of the paper is organized as follows. In Sect. 2, we present some assumptions and basic properties of the 2d-MMRW. In Sect. 3, we introduce three kinds of one-dimensional QBD process with countably many phases and obtain the convergence parameters of the rate matrices of those QBD processes. In Sect. 4, we consider the matrix generating functions of the occupation measures and demonstrate that the convergence domain of $\Phi_{\boldsymbol{x}}(\theta_1,\theta_2)$ contains $\Gamma$. Sect. 5 is the main section, where the asymptotic decay rates of the occupation measures and the convergence domain of $\Phi_{\boldsymbol{x}}(\theta_1,\theta_2)$ are obtained. The paper concludes with remarks on the asymptotic property of 2d-QBD processes in Sect. 6.


Notation for matrices. For a matrix $A$, we denote by $[A]_{i,j}$ the $(i,j)$-element of $A$. The convergence parameter of a square matrix $A$ of finite or countable dimension is denoted by $\mathrm{cp}(A)$, i.e., $\mathrm{cp}(A)=\sup\{r\in\mathbb{R}_+;\sum_{n=0}^{\infty}r^nA^n<\infty\}$. For a finite-dimensional square matrix $A$, we denote by $\mathrm{spr}(A)$ the spectral radius of $A$, which is the maximum modulus of the eigenvalues of $A$. If a square matrix $A$ is finite and nonnegative, we have $\mathrm{spr}(A)=\mathrm{cp}(A)^{-1}$. The determinant of a square matrix $A$ is denoted by $\det A$ and its adjugate matrix by $\mathrm{adj}\,A$.
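The relation $\mathrm{spr}(A)=\mathrm{cp}(A)^{-1}$ is easy to verify numerically; a small sketch (the matrix $A$ is an arbitrary assumed example):

```python
import numpy as np

# Numerical check of spr(A) = cp(A)^{-1} for a finite nonnegative matrix:
# sum_n r^n A^n stays bounded for r*spr(A) < 1 and blows up for r*spr(A) > 1.
A = np.array([[0.2, 0.5], [0.3, 0.1]])
spr = max(abs(np.linalg.eigvals(A)))
for r in (0.9 / spr, 1.1 / spr):
    S, T = np.zeros_like(A), np.eye(2)
    for _ in range(500):
        S, T = S + T, (r * A) @ T  # T = (rA)^n, S = partial sum
    print(f"r*spr(A) = {r * spr:.1f}: partial-sum max entry = {S.max():.3e}")
```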

2 Preliminaries

We give some assumptions and propositions, which will be necessary in the following sections. First, we assume the following condition.

Assumption 2.1.

The 2d-MMRW $\{\boldsymbol{Y}_n\}=\{(X_{1,n},X_{2,n},J_n)\}$ is irreducible and aperiodic.

Under this assumption, for any $\theta_1,\theta_2\in\mathbb{R}$, $A_{*,*}(\theta_1,\theta_2)$ is also irreducible and aperiodic. Denote $A_{*,*}(0,0)$ by $A_{*,*}$ and define the following matrices: for $i,j\in\{-1,0,1\}$,

\[
A_{*,j}=\sum_{k\in\{-1,0,1\}}A_{k,j},\qquad A_{i,*}=\sum_{k\in\{-1,0,1\}}A_{i,k}.
\]

$A_{*,*}$ is the transition probability matrix of the background process $\{J_n\}$. Since $A_{*,*}$ is finite and irreducible, it is positive recurrent. Denote by $\boldsymbol{\pi}_{*,*}$ the stationary distribution of $A_{*,*}$. Define the mean increment vector of the process $\{\boldsymbol{Y}_n\}$, denoted by $\boldsymbol{a}=(a_1,a_2)$, as follows.

\[
a_1=\boldsymbol{\pi}_{*,*}\left(-A_{-1,*}+A_{1,*}\right)\mathbf{1},\qquad a_2=\boldsymbol{\pi}_{*,*}\left(-A_{*,-1}+A_{*,1}\right)\mathbf{1}, \tag{2.1}
\]

where $\mathbf{1}$ is a column vector of 1's whose dimension is determined in context. With respect to the occupation measures defined in Sect. 1, the following property holds.

Proposition 2.1.

If $a_1<0$ or $a_2<0$, then, for any $\boldsymbol{y}\in\mathbb{S}_+$, the occupation measure $(\tilde{q}_{\boldsymbol{y},\boldsymbol{y}'};\boldsymbol{y}'\in\mathbb{S}_+)$ is finite, i.e.,

\[
\sum_{\boldsymbol{y}'\in\mathbb{S}_+}\tilde{q}_{\boldsymbol{y},\boldsymbol{y}'}=\mathbb{E}(\tau\,|\,\boldsymbol{Y}_0=\boldsymbol{y})<\infty, \tag{2.2}
\]

where $\tau$ is the stopping time at which $\{\boldsymbol{Y}_n\}$ enters $\mathbb{S}\setminus\mathbb{S}_+$ for the first time.

Proof.

Without loss of generality, we assume $a_1<0$. Let $\check{\tau}$ be the stopping time at which $X_{1,n}$ becomes less than 0 for the first time, i.e., $\check{\tau}=\inf\{n\ge 0;X_{1,n}<0\}$. Since $\{(x_1,x_2,j)\in\mathbb{S};x_1<0\}\subset\mathbb{S}\setminus\mathbb{S}_+$, we have $\tau\le\check{\tau}$, and this implies that, for any $\boldsymbol{y}\in\mathbb{S}_+$,

\[
\mathbb{E}(\tau\,|\,\boldsymbol{Y}_0=\boldsymbol{y})\le\mathbb{E}(\check{\tau}\,|\,\boldsymbol{Y}_0=\boldsymbol{y}). \tag{2.3}
\]

Next, we demonstrate that $\mathbb{E}(\check{\tau}\,|\,\boldsymbol{Y}_0=\boldsymbol{y})$ is finite. Consider a one-dimensional QBD process $\{\check{\boldsymbol{Y}}_n\}=\{(\check{X}_n,\check{J}_n)\}$ on $\mathbb{Z}_+\times S_0$, having $A_{-1,*}$, $A_{0,*}$ and $A_{1,*}$ as the transition probability blocks that govern transitions of the QBD process when $\check{X}_n>0$. We assume that the transition probability blocks governing transitions of the QBD process when $\check{X}_n=0$ are given appropriately. Then, $a_1$ is the mean increment of the QBD process when $\check{X}_n>0$, and the assumption $a_1<0$ implies that the QBD process is positive recurrent. Define a stopping time $\check{\tau}^Q$ as $\check{\tau}^Q=\inf\{n\ge 0;\check{X}_n=0\}$. We have, for any $\boldsymbol{y}=(x_1,x_2,j)\in\mathbb{S}$,

\[
\mathbb{E}(\check{\tau}\,|\,\boldsymbol{Y}_0=\boldsymbol{y})=\mathbb{E}(\check{\tau}^Q\,|\,\check{\boldsymbol{Y}}_0=(x_1+1,j))<\infty, \tag{2.4}
\]

and this completes the proof. ∎
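As a numerical illustration of the mean increment vector (2.1), the following sketch (reusing the assumed example blocks $A_{i,j}=p(i,j)B$ from the earlier sketches; not part of the paper) computes $\boldsymbol{\pi}_{*,*}$ and $\boldsymbol{a}=(a_1,a_2)$:

```python
import numpy as np

# Sketch: stationary distribution pi_{*,*} of A_{*,*} and the mean increment
# vector a = (a1, a2) of (2.1), for the assumed blocks A_{i,j} = p(i,j) * B.
B = np.array([[0.7, 0.3], [0.4, 0.6]])
p = {(-1, 0): 0.3, (1, 0): 0.1, (0, -1): 0.3, (0, 1): 0.1, (0, 0): 0.2}
A = {kl: pr * B for kl, pr in p.items()}

A_star = sum(A.values())  # A_{*,*}; here it equals B and is stochastic
w, v = np.linalg.eig(A_star.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()  # left Perron eigenvector, normalized

Z = np.zeros((2, 2))
one = np.ones(2)
a1 = pi @ (-sum(A.get((-1, j), Z) for j in (-1, 0, 1))
           + sum(A.get((1, j), Z) for j in (-1, 0, 1))) @ one
a2 = pi @ (-sum(A.get((i, -1), Z) for i in (-1, 0, 1))
           + sum(A.get((i, 1), Z) for i in (-1, 0, 1))) @ one
print(a1, a2)  # both -0.2 here, so the drift condition assumed next holds
```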

Hereafter, we assume the following condition.

Assumption 2.2.

$a_1<0$ or $a_2<0$.

Let $\chi(\theta_1,\theta_2)$ be the maximum eigenvalue of $A_{*,*}(\theta_1,\theta_2)$. Since $A_{*,*}(\theta_1,\theta_2)$ is nonnegative, irreducible and aperiodic, $\chi(\theta_1,\theta_2)$ is the Perron-Frobenius eigenvalue of $A_{*,*}(\theta_1,\theta_2)$, i.e., $\chi(\theta_1,\theta_2)=\mathrm{spr}(A_{*,*}(\theta_1,\theta_2))$. The modulus of every eigenvalue of $A_{*,*}(\theta_1,\theta_2)$ except $\chi(\theta_1,\theta_2)$ is strictly less than $\mathrm{spr}(A_{*,*}(\theta_1,\theta_2))$. We say that a positive function $f(x,y)$ is log-convex in $(x,y)$ if $\log f(x,y)$ is convex in $(x,y)$. A log-convex function is also a convex function. With respect to $\chi(\theta_1,\theta_2)$, the following property holds.

Proposition 2.2 (Proposition 3.1 of [9]).

$\chi(\theta_1,\theta_2)$ is log-convex, and hence convex, in $(\theta_1,\theta_2)\in\mathbb{R}^2$.

Let $\bar{\Gamma}$ be the closure of $\Gamma$, i.e., $\bar{\Gamma}=\{(\theta_1,\theta_2)\in\mathbb{R}^2:\chi(\theta_1,\theta_2)\le 1\}$. By Proposition 2.2, $\bar{\Gamma}$ is a convex set. Furthermore, the following property holds.

Proposition 2.3 (Lemma 2.2 of [10]).

$\bar{\Gamma}$ is bounded.

For $\boldsymbol{y}=(\boldsymbol{x},j)=(x_1,x_2,j)\in\mathbb{S}_+$, we give an asymptotic inequality for the occupation measure $(\tilde{q}_{\boldsymbol{y},\boldsymbol{y}'};\boldsymbol{y}'\in\mathbb{S}_+)$. Under Assumption 2.2, the occupation measure is finite and $(\tilde{q}_{\boldsymbol{y},\boldsymbol{y}'}/\mathbb{E}(\tau\,|\,\boldsymbol{Y}_0=\boldsymbol{y});\boldsymbol{y}'\in\mathbb{S}_+)$ becomes a probability measure. Let $\boldsymbol{Y}=(\boldsymbol{X},J)=(X_1,X_2,J)$ be a random vector subject to this probability measure, i.e., $\mathbb{P}(\boldsymbol{Y}=\boldsymbol{y}')=\tilde{q}_{\boldsymbol{y},\boldsymbol{y}'}/\mathbb{E}(\tau\,|\,\boldsymbol{Y}_0=\boldsymbol{y})$ for $\boldsymbol{y}'\in\mathbb{S}_+$. By Markov's inequality, for $\boldsymbol{\theta}=(\theta_1,\theta_2)\in\mathbb{R}^2$ and for $\boldsymbol{c}=(c_1,c_2)\in\mathbb{R}_+^2$ such that $\boldsymbol{c}\neq\mathbf{0}$, where $\mathbf{0}$ is a vector of 0's whose dimension is determined in context, we have

\[
\begin{aligned}
\mathbb{E}(e^{\langle\boldsymbol{\theta},\boldsymbol{X}\rangle}1(J=j'))
&\ge e^{k\langle\boldsymbol{\theta},\boldsymbol{c}\rangle}\,\mathbb{P}(e^{\langle\boldsymbol{\theta},\boldsymbol{X}\rangle}1(J=j')\ge e^{k\langle\boldsymbol{\theta},\boldsymbol{c}\rangle})\\
&=e^{k\langle\boldsymbol{\theta},\boldsymbol{c}\rangle}\,\mathbb{P}(\langle\boldsymbol{\theta},\boldsymbol{X}\rangle\ge\langle\boldsymbol{\theta},k\boldsymbol{c}\rangle,\,J=j')\\
&\ge e^{k\langle\boldsymbol{\theta},\boldsymbol{c}\rangle}\,\mathbb{P}(\boldsymbol{X}\ge k\boldsymbol{c},\,J=j').
\end{aligned}
\]

This implies that, for every $l_1,l_2\in\mathbb{Z}_+$,

\[
[\Phi_{\boldsymbol{x}}(\boldsymbol{\theta})]_{j,j'}\ge e^{k\langle\boldsymbol{\theta},\boldsymbol{c}\rangle}\sum_{x_1'\ge kc_1}\ \sum_{x_2'\ge kc_2}\tilde{q}_{\boldsymbol{y},(x_1',x_2',j')}\ge e^{k\langle\boldsymbol{\theta},\boldsymbol{c}\rangle}\,\tilde{q}_{\boldsymbol{y},(k(\lfloor c_1\rfloor+l_1),\,k(\lfloor c_2\rfloor+l_2),\,j')}, \tag{2.5}
\]

where $\lfloor x\rfloor$ is the largest integer less than or equal to $x$. Hence, considering the convergence domain of $\Phi_{\boldsymbol{x}}(\boldsymbol{\theta})$, we immediately obtain the following basic inequality.

Proposition 2.4.

For any $\boldsymbol{c}=(c_1,c_2)\in\mathbb{Z}_+^2$ such that $\boldsymbol{c}\neq\mathbf{0}$ and for every $(\boldsymbol{x},j)\in\mathbb{S}_+$, $j'\in S_0$ and $l_1,l_2\in\mathbb{Z}_+$,

\[
\limsup_{k\to\infty}\frac{1}{k}\log\tilde{q}_{(\boldsymbol{x},j),(kc_1+l_1,kc_2+l_2,j')}\le-\sup_{\boldsymbol{\theta}\in\mathcal{D}_{\boldsymbol{x}}}\langle\boldsymbol{\theta},\boldsymbol{c}\rangle. \tag{2.6}
\]

3 QBD representations with a countable phase space

In order to analyze the occupation measures, we introduce three kinds of one-dimensional QBD process with countably many phases. Let $\{\boldsymbol{Y}_n\}=\{(X_{1,n},X_{2,n},J_n)\}$ be a 2d-MMRW and $\tau$ the stopping time defined in Sect. 1, i.e., $\tau=\inf\{n\ge 0:\boldsymbol{Y}_n\in\mathbb{S}\setminus\mathbb{S}_+\}$. Using $\tau$, we define a process $\{\hat{\boldsymbol{Y}}_n\}=\{(\hat{X}_{1,n},\hat{X}_{2,n},\hat{J}_n)\}$ as

\[
\hat{\boldsymbol{Y}}_n=\boldsymbol{Y}_{\tau\wedge n},\quad n\ge 0,
\]

where $x\wedge y=\min\{x,y\}$. The process $\{\hat{\boldsymbol{Y}}_n\}$ is an absorbing Markov chain on the state space $\mathbb{S}$, where the set of absorbing states is given by $\mathbb{S}\setminus\mathbb{S}_+$. Hereafter, we restrict the state space of $\{\hat{\boldsymbol{Y}}_n\}$ to $\mathbb{S}_+$, on which the transition probability matrix of the process is given by $P_+$. We assume the following condition throughout the paper.

Assumption 3.1.

$P_+$ is irreducible.

Under this assumption, $P$ is irreducible regardless of Assumption 2.1, and every element of $\tilde{P}_+$ is positive. We consider three kinds of QBD representation for $\{\hat{\boldsymbol{Y}}_n\}$: the first is $\{\hat{\boldsymbol{Y}}_n^{(1)}\}=\{(\hat{X}_{1,n},(\hat{X}_{2,n},\hat{J}_n))\}$, where $\hat{X}_{1,n}$ is the level and $(\hat{X}_{2,n},\hat{J}_n)$ the phase; the second is $\{\hat{\boldsymbol{Y}}_n^{(2)}\}=\{(\hat{X}_{2,n},(\hat{X}_{1,n},\hat{J}_n))\}$, where $\hat{X}_{2,n}$ is the level and $(\hat{X}_{1,n},\hat{J}_n)$ the phase; and the third is

\[
\{\hat{\boldsymbol{Y}}_n^{(1,1)}\}=\{(\min\{\hat{X}_{1,n},\hat{X}_{2,n}\},(\hat{X}_{1,n}-\hat{X}_{2,n},\hat{J}_n))\},
\]

where $\min\{\hat{X}_{1,n},\hat{X}_{2,n}\}$ is the level and $(\hat{X}_{1,n}-\hat{X}_{2,n},\hat{J}_n)$ the phase. The state spaces of $\{\hat{\boldsymbol{Y}}_n^{(1)}\}$ and $\{\hat{\boldsymbol{Y}}_n^{(2)}\}$ are given by $\mathbb{S}_+$ and that of $\{\hat{\boldsymbol{Y}}_n^{(1,1)}\}$ by $\mathbb{Z}_+\times\mathbb{Z}\times S_0$. On the state space $\mathbb{S}_+$, the $k$-th level set of $\{\hat{\boldsymbol{Y}}_n^{(1,1)}\}$ is given by

\[
\mathbb{L}_k^{(1,1)}=\left\{(x_1,x_2,j)\in\mathbb{S}_+;\min\{x_1,x_2\}=k\right\},
\]

and the level sets satisfy, for $k\ge 0$, $\mathbb{L}_{k+1}^{(1,1)}=\{(x_1+1,x_2+1,j);(x_1,x_2,j)\in\mathbb{L}_k^{(1,1)}\}$. It can therefore be said that $\{\hat{\boldsymbol{Y}}_n^{(1,1)}\}$ is a QBD process with level direction vector $(1,1)$. Note that the QBD process $\{\hat{\boldsymbol{Y}}_n^{(1,1)}\}$ is constructed according to Example 4.2 of [6]. For $\alpha\in\{(1),(2),(1,1)\}$, the transition probability matrix of $\{\hat{\boldsymbol{Y}}_n^{\alpha}\}$ is given in block tridiagonal form as

\[
P^{\alpha}=\begin{pmatrix}A^{\alpha}_0&A^{\alpha}_1&&&\\ A^{\alpha}_{-1}&A^{\alpha}_0&A^{\alpha}_1&&\\ &A^{\alpha}_{-1}&A^{\alpha}_0&A^{\alpha}_1&\\ &&\ddots&\ddots&\ddots\end{pmatrix}, \tag{3.1}
\]

where, for $i\in\{-1,0,1\}$,

\[
A^{(1)}_i=\begin{pmatrix}A_{i,0}&A_{i,1}&&&\\ A_{i,-1}&A_{i,0}&A_{i,1}&&\\ &A_{i,-1}&A_{i,0}&A_{i,1}&\\ &&\ddots&\ddots&\ddots\end{pmatrix},\qquad
A^{(2)}_i=\begin{pmatrix}A_{0,i}&A_{1,i}&&&\\ A_{-1,i}&A_{0,i}&A_{1,i}&&\\ &A_{-1,i}&A_{0,i}&A_{1,i}&\\ &&\ddots&\ddots&\ddots\end{pmatrix},
\]

and

\[
\begin{aligned}
A^{(1,1)}_{-1}&=\begin{pmatrix}\ddots&\ddots&\ddots&&&&&&\\ &A_{-1,1}&A_{-1,0}&A_{-1,-1}&&&&&\\ &&A_{-1,1}&A_{-1,0}&A_{-1,-1}&A_{0,-1}&A_{1,-1}&&\\ &&&&&A_{-1,-1}&A_{0,-1}&A_{1,-1}&\\ &&&&&&\ddots&\ddots&\ddots\end{pmatrix},\\
A^{(1,1)}_{0}&=\begin{pmatrix}\ddots&\ddots&\ddots&&&&&&\\ &A_{0,1}&A_{0,0}&A_{0,-1}&&&&&\\ &&A_{0,1}&A_{0,0}&A_{0,-1}&A_{1,-1}&&&\\ &&&A_{0,1}&A_{0,0}&A_{1,0}&&&\\ &&&A_{-1,1}&A_{-1,0}&A_{0,0}&A_{1,0}&&\\ &&&&&A_{-1,0}&A_{0,0}&A_{1,0}&\\ &&&&&&\ddots&\ddots&\ddots\end{pmatrix},\\
A^{(1,1)}_{1}&=\begin{pmatrix}\ddots&\ddots&\ddots&&&\\ &A_{1,1}&A_{1,0}&A_{1,-1}&&\\ &&A_{1,1}&A_{1,0}&&\\ &&&A_{1,1}&&\\ &&&A_{0,1}&A_{1,1}&\\ &&&A_{-1,1}&A_{0,1}&A_{1,1}\\ &&&&\ddots&\ddots\end{pmatrix}.
\end{aligned}
\]

For $\alpha\in\{(1),(2),(1,1)\}$, let $R^{\alpha}$ be the rate matrix generated from the triplet $\{A^{\alpha}_{-1},A^{\alpha}_0,A^{\alpha}_1\}$, which is the minimal nonnegative solution to the matrix quadratic equation:

\[
R^{\alpha}=(R^{\alpha})^2A^{\alpha}_{-1}+R^{\alpha}A^{\alpha}_0+A^{\alpha}_1. \tag{3.2}
\]
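When the phase space is finite, the minimal nonnegative solution of an equation of this form can be computed by the classical fixed-point iteration $R_{m+1}=R_m^2A_{-1}+R_mA_0+A_1$ starting from $R_0=O$; a minimal sketch with assumed finite blocks (the blocks $A^{\alpha}_i$ above have countably many phases, so this is only a finite-phase analogue):

```python
import numpy as np

# Sketch: fixed-point iteration for the minimal nonnegative solution of
# R = R^2 A_{-1} + R A_0 + A_1, in the finite-phase case. The blocks are an
# assumed example (row sums of A_{-1} + A_0 + A_1 are <= 1).
A_m1 = np.array([[0.3, 0.1], [0.2, 0.2]])
A_0 = np.array([[0.2, 0.1], [0.1, 0.2]])
A_1 = np.array([[0.1, 0.1], [0.1, 0.1]])

R = np.zeros((2, 2))
for _ in range(1000):
    R_next = R @ R @ A_m1 + R @ A_0 + A_1
    if np.max(np.abs(R_next - R)) < 1e-12:
        break
    R = R_next
print(R)  # approximately the minimal nonnegative solution
```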

We give $\mathrm{cp}(R^{\alpha})$, the convergence parameter of $R^{\alpha}$. For $\theta\in\mathbb{R}$, define a matrix function $A^{\alpha}_{*}(\theta)$ as

\[
A^{\alpha}_{*}(\theta)=e^{-\theta}A^{\alpha}_{-1}+A^{\alpha}_0+e^{\theta}A^{\alpha}_1. \tag{3.3}
\]

Since $P_+$ is irreducible and the number of positive elements in each row and column of $A^{\alpha}_{*}(0)$ is finite, we have, by Lemma 2.5 of [12],

\[
\log\mathrm{cp}(R^{\alpha})=\sup\{\theta\in\mathbb{R};\mathrm{cp}(A^{\alpha}_{*}(\theta))^{-1}<1\}. \tag{3.4}
\]

For $\theta_1,\theta_2\in\mathbb{R}$ and for $i,j\in\{-1,0,1\}$, define matrix functions $A_{*,j}(\theta_1)$ and $A_{i,*}(\theta_2)$ as

\[
A_{*,j}(\theta_1)=\sum_{k\in\{-1,0,1\}}e^{k\theta_1}A_{k,j},\qquad A_{i,*}(\theta_2)=\sum_{k\in\{-1,0,1\}}e^{k\theta_2}A_{i,k}.
\]

The matrix function $A_{*,*}(\theta_1,\theta_2)$ has already been defined in Sect. 1. Note that the point set $\bar{\Gamma}$ is given as $\bar{\Gamma}=\{(\theta_1,\theta_2)\in\mathbb{R}^2;\mathrm{cp}(A_{*,*}(\theta_1,\theta_2))^{-1}\le 1\}$ and it is a closed convex set. For $\alpha\in\{(1),(2),(1,1)\}$, we define three points on the boundary of $\bar{\Gamma}$ as

\[
\begin{aligned}
&\bar{\boldsymbol{\theta}}^{(1)}=(\bar{\theta}_1^{(1)},\bar{\theta}_2^{(1)})=\arg\max_{(\theta_1,\theta_2)\in\bar{\Gamma}}\theta_1,\qquad
\bar{\boldsymbol{\theta}}^{(2)}=(\bar{\theta}_1^{(2)},\bar{\theta}_2^{(2)})=\arg\max_{(\theta_1,\theta_2)\in\bar{\Gamma}}\theta_2,\\
&\bar{\boldsymbol{\theta}}^{(1,1)}=(\bar{\theta}_1^{(1,1)},\bar{\theta}_2^{(1,1)})=\arg\max_{(\theta_1,\theta_2)\in\bar{\Gamma}}(\theta_1+\theta_2).
\end{aligned}
\]

The matrix functions $A^{(1)}_{*}(\theta_1)$ and $A^{(2)}_{*}(\theta_2)$ are given in block tridiagonal form as

\[
\begin{aligned}
A^{(1)}_{*}(\theta_1)&=\begin{pmatrix}A_{*,0}(\theta_1)&A_{*,1}(\theta_1)&&&\\ A_{*,-1}(\theta_1)&A_{*,0}(\theta_1)&A_{*,1}(\theta_1)&&\\ &A_{*,-1}(\theta_1)&A_{*,0}(\theta_1)&A_{*,1}(\theta_1)&\\ &&\ddots&\ddots&\ddots\end{pmatrix},\\
A^{(2)}_{*}(\theta_2)&=\begin{pmatrix}A_{0,*}(\theta_2)&A_{1,*}(\theta_2)&&&\\ A_{-1,*}(\theta_2)&A_{0,*}(\theta_2)&A_{1,*}(\theta_2)&&\\ &A_{-1,*}(\theta_2)&A_{0,*}(\theta_2)&A_{1,*}(\theta_2)&\\ &&\ddots&\ddots&\ddots\end{pmatrix}.
\end{aligned}
\]

Since $A^{(1)}_{*}(\theta_1)$ and $A^{(2)}_{*}(\theta_2)$ are irreducible, we obtain, by Lemma 2.6 of [12],

\[
\mathrm{cp}(A^{(1)}_{*}(\theta_1))=\sup_{\theta_2\in\mathbb{R}}\mathrm{cp}(A_{*,*}(\theta_1,\theta_2)),\qquad \mathrm{cp}(A^{(2)}_{*}(\theta_2))=\sup_{\theta_1\in\mathbb{R}}\mathrm{cp}(A_{*,*}(\theta_1,\theta_2)),
\]

and this, together with (3.4), leads us to the following proposition.

Proposition 3.1.
\[
\log\mathrm{cp}(R^{(1)})=\max_{(\theta_1,\theta_2)\in\bar{\Gamma}}\theta_1=\bar{\theta}_1^{(1)},\qquad \log\mathrm{cp}(R^{(2)})=\max_{(\theta_1,\theta_2)\in\bar{\Gamma}}\theta_2=\bar{\theta}_2^{(2)}. \tag{3.5}
\]
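As an illustration, these extreme points of $\bar{\Gamma}$ can be located numerically by grid search (using the same assumed toy blocks $A_{i,j}=p(i,j)B$ as in the earlier sketches; not the paper's method):

```python
import numpy as np

# Sketch: locating the extreme points of Gamma-bar appearing in Propositions
# 3.1 and 3.2 by grid search, for the assumed blocks A_{i,j} = p(i,j) * B.
B = np.array([[0.7, 0.3], [0.4, 0.6]])
p = {(-1, 0): 0.3, (1, 0): 0.1, (0, -1): 0.3, (0, 1): 0.1, (0, 0): 0.2}

def chi(t1, t2):
    M = sum(pr * np.exp(i * t1 + j * t2) * B for (i, j), pr in p.items())
    return max(abs(np.linalg.eigvals(M)))

grid = np.linspace(-3.0, 3.0, 201)
feas = [(t1, t2) for t1 in grid for t2 in grid if chi(t1, t2) <= 1]
print("theta-bar(1):  ", max(feas, key=lambda t: t[0]))        # log cp(R^(1))
print("theta-bar(2):  ", max(feas, key=lambda t: t[1]))        # log cp(R^(2))
print("theta-bar(1,1):", max(feas, key=lambda t: t[0] + t[1]))  # cf. (3.14)
```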

For $R^{(1,1)}$, we give an upper bound for $\mathrm{cp}(R^{(1,1)})$. $A_{*}^{(1,1)}(\theta)$ is given in block quintuple-diagonal form as

\[
A_{*}^{(1,1)}(\theta)=\begin{pmatrix}\ddots&\ddots&\ddots&\ddots&\ddots&&&&\\ \bar{A}_{*,-2}^{(1,1)}(\theta)&\bar{A}_{*,-1}^{(1,1)}(\theta)&\bar{A}_{*,0}^{(1,1)}(\theta)&\bar{A}_{*,1}^{(1,1)}(\theta)&\bar{A}_{*,2}^{(1,1)}(\theta)&&&&\\ &\bar{A}_{*,-2}^{(1,1)}(\theta)&\bar{A}_{*,-1}^{(1,1)}(\theta)&\bar{A}_{*,0}^{(1,1)}(\theta)&\bar{A}_{*,1}^{(1,1)}(\theta)&A_{1,-1}&&&\\ &&\bar{A}_{*,-2}^{(1,1)}(\theta)&\bar{A}_{*,-1}^{(1,1)}(\theta)&A_{*,0}^{(1,1)}(\theta)&A_{*,1}^{(1,1)}(\theta)&A_{*,2}^{(1,1)}(\theta)&&\\ &&&A_{-1,1}&A_{*,-1}^{(1,1)}(\theta)&A_{*,0}^{(1,1)}(\theta)&A_{*,1}^{(1,1)}(\theta)&A_{*,2}^{(1,1)}(\theta)&\\ &&&&A_{*,-2}^{(1,1)}(\theta)&A_{*,-1}^{(1,1)}(\theta)&A_{*,0}^{(1,1)}(\theta)&A_{*,1}^{(1,1)}(\theta)&A_{*,2}^{(1,1)}(\theta)\\ &&&&\ddots&\ddots&\ddots&\ddots&\ddots\end{pmatrix},
\]

where

\[
\begin{aligned}
&A_{*,-2}^{(1,1)}(\theta)=e^{\theta}A_{-1,1},\quad A_{*,-1}^{(1,1)}(\theta)=A_{-1,0}+e^{\theta}A_{0,1},\quad A_{*,0}^{(1,1)}(\theta)=e^{-\theta}A_{-1,-1}+A_{0,0}+e^{\theta}A_{1,1},\\
&A_{*,1}^{(1,1)}(\theta)=e^{-\theta}A_{0,-1}+A_{1,0},\quad A_{*,2}^{(1,1)}(\theta)=e^{-\theta}A_{1,-1},
\end{aligned}
\]

and

\[
\begin{aligned}
&\bar{A}_{*,-2}^{(1,1)}(\theta)=e^{-2\theta}A_{*,-2}^{(1,1)}(\theta),\quad \bar{A}_{*,-1}^{(1,1)}(\theta)=e^{-\theta}A_{*,-1}^{(1,1)}(\theta),\quad \bar{A}_{*,0}^{(1,1)}(\theta)=A_{*,0}^{(1,1)}(\theta),\\
&\bar{A}_{*,1}^{(1,1)}(\theta)=e^{\theta}A_{*,1}^{(1,1)}(\theta),\quad \bar{A}_{*,2}^{(1,1)}(\theta)=e^{2\theta}A_{*,2}^{(1,1)}(\theta).
\end{aligned}
\]

For $\theta_1,\theta_2\in\mathbb{R}$, define a matrix function $A_{*,*}^{(1,1)}(\theta_1,\theta_2)$ as

\[
A_{*,*}^{(1,1)}(\theta_1,\theta_2)=\sum_{j=-2}^{2}e^{j\theta_2}A_{*,j}^{(1,1)}(\theta_1),
\]

and consider a submatrix of $A_{*}^{(1,1)}(\theta_1)$, denoted by $Q_{*}^{(1,1)}(\theta_1)$, given as

\[
Q_{*}^{(1,1)}(\theta_1)=\begin{pmatrix}A_{*,0}^{(1,1)}(\theta_1)&A_{*,1}^{(1,1)}(\theta_1)&A_{*,2}^{(1,1)}(\theta_1)&&&&\\ A_{*,-1}^{(1,1)}(\theta_1)&A_{*,0}^{(1,1)}(\theta_1)&A_{*,1}^{(1,1)}(\theta_1)&A_{*,2}^{(1,1)}(\theta_1)&&&\\ A_{*,-2}^{(1,1)}(\theta_1)&A_{*,-1}^{(1,1)}(\theta_1)&A_{*,0}^{(1,1)}(\theta_1)&A_{*,1}^{(1,1)}(\theta_1)&A_{*,2}^{(1,1)}(\theta_1)&&\\ &A_{*,-2}^{(1,1)}(\theta_1)&A_{*,-1}^{(1,1)}(\theta_1)&A_{*,0}^{(1,1)}(\theta_1)&A_{*,1}^{(1,1)}(\theta_1)&A_{*,2}^{(1,1)}(\theta_1)&\\ &&\ddots&\ddots&\ddots&\ddots&\ddots\end{pmatrix},
\]

which satisfies $\mathrm{cp}(A_{*}^{(1,1)}(\theta_1))\le\mathrm{cp}(Q_{*}^{(1,1)}(\theta_1))$. Since $Q_{*}^{(1,1)}(\theta_1)$ is a block quintuple-diagonal matrix, we have, by Remark 2.5 of [12], $\mathrm{cp}(Q_{*}^{(1,1)}(\theta_1))=\sup_{\theta_2\in\mathbb{R}}\mathrm{cp}(A_{*,*}^{(1,1)}(\theta_1,\theta_2))$, and this implies

\[
\log\mathrm{cp}(R^{(1,1)})\le\sup\{\theta_1\in\mathbb{R};\mathrm{cp}(A_{*,*}^{(1,1)}(\theta_1,\theta_2))^{-1}<1\ \text{for some}\ \theta_2\in\mathbb{R}\}. \tag{3.6}
\]

On the other hand, we have

\[
\begin{aligned}
A_{*,*}^{(1,1)}(\theta_1+\theta_2,\theta_1)
&=e^{-2\theta_1}e^{\theta_1+\theta_2}A_{-1,1}+e^{-\theta_1}(A_{-1,0}+e^{\theta_1+\theta_2}A_{0,1})+(e^{-(\theta_1+\theta_2)}A_{-1,-1}+A_{0,0}+e^{\theta_1+\theta_2}A_{1,1})\\
&\qquad+e^{\theta_1}(e^{-(\theta_1+\theta_2)}A_{0,-1}+A_{1,0})+e^{2\theta_1}e^{-(\theta_1+\theta_2)}A_{1,-1}\\
&=A_{*,*}(\theta_1,\theta_2),
\end{aligned} \tag{3.7}
\]

and this implies

\[
\begin{aligned}
\{\theta_1\in\mathbb{R};\mathrm{cp}(A_{*,*}^{(1,1)}(\theta_1,\theta_2))^{-1}<1\ \text{for some}\ \theta_2\in\mathbb{R}\}
&=\{\theta_1+\theta_2\in\mathbb{R};\mathrm{cp}(A_{*,*}^{(1,1)}(\theta_1+\theta_2,\theta_1))^{-1}<1\}\\
&=\{\theta_1+\theta_2\in\mathbb{R};\mathrm{cp}(A_{*,*}(\theta_1,\theta_2))^{-1}<1\}.
\end{aligned} \tag{3.11}
\]

Hence, we obtain the following proposition.

Proposition 3.2.
\[
\log\mathrm{cp}(R^{(1,1)})\le\max_{(\theta_1,\theta_2)\in\bar{\Gamma}}(\theta_1+\theta_2)=\bar{\theta}_1^{(1,1)}+\bar{\theta}_2^{(1,1)}. \tag{3.14}
\]

4 Convergence domain of the moment generating functions

In this section, we prove that, for any $\boldsymbol{x}\in\mathbb{Z}_+^2$, the convergence domain of the matrix moment generating function $\Phi_{\boldsymbol{x}}(\theta_1,\theta_2)$ includes the point set $\Gamma$. For this purpose, we introduce generating functions of the occupation measures since, in the analysis of the convergence domain, they are more convenient than the moment generating functions.

4.1 Matrix generating functions of the occupation measures

Recall that $\tilde{P}_+$ is the fundamental matrix of the substochastic matrix $P_+$ and each row of $\tilde{P}_+$ is an occupation measure. Furthermore, $\tilde{P}_+$ is represented in block form as $\tilde{P}_+=(N_{\boldsymbol{x}};\boldsymbol{x}\in\mathbb{Z}_+^2)=(N_{\boldsymbol{x},\boldsymbol{x}'};\boldsymbol{x},\boldsymbol{x}'\in\mathbb{Z}_+^2)$, where $N_{\boldsymbol{x}}=(N_{\boldsymbol{x},\boldsymbol{x}'};\boldsymbol{x}'\in\mathbb{Z}_+^2)$ and $N_{\boldsymbol{x},\boldsymbol{x}'}=(\tilde{q}_{(\boldsymbol{x},j),(\boldsymbol{x}',j')};j,j'\in S_0)$.

For $\boldsymbol{x}\in\mathbb{Z}_+^2$, let $\hat{\Phi}_{\boldsymbol{x}}(z_1,z_2)$ be the matrix generating function of the occupation measures, defined as

\[
\hat{\Phi}_{\boldsymbol{x}}(z_1,z_2)=\sum_{k_1=0}^{\infty}\sum_{k_2=0}^{\infty}z_1^{k_1}z_2^{k_2}N_{\boldsymbol{x},(k_1,k_2)}.
\]

In terms of $\hat{\Phi}_{\boldsymbol{x}}(z_1,z_2)$, the matrix moment generating function $\Phi_{\boldsymbol{x}}(\theta_1,\theta_2)$ defined in Sect. 1 is given as $\Phi_{\boldsymbol{x}}(\theta_1,\theta_2)=\hat{\Phi}_{\boldsymbol{x}}(e^{\theta_1},e^{\theta_2})$. The matrix generating function $\hat{\Phi}_{\boldsymbol{x}}(z_1,z_2)$ satisfies

\[
\hat{\Phi}_{\boldsymbol{x}}(z_1,z_2)=N_{\boldsymbol{x},(0,0)}+\hat{\Phi}_{\boldsymbol{x}}^{(1)}(z_1)+\hat{\Phi}_{\boldsymbol{x}}^{(2)}(z_2)+\hat{\Phi}_{\boldsymbol{x}}^{(+)}(z_1,z_2), \tag{4.1}
\]

where

\[
\begin{aligned}
&\hat{\Phi}_{\boldsymbol{x}}^{(1)}(z_1)=\sum_{k_1=1}^{\infty}z_1^{k_1}N_{\boldsymbol{x},(k_1,0)},\qquad \hat{\Phi}_{\boldsymbol{x}}^{(2)}(z_2)=\sum_{k_2=1}^{\infty}z_2^{k_2}N_{\boldsymbol{x},(0,k_2)},\\
&\hat{\Phi}_{\boldsymbol{x}}^{(+)}(z_1,z_2)=\sum_{k_1=1}^{\infty}\sum_{k_2=1}^{\infty}z_1^{k_1}z_2^{k_2}N_{\boldsymbol{x},(k_1,k_2)}.
\end{aligned}
\]

Define the following matrix functions:

\[
\begin{aligned}
&\hat{C}(z_1,z_2)=\sum_{i,j\in\{-1,0,1\}}z_1^iz_2^jA_{i,j},\qquad \hat{C}_0(z_1,z_2)=\sum_{i,j\in\{0,1\}}z_1^iz_2^jA_{i,j},\\
&\hat{C}_1(z_1,z_2)=\sum_{i\in\{-1,0,1\}}\sum_{j\in\{0,1\}}z_1^iz_2^jA_{i,j},\qquad \hat{C}_2(z_1,z_2)=\sum_{i\in\{0,1\}}\sum_{j\in\{-1,0,1\}}z_1^iz_2^jA_{i,j},
\end{aligned}
\]

where $A_{*,*}(\theta_1,\theta_2)=\hat{C}(e^{\theta_1},e^{\theta_2})$. Under Assumption 2.2, each row sum of $\tilde{P}_+$ is finite and we obtain $\tilde{P}_+P_+<\infty$. This leads us to the following recursive formula for $\tilde{P}_+$:

\[
\tilde{P}_+=I+\tilde{P}_+P_+. \tag{4.2}
\]

Considering the block structure of $\tilde{P}_+$, we obtain, for $\boldsymbol{x}\in\mathbb{Z}_+^2$, the following recursive formula for $N_{\boldsymbol{x}}$:

\[
N_{\boldsymbol{x}}=\big(1(\boldsymbol{x}'=\boldsymbol{x})I;\boldsymbol{x}'\in\mathbb{Z}_+^2\big)+N_{\boldsymbol{x}}P_+, \tag{4.3}
\]

and this leads us to

\[
\begin{aligned}
\hat{\Phi}_{\boldsymbol{x}}(z_1,z_2)&=z_1^{x_1}z_2^{x_2}I+N_{\boldsymbol{x},(0,0)}\hat{C}_0(z_1,z_2)+\hat{\Phi}_{\boldsymbol{x}}^{(1)}(z_1)\hat{C}_1(z_1,z_2)\\
&\qquad+\hat{\Phi}_{\boldsymbol{x}}^{(2)}(z_2)\hat{C}_2(z_1,z_2)+\hat{\Phi}_{\boldsymbol{x}}^{(+)}(z_1,z_2)\hat{C}(z_1,z_2).
\end{aligned} \tag{4.4}
\]

Combining this equation with (4.1), we obtain

\[
\begin{aligned}
&\hat{\Phi}_{\boldsymbol{x}}^{(+)}(z_1,z_2)(I-\hat{C}(z_1,z_2))+\hat{\Phi}_{\boldsymbol{x}}^{(1)}(z_1)(I-\hat{C}_1(z_1,z_2))\\
&\qquad+\hat{\Phi}_{\boldsymbol{x}}^{(2)}(z_2)(I-\hat{C}_2(z_1,z_2))+N_{\boldsymbol{x},(0,0)}(I-\hat{C}_0(z_1,z_2))-z_1^{x_1}z_2^{x_2}I=O.
\end{aligned} \tag{4.6}
\]

This equation will be a key to investigating the convergence domain.

4.2 Radii of convergence of $\hat{\Phi}_{\boldsymbol{x}}^{(1)}(z)$ and $\hat{\Phi}_{\boldsymbol{x}}^{(2)}(z)$

For $\boldsymbol{y}=(\boldsymbol{x},j)=(x_1,x_2,j)\in\mathbb{S}_+$, $x_1',x_2'\in\mathbb{Z}_+$ and $j'\in S_0$, define generating functions $\hat{\varphi}_{\boldsymbol{y},(x_2',j')}^{(1)}(z)$ and $\hat{\varphi}_{\boldsymbol{y},(x_1',j')}^{(2)}(z)$ as

\[
\hat{\varphi}_{\boldsymbol{y},(x_2',j')}^{(1)}(z)=\sum_{k=0}^{\infty}\tilde{q}_{\boldsymbol{y},(k,x_2',j')}z^k,\qquad \hat{\varphi}_{\boldsymbol{y},(x_1',j')}^{(2)}(z)=\sum_{k=0}^{\infty}\tilde{q}_{\boldsymbol{y},(x_1',k,j')}z^k,
\]

and denote by $r_{\boldsymbol{y},(x_2',j')}^{(1)}$ and $r_{\boldsymbol{y},(x_1',j')}^{(2)}$ their respective radii of convergence, i.e.,

\[
r_{\boldsymbol{y},(x_2',j')}^{(1)}=\sup\{r\ge 0;\hat{\varphi}_{\boldsymbol{y},(x_2',j')}^{(1)}(r)<\infty\},\qquad r_{\boldsymbol{y},(x_1',j')}^{(2)}=\sup\{r\ge 0;\hat{\varphi}_{\boldsymbol{y},(x_1',j')}^{(2)}(r)<\infty\}.
\]

We have

\[
\hat{\Phi}_{\boldsymbol{x}}^{(1)}(z)=\big(\hat{\varphi}_{(\boldsymbol{x},j),(0,j')}^{(1)}(z);j,j'\in S_0\big),\qquad \hat{\Phi}_{\boldsymbol{x}}^{(2)}(z)=\big(\hat{\varphi}_{(\boldsymbol{x},j),(0,j')}^{(2)}(z);j,j'\in S_0\big),
\]

and hence, in order to know the radii of convergence of $\hat{\Phi}_{\boldsymbol{x}}^{(1)}(z)$ and $\hat{\Phi}_{\boldsymbol{x}}^{(2)}(z)$, it suffices to obtain $r_{(\boldsymbol{x},j),(0,j')}^{(1)}$ and $r_{(\boldsymbol{x},j),(0,j')}^{(2)}$ for $j,j'\in S_0$. For this purpose, we present a couple of propositions.

Proposition 4.1.

For every $\boldsymbol{y},\boldsymbol{y}'\in\mathbb{S}_+$, $k\in\mathbb{Z}_+$ and $l\in S_0$, we have $r_{\boldsymbol{y},(k,l)}^{(1)}=r_{\boldsymbol{y}',(k,l)}^{(1)}$ and $r_{\boldsymbol{y},(k,l)}^{(2)}=r_{\boldsymbol{y}',(k,l)}^{(2)}$.

Proof.

Recall that, for $\boldsymbol{y},(k_1,k_2,l)\in\mathbb{S}_+$, $\tilde{q}_{\boldsymbol{y},(k_1,k_2,l)}$ is given by

\[
\tilde{q}_{\boldsymbol{y},(k_1,k_2,l)}=\mathbb{E}\bigg(\sum_{n=0}^{\infty}1(\boldsymbol{Y}_n=(k_1,k_2,l))1(\tau>n)\,\Big|\,\boldsymbol{Y}_0=\boldsymbol{y}\bigg),
\]

where $\tau$ is the stopping time defined as $\tau=\inf\{n\ge 0:\boldsymbol{Y}_n\in\mathbb{S}\setminus\mathbb{S}_+\}$. For any $\boldsymbol{y}'\in\mathbb{S}_+$, since $P_+$ is irreducible, there exists $n_0\ge 0$ such that $\mathbb{P}(\boldsymbol{Y}_{n_0}=\boldsymbol{y}'\,|\,\boldsymbol{Y}_0=\boldsymbol{y})>0$. Using this $n_0$, we obtain

\begin{align}
\tilde{q}_{\boldsymbol{y},(k_1,k_2,l)}&\ge\mathbb{E}\bigg(\sum_{n=n_0}^{\infty}1(\boldsymbol{Y}_n=(k_1,k_2,l))1(\tau>n)\,\Big|\,\boldsymbol{Y}_{n_0}=\boldsymbol{y}'\bigg)\mathbb{P}(\boldsymbol{Y}_{n_0}=\boldsymbol{y}'\,|\,\boldsymbol{Y}_0=\boldsymbol{y}) \tag{4.8}\\
&=\tilde{q}_{\boldsymbol{y}',(k_1,k_2,l)}\,\mathbb{P}(\boldsymbol{Y}_{n_0}=\boldsymbol{y}'\,|\,\boldsymbol{Y}_0=\boldsymbol{y}), \tag{4.9}
\end{align}

and this implies that $r_{\boldsymbol{y},(k_2,l)}^{(1)}\le r_{\boldsymbol{y}',(k_2,l)}^{(1)}$. Exchanging $\boldsymbol{y}$ with $\boldsymbol{y}'$, we also obtain $r_{\boldsymbol{y}',(k_2,l)}^{(1)}\le r_{\boldsymbol{y},(k_2,l)}^{(1)}$, and this leads us to $r_{\boldsymbol{y},(k_2,l)}^{(1)}=r_{\boldsymbol{y}',(k_2,l)}^{(1)}$. The other equation, $r_{\boldsymbol{y},(k_1,l)}^{(2)}=r_{\boldsymbol{y}',(k_1,l)}^{(2)}$, can be obtained analogously. ∎

Next, we consider the matrix generating functions in matrix-geometric form corresponding to $\{\hat{\boldsymbol{Y}}_n^{(1)}\}$ and $\{\hat{\boldsymbol{Y}}_n^{(2)}\}$. For $k\in\mathbb{Z}_+$, define matrices $N^{(1)}_{0,k}$ and $N^{(2)}_{0,k}$ and matrix generating functions $\hat{\Phi}^{(1)}_0(z)$ and $\hat{\Phi}^{(2)}_0(z)$ as

\[
\begin{aligned}
&N^{(1)}_{0,k}=\big(N_{(0,x_2),(k,x_2')};x_2,x_2'\in\mathbb{Z}_+\big),\qquad N^{(2)}_{0,k}=\big(N_{(x_1,0),(x_1',k)};x_1,x_1'\in\mathbb{Z}_+\big),\\
&\hat{\Phi}^{(1)}_0(z)=\sum_{k=0}^{\infty}N^{(1)}_{0,k}z^k=\Big(\big(\hat{\varphi}^{(1)}_{(0,x_2,j),(x_2',j')}(z);j,j'\in S_0\big);x_2,x_2'\in\mathbb{Z}_+\Big),\\
&\hat{\Phi}^{(2)}_0(z)=\sum_{k=0}^{\infty}N^{(2)}_{0,k}z^k=\Big(\big(\hat{\varphi}^{(2)}_{(x_1,0,j),(x_1',j')}(z);j,j'\in S_0\big);x_1,x_1'\in\mathbb{Z}_+\Big).
\end{aligned}
\]

Further define N0(1)N^{(1)}_{0} and N0(2)N^{(2)}_{0} as

\[
N^{(1)}_0=\big(N^{(1)}_{0,k};k\in\mathbb{Z}_+\big),\qquad N^{(2)}_0=\big(N^{(2)}_{0,k};k\in\mathbb{Z}_+\big).
\]

From (4.2), we obtain, for $i\in\{1,2\}$,

\[
\tilde{P}^{(i)}=I+\tilde{P}^{(i)}P^{(i)}, \tag{4.10}
\]

where $\tilde{P}^{(i)}=\sum_{k=0}^{\infty}(P^{(i)})^k$ and $P^{(i)}$ is given by (3.1). This leads us to, for $i\in\{1,2\}$,

\[
N^{(i)}_0=\begin{pmatrix}I&O&\cdots\end{pmatrix}+N^{(i)}_0P^{(i)}, \tag{4.11}
\]

and we obtain, for i{1,2}i\in\{1,2\},

\begin{align}
&N_{0,0}^{(i)}=I+N_{0,0}^{(i)}A_0^{(i)}+N_{0,1}^{(i)}A_{-1}^{(i)}, \tag{4.12}\\
&N_{0,k}^{(i)}=N_{0,k-1}^{(i)}A_1^{(i)}+N_{0,k}^{(i)}A_0^{(i)}+N_{0,k+1}^{(i)}A_{-1}^{(i)},\quad k\ge 1. \tag{4.13}
\end{align}

The solution to equation (4.13) is given as

\[
N_{0,k}^{(i)}=N_{0,0}^{(i)}(R^{(i)})^k,\qquad N_{0,0}^{(i)}=(I-A_0^{(i)}-R^{(i)}A_{-1}^{(i)})^{-1}=\sum_{k=0}^{\infty}(A_0^{(i)}+R^{(i)}A_{-1}^{(i)})^k, \tag{4.14}
\]

where we use the fact that $\mathrm{cp}\big(A_0^{(i)}+R^{(i)}A_{-1}^{(i)}\big)^{-1}<1$ since $\tilde{P}^{(i)}$ is finite. From (4.14) and Fubini's theorem, we obtain

\[
\hat{\Phi}_0^{(i)}(z)=\sum_{k=0}^{\infty}N_{0,0}^{(i)}(R^{(i)})^kz^k=N_{0,0}^{(i)}\sum_{k=0}^{\infty}(zR^{(i)})^k. \tag{4.15}
\]

This leads us to the following proposition.

Proposition 4.2.

There exist some states $(0,x_2,j)$ and $(x_1',0,j')$ in $\mathbb{S}_+$ such that, for every $k\in\mathbb{Z}_+$ and $l\in S_0$, $r_{(0,x_2,j),(k,l)}^{(1)}=\mathrm{cp}(R^{(1)})=e^{\bar{\theta}_1^{(1)}}$ and $r_{(x_1',0,j'),(k,l)}^{(2)}=\mathrm{cp}(R^{(2)})=e^{\bar{\theta}_2^{(2)}}$.

Proof.

Define $R^{(1)}(z)$ as $R^{(1)}(z)=\sum_{k=0}^{\infty}(zR^{(1)})^k$. Recall that the $((x_2,j),(x_2',j'))$-element of $\hat{\Phi}_0^{(1)}(z)$ is $\hat{\varphi}_{(0,x_2,j),(x_2',j')}^{(1)}(z)$. Furthermore, $P_+$ is irreducible and every element of $N_{0,0}^{(1)}$ is positive. Hence, by (4.15), we have

\[
\hat{\Phi}_0^{(1)}(z)<\infty\Rightarrow R^{(1)}(z)<\infty, \tag{4.16}
\]

and this implies that, for every $x_2,x_2'\in\mathbb{Z}_+$ and every $j,j'\in S_0$, $r_{(0,x_2,j),(x_2',j')}^{(1)}\le\mathrm{cp}(R^{(1)})$. On the other hand, we have $R^{(1)}=A_1^{(1)}N_{0,0}^{(1)}$ and, by Fubini's theorem,

\[
zA_1^{(1)}\hat{\Phi}_0^{(1)}(z)=\sum_{k=0}^{\infty}zA_1^{(1)}N_{0,0}^{(1)}(zR^{(1)})^k=\sum_{k=0}^{\infty}(zR^{(1)})^{k+1}\le R^{(1)}(z). \tag{4.17}
\]

Since $A_1^{(1)}$ is a block tridiagonal matrix and the size of each block is finite, the number of positive elements in each row of $A_1^{(1)}$ is finite. Since $P_+$ is irreducible, at least one element of $A_1^{(1)}$, say the $((x_0,j_0),(x_2,j))$-element, is positive. Hence, we have, for every $k\in\mathbb{Z}_+$ and $l\in S_0$,

\[
R^{(1)}(z)<\infty\Rightarrow\hat{\varphi}_{(0,x_2,j),(k,l)}^{(1)}(z)<\infty, \tag{4.18}
\]

and this implies that $\mathrm{cp}(R^{(1)})\le r_{(0,x_2,j),(k,l)}^{(1)}$. As a result, we obtain, for some $x_2\in\mathbb{Z}_+$ and $j\in S_0$ and for every $k\in\mathbb{Z}_+$ and $l\in S_0$, $r_{(0,x_2,j),(k,l)}^{(1)}=\mathrm{cp}(R^{(1)})$.

Analogously, we obtain, for some $x_1\in\mathbb{Z}_+$ and $j\in S_0$ and for every $k\in\mathbb{Z}_+$ and $l\in S_0$, $r_{(x_1,0,j),(k,l)}^{(2)}=\mathrm{cp}(R^{(2)})$. ∎
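The matrix-geometric relation (4.14) underlying this proof can be checked numerically on a finite truncation; a minimal sketch (a one-phase QBD with assumed scalar blocks, killed below level 0; not the paper's model):

```python
import numpy as np

# Sketch: checking N_{0,k} = N_{0,0} R^k of (4.14) on a one-phase QBD killed
# below level 0 and truncated at a large level L. The scalar "blocks" are an
# assumed example with row sum 0.9 < 1 and negative drift.
A_m1, A_0, A_1 = 0.4, 0.3, 0.2

L = 200
P = np.zeros((L + 1, L + 1))
for x in range(L + 1):
    if x > 0:
        P[x, x - 1] = A_m1   # level 0 loses this mass (killed below 0)
    P[x, x] = A_0
    if x < L:
        P[x, x + 1] = A_1

N = np.linalg.solve(np.eye(L + 1) - P, np.eye(L + 1))  # fundamental matrix

R = 0.0  # minimal nonnegative root of R = R^2 A_{-1} + R A_0 + A_1
for _ in range(10000):
    R = R * R * A_m1 + R * A_0 + A_1

for k in range(4):
    print(N[0, k], N[0, 0] * R ** k)  # the two values should agree
```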

Propositions 4.1 and 4.2 lead us to the following lemma.

Lemma 4.1.

For every $\boldsymbol{y}\in\mathbb{S}_+$ and for every $x_1',x_2'\in\mathbb{Z}_+$ and $j'\in S_0$, we have $r_{\boldsymbol{y},(x_2',j')}^{(1)}=\mathrm{cp}(R^{(1)})=e^{\bar{\theta}_1^{(1)}}$ and $r_{\boldsymbol{y},(x_1',j')}^{(2)}=\mathrm{cp}(R^{(2)})=e^{\bar{\theta}_2^{(2)}}$. Hence, for every $\boldsymbol{x}\in\mathbb{Z}_+^2$, the radius of convergence of $\hat{\Phi}_{\boldsymbol{x}}^{(1)}(z)$ is given by $e^{\bar{\theta}_1^{(1)}}$ and that of $\hat{\Phi}_{\boldsymbol{x}}^{(2)}(z)$ by $e^{\bar{\theta}_2^{(2)}}$.

4.3 Radius of convergence of another matrix generating function

In the previous subsection, we defined the matrix generating functions of the occupation measures in matrix-geometric form corresponding to $\{\hat{\boldsymbol{Y}}_n^{(1)}\}$ and $\{\hat{\boldsymbol{Y}}_n^{(2)}\}$. In this subsection, we consider the one corresponding to $\{\hat{\boldsymbol{Y}}_n^{(1,1)}\}$. For $k\in\mathbb{Z}_+$, define $N_{0,k}^{(1,1)}$ as

\[
\begin{aligned}
N_{0,k}^{(1,1)}&=\left(N_{(x_1,x_2),(x_1',x_2')};\min\{x_1,x_2\}=0,\,x_1-x_2\in\mathbb{Z},\,\min\{x_1',x_2'\}=k,\,x_1'-x_2'\in\mathbb{Z}\right)\\
&=\begin{pmatrix}&\vdots&\vdots&\vdots&\\ \cdots&N_{(0,1),(k,k+1)}&N_{(0,1),(k,k)}&N_{(0,1),(k+1,k)}&\cdots\\ \cdots&N_{(0,0),(k,k+1)}&N_{(0,0),(k,k)}&N_{(0,0),(k+1,k)}&\cdots\\ \cdots&N_{(1,0),(k,k+1)}&N_{(1,0),(k,k)}&N_{(1,0),(k+1,k)}&\cdots\\ &\vdots&\vdots&\vdots&\end{pmatrix}.
\end{aligned}
\]

Then, in a manner similar to that used for deriving (4.14), we obtain

\[
N_{0,k}^{(1,1)}=N_{0,0}^{(1,1)}(R^{(1,1)})^k,\qquad N_{0,0}^{(1,1)}=(I-A_0^{(1,1)}-R^{(1,1)}A_{-1}^{(1,1)})^{-1}. \tag{4.19}
\]

Define a matrix generating function Φ^0(1,1)(z)\hat{\Phi}_{0}^{(1,1)}(z) as

\[
\hat{\Phi}_0^{(1,1)}(z)=\sum_{k=0}^{\infty}N_{0,k}^{(1,1)}z^k=N_{0,0}^{(1,1)}\sum_{k=0}^{\infty}(zR^{(1,1)})^k.
\]

For $k,k'\in\mathbb{Z}$ and $j,j'\in S_0$, let $\hat{\varphi}_{(k,j),(k',j')}^{(1,1)}(z)$ be the $((k,j),(k',j'))$-element of $\hat{\Phi}_0^{(1,1)}(z)$ and denote by $r_{(k,j),(k',j')}^{(1,1)}$ the radius of convergence of $\hat{\varphi}_{(k,j),(k',j')}^{(1,1)}(z)$. In terms of $\tilde{q}_{\boldsymbol{y},\boldsymbol{y}'}$, $\hat{\varphi}_{(k,j),(k',j')}^{(1,1)}(z)$ is given as

φ^(k,j),(k,j)(1,1)(z)=l=0q~(k0,(k0),j),(l+(k0),l(k0),j)zl,\hat{\varphi}_{(k,j),(k^{\prime},j^{\prime})}^{(1,1)}(z)=\sum_{l=0}^{\infty}\tilde{q}_{(k\vee 0,-(k\wedge 0),j),(l+(k^{\prime}\vee 0),l-(k^{\prime}\wedge 0),j^{\prime})}z^{l}, (4.20)

where xy=max{x,y}x\vee y=\max\{x,y\} and xy=min{x,y}x\wedge y=\min\{x,y\}. We have the following property of r(k,j),(k,j)(1,1)r_{(k,j),(k^{\prime},j^{\prime})}^{(1,1)}, which corresponds to Lemma 4.1.

Lemma 4.2.

For every $k,k^{\prime}\in\mathbb{Z}$ and $j,j^{\prime}\in S_{0}$, we have $r_{(k,j),(k^{\prime},j^{\prime})}^{(1,1)}=\mbox{\rm cp}(R^{(1,1)})$. Hence, the radius of convergence of $\hat{\Phi}_{0}^{(1,1)}(z)$ is given by $\mbox{\rm cp}(R^{(1,1)})$.

Since this lemma can be proved in a manner similar to that used for proving Propositions 4.1 and 4.2, we omit the proof.

4.4 Convergence domain of $\Phi_{\boldsymbol{x}}(\theta_{1},\theta_{2})$

Recall that, for $\boldsymbol{x}\in\mathbb{Z}_{+}^{2}$, the convergence domain of the matrix moment generating function $\Phi_{\boldsymbol{x}}(\theta_{1},\theta_{2})$ is given as $\mathcal{D}_{\boldsymbol{x}}=\mbox{the interior of }\{(\theta_{1},\theta_{2})\in\mathbb{R}^{2}:\Phi_{\boldsymbol{x}}(\theta_{1},\theta_{2})<\infty\}$. This domain does not depend on $\boldsymbol{x}$.

Proposition 4.3.

For every $\boldsymbol{x},\boldsymbol{x}^{\prime}\in\mathbb{Z}_{+}^{2}$, $\mathcal{D}_{\boldsymbol{x}}=\mathcal{D}_{\boldsymbol{x}^{\prime}}$.

Proof.

For every $\boldsymbol{x},\boldsymbol{x}^{\prime}\in\mathbb{Z}_{+}^{2}$ and $j\in S_{0}$, since $P_{+}$ is irreducible, there exists $n_{0}\geq 0$ such that $\mathbb{P}(\boldsymbol{Y}_{n_{0}}=(\boldsymbol{x}^{\prime},j)\,|\,\boldsymbol{Y}_{0}=(\boldsymbol{x},j))>0$. Using this $n_{0}$ and inequality (4.9), we obtain, for every $\boldsymbol{x},\boldsymbol{x}^{\prime}\in\mathbb{Z}_{+}^{2}$ and $j,j^{\prime}\in S_{0}$,

[\Phi_{\boldsymbol{x}}(\theta_{1},\theta_{2})]_{j,j^{\prime}}=\sum_{k_{1}=0}^{\infty}\sum_{k_{2}=0}^{\infty}\tilde{q}_{(\boldsymbol{x},j),(k_{1},k_{2},j^{\prime})}e^{\theta_{1}k_{1}+\theta_{2}k_{2}}  (4.21)
\geq\sum_{k_{1}=0}^{\infty}\sum_{k_{2}=0}^{\infty}\tilde{q}_{(\boldsymbol{x}^{\prime},j),(k_{1},k_{2},j^{\prime})}e^{\theta_{1}k_{1}+\theta_{2}k_{2}}\,\mathbb{P}(\boldsymbol{Y}_{n_{0}}=(\boldsymbol{x}^{\prime},j)\,|\,\boldsymbol{Y}_{0}=(\boldsymbol{x},j))  (4.22)
=[\Phi_{\boldsymbol{x}^{\prime}}(\theta_{1},\theta_{2})]_{j,j^{\prime}}\,\mathbb{P}(\boldsymbol{Y}_{n_{0}}=(\boldsymbol{x}^{\prime},j)\,|\,\boldsymbol{Y}_{0}=(\boldsymbol{x},j)),  (4.23)

and this implies $\mathcal{D}_{\boldsymbol{x}}\subset\mathcal{D}_{\boldsymbol{x}^{\prime}}$. Exchanging $\boldsymbol{x}$ with $\boldsymbol{x}^{\prime}$, we obtain $\mathcal{D}_{\boldsymbol{x}^{\prime}}\subset\mathcal{D}_{\boldsymbol{x}}$, and this completes the proof. ∎

A relation between the point set $\Gamma$ and the domain $\mathcal{D}_{\boldsymbol{x}}$ is given as follows.

Lemma 4.3.

For every $\boldsymbol{x}\in\mathbb{Z}_{+}^{2}$, $\Gamma\subset\mathcal{D}_{\boldsymbol{x}}$ and hence $\mathcal{D}\subset\mathcal{D}_{\boldsymbol{x}}$.

We use complex analytic methods for proving this lemma. Letting $z$ and $w$ be complex variables, we define complex matrix functions $\hat{\Phi}_{\boldsymbol{x}}(z,w)$, $\hat{\Phi}_{\boldsymbol{x}}^{(1)}(z)$, $\hat{\Phi}_{\boldsymbol{x}}^{(2)}(w)$, $\hat{\Phi}_{\boldsymbol{x}}^{(+)}(z,w)$, $\hat{C}(z,w)$, $\hat{C}_{0}(z,w)$, $\hat{C}_{1}(z,w)$ and $\hat{C}_{2}(z,w)$ in the same manner as in the real-variable case. They satisfy equation (4.1). For $r>0$, denote by $\Delta_{r}$, $\bar{\Delta}_{r}$ and $\partial\Delta_{r}$ the open disk, the closed disk and the circle with center $0$ and radius $r$ in the complex plane, respectively. Since every element of $\tilde{P}_{+}$ is nonnegative, we immediately obtain, by Lemma 4.1, the following proposition.

Proposition 4.4.

Under Assumption 3.1, for every $\boldsymbol{x}\in\mathbb{Z}_{+}^{2}$, $\hat{\Phi}_{\boldsymbol{x}}^{(1)}(z)$ is element-wise analytic in $\Delta_{e^{\bar{\theta}_{1}^{(1)}}}$ and $\hat{\Phi}_{\boldsymbol{x}}^{(2)}(w)$ is element-wise analytic in $\Delta_{e^{\bar{\theta}_{2}^{(2)}}}$.

Recall that the matrix generating function $\hat{\Phi}_{\boldsymbol{x}}^{(+)}(z,w)$ is the function of two variables defined by the power series

\hat{\Phi}_{\boldsymbol{x}}^{(+)}(z,w)=\sum_{k_{1}=1}^{\infty}\sum_{k_{2}=1}^{\infty}N_{\boldsymbol{x},(k_{1},k_{2})}z^{k_{1}}w^{k_{2}},  (4.24)

which is also the Taylor series for the function $\hat{\Phi}_{\boldsymbol{x}}^{(+)}(z,w)$ at $(z,w)=(0,0)$. Since $\Phi_{\boldsymbol{x}}^{(+)}(\theta_{1},\theta_{2})=\hat{\Phi}_{\boldsymbol{x}}^{(+)}(e^{\theta_{1}},e^{\theta_{2}})$, we use power series (4.24) for proving Lemma 4.3. Under Assumption 2.2, since $\hat{\Phi}_{\boldsymbol{x}}^{(+)}(1,1)<\infty$, power series (4.24) converges absolutely on $\bar{\Delta}_{1}\times\bar{\Delta}_{1}$ and $\hat{\Phi}_{\boldsymbol{x}}^{(+)}(z,w)$ is analytic in $\Delta_{1}\times\Delta_{1}$. Lemma 4.3 asserts that, for every $(p,q)\in\Gamma$, power series (4.24) converges absolutely in $\bar{\Delta}_{e^{p}}\times\bar{\Delta}_{e^{q}}$; to prove it, it suffices to show that, for every $(p,q)\in\Gamma$, power series (4.24) converges at $(z,w)=(e^{p},e^{q})$, since every coefficient in the power series is nonnegative. Define a matrix function $H(z,w)$ as $H(z,w)=F(z,w)/g(z,w)$, where

F(z,w)=z^{s_{0}}w^{s_{0}}\bigl(\hat{\Phi}_{\boldsymbol{x}}^{(1)}(z)(\hat{C}_{1}(z,w)-I)+\hat{\Phi}_{\boldsymbol{x}}^{(2)}(w)(\hat{C}_{2}(z,w)-I)  (4.25)
\qquad+N_{\boldsymbol{x},(0,0)}(\hat{C}_{0}(z,w)-I)+z^{x_{1}}w^{x_{2}}\bigr)\,\mbox{\rm adj}(I-\hat{C}(z,w)),  (4.26)
g(z,w)=z^{s_{0}}w^{s_{0}}\det(I-\hat{C}(z,w)).  (4.27)

Note that the inverse of $I-\hat{C}(z,w)$ is given by $\mbox{\rm adj}(I-\hat{C}(z,w))/\det(I-\hat{C}(z,w))$. The function $g(z,w)$ is analytic in $\mathbb{C}^{2}$ and, by Proposition 4.4, each element of $F(z,w)$ is analytic in $\Delta_{e^{\bar{\theta}_{1}^{(1)}}}\times\Delta_{e^{\bar{\theta}_{2}^{(2)}}}$. For $(z,w)\in\bar{\Delta}_{1}\times\bar{\Delta}_{1}$, since power series (4.24) is absolutely convergent, equation (4.7) holds and we have $\hat{\Phi}_{\boldsymbol{x}}^{(+)}(z,w)=H(z,w)$. Hence, by the identity theorem, we see that $H(z,w)$ is an analytic extension of $\hat{\Phi}_{\boldsymbol{x}}^{(+)}(z,w)$. Hereafter, we denote the analytic extension by the same notation $\hat{\Phi}_{\boldsymbol{x}}^{(+)}(z,w)$. In the proof of Lemma 4.3, we use the following proposition.

Proposition 4.5 (Proposition 4.1 of [10]).

Under Assumption 2.1, for $z,w\in\mathbb{C}$ such that $z\neq 0$ and $w\neq 0$, if $|z|\neq z$ or $|w|\neq w$, then $\mbox{\rm spr}(\hat{C}(z,w))<\mbox{\rm spr}(\hat{C}(|z|,|w|))$.

Figure 1: Convergence domain of $\hat{\Phi}_{\boldsymbol{x}}^{(+)}(e^{\theta_{1}},e^{\theta_{2}})$
Proof of Lemma 4.3.

By Lemma 4.1, the radius of convergence of $\hat{\Phi}_{\boldsymbol{x}}^{(1)}(z)$ is $e^{\bar{\theta}_{1}^{(1)}}$ and that of $\hat{\Phi}_{\boldsymbol{x}}^{(2)}(w)$ is $e^{\bar{\theta}_{2}^{(2)}}$. Since, for every $(p,q)\in\Gamma$, we have $p<\bar{\theta}_{1}^{(1)}$ and $q<\bar{\theta}_{2}^{(2)}$, both $\hat{\Phi}_{\boldsymbol{x}}^{(1)}(z)$ and $\hat{\Phi}_{\boldsymbol{x}}^{(2)}(w)$ converge absolutely at every $(z,w)\in\Delta_{e^{p}}\times\Delta_{e^{q}}$. Hence, from (4.1), we see that, in order to prove the lemma, it suffices to show that, for every $(p,q)\in\Gamma$, power series (4.24) converges at $(z,w)=(e^{p},e^{q})$. We now demonstrate this.

Under Assumption 2.2, either $a_{1}$ or $a_{2}$ is negative. Here, we assume $a_{1}<0$; the proof for the case where $a_{2}<0$ is analogous. By Lemma 2.3 of [10], we have $\chi_{\theta_{1}}(0,0)=a_{1}<0$, where $\chi_{\theta_{1}}(\theta_{1},\theta_{2})=(\partial/\partial\theta_{1})\,\chi(\theta_{1},\theta_{2})$, and this implies that $\Gamma$ includes the open interval $\{(\theta_{1},0)\in\mathbb{R}^{2};0<\theta_{1}<p^{*}\}$ in the $(\theta_{1},\theta_{2})$-plane, where $p^{*}$ is the value of $\theta_{1}$ that satisfies $\chi(p^{*},0)=1$ (see Fig. 1). Let $(p,q)$ be an arbitrary point in $\Gamma$ and consider a path on $\Gamma\cup\{(0,0)\}$ connecting the points $(p_{0},q_{0})=(0,0)$, $(p_{1},q_{0})$, $(p_{1},q)$ and $(p,q)$ by line segments in this order, where we assume $0<p_{1}<p^{*}$ (see Fig. 1). Such a choice is always possible since the closure $\bar{\Gamma}$ of $\Gamma$ is a closed convex set.

First, we consider a matrix function of one variable given by $\hat{\Phi}_{\boldsymbol{x}}^{(+)}(z,1)$. The Taylor series for $\hat{\Phi}_{\boldsymbol{x}}^{(+)}(z,1)$ at $z=0$ is given by power series (4.24) with $w$ set to $1$, and $\hat{\Phi}_{\boldsymbol{x}}^{(+)}(z,1)$ is identical to $H(z,1)$ on a domain where $H(z,1)$ is well defined. Let $\varepsilon$ be a sufficiently small positive number. Since $g(z,1)$ and each element of $F(z,1)$ are analytic as functions of one variable in $\Delta_{e^{p_{1}}+\varepsilon}$, $H(z,1)$ is meromorphic as a function of one variable in that domain. We have, for $z\in\Delta_{e^{p_{1}}+\varepsilon}\setminus\bar{\Delta}_{1}$, $\mbox{\rm spr}(\hat{C}(z,1))\leq\mbox{\rm spr}(\hat{C}(|z|,1))<1$, and by Proposition 4.5, we have, for $z\in\partial\Delta_{1}\setminus\{1\}$, $\mbox{\rm spr}(\hat{C}(z,1))<\mbox{\rm spr}(\hat{C}(|z|,1))=1$. Hence, $g(z,1)\neq 0$ for any $z\in\Delta_{e^{p_{1}}+\varepsilon}\setminus(\Delta_{1}\cup\{1\})$, and each element of $H(z,1)$ is analytic on $\Delta_{e^{p_{1}}+\varepsilon}\setminus(\Delta_{1}\cup\{1\})$. Furthermore, we have $\hat{\Phi}_{\boldsymbol{x}}^{(+)}(1,1)=H(1,1)<\infty$, and this implies that the point $z=1$ is not a pole of any element of $H(z,1)$; hence, it is a removable singularity. From this and the fact that $\hat{\Phi}_{\boldsymbol{x}}^{(+)}(z,1)$ is analytic in $\Delta_{1}$, we see that $\hat{\Phi}_{\boldsymbol{x}}^{(+)}(z,1)$ is analytic in $\Delta_{e^{p_{1}}+\varepsilon}$. This implies that the radius of convergence of the Taylor series for $\hat{\Phi}_{\boldsymbol{x}}^{(+)}(z,1)$ at $z=0$ is greater than $e^{p_{1}}$, so power series (4.24) converges at $(z,w)=(e^{p_{1}},1)$.

Next, we consider a matrix function of one variable given by $\hat{\Phi}_{\boldsymbol{x}}^{(+)}(e^{p_{1}},w)$. By the fact obtained above, the Taylor series for $\hat{\Phi}_{\boldsymbol{x}}^{(+)}(e^{p_{1}},w)$ at $w=0$ is given by power series (4.24) with $z$ set to $e^{p_{1}}$, and $\hat{\Phi}_{\boldsymbol{x}}^{(+)}(e^{p_{1}},w)$ is analytic as a function of one variable in $\Delta_{1}$. Furthermore, we know that $\hat{\Phi}_{\boldsymbol{x}}^{(+)}(e^{p_{1}},w)$ is identical to $H(e^{p_{1}},w)$ on a domain where $H(e^{p_{1}},w)$ is well defined. If $q\leq 0$, then it is obvious that power series (4.24) converges at $(z,w)=(e^{p_{1}},e^{q})$. Therefore, we assume $q>0$. Let $\varepsilon$ be a sufficiently small positive number. For $w\in\Delta_{e^{q}+\varepsilon}\setminus\bar{\Delta}_{1-\varepsilon}$, we have $\mbox{\rm spr}(\hat{C}(e^{p_{1}},w))\leq\mbox{\rm spr}(\hat{C}(e^{p_{1}},|w|))<1$. Hence, for the same reason as in the case of $H(z,1)$, we see that $H(e^{p_{1}},w)$ is analytic in $\Delta_{e^{q}+\varepsilon}\setminus\bar{\Delta}_{1-\varepsilon}$, and this implies that $\hat{\Phi}_{\boldsymbol{x}}^{(+)}(e^{p_{1}},w)$ is analytic in $\Delta_{e^{q}+\varepsilon}$. Hence, the radius of convergence of the Taylor series for $\hat{\Phi}_{\boldsymbol{x}}^{(+)}(e^{p_{1}},w)$ at $w=0$ is greater than $e^{q}$, and power series (4.24) converges at $(z,w)=(e^{p_{1}},e^{q})$. Applying a similar procedure to the matrix function $\hat{\Phi}_{\boldsymbol{x}}^{(+)}(z,e^{q})$, we see that power series (4.24) converges at $(z,w)=(e^{p},e^{q})$, and this completes the proof. ∎

In the following section, we will prove that $\mathcal{D}=\mathcal{D}_{\boldsymbol{x}}$ holds for every $\boldsymbol{x}\in\mathbb{Z}_{+}^{2}$ (see Corollary 5.1).

5 Asymptotics of the occupation measures

5.1 Asymptotic decay rate in an arbitrary direction

By Lemma 4.1 and the Cauchy-Hadamard theorem, we have, for every $x_{1},x_{2},x_{1}^{\prime},x_{2}^{\prime}\in\mathbb{Z}_{+}$ and $j,j^{\prime}\in S_{0}$,

\limsup_{k\to\infty}\frac{1}{k}\log\tilde{q}_{(0,x_{2},j),(k,x_{2}^{\prime},j^{\prime})}=-\sup_{(\theta_{1},\theta_{2})\in\Gamma}\theta_{1}=-\sup_{\boldsymbol{\theta}\in\Gamma}\langle(1,0),\boldsymbol{\theta}\rangle,  (5.1)
\limsup_{k\to\infty}\frac{1}{k}\log\tilde{q}_{(x_{1},0,j),(x_{1}^{\prime},k,j^{\prime})}=-\sup_{(\theta_{1},\theta_{2})\in\Gamma}\theta_{2}=-\sup_{\boldsymbol{\theta}\in\Gamma}\langle(0,1),\boldsymbol{\theta}\rangle.  (5.2)

Furthermore, by (4.14) and Corollary 2.1 of [12], "$\limsup$" in equations (5.1) and (5.2) can be replaced with "$\lim$". The following result is inferred from these equations.

Theorem 5.1.

For any vector $\boldsymbol{c}=(c_{1},c_{2})$ of positive integers, for every $x_{1},x_{2}\in\mathbb{Z}_{+}$ such that $x_{1}=0$ or $x_{2}=0$, for every $l_{1},l_{2}\in\mathbb{Z}_{+}$ such that $l_{1}=0$ or $l_{2}=0$ and for every $j,j^{\prime}\in S_{0}$,

\lim_{k\to\infty}\frac{1}{k}\log\tilde{q}_{(x_{1},x_{2},j),(c_{1}k+l_{1},c_{2}k+l_{2},j^{\prime})}=-\sup_{\boldsymbol{\theta}\in\Gamma}\,\langle\boldsymbol{c},\boldsymbol{\theta}\rangle.  (5.3)

In order to prove this theorem, we introduce another representation of the 2d-MMRW $\{\boldsymbol{Y}_{n}\}=\{(X_{1,n},X_{2,n},J_{n})\}$. Let $\boldsymbol{c}=(c_{1},c_{2})$ be a vector of positive integers. For $i\in\{1,2\}$, denote by ${}^{\boldsymbol{c}}\!X_{i,n}$ and ${}^{\boldsymbol{c}}\!M_{i,n}$ the quotient and remainder of $X_{i,n}$ divided by $c_{i}$, respectively, i.e.,

X_{1,n}=c_{1}\,{}^{\boldsymbol{c}}\!X_{1,n}+{}^{\boldsymbol{c}}\!M_{1,n},\quad X_{2,n}=c_{2}\,{}^{\boldsymbol{c}}\!X_{2,n}+{}^{\boldsymbol{c}}\!M_{2,n},

where $0\leq{}^{\boldsymbol{c}}\!M_{1,n}\leq c_{1}-1$ and $0\leq{}^{\boldsymbol{c}}\!M_{2,n}\leq c_{2}-1$. Define a process $\{{}^{\boldsymbol{c}}\boldsymbol{Y}_{n}\}$ as $\{{}^{\boldsymbol{c}}\boldsymbol{Y}_{n}\}=\{({}^{\boldsymbol{c}}\!X_{1,n},{}^{\boldsymbol{c}}\!X_{2,n},{}^{\boldsymbol{c}}\!\boldsymbol{J}_{n})\}$, where ${}^{\boldsymbol{c}}\!\boldsymbol{J}_{n}=({}^{\boldsymbol{c}}\!M_{1,n},{}^{\boldsymbol{c}}\!M_{2,n},J_{n})$. The process $\{{}^{\boldsymbol{c}}\boldsymbol{Y}_{n}\}$ is a 2d-MMRW with background process $\{{}^{\boldsymbol{c}}\!\boldsymbol{J}_{n}\}$ and its state space is given by $\mathbb{Z}^{2}\times(\mathbb{Z}_{0,c_{1}-1}\times\mathbb{Z}_{0,c_{2}-1}\times S_{0})$, where, for $k,l\in\mathbb{Z}$ such that $k\leq l$, we denote by $\mathbb{Z}_{k,l}$ the set of integers from $k$ through $l$, i.e., $\mathbb{Z}_{k,l}=\{k,k+1,...,l\}$. The transition probability matrix of $\{{}^{\boldsymbol{c}}\boldsymbol{Y}_{n}\}$, denoted by ${}^{\boldsymbol{c}}\!P$, has a double-tridiagonal block structure like $P$. Denote by ${}^{\boldsymbol{c}}\!A_{i,j},\,i,j\in\{-1,0,1\}$, the nonzero blocks of ${}^{\boldsymbol{c}}\!P$. For a positive integer $k$ and for $k_{1},k_{2}\in\{1,2,...,k\}$, define a $k\times k$ matrix $E_{(k_{1},k_{2})}^{[k]}$ as
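In concrete terms, the quotient/remainder decomposition above is floor division, which behaves as required also for negative coordinates; a two-line Python illustration with $c_{i}=3$ (the sample values are arbitrary):

# x = 3*q + m with q = floor(x/3) and 0 <= m <= 2
for x in (7, -5):
    print(x, divmod(x, 3))   # prints (2, 1) for 7 and (-2, 1) for -5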

[E_{(k_{1},k_{2})}^{[k]}]_{i,j}=\left\{\begin{array}{ll}1,&\mbox{$i=k_{1}$ and $j=k_{2}$},\cr 0,&\mbox{otherwise}.\end{array}\right.

Define the following $c_{1}\times c_{1}$ block matrices: for $j\in\{-1,0,1\}$,

B_{-1,j}=E_{(1,c_{1})}^{[c_{1}]}\otimes A_{-1,j},\quad B_{0,j}=\begin{pmatrix}A_{0,j}&A_{1,j}&&&\cr A_{-1,j}&A_{0,j}&A_{1,j}&&\cr&\ddots&\ddots&\ddots&\cr&&A_{-1,j}&A_{0,j}&A_{1,j}\cr&&&A_{-1,j}&A_{0,j}\end{pmatrix},\quad B_{1,j}=E_{(c_{1},1)}^{[c_{1}]}\otimes A_{1,j},

where $\otimes$ is the Kronecker product operator. Each block ${}^{\boldsymbol{c}}\!A_{i,j}$ is a $c_{1}c_{2}\times c_{1}c_{2}$ block matrix, given as follows: for $i\in\{-1,0,1\}$,

{}^{\boldsymbol{c}}\!A_{i,-1}=E_{(1,c_{2})}^{[c_{2}]}\otimes B_{i,-1},\quad {}^{\boldsymbol{c}}\!A_{i,0}=\begin{pmatrix}B_{i,0}&B_{i,1}&&&\cr B_{i,-1}&B_{i,0}&B_{i,1}&&\cr&\ddots&\ddots&\ddots&\cr&&B_{i,-1}&B_{i,0}&B_{i,1}\cr&&&B_{i,-1}&B_{i,0}\end{pmatrix},\quad {}^{\boldsymbol{c}}\!A_{i,1}=E_{(c_{2},1)}^{[c_{2}]}\otimes B_{i,1}.

Define a matrix function ${}^{\boldsymbol{c}}\!A_{*,*}(\theta_{1},\theta_{2})$ as

{}^{\boldsymbol{c}}\!A_{*,*}(\theta_{1},\theta_{2})=\sum_{i,j\in\{-1,0,1\}}e^{i\theta_{1}+j\theta_{2}}\,{}^{\boldsymbol{c}}\!A_{i,j}.

The following relation holds between $A_{*,*}(\theta_{1},\theta_{2})$ and ${}^{\boldsymbol{c}}\!A_{*,*}(\theta_{1},\theta_{2})$.

Proposition 5.1.

For any vector $\boldsymbol{c}=(c_{1},c_{2})$ of positive integers, we have

\mbox{\rm cp}(A_{*,*}(\theta_{1},\theta_{2}))=\mbox{\rm cp}({}^{\boldsymbol{c}}\!A_{*,*}(c_{1}\theta_{1},c_{2}\theta_{2})).  (5.4)

Since the proof of this proposition is elementary, we give it in Appendix A.
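Proposition 5.1 is also easy to test numerically for small block sizes. The following Python sketch (the helper names and the random stochastic blocks are our illustrative assumptions) assembles the blocks $B_{i,j}$ and ${}^{\boldsymbol{c}}\!A_{i,j}$ exactly as displayed above and compares the two sides of (5.4); for finite irreducible nonnegative matrices, $\mbox{\rm cp}(\cdot)$ is the reciprocal of the spectral radius, so it suffices to compare spectral radii.

import numpy as np

def unit_matrix(k, k1, k2):
    # E_{(k1,k2)}^{[k]}: k x k matrix with a single one at position (k1, k2), 1-indexed
    E = np.zeros((k, k))
    E[k1 - 1, k2 - 1] = 1.0
    return E

def block_tridiag(sub, diag, sup, c):
    # c x c block-tridiagonal matrix: `diag` on the diagonal, `sup` above it, `sub` below it
    s = diag.shape[0]
    M = np.zeros((c * s, c * s))
    for i in range(c):
        M[i*s:(i+1)*s, i*s:(i+1)*s] = diag
        if i + 1 < c:
            M[i*s:(i+1)*s, (i+1)*s:(i+2)*s] = sup
            M[(i+1)*s:(i+2)*s, i*s:(i+1)*s] = sub
    return M

def c_blocks(A, c1, c2):
    # A[i+1, j+1] holds A_{i,j}; returns the blocks cA_{i,j} of the c-scaled walk
    B = {}
    for j in (-1, 0, 1):
        B[(-1, j)] = np.kron(unit_matrix(c1, 1, c1), A[0, j + 1])
        B[(0, j)] = block_tridiag(A[0, j + 1], A[1, j + 1], A[2, j + 1], c1)
        B[(1, j)] = np.kron(unit_matrix(c1, c1, 1), A[2, j + 1])
    cA = {}
    for i in (-1, 0, 1):
        cA[(i, -1)] = np.kron(unit_matrix(c2, 1, c2), B[(i, -1)])
        cA[(i, 0)] = block_tridiag(B[(i, -1)], B[(i, 0)], B[(i, 1)], c2)
        cA[(i, 1)] = np.kron(unit_matrix(c2, c2, 1), B[(i, 1)])
    return cA

spr = lambda M: max(abs(np.linalg.eigvals(M)))

rng = np.random.default_rng(0)
s0, c1, c2, t1, t2 = 2, 2, 3, 0.2, -0.1
A = rng.random((3, 3, s0, s0))
A /= A.sum(axis=(0, 1, 3))[None, None, :, None]   # the nine blocks sum to a stochastic matrix

Astar = sum(np.exp(i*t1 + j*t2) * A[i+1, j+1] for i in (-1, 0, 1) for j in (-1, 0, 1))
cAstar = sum(np.exp(i*c1*t1 + j*c2*t2) * cA_ij
             for (i, j), cA_ij in c_blocks(A, c1, c2).items())
print(spr(Astar), spr(cAstar))   # (5.4): the two spectral radii agree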

Proof of Theorem 5.1.

From Proposition 2.4 and Lemma 4.3, we immediately obtain, for every $x_{1},x_{2}\in\mathbb{Z}_{+}$, $l_{1},l_{2}\in\mathbb{Z}_{+}$ and $j,j^{\prime}\in S_{0}$,

\limsup_{k\to\infty}\frac{1}{k}\log\tilde{q}_{(x_{1},x_{2},j),(c_{1}k+l_{1},c_{2}k+l_{2},j^{\prime})}\leq-\sup_{\boldsymbol{\theta}\in\Gamma}\,\langle\boldsymbol{c},\boldsymbol{\theta}\rangle.  (5.5)

In order to obtain the lower bounds, define a process $\{{}^{\boldsymbol{c}}\hat{\boldsymbol{Y}}_{n}\}=\{({}^{\boldsymbol{c}}\!\hat{X}_{1,n},{}^{\boldsymbol{c}}\!\hat{X}_{2,n},{}^{\boldsymbol{c}}\!\hat{\boldsymbol{J}}_{n})\}$ as ${}^{\boldsymbol{c}}\hat{\boldsymbol{Y}}_{n}={}^{\boldsymbol{c}}\boldsymbol{Y}_{\tau\wedge n}$, where $\tau$ is the stopping time given as $\tau=\inf\{n\geq 0;\boldsymbol{Y}_{n}\in\mathbb{S}\setminus\mathbb{S}_{+}\}$. Analogously to $\{\hat{\boldsymbol{Y}}_{n}^{(1,1)}\}$ considered in Sect. 3, define a process $\{{}^{\boldsymbol{c}}\hat{\boldsymbol{Y}}_{n}^{(1,1)}\}$ as

{}^{\boldsymbol{c}}\hat{\boldsymbol{Y}}_{n}^{(1,1)}=(\min\{{}^{\boldsymbol{c}}\!\hat{X}_{1,n},{}^{\boldsymbol{c}}\!\hat{X}_{2,n}\},({}^{\boldsymbol{c}}\!\hat{X}_{1,n}-{}^{\boldsymbol{c}}\!\hat{X}_{2,n},{}^{\boldsymbol{c}}\!\hat{\boldsymbol{J}}_{n})).

The process $\{{}^{\boldsymbol{c}}\hat{\boldsymbol{Y}}_{n}^{(1,1)}\}$ is a QBD process with countably many phases, where $\min\{{}^{\boldsymbol{c}}\!\hat{X}_{1,n},{}^{\boldsymbol{c}}\!\hat{X}_{2,n}\}$ is the level and $({}^{\boldsymbol{c}}\!\hat{X}_{1,n}-{}^{\boldsymbol{c}}\!\hat{X}_{2,n},{}^{\boldsymbol{c}}\!\hat{\boldsymbol{J}}_{n})$ the phase. Let ${}^{\boldsymbol{c}}\!R^{(1,1)}$ be the rate matrix of $\{{}^{\boldsymbol{c}}\hat{\boldsymbol{Y}}_{n}^{(1,1)}\}$. By Propositions 3.2 and 5.1, we have

\log\mbox{\rm cp}({}^{\boldsymbol{c}}\!R^{(1,1)})\leq\sup\{\theta_{1}+\theta_{2}\in\mathbb{R};\mbox{\rm cp}({}^{\boldsymbol{c}}\!A_{*,*}(\theta_{1},\theta_{2}))^{-1}\leq 1\}  (5.6)
=\sup\{c_{1}\theta_{1}+c_{2}\theta_{2}\in\mathbb{R};\mbox{\rm cp}({}^{\boldsymbol{c}}\!A_{*,*}(c_{1}\theta_{1},c_{2}\theta_{2}))^{-1}\leq 1\}  (5.7)
=\sup\{c_{1}\theta_{1}+c_{2}\theta_{2}\in\mathbb{R};\mbox{\rm cp}(A_{*,*}(\theta_{1},\theta_{2}))^{-1}\leq 1\}.  (5.8)

For some $(i^{\prime\prime},l_{1}^{\prime\prime},l_{2}^{\prime\prime},j^{\prime\prime})\in\mathbb{Z}\times\mathbb{Z}_{0,c_{1}-1}\times\mathbb{Z}_{0,c_{2}-1}\times S_{0}$ and for any $(i^{\prime},l_{1}^{\prime},l_{2}^{\prime},j^{\prime})\in\mathbb{Z}\times\mathbb{Z}_{0,c_{1}-1}\times\mathbb{Z}_{0,c_{2}-1}\times S_{0}$, we have, by Corollary 2.1 of [12],

\lim_{k\to\infty}\left([({}^{\boldsymbol{c}}\!R^{(1,1)})^{k}]_{(i^{\prime\prime},l_{1}^{\prime\prime},l_{2}^{\prime\prime},j^{\prime\prime}),(i^{\prime},l_{1}^{\prime},l_{2}^{\prime},j^{\prime})}\right)^{\frac{1}{k}}=\mbox{\rm cp}({}^{\boldsymbol{c}}\!R^{(1,1)})^{-1}.  (5.9)

If $i^{\prime}\geq 0$, then the state ${}^{\boldsymbol{c}}\hat{\boldsymbol{Y}}_{n}^{(1,1)}=(k,i^{\prime},(l_{1}^{\prime},l_{2}^{\prime},j^{\prime}))$ corresponds to the state $\boldsymbol{Y}_{n}=(c_{1}k+c_{1}i^{\prime}+l_{1}^{\prime},c_{2}k+l_{2}^{\prime},j^{\prime})$. Hence, from (4.19), setting $l=c_{1}i^{\prime}+l_{1}^{\prime}$ and $l_{2}^{\prime}=0$, we obtain, for every $x_{1},x_{2}\in\mathbb{Z}_{+}$ such that $x_{1}=0$ or $x_{2}=0$ and for every $j\in S_{0}$,

\tilde{q}_{(x_{1},x_{2},j),(c_{1}k+l,c_{2}k,j^{\prime})}\geq\tilde{q}_{(x_{1},x_{2},j),(c_{1}i^{\prime\prime}+l_{1}^{\prime\prime},l_{2}^{\prime\prime},j^{\prime\prime})}[({}^{\boldsymbol{c}}\!R^{(1,1)})^{k}]_{(i^{\prime\prime},l_{1}^{\prime\prime},l_{2}^{\prime\prime},j^{\prime\prime}),(i^{\prime},l_{1}^{\prime},0,j^{\prime})}.  (5.10)

Analogously, if $i^{\prime}<0$, then setting $l_{1}^{\prime}=0$ and $l=-c_{2}i^{\prime}+l_{2}^{\prime}$, we obtain, for every $x_{1},x_{2}\in\mathbb{Z}_{+}$ such that $x_{1}=0$ or $x_{2}=0$ and for every $j\in S_{0}$,

\tilde{q}_{(x_{1},x_{2},j),(c_{1}k,c_{2}k+l,j^{\prime})}\geq\tilde{q}_{(x_{1},x_{2},j),(l_{1}^{\prime\prime},c_{2}i^{\prime\prime}+l_{2}^{\prime\prime},j^{\prime\prime})}[({}^{\boldsymbol{c}}\!R^{(1,1)})^{k}]_{(i^{\prime\prime},l_{1}^{\prime\prime},l_{2}^{\prime\prime},j^{\prime\prime}),(i^{\prime},0,l_{2}^{\prime},j^{\prime})}.  (5.11)

From (5.10), (5.11) and (5.8), we obtain the desired lower bound as follows: for every $x_{1},x_{2}\in\mathbb{Z}_{+}$ such that $x_{1}=0$ or $x_{2}=0$, for every $l_{1},l_{2}\in\mathbb{Z}_{+}$ such that $l_{1}=0$ or $l_{2}=0$ and for every $j,j^{\prime}\in S_{0}$,

\liminf_{k\to\infty}\frac{1}{k}\log\tilde{q}_{(x_{1},x_{2},j),(c_{1}k+l_{1},c_{2}k+l_{2},j^{\prime})}\geq-\log\mbox{\rm cp}({}^{\boldsymbol{c}}\!R^{(1,1)})\geq-\sup_{\boldsymbol{\theta}\in\Gamma}\,\langle\boldsymbol{c},\boldsymbol{\theta}\rangle.  (5.12)

This completes the proof. ∎
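For intuition, the limit (5.9) used in the proof above is easy to observe numerically when the phase space is truncated to a finite set: for a finite irreducible nonnegative matrix $R$, $\mbox{\rm cp}(R)^{-1}$ coincides with the spectral radius, and the $k$-th roots of individual entries of $R^{k}$ converge to it. A minimal Python sketch (the matrix is a random stand-in of our choosing, not an object from the paper):

import numpy as np

rng = np.random.default_rng(0)
R = 0.5 * rng.random((4, 4))               # random nonnegative stand-in matrix
spr = max(abs(np.linalg.eigvals(R)))       # equals cp(R)^{-1} in the finite irreducible case
k = 200
entry_root = np.linalg.matrix_power(R, k)[0, 1] ** (1.0 / k)
print(entry_root, spr)                     # the two values nearly coincide for large k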

From Lemma 4.3 and Theorem 5.1, we obtain the following property of the convergence domain.

Corollary 5.1.

For every $\boldsymbol{x}\in\mathbb{Z}_{+}^{2}$, $\mathcal{D}_{\boldsymbol{x}}=\mathcal{D}$.

In order to prove the corollary, we introduce some notation. Recall that, by Propositions 2.2 and 2.3, the point set $\bar{\Gamma}=\{(\theta_{1},\theta_{2})\in\mathbb{R}^{2};\chi(\theta_{1},\theta_{2})=\mbox{\rm spr}(A_{*,*}(\theta_{1},\theta_{2}))\leq 1\}$ is a bounded closed convex set. For $i\in\{1,2\}$, define a point $(\underline{\theta}_{1}^{(i)},\underline{\theta}_{2}^{(i)})$ as $(\underline{\theta}_{1}^{(i)},\underline{\theta}_{2}^{(i)})=\arg\min_{(\theta_{1},\theta_{2})\in\bar{\Gamma}}\theta_{i}$. We have already defined $(\bar{\theta}_{1}^{(1)},\bar{\theta}_{2}^{(1)})$ and $(\bar{\theta}_{1}^{(2)},\bar{\theta}_{2}^{(2)})$. For a given $\theta_{1}\in(\underline{\theta}_{1}^{(1)},\bar{\theta}_{1}^{(1)})$, the equation $\chi(\theta_{1},\theta_{2})=1$ has just two different real solutions: $\theta_{2}=\underline{\zeta}_{2}(\theta_{1}),\,\bar{\zeta}_{2}(\theta_{1})$, where $\underline{\zeta}_{2}(\theta_{1})<\bar{\zeta}_{2}(\theta_{1})$. The function $\bar{\zeta}_{2}(\theta_{1})$ is monotonically decreasing in $(\bar{\theta}_{1}^{(2)},\bar{\theta}_{1}^{(1)})$. Further define the following domains:

\Gamma^{(1)}=\{(\theta_{1},\theta_{2})\in\mathbb{R}^{2};\theta_{1}<\bar{\theta}_{1}^{(1)}\}=\{(\theta_{1},\theta_{2})\in\mathbb{R}^{2};\mbox{\rm cp}(A_{*}^{(1)}(\theta_{1}))^{-1}<1\},
\Gamma^{(2)}=\{(\theta_{1},\theta_{2})\in\mathbb{R}^{2};\theta_{2}<\bar{\theta}_{2}^{(2)}\}=\{(\theta_{1},\theta_{2})\in\mathbb{R}^{2};\mbox{\rm cp}(A_{*}^{(2)}(\theta_{2}))^{-1}<1\}.
Figure 2: Convergence domain of $\Phi_{\boldsymbol{x}}(\theta_{1},\theta_{2})$
Proof of Corollary 5.1.

We prove $\mathcal{D}_{\mathbf{0}}=\mathcal{D}$, where $\mathbf{0}=(0,0)$. By Proposition 4.3, this implies $\mathcal{D}_{\boldsymbol{x}}=\mathcal{D}$ for every $\boldsymbol{x}\in\mathbb{Z}_{+}^{2}$. Regarding $\hat{\Phi}_{\mathbf{0}}^{(1)}(e^{\theta_{1}})$ and $\hat{\Phi}_{\mathbf{0}}^{(2)}(e^{\theta_{2}})$ as functions of the two variables $\theta_{1}$ and $\theta_{2}$, we see, by Lemma 4.1, that the convergence domain of $\hat{\Phi}_{\mathbf{0}}^{(1)}(e^{\theta_{1}})$ is given by $\Gamma^{(1)}$ and that of $\hat{\Phi}_{\mathbf{0}}^{(2)}(e^{\theta_{2}})$ by $\Gamma^{(2)}$. From (4.1), we therefore obtain $\mathcal{D}_{\mathbf{0}}\subset\Gamma^{(1)}\cap\Gamma^{(2)}$. On the other hand, by Lemma 4.3, we have $\mathcal{D}\subset\mathcal{D}_{\mathbf{0}}$. Hence, in order to prove $\mathcal{D}_{\mathbf{0}}=\mathcal{D}$, it suffices to demonstrate that $((\Gamma^{(1)}\cap\Gamma^{(2)})\setminus\mathcal{D})\cap\mathcal{D}_{\mathbf{0}}=\emptyset$ (see Fig. 2).

Let $\boldsymbol{c}=(c_{1},c_{2})$ be a vector of positive integers. Define a point ${}^{\boldsymbol{c}}\boldsymbol{\theta}=({}^{\boldsymbol{c}}\theta_{1},{}^{\boldsymbol{c}}\theta_{2})$ as

({}^{\boldsymbol{c}}\theta_{1},{}^{\boldsymbol{c}}\theta_{2})=\arg\max_{\boldsymbol{\theta}\in\bar{\Gamma}}\,\langle\boldsymbol{c},\boldsymbol{\theta}\rangle.

The point ${}^{\boldsymbol{c}}\boldsymbol{\theta}$ is represented as ${}^{\boldsymbol{c}}\boldsymbol{\theta}=({}^{\boldsymbol{c}}\theta_{1},\bar{\zeta}_{2}({}^{\boldsymbol{c}}\theta_{1}))$ and satisfies

\frac{d}{d\theta}(c_{1}\theta+c_{2}\bar{\zeta}_{2}(\theta))\Big|_{\theta={}^{\boldsymbol{c}}\theta_{1}}=c_{1}+c_{2}\bar{\zeta}^{\prime}_{2}({}^{\boldsymbol{c}}\theta_{1})=0.

Hence, we obtain $\bar{\zeta}^{\prime}_{2}({}^{\boldsymbol{c}}\theta_{1})=-c_{1}/c_{2}$. Since $\bar{\zeta}^{\prime}_{2}(\theta)$ monotonically decreases from $0$ to $-\infty$ as $\theta$ increases from $\bar{\theta}_{1}^{(2)}$ to $\bar{\theta}_{1}^{(1)}$, we see that the point set $\mathbb{D}_{0}=\{({}^{\boldsymbol{c}}\theta_{1},{}^{\boldsymbol{c}}\theta_{2});\boldsymbol{c}=(c_{1},c_{2})\in\mathbb{Z}_{+}^{2},\,c_{1}>0,\,c_{2}>0\}$ is dense in the curve $\mathbb{D}=\{(\theta,\bar{\zeta}_{2}(\theta));\theta\in[\bar{\theta}_{1}^{(2)},\bar{\theta}_{1}^{(1)}]\}$. For $j,j^{\prime}\in S_{0}$, define a moment generating function $\varphi_{\boldsymbol{c}}(\theta_{1},\theta_{2})$ as

\varphi_{\boldsymbol{c}}(\theta_{1},\theta_{2})=\sum_{k=0}^{\infty}e^{(c_{1}\theta_{1}+c_{2}\theta_{2})k}\,\tilde{q}_{(0,0,j),(c_{1}k,c_{2}k,j^{\prime})}.  (5.13)

By Theorem 5.1 and the Cauchy-Hadamard theorem, we see that the radius of convergence of the power series on the right-hand side of (5.13) is $e^{c_{1}{}^{\boldsymbol{c}}\theta_{1}+c_{2}{}^{\boldsymbol{c}}\theta_{2}}$, and this implies that $\varphi_{\boldsymbol{c}}(\theta_{1},\theta_{2})$ diverges if $\theta_{1}>{}^{\boldsymbol{c}}\theta_{1}$ and $\theta_{2}>{}^{\boldsymbol{c}}\theta_{2}$. From the definition of $\varphi_{\boldsymbol{c}}(\theta_{1},\theta_{2})$, we obtain

\varphi_{\boldsymbol{c}}(\theta_{1},\theta_{2})\leq\sum_{k_{1}=0}^{\infty}\sum_{k_{2}=0}^{\infty}e^{\theta_{1}k_{1}+\theta_{2}k_{2}}\,\tilde{q}_{(0,0,j),(k_{1},k_{2},j^{\prime})}=[\Phi_{\mathbf{0}}(\theta_{1},\theta_{2})]_{j,j^{\prime}}.  (5.14)

Here, we suppose $((\Gamma^{(1)}\cap\Gamma^{(2)})\setminus\mathcal{D})\cap\mathcal{D}_{\mathbf{0}}\neq\emptyset$. Since $\mathcal{D}_{\mathbf{0}}$ is an open set, there exists a point $(p_{0},q_{0})\in((\Gamma^{(1)}\cap\Gamma^{(2)})\setminus\bar{\mathcal{D}})\cap\mathcal{D}_{\mathbf{0}}$, where $\bar{\mathcal{D}}$ is the closure of $\mathcal{D}$. We have $\Phi_{\mathbf{0}}(p_{0},q_{0})<\infty$. On the other hand, from the definition of $(p_{0},q_{0})$, there exists a $\theta\in(\bar{\theta}_{1}^{(2)},\bar{\theta}_{1}^{(1)})$ such that $p_{0}>\theta$ and $q_{0}>\bar{\zeta}_{2}(\theta)$, and such a point $(\theta,\bar{\zeta}_{2}(\theta))$ can be taken in the point set $\mathbb{D}_{0}$ since $\mathbb{D}_{0}$ is dense in $\mathbb{D}$; let $\boldsymbol{c}$ be a vector of positive integers satisfying $({}^{\boldsymbol{c}}\theta_{1},{}^{\boldsymbol{c}}\theta_{2})=(\theta,\bar{\zeta}_{2}(\theta))$. Hence, we have $[\Phi_{\mathbf{0}}(p_{0},q_{0})]_{j,j^{\prime}}\geq\varphi_{\boldsymbol{c}}(p_{0},q_{0})=\infty$, and this contradicts the finiteness of $\Phi_{\mathbf{0}}(p_{0},q_{0})$. As a result, we have $((\Gamma^{(1)}\cap\Gamma^{(2)})\setminus\mathcal{D})\cap\mathcal{D}_{\mathbf{0}}=\emptyset$ and this completes the proof. ∎

5.2 Asymptotic decay rates of marginal measures

Let $(X_{1},X_{2})$ be a random vector subject to the stationary distribution of a two-dimensional reflecting random walk. The asymptotic decay rate of the marginal tail distribution of the form $\mathbb{P}(c_{1}X_{1}+c_{2}X_{2}>x)$ has been considered in [5] (see also [3]), where $(c_{1},c_{2})$ is a direction vector. In this subsection, we consider this type of asymptotic decay rate for the occupation measures.

Let $c_{1}$ and $c_{2}$ be mutually prime positive integers. We assume $c_{1}\leq c_{2}$; in the case of $c_{1}>c_{2}$, an analogous result can be obtained. For $k\geq 0$, define an index set $\mathscr{I}_{k}$ as

\mathscr{I}_{k}=\{l_{2}\in\mathbb{Z}_{+};c_{1}l_{1}+c_{2}l_{2}=c_{1}k\ \mbox{for some}\ l_{1}\in\mathbb{Z}_{+}\}.
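For example, the index set can be enumerated directly; a small Python illustration under the stated assumptions ($\gcd(c_{1},c_{2})=1$ and $c_{1}\leq c_{2}$):

def index_set(k, c1, c2):
    # I_k: all l2 >= 0 such that c1*l1 + c2*l2 = c1*k for some l1 >= 0
    return [l2 for l2 in range(k + 1) if c2 * l2 <= c1 * k and (c1 * k - c2 * l2) % c1 == 0]

print(index_set(6, 2, 3))   # [0, 2, 4]: l2 runs over multiples of c1 with c2*l2 <= c1*k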

For $\boldsymbol{x}\in\mathbb{Z}_{+}^{2}$, the matrix moment generating function $\Phi_{\boldsymbol{x}}(c_{1}\theta,c_{2}\theta)$ is represented as

\Phi_{\boldsymbol{x}}(c_{1}\theta,c_{2}\theta)=\sum_{k=0}^{\infty}e^{kc_{1}\theta}\sum_{l\in\mathscr{I}_{k}}N_{\boldsymbol{x},(k-(c_{2}/c_{1})l,l)}.  (5.15)

By the Cauchy-Hadamard theorem, we obtain the following theorem.

Theorem 5.2.

For any mutually prime positive integers $c_{1}$ and $c_{2}$ such that $c_{1}\leq c_{2}$ and for every $(\boldsymbol{x},j)\in\mathbb{S}_{+}$ and $j^{\prime}\in S_{0}$,

\limsup_{k\to\infty}\frac{1}{k}\log\sum_{l\in\mathscr{I}_{k}}\tilde{q}_{(\boldsymbol{x},j),(k-(c_{2}/c_{1})l,l,j^{\prime})}=-\sup_{(c_{1}\theta,c_{2}\theta)\in\Gamma}c_{1}\theta.  (5.16)

In the case where $c_{2}<c_{1}$, an analogous result holds.

6 Concluding remarks

Using our results, we can obtain a lower bound for the asymptotic decay rate of the stationary distribution in a 2d-QBD process. Let $\{\tilde{\boldsymbol{Y}}_{n}\}=\{(\tilde{X}_{1,n},\tilde{X}_{2,n},\tilde{J}_{n})\}$ be a 2d-QBD process on the state space $\mathbb{S}_{+}=\mathbb{Z}_{+}^{2}\times S_{0}$ and assume that the blocks of transition probabilities when $\tilde{X}_{1,n}>0$ and $\tilde{X}_{2,n}>0$ are given by $A_{i,j},\,i,j\in\{-1,0,1\}$. Assume that $\{\tilde{\boldsymbol{Y}}_{n}\}$ is irreducible, aperiodic and positive recurrent and denote by $\boldsymbol{\nu}=(\nu_{(x_{1},x_{2},j)};(x_{1},x_{2},j)\in\mathbb{S}_{+})$ the stationary distribution of the 2d-QBD process. Further assume that the blocks $A_{i,j},\,i,j\in\{-1,0,1\}$, satisfy the properties corresponding to Assumptions 2.1 and 3.1. For $x_{1},x_{2}\geq 1$ and $j\in S_{0}$, we have

\nu_{(x_{1},x_{2},j)}=\sum_{j^{\prime},j^{\prime\prime}\in S_{0}}\nu_{(0,0,j^{\prime})}\tilde{p}_{(0,0,j^{\prime}),(1,1,j^{\prime\prime})}\tilde{q}_{(0,0,j^{\prime\prime}),(x_{1}-1,x_{2}-1,j)}  (6.1)
+\sum_{l\in\{0,1\}}\,\sum_{j^{\prime},j^{\prime\prime}\in S_{0}}\bigl\{\nu_{(1,0,j^{\prime})}\tilde{p}_{(1,0,j^{\prime}),(1+l,1,j^{\prime\prime})}\tilde{q}_{(l,0,j^{\prime\prime}),(x_{1}-1,x_{2}-1,j)}  (6.2)
+\nu_{(0,1,j^{\prime})}\tilde{p}_{(0,1,j^{\prime}),(1,1+l,j^{\prime\prime})}\tilde{q}_{(0,l,j^{\prime\prime}),(x_{1}-1,x_{2}-1,j)}\bigr\}  (6.3)
+\sum_{k=2}^{\infty}\,\sum_{l\in\{-1,0,1\}}\,\sum_{j^{\prime},j^{\prime\prime}\in S_{0}}\bigl\{\nu_{(k,0,j^{\prime})}\tilde{p}_{(k,0,j^{\prime}),(k+l,1,j^{\prime\prime})}\tilde{q}_{(k+l-1,0,j^{\prime\prime}),(x_{1}-1,x_{2}-1,j)}  (6.4)
+\nu_{(0,k,j^{\prime})}\tilde{p}_{(0,k,j^{\prime}),(1,k+l,j^{\prime\prime})}\tilde{q}_{(0,k+l-1,j^{\prime\prime}),(x_{1}-1,x_{2}-1,j)}\bigr\},  (6.5)

where, for $\boldsymbol{y},\boldsymbol{y}^{\prime}\in\mathbb{S}_{+}$, $\tilde{p}_{\boldsymbol{y},\boldsymbol{y}^{\prime}}=\mathbb{P}(\tilde{\boldsymbol{Y}}_{1}=\boldsymbol{y}^{\prime}\,|\,\tilde{\boldsymbol{Y}}_{0}=\boldsymbol{y})$ and $\tilde{q}_{\boldsymbol{y},\boldsymbol{y}^{\prime}}$ is an element of the occupation measure in the corresponding 2d-MMRW, defined by (1.2). By (6.5) and Theorem 5.1, for any vector $\boldsymbol{c}=(c_{1},c_{2})$ of positive integers and for every $j\in S_{0}$, a lower bound for the asymptotic decay rate of the stationary distribution in the 2d-QBD process in the direction specified by $\boldsymbol{c}$ is given as follows:

\liminf_{k\to\infty}\frac{1}{k}\log\nu_{(c_{1}k,c_{2}k,j)}\geq-\sup\{\langle\boldsymbol{c},\boldsymbol{\theta}\rangle;\mbox{\rm spr}(A_{*,*}(\boldsymbol{\theta}))<1,\,\boldsymbol{\theta}\in\mathbb{R}^{2}\},  (6.6)

where $A_{*,*}(\boldsymbol{\theta})=\sum_{i,j\in\{-1,0,1\}}e^{\langle(i,j),\boldsymbol{\theta}\rangle}A_{i,j}$. Note that an upper bound for the asymptotic decay rate can be obtained by using the convergence domain of the matrix moment generating function of the stationary distribution and an inequality corresponding to (2.6). The convergence domain can be determined by Lemma 3.1 of [10] and Corollary 5.1.
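As a rough numerical recipe, the supremum on the right-hand side of (6.6) can be estimated by a grid search once the blocks $A_{i,j}$ are given. The following Python sketch is only an illustration: the blocks are random stochastic stand-ins, the grid bounds are arbitrary, and the feasible set $\{\boldsymbol{\theta};\mbox{\rm spr}(A_{*,*}(\boldsymbol{\theta}))<1\}$ may well be empty for a particular sample.

import numpy as np

rng = np.random.default_rng(1)
s0 = 2
A = rng.random((3, 3, s0, s0))
A /= A.sum(axis=(0, 1, 3))[None, None, :, None]   # the nine blocks sum to a stochastic matrix

def spr_A(t1, t2):
    # spectral radius of A_{*,*}(theta) = sum_{i,j} e^{i t1 + j t2} A_{i,j}
    M = sum(np.exp(i * t1 + j * t2) * A[i + 1, j + 1] for i in (-1, 0, 1) for j in (-1, 0, 1))
    return max(abs(np.linalg.eigvals(M)))

c = (1, 2)                                        # direction vector
grid = np.linspace(-2.0, 2.0, 81)
feasible = [c[0] * t1 + c[1] * t2 for t1 in grid for t2 in grid if spr_A(t1, t2) < 1]
print(max(feasible) if feasible else "no grid point with spr < 1")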

References

  • [1] Borovkov, A.A. and Mogul'skiĭ, A.A., Large deviations for Markov chains in the positive quadrant, Russian Mathematical Surveys 56 (2001), 803–916.
  • [2] Fayolle, G., Malyshev, V.A. and Menshikov, M.V., Topics in the Constructive Theory of Countable Markov Chains, Cambridge University Press, Cambridge (1995).
  • [3] Kobayashi, M. and Miyazawa, M., Tail asymptotics of the stationary distribution of a two-dimensional reflecting random walk with unbounded upward jumps, Advances in Applied Probability 46 (2014), 365–399.
  • [4] Miyazawa, M., Tail decay rates in double QBD processes and related reflected random walks, Mathematics of Operations Research 34(3) (2009), 547–575.
  • [5] Miyazawa, M., Light tail asymptotics in multidimensional reflecting processes for queueing networks, TOP 19(2) (2011), 233–299.
  • [6] Miyazawa, M. and Zwart, B., Wiener-Hopf factorizations for a multidimensional Markov additive process and their applications to reflected processes, Stochastic Systems 2(1) (2012), 67–114.
  • [7] Miyazawa, M., Superharmonic vector for a nonnegative matrix with QBD block structure and its application to a Markov modulated two dimensional reflecting process, Queueing Systems 81 (2015), 1–48.
  • [8] Ney, P. and Nummelin, E., Markov additive processes I. Eigenvalue properties and limit theorems, The Annals of Probability 15(2) (1987), 561–592.
  • [9] Ozawa, T., Asymptotics for the stationary distribution in a discrete-time two-dimensional quasi-birth-and-death process, Queueing Systems 74 (2013), 109–149.
  • [10] Ozawa, T. and Kobayashi, M., Exact asymptotic formulae of the stationary distribution of a discrete-time two-dimensional QBD process, Queueing Systems 90 (2018), 351–403.
  • [11] Ozawa, T., Stability condition of a two-dimensional QBD process and its application to estimation of efficiency for two-queue models, Performance Evaluation 130 (2019), 101–118.
  • [12] Ozawa, T., Convergence parameters of nonnegative block tri-diagonal matrices and their application to multi-dimensional QBD processes, working paper (2019). arXiv:1611.02434.
  • [13] Seneta, E., Non-negative Matrices and Markov Chains, revised printing, Springer-Verlag, New York (2006).

Appendix A Proof of Proposition 5.1

We use the following proposition for proving Proposition 5.1.

Proposition A.1.

Let $C_{-1}$, $C_{0}$ and $C_{1}$ be $m\times m$ nonnegative matrices, where $m$ may be countably infinite, and define a matrix function $C_{*}(\theta)$ as

C_{*}(\theta)=e^{-\theta}C_{-1}+C_{0}+e^{\theta}C_{1}.  (A.1)

Assume that, for any $n\in\mathbb{Z}_{+}$, $C_{*}(0)^{n}$ is finite and that $C_{*}(0)$ is irreducible. Let $k$ be a positive integer and define a $k\times k$ block matrix $C^{[k]}(\theta)$ as

C^{[k]}(\theta)=\begin{pmatrix}C_{0}&C_{1}&&&e^{-\theta}C_{-1}\cr C_{-1}&C_{0}&C_{1}&&\cr&\ddots&\ddots&\ddots&\cr&&C_{-1}&C_{0}&C_{1}\cr e^{\theta}C_{1}&&&C_{-1}&C_{0}\end{pmatrix}.  (A.2)

Then, we have $\mbox{\rm cp}(C_{*}(\theta))=\mbox{\rm cp}(C^{[k]}(k\theta))$.

Proof.

First, assume that, for a positive number $\beta$ and a measure $\boldsymbol{u}$, $\beta\boldsymbol{u}C_{*}(\theta)\leq\boldsymbol{u}$, and define a measure $\boldsymbol{u}^{[k]}$ as

\boldsymbol{u}^{[k]}=\begin{pmatrix}e^{(k-1)\theta}\boldsymbol{u}&e^{(k-2)\theta}\boldsymbol{u}&\cdots&e^{\theta}\boldsymbol{u}&\boldsymbol{u}\end{pmatrix}.

Then, we have $\beta\boldsymbol{u}^{[k]}C^{[k]}(k\theta)\leq\boldsymbol{u}^{[k]}$ and, by Theorem 6.3 of [13], we obtain $\mbox{\rm cp}(C_{*}(\theta))\leq\mbox{\rm cp}(C^{[k]}(k\theta))$.

Next, assume that, for a positive number $\beta$ and a measure $\boldsymbol{u}^{[k]}=\begin{pmatrix}\boldsymbol{u}_{1}&\boldsymbol{u}_{2}&\cdots&\boldsymbol{u}_{k}\end{pmatrix}$, $\beta\boldsymbol{u}^{[k]}C^{[k]}(k\theta)\leq\boldsymbol{u}^{[k]}$, and define a measure $\boldsymbol{u}$ as

\boldsymbol{u}=e^{-(k-1)\theta}\boldsymbol{u}_{1}+e^{-(k-2)\theta}\boldsymbol{u}_{2}+\cdots+e^{-\theta}\boldsymbol{u}_{k-1}+\boldsymbol{u}_{k}.

Further, define a nonnegative block column matrix $V^{[k]}$ as

V^{[k]}=\begin{pmatrix}e^{-(k-1)\theta}I&e^{-(k-2)\theta}I&\cdots&e^{-\theta}I&I\end{pmatrix}^{\top}.

Then, we have $\beta\boldsymbol{u}^{[k]}C^{[k]}(k\theta)V^{[k]}=\beta\boldsymbol{u}C_{*}(\theta)$ and $\boldsymbol{u}^{[k]}V^{[k]}=\boldsymbol{u}$. Hence, we have $\beta\boldsymbol{u}C_{*}(\theta)\leq\boldsymbol{u}$ and this implies $\mbox{\rm cp}(C^{[k]}(k\theta))\leq\mbox{\rm cp}(C_{*}(\theta))$. ∎
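For finite $m$, Proposition A.1 reduces to an equality of spectral radii (again, $\mbox{\rm cp}(\cdot)$ is the reciprocal of the spectral radius for finite irreducible nonnegative matrices), which can be verified numerically. A minimal Python sketch (random nonnegative blocks as stand-ins) builds $C^{[k]}(k\theta)$ as in (A.2) and compares:

import numpy as np

rng = np.random.default_rng(0)
m, k, theta = 3, 4, 0.3
Cm1, C0, C1 = (rng.random((m, m)) for _ in range(3))   # random nonnegative C_{-1}, C_0, C_1

def C_star(th):
    return np.exp(-th) * Cm1 + C0 + np.exp(th) * C1

def C_block(th):
    # the k x k block matrix C^{[k]}(th) of (A.2); the wrap-around blocks carry
    # the factors e^{-th} (top-right corner) and e^{th} (bottom-left corner)
    M = np.zeros((k * m, k * m))
    for i in range(k):
        up, dn = (i + 1) % k, (i - 1) % k
        M[i*m:(i+1)*m, i*m:(i+1)*m] += C0
        M[i*m:(i+1)*m, up*m:(up+1)*m] += np.exp(th) * C1 if i == k - 1 else C1
        M[i*m:(i+1)*m, dn*m:(dn+1)*m] += np.exp(-th) * Cm1 if i == 0 else Cm1
    return M

spr = lambda M: max(abs(np.linalg.eigvals(M)))
print(spr(C_star(theta)), spr(C_block(k * theta)))   # Proposition A.1: these agree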

Proof of Proposition 5.1.

For $j\in\{-1,0,1\}$, define a matrix function $B_{*,j}(\theta_{1})$ as

B_{*,j}(\theta_{1})=e^{-\theta_{1}}B_{-1,j}+B_{0,j}+e^{\theta_{1}}B_{1,j}.

The matrix function ${}^{\boldsymbol{c}}\!A_{*,*}(\theta_{1},\theta_{2})$ is a $c_{1}c_{2}\times c_{1}c_{2}$ block matrix and is given as

{}^{\boldsymbol{c}}\!A_{*,*}(\theta_{1},\theta_{2})=\begin{pmatrix}B_{*,0}(\theta_{1})&B_{*,1}(\theta_{1})&&&e^{-\theta_{2}}B_{*,-1}(\theta_{1})\cr B_{*,-1}(\theta_{1})&B_{*,0}(\theta_{1})&B_{*,1}(\theta_{1})&&\cr&\ddots&\ddots&\ddots&\cr&&B_{*,-1}(\theta_{1})&B_{*,0}(\theta_{1})&B_{*,1}(\theta_{1})\cr e^{\theta_{2}}B_{*,1}(\theta_{1})&&&B_{*,-1}(\theta_{1})&B_{*,0}(\theta_{1})\end{pmatrix}.  (A.3)

Define a matrix function $B_{*,*}(\theta_{1},\theta_{2})$ as

B_{*,*}(\theta_{1},\theta_{2})=e^{-\theta_{2}}B_{*,-1}(\theta_{1})+B_{*,0}(\theta_{1})+e^{\theta_{2}}B_{*,1}(\theta_{1}).

Then, by Proposition A.1 and (A.3), we have $\mbox{\rm cp}(B_{*,*}(\theta_{1},\theta_{2}))=\mbox{\rm cp}({}^{\boldsymbol{c}}\!A_{*,*}(\theta_{1},c_{2}\theta_{2}))$. The matrix function $B_{*,*}(\theta_{1},\theta_{2})$ is a $c_{1}\times c_{1}$ block matrix and is given as

B_{*,*}(\theta_{1},\theta_{2})=\begin{pmatrix}A_{0,*}(\theta_{2})&A_{1,*}(\theta_{2})&&&e^{-\theta_{1}}A_{-1,*}(\theta_{2})\cr A_{-1,*}(\theta_{2})&A_{0,*}(\theta_{2})&A_{1,*}(\theta_{2})&&\cr&\ddots&\ddots&\ddots&\cr&&A_{-1,*}(\theta_{2})&A_{0,*}(\theta_{2})&A_{1,*}(\theta_{2})\cr e^{\theta_{1}}A_{1,*}(\theta_{2})&&&A_{-1,*}(\theta_{2})&A_{0,*}(\theta_{2})\end{pmatrix}.  (A.4)

Hence, by Proposition A.1, we have $\mbox{\rm cp}(A_{*,*}(\theta_{1},\theta_{2}))=\mbox{\rm cp}(B_{*,*}(c_{1}\theta_{1},\theta_{2}))$ and this implies

\mbox{\rm cp}(A_{*,*}(\theta_{1},\theta_{2}))=\mbox{\rm cp}(B_{*,*}(c_{1}\theta_{1},\theta_{2}))=\mbox{\rm cp}({}^{\boldsymbol{c}}\!A_{*,*}(c_{1}\theta_{1},c_{2}\theta_{2})). ∎