
An urn model for the Jacobi-Piñeiro polynomials

F. Alberto Grünbaum
Department of Mathematics. University of California, Berkeley. Berkeley, CA 94720 U.S.A.
[email protected]
and Manuel D. de la Iglesia
Instituto de Matemáticas, Universidad Nacional Autónoma de México, Circuito Exterior, C.U., 04510, Ciudad de México, México.
[email protected]
Abstract.

The list of physically motivated urn models that can be solved in terms of classical orthogonal polynomials is very small. It includes a model proposed by D. Bernoulli and further analyzed by S. Laplace, and a model proposed by P. and T. Ehrenfest that was eventually connected with the Krawtchouk and Hahn polynomials. This connection was recently reversed in the case of the Jacobi polynomials, where first a rather contrived and later a simpler urn model were proposed. Here we consider an urn model going with the Jacobi-Piñeiro multiple orthogonal polynomials. These polynomials have recently been put forth in connection with a stochastic matrix.

Key words and phrases:
Markov chains. LU factorizations. Multiple orthogonal polynomials. Urn models.
2010 Mathematics Subject Classification:
60J10, 15A23, 33C45, 42C05
The work of the second author was partially supported by PAPIIT-DGAPA-UNAM grant IN104219 (México) and CONACYT grant A1-S-16202 (México).

1. Introduction

Urn models for the flow of two incompressible liquids between two containers or for the exchange of heat between two isolated bodies, which are nowadays given as examples of finite state Markov chains, predate any formal introduction of these probabilistic notions and even a rigorous set-up for probability theory as given by A. Kolmogorov around 1930. Two celebrated examples are the Bernoulli-Laplace (1769-1812) and the Ehrenfest (1907, see [7]) models discussed in W. Feller’s book (see [8], pages 121 and 378).

At a much later time it was noticed that these physically motivated Markov chains can be analyzed completely in terms of families of orthogonal polynomials that occupy important places in the Askey-Wilson tableaux, namely Hahn and Krawtchouk polynomials. For an exposition of this see [10] and its references, or more recently the monograph [6].

The situation described above was part of the motivation for [11], where an urn model (albeit an elaborate one) was proposed for the Markov chain with an infinite number of states going with the Jacobi polynomials. In this case one is dealing with a discrete-time birth-death chain stemming from a semi-infinite tridiagonal matrix. This approach was adapted to the case of matrix-valued Jacobi polynomials (see [13, 14]), where one is dealing with a quasi-birth-and-death process (see [16, 5]). This results in a stochastic matrix that is a semi-infinite block tridiagonal matrix. The idea of studying different walks by means of “spectral methods” has been exploited beyond classical walks. In the case of quantum walks, results such as recurrence are related to spectral properties of a measure on the unit circle, see [15, 4].

In [12] a new idea was introduced and exploited: by using a stochastic LU (or UL) factorization of the stochastic matrix going with the Jacobi polynomials a much simpler model than the one in [11] was put forward. This idea has been adapted to other situations such as multivariate orthogonal polynomials (see [9]).

The purpose of this paper is to use the approach in [12] to give an urn model associated to the Jacobi-Piñeiro polynomials. They originate in [18] and give a basic example of so-called multiple orthogonal polynomials. For references on this kind of polynomial the reader can consult [18, 17, 1, 2]. These are polynomials in one variable indexed by two non-negative indices $(n_1,n_2)$. They give rise to a family of polynomials indexed by one non-negative index $n$ (see Proposition 9 of [3]) and these polynomials satisfy a higher-order recursion relation. The fact that this gives rise to a stochastic matrix has been recently pointed out in [3].

2. Stochastic LU factorization of the stochastic matrix

Let $\{X_t : t=0,1,\ldots\}$ be the Markov chain on $\mathbb{Z}_{\geq 0}$ with transition probability matrix given by the stochastic matrix $P_{II}$ which appears in Corollary 7 of [3]. Here we will denote this matrix by $P$; it is associated with the type II Jacobi-Piñeiro multiple orthogonal polynomials. We also denote by $a_n, b_n, c_{n+1}, d_{n+2}$, $n\geq 0$, the corresponding transition probabilities, given by

\begin{split}
a_n &= \mathbb{P}\left(X_{t+1}=n+1\,|\,X_t=n\right), \quad n\geq 0,\\
b_n &= \mathbb{P}\left(X_{t+1}=n\,|\,X_t=n\right), \quad n\geq 0,\\
c_n &= \mathbb{P}\left(X_{t+1}=n-1\,|\,X_t=n\right), \quad n\geq 1,\\
d_n &= \mathbb{P}\left(X_{t+1}=n-2\,|\,X_t=n\right), \quad n\geq 2.
\end{split} \qquad (2.1)

Therefore $P$ is the semi-infinite banded matrix given by

P=\begin{pmatrix}
b_0 & a_0 & 0 & & & \\
c_1 & b_1 & a_1 & 0 & & \\
d_2 & c_2 & b_2 & a_2 & 0 & \\
0 & d_3 & c_3 & b_3 & a_3 & \\
 & \ddots & \ddots & \ddots & \ddots & \ddots
\end{pmatrix}. \qquad (2.2)

A diagram of the transitions between the states is given by

[Transition diagram: states $0,1,2,\ldots$ with self-loops $b_n$, one-step transitions to the right $a_n$, one-step transitions to the left $c_n$, and two-step transitions to the left $d_n$.]
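For concreteness, the banded structure of (2.2) is easy to set up numerically. The following is a minimal sketch that builds a finite truncation of $P$ from arbitrary coefficient sequences; the callables a, b, c, d are placeholders to be filled in with the explicit expressions obtained below (see (2.7) and Section 3), and because of the truncation the last rows sum to less than one.

```python
import numpy as np

def banded_P(a, b, c, d, size):
    """Top-left size x size corner of the semi-infinite matrix (2.2).

    a, b, c, d are callables returning a_n (n >= 0), b_n (n >= 0),
    c_n (n >= 1) and d_n (n >= 2); here they are placeholders."""
    P = np.zeros((size, size))
    for n in range(size):
        P[n, n] = b(n)                 # stay at n
        if n + 1 < size:
            P[n, n + 1] = a(n)         # one step to the right
        if n >= 1:
            P[n, n - 1] = c(n)         # one step to the left
        if n >= 2:
            P[n, n - 2] = d(n)         # two steps to the left
    return P
```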

Following [3], $P$ depends on three parameters $\alpha,\beta$ and $\gamma$, and it is stochastic if and only if $\alpha,\beta,\gamma>-1$, $\alpha\neq\beta$ and $|\alpha-\beta|<1$ (although the case $\alpha=\beta$ is allowed too, as we will see below). The matrix $P$ in (2.2) admits a stochastic LU factorization, as in [12], of the form

P=P_LP_U=\begin{pmatrix}
s_0 & 0 & & & \\
r_1 & s_1 & 0 & & \\
t_2 & r_2 & s_2 & 0 & \\
0 & t_3 & r_3 & s_3 & 0 \\
 & \ddots & \ddots & \ddots & \ddots
\end{pmatrix}
\begin{pmatrix}
y_0 & x_0 & 0 & & \\
0 & y_1 & x_1 & 0 & \\
 & 0 & y_2 & x_2 & 0 \\
 & & \ddots & \ddots & \ddots
\end{pmatrix}, \qquad (2.3)

where, for $n\geq 0$, we have

\begin{split}
x_{2n} &= \frac{2n+\gamma+1}{3n+\alpha+\gamma+2}, \quad y_{2n} = \frac{n+\alpha+1}{3n+\alpha+\gamma+2},\\
x_{2n+1} &= \frac{2n+\gamma+2}{3n+\beta+\gamma+3}, \quad y_{2n+1} = \frac{n+\beta+1}{3n+\beta+\gamma+3},
\end{split} \qquad (2.4)

\begin{split}
t_{2n} &= \frac{n(n+\alpha-\beta)}{(3n+\alpha+\gamma)(3n+\alpha+\gamma+1)}, \quad s_{2n} = \frac{(2n+\alpha+\gamma+1)(2n+\beta+\gamma+1)}{(3n+\alpha+\gamma+1)(3n+\beta+\gamma+1)},\\
t_{2n+1} &= \frac{n(n+\beta-\alpha)}{(3n+\beta+\gamma+1)(3n+\beta+\gamma+2)}, \quad s_{2n+1} = \frac{(2n+\alpha+\gamma+2)(2n+\beta+\gamma+2)}{(3n+\alpha+\gamma+3)(3n+\beta+\gamma+2)},
\end{split} \qquad (2.5)

and

\begin{split}
r_{2n} &= \frac{n(2n+\beta+\gamma)}{(3n+\alpha+\gamma)(3n+\alpha+\gamma+1)} + \frac{n(2n+\alpha+\gamma+1)}{(3n+\alpha+\gamma+1)(3n+\beta+\gamma+1)},\\
r_{2n+1} &= \frac{n(2n+\alpha+\gamma+1)}{(3n+\beta+\gamma+1)(3n+\beta+\gamma+2)} + \frac{(n+1)(2n+\beta+\gamma+2)}{(3n+\alpha+\gamma+3)(3n+\beta+\gamma+2)}.
\end{split} \qquad (2.6)

Notice that all the coefficients are nonnegative, $x_n+y_n=1$ and $t_n+s_n+r_n=1$ for $n\geq 0$, i.e. the matrices $P_L$ and $P_U$ are stochastic. Observe that the factor $P_U$ is a pure-birth chain on $\mathbb{Z}_{\geq 0}$ with diagram

[Transition diagram for $P_U$: states $0,1,2,\ldots$ with self-loops $y_n$ and one-step transitions to the right $x_n$.]

while $P_L$ is a pure-death chain on $\mathbb{Z}_{\geq 0}$ with an absorbing state at 0 and diagram

[Transition diagram for $P_L$: state 0 is absorbing; from state $n\geq 1$ there are self-loops $s_n$, one-step transitions to the left $r_n$, and two-step transitions to the left $t_n$ (for $n\geq 2$).]
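The coefficients (2.4)–(2.6) and the stochasticity of $P_L$ and $P_U$ can be checked numerically. Below is a minimal sketch; the helper name lu_coeffs and the sample parameter values are our own choices, and the function simply transcribes the formulas above.

```python
def lu_coeffs(alpha, beta, gamma, n):
    """(x, y, s, r, t) for the even index 2n and for the odd index 2n+1, from (2.4)-(2.6)."""
    x2n  = (2*n + gamma + 1) / (3*n + alpha + gamma + 2)
    y2n  = (n + alpha + 1)   / (3*n + alpha + gamma + 2)
    s2n  = ((2*n + alpha + gamma + 1) * (2*n + beta + gamma + 1)
            / ((3*n + alpha + gamma + 1) * (3*n + beta + gamma + 1)))
    t2n  = (n * (n + alpha - beta)
            / ((3*n + alpha + gamma) * (3*n + alpha + gamma + 1))) if n > 0 else 0.0
    r2n  = (n * (2*n + beta + gamma) / ((3*n + alpha + gamma) * (3*n + alpha + gamma + 1))
            + n * (2*n + alpha + gamma + 1)
            / ((3*n + alpha + gamma + 1) * (3*n + beta + gamma + 1))) if n > 0 else 0.0
    x2n1 = (2*n + gamma + 2) / (3*n + beta + gamma + 3)
    y2n1 = (n + beta + 1)    / (3*n + beta + gamma + 3)
    s2n1 = ((2*n + alpha + gamma + 2) * (2*n + beta + gamma + 2)
            / ((3*n + alpha + gamma + 3) * (3*n + beta + gamma + 2)))
    t2n1 = (n * (n + beta - alpha)
            / ((3*n + beta + gamma + 1) * (3*n + beta + gamma + 2)))
    r2n1 = (n * (2*n + alpha + gamma + 1) / ((3*n + beta + gamma + 1) * (3*n + beta + gamma + 2))
            + (n + 1) * (2*n + beta + gamma + 2)
            / ((3*n + alpha + gamma + 3) * (3*n + beta + gamma + 2)))
    return (x2n, y2n, s2n, r2n, t2n), (x2n1, y2n1, s2n1, r2n1, t2n1)

# row sums of P_U and P_L: x + y = 1 and t + s + r = 1
for n in range(6):
    for x, y, s, r, t in lu_coeffs(alpha=0.3, beta=0.1, gamma=0.5, n=n):
        assert abs(x + y - 1) < 1e-12 and abs(t + s + r - 1) < 1e-12
```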

The factorization (2.3) gives the following relations among all coefficients

\begin{split}
a_n &= s_n x_n, \quad n\geq 0,\\
b_n &= r_n x_{n-1} + s_n y_n, \quad n\geq 1, \quad b_0 = s_0 y_0,\\
c_n &= r_n y_{n-1} + t_n x_{n-2}, \quad n\geq 2, \quad c_1 = r_1 y_0,\\
d_n &= t_n y_{n-2}, \quad n\geq 2,
\end{split} \qquad (2.7)

which is an alternative way of writing the transition probabilities appearing in Corollary 7 of [3]. Now we can see from the coefficients in (2.4)–(2.6) that the case $\alpha=\beta$ can be allowed (although it may be exceptional in terms of the Jacobi-Piñeiro polynomials).
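One can also verify (2.7) directly by multiplying finite truncations of $P_L$ and $P_U$. The sketch below reuses the hypothetical helper lu_coeffs from the previous sketch; note that it also runs with $\alpha=\beta$, in line with the remark above.

```python
import numpy as np

def truncated_factors(alpha, beta, gamma, size):
    """Top-left size x size corners of P_L and P_U, built from lu_coeffs (sketch above)."""
    PL, PU = np.zeros((size, size)), np.zeros((size, size))
    for m in range(size):
        n, odd = divmod(m, 2)
        x, y, s, r, t = lu_coeffs(alpha, beta, gamma, n)[odd]
        PU[m, m] = y
        if m + 1 < size:
            PU[m, m + 1] = x
        PL[m, m] = s
        if m >= 1:
            PL[m, m - 1] = r
        if m >= 2:
            PL[m, m - 2] = t
    return PL, PU

alpha, beta, gamma = 0.3, 0.3, 0.5           # the case alpha = beta causes no problem here
PL, PU = truncated_factors(alpha, beta, gamma, 8)
P = PL @ PU                                   # banded as in (2.2)
m = 3
x, y, s, r, t = lu_coeffs(alpha, beta, gamma, m // 2)[m % 2]
assert abs(P[m, m + 1] - s * x) < 1e-12       # a_m = s_m x_m, the first relation in (2.7)
```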

In [3] one finds a different matrix $P_I$ going with the type I Jacobi-Piñeiro polynomials. Its shape is that of the transpose of the one in (2.2). The methods and models used here for $P_{II}$ can be easily extended to the matrix $P_I$.

3. The urn model

In this section we assume $\alpha=1/M$ and $\beta=1/N$ with $M,N\in\mathbb{Z}_{>0}$, and $\gamma\in\mathbb{Z}_{\geq 0}$. The condition $|\alpha-\beta|<1$ is now equivalent to $|M-N|<MN$. After this substitution we have that the coefficients in (2.4), (2.5) and (2.6) are given, for $n\geq 0$, by

\begin{split}
x_{2n} &= \frac{M(2n+\gamma+1)}{3Mn+M\gamma+2M+1}, \quad y_{2n} = \frac{M(n+1)+1}{3Mn+M\gamma+2M+1},\\
x_{2n+1} &= \frac{N(2n+\gamma+2)}{3Nn+N\gamma+3N+1}, \quad y_{2n+1} = \frac{N(n+1)+1}{3Nn+N\gamma+3N+1},
\end{split} \qquad (3.1)
\begin{split}
t_{2n} &= \frac{Mn(MNn+N-M)}{N(3Mn+M\gamma+1)(3Mn+M\gamma+M+1)},\\
s_{2n} &= \frac{(2Mn+M\gamma+M+1)(2Nn+N\gamma+N+1)}{(3Mn+M\gamma+M+1)(3Nn+N\gamma+N+1)},\\
t_{2n+1} &= \frac{Nn(MNn+M-N)}{M(3Nn+N\gamma+N+1)(3Nn+N\gamma+2N+1)},\\
s_{2n+1} &= \frac{(2Mn+M\gamma+2M+1)(2Nn+N\gamma+2N+1)}{(3Mn+M\gamma+3M+1)(3Nn+N\gamma+2N+1)},
\end{split} \qquad (3.2)

and

\begin{split}
r_{2n} &= \frac{M^2 n(2Nn+N\gamma+1)}{N(3Mn+M\gamma+1)(3Mn+M\gamma+M+1)} + \frac{Nn(2Mn+M\gamma+M+1)}{(3Mn+M\gamma+M+1)(3Nn+N\gamma+N+1)},\\
r_{2n+1} &= \frac{N^2 n(2Mn+M\gamma+M+1)}{M(3Nn+N\gamma+N+1)(3Nn+N\gamma+2N+1)} + \frac{M(n+1)(2Nn+N\gamma+2N+1)}{(3Mn+M\gamma+3M+1)(3Nn+N\gamma+2N+1)}.
\end{split} \qquad (3.3)
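As a quick consistency check, the integer-parameter coefficients (3.1) should coincide with (2.4) after substituting $\alpha=1/M$ and $\beta=1/N$. Here is a minimal exact-arithmetic sketch for the $x$ and $y$ coefficients; the helper name x_y_int and the sample values are our own, and the same check can be repeated for (3.2)–(3.3).

```python
from fractions import Fraction

def x_y_int(M, N, g, n):
    """x_{2n}, y_{2n}, x_{2n+1}, y_{2n+1} as in (3.1), with exact arithmetic."""
    x2n  = Fraction(M*(2*n + g + 1), 3*M*n + M*g + 2*M + 1)
    y2n  = Fraction(M*(n + 1) + 1,   3*M*n + M*g + 2*M + 1)
    x2n1 = Fraction(N*(2*n + g + 2), 3*N*n + N*g + 3*N + 1)
    y2n1 = Fraction(N*(n + 1) + 1,   3*N*n + N*g + 3*N + 1)
    return x2n, y2n, x2n1, y2n1

# substituting alpha = 1/M and beta = 1/N in (2.4) should give the same values
M, N, g, n = 2, 3, 1, 4                       # illustrative values only
alpha, beta = Fraction(1, M), Fraction(1, N)
x2n, y2n, x2n1, y2n1 = x_y_int(M, N, g, n)
assert x2n  == Fraction(2*n + g + 1) / (3*n + alpha + g + 2)
assert y2n  == (n + alpha + 1) / (3*n + alpha + g + 2)
assert x2n1 == Fraction(2*n + g + 2) / (3*n + beta + g + 3)
assert y2n1 == (n + beta + 1) / (3*n + beta + g + 3)
```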

Consider the LU factorization $P=P_LP_U$ in (2.3), where the entries of $P_L$ and $P_U$ are given in (3.1), (3.2) and (3.3). Each one of these matrices $P_L$ and $P_U$ will give rise to an experiment in terms of an urn model, which we call Experiment 1 and Experiment 2, respectively. At times $t=0,1,2,\ldots$ an urn A contains $m$ blue balls and this determines the state of our Markov chain at that time. Additionally, for Experiment 1, we will have two other urns, one painted blue, which we call B, and the other painted red, which we call R. These auxiliary urns are initially empty and will be emptied again after their use in going from one time step to the next. All the urns sit in a bath consisting of an infinite number of blue and red balls. The auxiliary urns are not used in Experiment 2.

Let us consider first the (slightly simpler) Experiment 2 (for $P_U$), and let us call this chain $\{X_t^{(2)}, t=0,1,\ldots\}$. Assume that urn A contains $m$ blue balls ($m\geq 0$) at time 0 (i.e. $X_0^{(2)}=m$). Consider first the case where $m$ is even and write $m=2n$, $n\geq 0$. Remove all the balls from urn A and place in A $M(2n+\gamma+1)$ blue balls and $M(n+1)+1$ red balls from the bath. Draw one ball from urn A at random with the uniform distribution. We have two possibilities:

  • If we get a blue ball then we remove balls into the bath, or add balls from the bath, until we have $m+1=2n+1$ blue balls in urn A and start over. Therefore

    \mathbb{P}\left(X_1^{(2)}=2n+1\,|\,X_0^{(2)}=2n\right)=\frac{M(2n+\gamma+1)}{3Mn+M\gamma+2M+1}.

    Observe that this probability is given by $x_{2n}$ in (3.1).

  • If we get a red ball then we remove balls into the bath, or add balls from the bath, until we have $m=2n$ blue balls in urn A and start over. Therefore

    \mathbb{P}\left(X_1^{(2)}=2n\,|\,X_0^{(2)}=2n\right)=\frac{M(n+1)+1}{3Mn+M\gamma+2M+1}.

    Observe that this probability is given by $y_{2n}$ in (3.1).

Consider now the case where $m$ is odd and write $m=2n+1$, $n\geq 0$. Remove all the balls from urn A and place in A $N(2n+\gamma+2)$ blue balls and $N(n+1)+1$ red balls from the bath. Draw one ball from urn A at random with the uniform distribution. We have two possibilities (a simulation sketch covering both the even and the odd case follows this list):

  • If we get a blue ball then we remove balls into the bath, or add balls from the bath, until we have $m+1=2n+2$ blue balls in urn A and start over. Therefore

    \mathbb{P}\left(X_1^{(2)}=2n+2\,|\,X_0^{(2)}=2n+1\right)=\frac{N(2n+\gamma+2)}{3Nn+N\gamma+3N+1}.

    Observe that this probability is given by $x_{2n+1}$ in (3.1).

  • If we get a red ball then we remove balls into the bath, or add balls from the bath, until we have $m=2n+1$ blue balls in urn A and start over. Therefore

    \mathbb{P}\left(X_1^{(2)}=2n+1\,|\,X_0^{(2)}=2n+1\right)=\frac{N(n+1)+1}{3Nn+N\gamma+3N+1}.

    Observe that this probability is given by $y_{2n+1}$ in (3.1).
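For readers who want to check the numbers, here is a minimal simulation sketch of one step of Experiment 2. The helper name experiment2_step, the use of Python's random module and the illustrative parameter values are our own choices; the ball counts are the ones prescribed above.

```python
import random

def experiment2_step(m, M, N, g):
    """One step of Experiment 2 (the urn realization of P_U) starting from state m."""
    if m % 2 == 0:                    # m = 2n: M(2n+g+1) blue and M(n+1)+1 red balls in urn A
        n = m // 2
        blue, red = M*(2*n + g + 1), M*(n + 1) + 1
    else:                             # m = 2n+1: N(2n+g+2) blue and N(n+1)+1 red balls in urn A
        n = (m - 1) // 2
        blue, red = N*(2*n + g + 2), N*(n + 1) + 1
    drew_blue = random.randrange(blue + red) < blue    # uniform draw from urn A
    return m + 1 if drew_blue else m                   # blue: go up one state; red: stay

# empirical frequency of moving up from m = 4, to be compared with x_4 in (3.1)
M, N, g, m, trials = 2, 3, 1, 4, 200_000               # illustrative values only
up = sum(experiment2_step(m, M, N, g) == m + 1 for _ in range(trials))
n = m // 2
exact = M*(2*n + g + 1) / (3*M*n + M*g + 2*M + 1)      # x_4 from (3.1)
print(up / trials, exact)
```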

Now we describe Experiment 1 (for $P_L$), and let us call this chain $\{X_t^{(1)}, t=0,1,\ldots\}$. Recall that now we will use two auxiliary urns B and R which are initially empty and will be emptied again after their use in each time step. Assume that urn A contains $m$ blue balls ($m\geq 0$) at time 0. Consider first the case where $m$ is even and write $m=2n$, $n\geq 1$ (the state 0 is absorbing). Remove all balls from urn A and place in A $Mn$ blue balls and $2Mn+M\gamma+M+1$ red balls from the bath. In urn B we place $MNn+N-M$ blue balls and $M(2Nn+N\gamma+1)$ red balls. In urn R we place $Nn$ blue balls and $2Nn+N\gamma+N+1$ red balls. Draw one ball from urn A at random with the uniform distribution. If we get a blue ball then we go to urn B and draw a ball, while if we get a red ball then we go to urn R and draw a ball. We have three possibilities (a simulation sketch of Experiment 1 is given after the case $m=1$ below):

  • If we get two blue balls in a row, i.e. one from urn A and then one from urn B, then we remove balls into the bath, or add balls from the bath, until we have $m-2=2n-2$ blue balls in urn A and start over. Therefore

    \mathbb{P}\left(X_1^{(1)}=2n-2\,|\,X_0^{(1)}=2n\right)=\frac{Mn}{3Mn+M\gamma+M+1}\cdot\frac{MNn+N-M}{N(3Mn+M\gamma+1)}.

    Observe that this probability is given by $t_{2n}$ in (3.2).

  • If we get two red balls in a row, i.e. one from urn A and then one from urn R, then we remove balls into the bath, or add balls from the bath, until we have $m=2n$ blue balls in urn A and start over. Therefore

    \mathbb{P}\left(X_1^{(1)}=2n\,|\,X_0^{(1)}=2n\right)=\frac{2Mn+M\gamma+M+1}{3Mn+M\gamma+M+1}\cdot\frac{2Nn+N\gamma+N+1}{3Nn+N\gamma+N+1}.

    Observe that this probability is given by $s_{2n}$ in (3.2).

  • If we get either a blue and a red ball or a red and a blue ball then we remove balls into the bath, or add balls from the bath, until we have $m-1=2n-1$ blue balls in urn A and start over. Therefore

    \begin{split}\mathbb{P}\left(X_1^{(1)}=2n-1\,|\,X_0^{(1)}=2n\right)&=\frac{Mn}{3Mn+M\gamma+M+1}\cdot\frac{M(2Nn+N\gamma+1)}{N(3Mn+M\gamma+1)}\\ &\qquad+\frac{2Mn+M\gamma+M+1}{3Mn+M\gamma+M+1}\cdot\frac{Nn}{3Nn+N\gamma+N+1}.\end{split}

    Observe that this probability is given by $r_{2n}$ in (3.3).

Consider now the case where $m$ is odd and write $m=2n+1$, $n\geq 1$ (the simpler case $m=1$ will be treated separately). Remove all balls from urn A and place in urn A $Nn$ blue balls and $2Nn+N\gamma+2N+1$ red balls from the bath. In urn B we place $MNn+M-N$ blue balls and $N(2Mn+M\gamma+M+1)$ red balls. In urn R we place $M(n+1)$ blue balls and $2Mn+M\gamma+2M+1$ red balls. Draw one ball from urn A at random with the uniform distribution. If we get a blue ball then we go to urn B and draw a ball, while if we get a red ball then we go to urn R and draw a ball. Again, we have three possibilities:

  • If we get two blue balls in a row, i.e. one from urn A and then one from urn B, then we remove balls into the bath, or add balls from the bath, until we have $m-2=2n-1$ blue balls in urn A and start over. Therefore

    \mathbb{P}\left(X_1^{(1)}=2n-1\,|\,X_0^{(1)}=2n+1\right)=\frac{Nn}{3Nn+N\gamma+2N+1}\cdot\frac{MNn+M-N}{M(3Nn+N\gamma+N+1)}.

    Observe that this probability is given by $t_{2n+1}$ in (3.2).

  • If we get two red balls in a row, i.e. one from urn A and then one from urn R, then we remove balls into the bath, or add balls from the bath, until we have $m=2n+1$ blue balls in urn A and start over. Therefore

    \mathbb{P}\left(X_1^{(1)}=2n+1\,|\,X_0^{(1)}=2n+1\right)=\frac{2Nn+N\gamma+2N+1}{3Nn+N\gamma+2N+1}\cdot\frac{2Mn+M\gamma+2M+1}{3Mn+M\gamma+3M+1}.

    Observe that this probability is given by $s_{2n+1}$ in (3.2).

  • If we get either a blue and a red ball or a red and a blue ball then we remove balls into the bath, or add balls from the bath, until we have $m-1=2n$ blue balls in urn A and start over. Therefore

    \begin{split}\mathbb{P}\left(X_1^{(1)}=2n\,|\,X_0^{(1)}=2n+1\right)&=\frac{Nn}{3Nn+N\gamma+2N+1}\cdot\frac{N(2Mn+M\gamma+M+1)}{M(3Nn+N\gamma+N+1)}\\ &\qquad+\frac{2Nn+N\gamma+2N+1}{3Nn+N\gamma+2N+1}\cdot\frac{M(n+1)}{3Mn+M\gamma+3M+1}.\end{split}

    Observe that this probability is given by $r_{2n+1}$ in (3.3).

For the case $m=1$ we only need to place in urn A $M$ blue balls and $M\gamma+2M+1$ red balls and draw one ball at random. If we get a blue ball (with probability $r_1$ in (3.3)) then we remove all balls from urn A and stop (since the state 0 is absorbing). If we get a red ball (with probability $s_1$ in (3.2)) then we leave only 1 blue ball in urn A and start over.
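The two cases above and the special case $m=1$ can be summarized in a short routine. The following is a minimal simulation sketch of one step of Experiment 1; the helper names experiment1_step and _draw_blue and the use of Python's random module are our own choices, while the ball counts transcribe the description above.

```python
import random

def _draw_blue(blue, red):
    """Draw one ball uniformly from an urn with the given contents; True if it is blue."""
    return random.randrange(blue + red) < blue

def experiment1_step(m, M, N, g):
    """One step of Experiment 1 (the urn realization of P_L) starting from state m."""
    if m == 0:
        return 0                                            # state 0 is absorbing
    if m == 1:                                              # special case described above
        return 0 if _draw_blue(M, M*g + 2*M + 1) else 1
    if m % 2 == 0:                                          # m = 2n with n >= 1
        n = m // 2
        A = (M*n, 2*M*n + M*g + M + 1)
        B = (M*N*n + N - M, M*(2*N*n + N*g + 1))
        R = (N*n, 2*N*n + N*g + N + 1)
    else:                                                   # m = 2n+1 with n >= 1
        n = (m - 1) // 2
        A = (N*n, 2*N*n + N*g + 2*N + 1)
        B = (M*N*n + M - N, N*(2*M*n + M*g + M + 1))
        R = (M*(n + 1), 2*M*n + M*g + 2*M + 1)
    first = _draw_blue(*A)
    second = _draw_blue(*B) if first else _draw_blue(*R)    # blue from A -> urn B, red -> urn R
    if first and second:
        return m - 2                                        # probability t_m in (3.2)
    if not first and not second:
        return m                                            # probability s_m in (3.2)
    return m - 1                                            # probability r_m in (3.3)
```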


The urn model going with $P$ (on $\mathbb{Z}_{\geq 0}$) is obtained by repeatedly alternating Experiments 1 and 2 in that order (see Figures 1 and 2 below, included here for the benefit of the reader). If we first perform Experiment 1 (for $m=2n$) we will end up with an urn with $2n-2$, $2n-1$ or $2n$ blue balls. Now we perform Experiment 2 starting with $2n-2$, $2n-1$ or $2n$ blue balls, in which case we may end with either $2n-1$ or $2n-2$ blue balls in the first case, $2n$ or $2n-1$ blue balls in the second case and $2n+1$ or $2n$ blue balls in the third case. The situation is similar if we start with $m=2n+1$. We first perform Experiment 1 and end up with an urn with $2n-1$, $2n$ or $2n+1$ blue balls. Now we perform Experiment 2 starting with $2n-1$, $2n$ or $2n+1$ blue balls, in which case we may end with either $2n$ or $2n-1$ blue balls in the first case, $2n+1$ or $2n$ blue balls in the second case and $2n+2$ or $2n+1$ blue balls in the third case. The combination of possibilities of these six cases yields the transition probabilities in (2.1) (see also (2.7)). A short simulation sketch combining the two experiments is given below.
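One step of the chain $P$ can thus be simulated by composing the two experiments in the stated order. The sketch below reuses the hypothetical helpers experiment1_step and experiment2_step from the previous sketches; the parameter values are only illustrative.

```python
from collections import Counter

def P_step(m, M, N, g):
    """One step of the chain P = P_L P_U: Experiment 1 first, then Experiment 2."""
    return experiment2_step(experiment1_step(m, M, N, g), M, N, g)

# empirical one-step distribution from state m = 5; it should approach the row
# (d_5, c_5, b_5, a_5) of P obtained from (2.7) with the coefficients (3.1)-(3.3)
M, N, g, m, trials = 2, 3, 1, 5, 200_000                    # illustrative values only
freq = Counter(P_step(m, M, N, g) for _ in range(trials))
print({state: count / trials for state, count in sorted(freq.items())})
```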

Figure 1. A schematic of Experiments 1 and 2. The boxed regions represent the state or urn, with $B_\cdot$ and $R_\cdot$ indicating the number of blue or red balls, respectively, contained within the urn. When a ball is drawn from an urn, this is indicated by $B_1^{\mathrm{Draw}}$ or $R_1^{\mathrm{Draw}}$. The initial state is $B_{2n}$.
Figure 2. A schematic of Experiments 1 and 2. The boxed regions represent the state or urn, with $B_\cdot$ and $R_\cdot$ indicating the number of blue or red balls, respectively, contained within the urn. When a ball is drawn from an urn, this is indicated by $B_1^{\mathrm{Draw}}$ or $R_1^{\mathrm{Draw}}$. The initial state is $B_{2n+1}$.

References

  • [1] Van Assche, W. and Coussement, E., Some classical multiple orthogonal polynomials, J. Comput. Appl. Math. 127 (2001) 317–347.
  • [2] Aptekarev, A.I., Branquinho, A. and Van Assche, W., Multiple orthogonal polynomials for classical weights, Trans. Amer. Math. Soc. 355 (2003) 3887–3914.
  • [3] Branquinho, A., Foulquié-Moreno, A., Mañas, M., Álvarez-Fernández, C. and Fernández-Díaz, J.E., Multiple orthogonal polynomials and random walks, arXiv:2103.13715.
  • [4] Bourgain, J., Grünbaum, F. A., Velázquez, L. and Wilkening, J., Quantum recurrence of a subspace and operator-valued Schur functions, Comm. Math. Phys. 329 (2014) 1031–1067.
  • [5] Dette, H., Reuther, B., Studden, W. and Zygmunt, M., Matrix measures and random walks with a block tridiagonal transition matrix, SIAM J. Matrix Anal. Applic. 29, No. 1 (2006), 117–142.
  • [6] Domínguez de la Iglesia, M., Orthogonal polynomials in the spectral analysis of Markov processes. Birth-death models and diffusion, to appear in Encyclopedia of Mathematics and its Applications, Cambridge University Press, 2021.
  • [7] Ehrenfest, P. and Ehrenfest, T., Über zwei bekannte Einwände gegen das Boltzmannsche H-Theorem, Physikalische Zeitschrift, vol. 8 (1907), 311–314.
  • [8] Feller, W., An introduction to Probability Theory and its Applications, vol. 1, 3rd edition, Wiley, 1967.
  • [9] Fernández, L. and de la Iglesia, M.D., Quasi-birth-and-death processes and multivariate orthogonal polynomials, J. Math. Anal. Appl. 499 (2021), 125029, 33 pp.
  • [10] Grünbaum, F.A., Random walks and orthogonal polynomials: some challenges, Probability, Geometry and Integrable Systems, MSRI Publications, volume 55, 2007.
  • [11] Grünbaum, F.A., An urn model associated with Jacobi polynomials, Commun. Applied Math. Comput. Sciences 5 (2010), no. 1, 55–63.
  • [12] Grünbaum, F.A. and de la Iglesia, M.D., Stochastic LU factorizations, Darboux transformations and urn models, J. Appl. Prob. 55 (2018), 862–886.
  • [13] Grünbaum, F.A. and de la Iglesia, M.D., Stochastic Darboux transformations for quasi-birth-and-death processes and urn models, J. Math. Anal. Appl. 478 (2019), 634–654.
  • [14] Grünbaum, F. A., Pacharoni, I. and Tirao, J. A., Two stochastic models of a random walk in the $U(n)$-spherical duals of $U(n+1)$, Ann. Mat. Pura Appl. 192 (2013), no. 3, 447–473.
  • [15] Grünbaum, F.A., Velázquez, L., Werner, A.H. and Werner, R.F., Recurrence for discrete time unitary evolutions, Comm. Math. Phys. 320 (2013), 543–569.
  • [16] Latouche, G. and Ramaswami, V., Introduction to Matrix Analytic Methods in Stochastic Modeling, ASA-SIAM Series on Statistics and Applied Probability, 1999.
  • [17] Nikishin, E.M. and Sorokin, V.N., Rational Approximations and Orthogonality, Translations of Mathematical Monographs, 92, American Mathematical Society, Providence, 1991.
  • [18] Piñeiro, L.R., On simultaneous approximations for a collection of Markov functions, Vestnik Moskov. Univ. Ser. I Mat. Mekh. 1987, no. 2, 67–70, 103; translated in Moscow University Mathematical Bulletin 42 (2) (1987) 52–55.