
On the Generalized Birth-Death Process and its Linear Versions

Pradeep Vishwakarma, Department of Mathematics, Indian Institute of Technology Bhilai, Durg 491001, INDIA. [email protected]  and  Kuldeep Kumar Kataria, Department of Mathematics, Indian Institute of Technology Bhilai, Durg 491001, INDIA. [email protected]
Abstract.

In this paper, we consider a generalized birth-death process (GBDP) and examine its linear versions. Using its transition probabilities, we obtain the system of differential equations that governs its state probabilities. The distribution of its waiting time in state s, given that it starts in state s, is obtained. For a linear version of it, namely, the generalized linear birth-death process (GLBDP), we obtain the probability generating function, mean, variance and the probability of ultimate extinction of the population. Also, we obtain the maximum likelihood estimate of one of its parameters. The differential equations that govern the joint cumulant generating functions of the population size with cumulative births and with cumulative deaths are derived. In the case of constant birth and death rates in GBDP, the explicit forms of the state probabilities, the joint probability mass functions of the population size with cumulative births and with cumulative deaths, and their marginal probability mass functions are obtained. It is shown that the Laplace transform of a stochastic integral of GBDP satisfies its Kolmogorov backward equation with certain scaled parameters. Also, the first two moments of the stochastic path integral of GLBDP are obtained. Later, we consider the immigration effect in GLBDP for two different cases. An application of a linear version of GBDP and its stochastic path integral to a vehicle parking management system is discussed.

Key words and phrases:
Extinction probability; birth-death process; generalized birth-death process; generalized linear birth-death process; generalized linear birth-death process with immigration.
2010 Mathematics Subject Classification:
Primary : 60J27; Secondary: 60J20

1. Introduction

The birth-death process (BDP) is a continuous-time and discrete state-space Markov process which is used to model the growth of a population with time. If \lambda_{n}>0 for all n\geq 0 are the birth rates and \mu_{n}>0 for all n\geq 1 are the death rates then, in an infinitesimal time interval of length h, the probability of a birth equals \lambda_{n}h+o(h), the probability of a death equals \mu_{n}h+o(h) and the probability of any other event is o(h). Here, n denotes the population size at a given time t\geq 0. In BDP, the transition rates can depend on the state of the process. If the transition rates in BDP are linear, that is, \lambda_{n}=n\lambda and \mu_{n}=n\mu, then it is called the linear birth-death process (LBDP). The BDP has various applications in engineering, queuing theory, the biological sciences, etc. For example, it can be used to model the number of customers in queue at a service center or the growth of bacteria over time. However, it has certain limitations; for example, it is not a suitable process to model situations involving multiple births or multiple deaths in an infinitesimal time interval.

Doubleday (1973) introduced and studied a linear birth-death process with multiple births and single death. Here, we consider a generalized version of the BDP where, in an infinitesimal time interval, there is a possibility of multiple but finitely many births or deaths, with the assumption that the chance of simultaneous birth and death is negligible. We call this process the generalized birth-death process (GBDP) and denote it by \{\mathcal{N}(t)\}_{t\geq 0}. It is shown that the state probabilities q(n,t)=\mathrm{Pr}\{\mathcal{N}(t)=n\}, n\geq 0 of GBDP solve the following system of differential equations:

\frac{\mathrm{d}}{\mathrm{d}t}q(n,t)=\begin{cases}-\sum_{i=1}^{k_{1}}\lambda_{(0)_{i}}q(0,t)+\sum_{j=1}^{k_{2}}\mu_{(j)_{j}}q(j,t),\ n=0,\\ -\big(\sum_{i=1}^{k_{1}}\lambda_{(n)_{i}}+\sum_{j=1}^{k_{2}}\mu_{(n)_{j}}\big)q(n,t)+\sum_{i=1}^{\min\{n,k_{1}\}}\lambda_{(n-i)_{i}}q(n-i,t)+\sum_{j=1}^{k_{2}}\mu_{(n+j)_{j}}q(n+j,t),\ n\geq 1\end{cases} (1.1)

with initial conditions q(1,0)=1 and q(n,0)=0 for all n\neq 1. Here, for each n\geq 0, \lambda_{(n)_{i}} is the rate of birth of size i\in\{1,2,\dots,k_{1}\} and, for each n\geq 1, \mu_{(n)_{j}} is the rate of death of size j\in\{1,2,\dots,k_{2}\}.

If the birth and death rates in GBDP are linear, that is, \lambda_{(n)_{i}}=n\lambda_{i} and \mu_{(n)_{j}}=n\mu_{j} for all n\geq 1, then we call this process the generalized linear birth-death process (GLBDP). It is denoted by \{N(t)\}_{t\geq 0}. The state probabilities p(n,t)=\mathrm{Pr}\{N(t)=n\} of GLBDP are the solution of the following system of differential equations:

\frac{\mathrm{d}}{\mathrm{d}t}p(n,t)=\begin{cases}\sum_{j=1}^{k_{2}}j\mu_{j}p(j,t),\ n=0,\\ -\big(\sum_{i=1}^{k_{1}}n\lambda_{i}+\sum_{j=1}^{k_{2}}n\mu_{j}\big)p(n,t)+\sum_{i=1}^{k_{1}}(n-i)\lambda_{i}p(n-i,t)+\sum_{j=1}^{k_{2}}(n+j)\mu_{j}p(n+j,t),\ n\geq 1,\end{cases}

where p(n,t)=0 for all n<0.

It is shown that the survival function W_{s}(t)=\mathrm{Pr}\{T_{s}>t\} is given by

W_{s}(t)=\exp\bigg(-\bigg(\sum_{i=1}^{k_{1}}\lambda_{(s)_{i}}+\sum_{j=1}^{k_{2}}\mu_{(s)_{j}}\bigg)t\bigg),\ t\geq 0,

where T_{s} is the waiting time of GBDP in state s given that it starts in state s. On using the distributions of these waiting times, we obtain the maximum likelihood estimate of the parameter \sum_{i=1}^{k_{1}}\lambda_{i}+\sum_{j=1}^{k_{2}}\mu_{j} in GLBDP.

In Section 3, for GLBDP, we obtain some results for the cumulative births B(t), that is, the total number of births by time t, and for the cumulative deaths D(t), that is, the total number of deaths by time t. We study a particular case of GBDP where the birth and death rates are constant. We denote this process by \{N^{*}(t)\}_{t\geq 0}. Let B^{*}(t) and D^{*}(t) be the total number of births and deaths in \{N^{*}(t)\}_{t\geq 0} up to time t, respectively. We obtain the explicit forms of the probability mass functions (pmfs) of N^{*}(t), B^{*}(t) and D^{*}(t) and their joint distributions.

In Section 4, we consider a stochastic integral of GBDP and show that its Laplace transform satisfies the backward equation for GBDP with some scaled parameters. We study the joint distribution of GLBDP and its stochastic path integral. A similar study is done for GBDP with constant birth and death rates.

In Section 5, we study the effect of immigration in GLBDP for two different cases. Let \nu>0 be the rate of immigration from the outside environment. First, we consider the immigration effect only when the population has vanished at some time t\geq 0. In this case, the birth rates are \lambda_{(0)_{i}}=\nu, \lambda_{(n)_{i}}=n\lambda_{i},\ n\geq 1 for all i\in\{1,2,\dots,k_{1}\} and the death rates are \mu_{(n)_{j}}=n\mu_{j}, n\geq 1 for all j\in\{1,2,\dots,k_{2}\}. Then, we consider the effect of immigration at every state of the GLBDP. In this case, the birth and death rates are \lambda_{(n)_{i}}=\nu+n\lambda_{i} and \mu_{(n)_{j}}=n\mu_{j}, n\geq 0, respectively.

In the last section, we discuss an application of a linear case of GBDP to a vehicle parking management system.

2. Generalized birth-death process

First, we consider a generalized birth-death process (GBDP) where, in an infinitesimal time interval of length h, there is a possibility of either finitely many births i\in\{1,2,\ldots,k_{1}\} with positive rates \lambda_{(n)_{i}} or finitely many deaths j\in\{1,2,\ldots,k_{2}\} with positive rates \mu_{(n)_{j}}. It is important to note that the birth and death rates may depend on the number of individuals n present in the population at time t\geq 0. It is assumed that the chances of simultaneous occurrence of births and deaths in such small intervals are negligible.

We denote the GBDP by \{\mathcal{N}(t)\}_{t\geq 0}. With the above assumptions, its transition probabilities are given by

\mathrm{Pr}\{\mathcal{N}(t+h)=n+j|\mathcal{N}(t)=n\}=\begin{cases}1-\sum_{i=1}^{k_{1}}\lambda_{(n)_{i}}h-\sum_{i=1}^{k_{2}}\mu_{(n)_{i}}h+o(h),\ j=0,\\ \lambda_{(n)_{j}}h+o(h),\ j=1,2,\ldots,k_{1},\\ \mu_{(n)_{-j}}h+o(h),\ j=-1,-2,\ldots,-k_{2},\\ o(h),\ \mathrm{otherwise},\end{cases} (2.1)

where o(h)/h\to 0 as h\to 0.

Let q(n,t)=\mathrm{Pr}\{\mathcal{N}(t)=n\}, n\geq 0 be the state probabilities of GBDP. Then, we have

q(n,t+h)\displaystyle q(n,t+h) =Pr{𝒩(t+h)=n|𝒩(t)=n}q(n,t)+i=1min{n,k1}Pr{𝒩(t+h)=n|𝒩(t)=ni}q(ni,t)\displaystyle=\mathrm{Pr}\{\mathcal{N}(t+h)=n|\mathcal{N}(t)=n\}q(n,t)+\sum_{i=1}^{\mathrm{min}\{n,k_{1}\}}\mathrm{Pr}\{\mathcal{N}(t+h)=n|\mathcal{N}(t)=n-i\}q(n-i,t)
+j=1k2Pr{𝒩(t+h)=n|𝒩(t)=n+j}q(n+j,t).\displaystyle\ \ +\sum_{j=1}^{k_{2}}\mathrm{Pr}\{\mathcal{N}(t+h)=n|\mathcal{N}(t)=n+j\}q(n+j,t).

Using (2.1), we get

q(n,t+h)\displaystyle q(n,t+h) =(1i=1k1λ(n)ihj=1k2μ(n)jh)q(n,t)+i=1min{n,k1}λ(ni)ihq(ni,t)\displaystyle=\bigg{(}1-\sum_{i=1}^{k_{1}}\lambda_{(n)_{i}}h-\sum_{j=1}^{k_{2}}\mu_{(n)_{j}}h\bigg{)}q(n,t)+\sum_{i=1}^{\mathrm{min}\{n,k_{1}\}}\lambda_{(n-i)_{i}}hq(n-i,t)
+j=1k2μ(n+j)jhq(n+j,t)+o(h).\displaystyle\ \ +\sum_{j=1}^{k_{2}}\mu_{(n+j)_{j}}hq(n+j,t)+o(h).

Equivalently,

q(n,t+h)q(n,t)h\displaystyle\frac{q(n,t+h)-q(n,t)}{h} =(i=1k1λ(n)i+j=1k2μ(n)j)q(n,t)+i=1min{n,k1}λ(ni)iq(ni,t)\displaystyle=-\bigg{(}\sum_{i=1}^{k_{1}}\lambda_{(n)_{i}}+\sum_{j=1}^{k_{2}}\mu_{(n)_{j}}\bigg{)}q(n,t)+\sum_{i=1}^{\mathrm{min}\{n,k_{1}\}}\lambda_{(n-i)_{i}}q(n-i,t)
+j=1k2μ(n+j)jq(n+j,t)+o(h)h.\displaystyle\ \ +\sum_{j=1}^{k_{2}}\mu_{(n+j)_{j}}q(n+j,t)+\frac{o(h)}{h}.

On letting h0h\to 0, we get

ddtq(n,t)=(i=1k1λ(n)i+j=1k2μ(n)j)q(n,t)+i=1min{n,k1}λ(ni)iq(ni,t)+j=1k2μ(n+j)jq(n+j,t),n1.\frac{\mathrm{d}}{\mathrm{d}t}q(n,t)=-\bigg{(}\sum_{i=1}^{k_{1}}\lambda_{(n)_{i}}+\sum_{j=1}^{k_{2}}\mu_{(n)_{j}}\bigg{)}q(n,t)+\sum_{i=1}^{\mathrm{min}\{n,k_{1}\}}\lambda_{(n-i)_{i}}q(n-i,t)+\sum_{j=1}^{k_{2}}\mu_{(n+j)_{j}}q(n+j,t),\ n\geq 1.

Similarly, for n=0n=0, we have

q(0,t+h)=(1i=1k1λ(0)ih)q(0,t)+j=1k2μ(j)jq(j,t)h+o(h).q(0,t+h)=\bigg{(}1-\sum_{i=1}^{k_{1}}\lambda_{(0)_{i}}h\bigg{)}q(0,t)+\sum_{j=1}^{k_{2}}\mu_{(j)_{j}}q(j,t)h+o(h).

Thus,

ddtq(0,t)=i=1k1λ(0)iq(0,t)+j=1k2μ(j)jq(j,t).\frac{\mathrm{d}}{\mathrm{d}t}q(0,t)=-\sum_{i=1}^{k_{1}}\lambda_{(0)_{i}}q(0,t)+\sum_{j=1}^{k_{2}}\mu_{(j)_{j}}q(j,t).

Let us assume that there is one progenitor at time t=0. Thus, the state probabilities of GBDP solve the following system of differential equations:

ddtq(n,t)=(i=1k1λ(n)i+j=1k2μ(n)j)q(n,t)+i=1k1λ(ni)iq(ni,t)+j=1k2μ(n+j)jq(n+j,t),n0\frac{\mathrm{d}}{\mathrm{d}t}q(n,t)=-\bigg{(}\sum_{i=1}^{k_{1}}\lambda_{(n)_{i}}+\sum_{j=1}^{k_{2}}\mu_{(n)_{j}}\bigg{)}q(n,t)+\sum_{i=1}^{k_{1}}\lambda_{(n-i)_{i}}q(n-i,t)+\sum_{j=1}^{k_{2}}\mu_{(n+j)_{j}}q(n+j,t),\ n\geq 0 (2.2)

with initial conditions q(1,0)=1 and q(n,0)=0 for all n\neq 1. Here, q(n-i,t)=0 for all i>n and \mu_{(0)_{j}}=0 for all j=1,2,\ldots,k_{2}.
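To get a concrete feel for (2.2), the truncated system can be integrated numerically. The following is a minimal sketch in Python (assuming NumPy), with purely illustrative rate functions \lambda_{(n)_{i}}, \mu_{(n)_{j}} and a hypothetical truncation level N_MAX; it is not part of the model.

```python
import numpy as np

# Illustrative (hypothetical) rates for k1 = k2 = 2; any positive rates may be plugged in.
k1, k2 = 2, 2
def lam(n, i):                     # birth rate of size i from state n, i.e. lambda_{(n)_i}
    return 0.4 * n + 0.1
def mu(n, j):                      # death rate of size j from state n, i.e. mu_{(n)_j}; mu_{(0)_j} = 0
    return 0.3 * n if n >= j else 0.0

N_MAX, dt, T = 200, 1e-3, 2.0      # truncation level, Euler step and horizon (all illustrative)
q = np.zeros(N_MAX + 1)
q[1] = 1.0                         # one progenitor: q(1, 0) = 1

def drift(q):
    dq = np.zeros_like(q)
    for n in range(N_MAX + 1):
        out_rate = sum(lam(n, i) for i in range(1, k1 + 1)) + sum(mu(n, j) for j in range(1, k2 + 1))
        dq[n] -= out_rate * q[n]
        for i in range(1, min(n, k1) + 1):     # births of size i into state n
            dq[n] += lam(n - i, i) * q[n - i]
        for j in range(1, k2 + 1):             # deaths of size j into state n
            if n + j <= N_MAX:
                dq[n] += mu(n + j, j) * q[n + j]
    return dq

for _ in range(int(T / dt)):                   # forward Euler on the truncated system (2.2)
    q = q + dt * drift(q)

print("mass kept by the truncation:", q.sum())  # close to 1 when N_MAX is large enough
```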

Remark 2.1.

For k1=k2=1k_{1}=k_{2}=1, GBDP reduces to BDP. If μ(n)j=0\mu_{(n)_{j}}=0 for all nn and for all j{1,2,,k2}j\in\{1,2,\ldots,k_{2}\}, then the GBDP reduces to the generalized pure birth process.

Let T_{s} be the waiting time of GBDP in state s, that is, T_{s} is the total time before the process leaves state s given that it starts from s.

Proposition 2.1.

Let Ws(t)=Pr{Tst}.W_{s}(t)=\mathrm{Pr}\{T_{s}\geq t\}. Then, Ws(t)W_{s}(t) solves the following linear differential equation:

ddtWs(t)=(i=1k1λ(s)i+j=1k2μ(s)j)Ws(t)\frac{\mathrm{d}}{\mathrm{d}t}W_{s}(t)=-\left(\sum_{i=1}^{k_{1}}\lambda_{(s)_{i}}+\sum_{j=1}^{k_{2}}\mu_{(s)_{j}}\right)W_{s}(t) (2.3)

with initial condition Ws(0)=1.W_{s}(0)=1.

Proof.

Consider an infinitesimal time interval of length hh such that o(h)/h0o(h)/h\to 0 as h0h\to 0. On using (2.1), we have

Ws(t+h)\displaystyle W_{s}(t+h) =Ws(t)Pr{𝒩(t+h)=s|𝒩(t)=s}=Ws(t)(1i=1k1λ(s)ihj=1k2μ(s)jh)+o(h).\displaystyle=W_{s}(t)\mathrm{Pr}\{\mathcal{N}(t+h)=s|\mathcal{N}(t)=s\}=W_{s}(t)\bigg{(}1-\sum_{i=1}^{k_{1}}\lambda_{(s)_{i}}h-\sum_{j=1}^{k_{2}}\mu_{(s)_{j}}h\bigg{)}+o(h).

So,

Ws(t+h)Ws(t)h=(i=1k1λ(s)i+j=1k2μ(s)j)Ws(t)+o(h)h.\frac{W_{s}(t+h)-W_{s}(t)}{h}=-\bigg{(}\sum_{i=1}^{k_{1}}\lambda_{(s)_{i}}+\sum_{j=1}^{k_{2}}\mu_{(s)_{j}}\bigg{)}W_{s}(t)+\frac{o(h)}{h}.

On letting h0h\to 0, we get the required result. ∎

Remark 2.2.

On solving (2.3), we get

Ws(t)=e(i=1k1λ(s)i+j=1k2μ(s)j)t,t0.W_{s}(t)=e^{-\left(\sum_{i=1}^{k_{1}}\lambda_{(s)_{i}}+\sum_{j=1}^{k_{2}}\mu_{(s)_{j}}\right)t},\ t\geq 0.

Thus, the waiting time T_{s} of GBDP is exponentially distributed with parameter \sum_{i=1}^{k_{1}}\lambda_{(s)_{i}}+\sum_{j=1}^{k_{2}}\mu_{(s)_{j}}. For k_{1}=k_{2}=1, it follows the exponential distribution with parameter \lambda_{s}+\mu_{s} (see Karlin and Taylor (1975), p. 133).
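Remark 2.2 also suggests an exact simulation scheme: in state s, draw an exponential holding time with parameter \sum_{i=1}^{k_{1}}\lambda_{(s)_{i}}+\sum_{j=1}^{k_{2}}\mu_{(s)_{j}} and then jump by +i or -j with probability proportional to the corresponding rate. A brief sketch in Python with NumPy follows; the rate functions at the bottom are illustrative placeholders (the linear rates of the next subsection), and deaths larger than the current population are suppressed purely as a simulation convention.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_gbdp(lam, mu, k1, k2, n0, T):
    """One sample path of GBDP on [0, T]; returns the jump times and the visited states."""
    t, n = 0.0, n0
    times, states = [0.0], [n0]
    while True:
        birth = [lam(n, i) for i in range(1, k1 + 1)]
        # deaths of size j > n are suppressed here, purely as a simulation convention
        death = [mu(n, j) if j <= n else 0.0 for j in range(1, k2 + 1)]
        total = sum(birth) + sum(death)
        if total == 0.0:                    # absorbing state (e.g. extinction)
            break
        t += rng.exponential(1.0 / total)   # exponential waiting time, cf. Remark 2.2
        if t > T:
            break
        jumps = list(range(1, k1 + 1)) + [-j for j in range(1, k2 + 1)]
        n += rng.choice(jumps, p=np.array(birth + death) / total)
        times.append(t)
        states.append(n)
    return times, states

# illustrative linear rates (the GLBDP of Section 2.1) with k1 = k2 = 2
lam = lambda n, i: n * (0.6 if i == 1 else 0.2)
mu  = lambda n, j: n * (0.5 if j == 1 else 0.1)
print(simulate_gbdp(lam, mu, 2, 2, n0=1, T=5.0)[1][-1])   # population size at the last jump
```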

2.1. A linear case of GBDP

When the birth and death rates are linear, that is, \lambda_{(n)_{i}}=n\lambda_{i} and \mu_{(n)_{j}}=n\mu_{j} for all i, j and n, we call the GBDP the generalized linear birth-death process (GLBDP) and denote it by \{N(t)\}_{t\geq 0}. From (2.2), it follows that the state probabilities p(n,t)=\mathrm{Pr}\{N(t)=n\} of GLBDP solve the following system of differential equations:

ddtp(n,t)=(i=1k1nλi+j=1k2nμj)p(n,t)+i=1k1(ni)λip(ni,t)+j=1k2(n+j)μjp(n+j,t),n0,\frac{\mathrm{d}}{\mathrm{d}t}p(n,t)=-\left(\sum_{i=1}^{k_{1}}n\lambda_{i}+\sum_{j=1}^{k_{2}}n\mu_{j}\right)p(n,t)+\sum_{i=1}^{k_{1}}(n-i)\lambda_{i}p(n-i,t)+\sum_{j=1}^{k_{2}}(n+j)\mu_{j}p(n+j,t),\ n\geq 0, (2.4)

with initial conditions

p(n,0)={1,n=1,0,n1,p(n,0)=\begin{cases}1,\ n=1,\vspace{0.1cm}\\ 0,\ n\neq 1,\end{cases} (2.5)

where p(ni,t)=0p(n-i,t)=0 for all i>ni>n.

Remark 2.3.

If \mu_{j}=0 for all j\in\{1,2,\ldots,k_{2}\}, then the GLBDP reduces to the generalized linear pure birth process.

Remark 2.4.

In the case of GLBDP, the waiting time in state ss has the exponential distribution with parameter s(i=1k1λi+j=1k2μj)s\big{(}\sum_{i=1}^{k_{1}}\lambda_{i}+\sum_{j=1}^{k_{2}}\mu_{j}\big{)}.

Next, we obtain the differential equation that governs the probability generating function (pgf) of GLBDP.

Proposition 2.2.

Let H(u,t)=𝔼(uN(t)),|u|1H(u,t)=\mathbb{E}(u^{N(t)}),\ |u|\leq 1 be the pgf of GLBDP. It solves the following partial differential equation:

tH(u,t)=(i=1k1λiu(ui1)+j=1k2μju(uj1))uH(u,t),t0,\frac{\partial}{\partial t}H(u,t)=\left(\sum_{i=1}^{k_{1}}\lambda_{i}u(u^{i}-1)+\sum_{j=1}^{k_{2}}\mu_{j}u(u^{-j}-1)\right)\frac{\partial}{\partial u}H(u,t),\ t\geq 0, (2.6)

with initial condition H(u,0)=uH(u,0)=u.

Proof.

By using (2.5), we have H(u,0)=\sum_{n=0}^{\infty}u^{n}p(n,0)=u. On multiplying both sides of (2.4) by u^{n} and adjusting the terms, we get

tunp(n,t)\displaystyle\frac{\partial}{\partial t}u^{n}p(n,t) =(i=1k1λi+j=1k2μj)uuunp(n,t)+i=1k1λiui+1uunip(ni,t)\displaystyle=-\bigg{(}\sum_{i=1}^{k_{1}}\lambda_{i}+\sum_{j=1}^{k_{2}}\mu_{j}\bigg{)}u\frac{\partial}{\partial u}u^{n}p(n,t)+\sum_{i=1}^{k_{1}}\lambda_{i}u^{i+1}\frac{\partial}{\partial u}u^{n-i}p(n-i,t)
+j=1k2μjuj+1uun+jp(n+j,t).\displaystyle\ \ +\sum_{j=1}^{k_{2}}\mu_{j}u^{-j+1}\frac{\partial}{\partial u}u^{n+j}p(n+j,t). (2.7)

Summing over n=k2,k2+1,n=-k_{2},-k_{2}+1,\ldots on both sides of (2.7) gives the required result. Here, the change of sum and derivative is justified because

|n=0uunp(n,t)|\displaystyle\left|\sum_{n=0}^{\infty}\frac{\partial}{\partial u}u^{n}p(n,t)\right| =|n=0nun1p(n,t)|\displaystyle=\left|\sum_{n=0}^{\infty}nu^{n-1}p(n,t)\right|
n=0np(n,t)=𝔼(N(t))<.\displaystyle\leq\sum_{n=0}^{\infty}np(n,t)=\mathbb{E}(N(t))<\infty.

This completes the proof. ∎

Remark 2.5.

For k1=k2=1k_{1}=k_{2}=1, the GLBDP reduces to LBDP. Its pgf solves (see Bailey (1964))

tH(u,t)=(λ1uμ1)(u1)uH(u,t)\frac{\partial}{\partial t}{H}(u,t)=(\lambda_{1}u-\mu_{1})(u-1)\frac{\partial}{\partial u}{H}(u,t)

with H(u,0)=u{H}(u,0)=u.

Remark 2.6.

The Laplace transform \tilde{H}(u,z)=\int_{0}^{\infty}e^{-zt}H(u,t)\,\mathrm{d}t of the pgf of GLBDP is the solution of the following differential equation:

u+zH~(u,z)=(i=1k1λiu(ui1)+j=1k2μju(uj1))uH~(u,z).-u+z\tilde{H}(u,z)=\bigg{(}\sum_{i=1}^{k_{1}}\lambda_{i}u(u^{i}-1)+\sum_{j=1}^{k_{2}}\mu_{j}u(u^{-j}-1)\bigg{)}\frac{\partial}{\partial u}\tilde{H}(u,z).

Next, we discuss the solution of (2.6). First, we rewrite (2.6) as follows:

tH(u,t)=ϕ(u)uH(u,t),\frac{\partial}{\partial t}H(u,t)=\phi(u)\frac{\partial}{\partial u}H(u,t), (2.8)

where ϕ(u)=i=1k1λiu(ui1)+j=1k2μju(uj1).\phi(u)=\sum_{i=1}^{k_{1}}\lambda_{i}u(u^{i}-1)+\sum_{j=1}^{k_{2}}\mu_{j}u(u^{-j}-1). We apply the method of characteristics to solve (2.8), and thus we get the following subsidiary equations:

dt=uk21ψ(u)du\mathrm{d}t=-\frac{u^{k_{2}-1}}{\psi(u)}\mathrm{d}u (2.9)

and

\mathrm{d}H(u,t)=0, (2.10)

where \psi(u)=u^{k_{2}-1}\phi(u). Moreover, we assume that the roots r_{i},\ i=1,2,\ldots,k_{1}+k_{2} of \psi(u) are distinct, which can be ensured by a small perturbation in the \lambda_{i}'s and \mu_{j}'s. So, (2.9) can be rewritten as

dt=(i=1k1+k2ci(uri))du,\mathrm{d}t=-\left(\sum_{i=1}^{k_{1}+k_{2}}\frac{c_{i}}{(u-r_{i})}\right)\mathrm{d}u, (2.11)

where ci=λk11rik21/ji(rirj).c_{i}=\lambda_{k_{1}}^{-1}{r_{i}^{k_{2}-1}}/{\prod_{j\neq i}(r_{i}-r_{j})}.

On solving (2.10) and (2.11), we get H(u,t)=β1H(u,t)=\beta_{1} and etg(u)=β2e^{-t}g(u)=\beta_{2}, respectively. Here, β1\beta_{1} and β2\beta_{2} are real constants and

g(u)=i=1k1+k2(uri)ci.g(u)=\prod_{i=1}^{k_{1}+k_{2}}(u-r_{i})^{-c_{i}}. (2.12)

Thus, the arbitrary solution of (2.6) is obtained in the following form:

H(u,t)=f(etg(u)),H(u,t)=f\left(e^{-t}g(u)\right), (2.13)

where f is some real-valued function which can be determined by using the given initial condition, that is, H(u,0)=u=f\left(g(u)\right). Note that there always exists a \delta>0 such that |r_{i}|>\delta and \psi(u) has the same sign for all u\in[0,\delta]. So, g(u) is a monotone function on [0,\delta] and its inverse exists. Hence, (2.13) holds with f(u)=g^{-1}(u) for all u\in[0,\delta], and by analytic continuation it can be extended to all u such that |u|\leq 1.

Remark 2.7.

Let (ak:k{0,1,2,})\left(a_{k}:k\in\{0,1,2,\ldots\}\right) be the initial distribution of GLBDP. Then, its pgf has the following form:

H(u,t)=k=0ak(f(eti=1k1+k2(uri)ci))k.H(u,t)=\sum_{k=0}^{\infty}a_{k}\left(f\left(e^{-t}\prod_{i=1}^{k_{1}+k_{2}}(u-r_{i})^{-c_{i}}\right)\right)^{k}.

In particular, if N(0)=n0>1N(0)=n_{0}>1 then

H(u,t)=(f(eti=1k1+k2(uri)ci))n0.H(u,t)=\left(f\left(e^{-t}\prod_{i=1}^{k_{1}+k_{2}}(u-r_{i})^{-c_{i}}\right)\right)^{n_{0}}.

Its proof follows along similar lines to that of (2.13).

Next, we obtain the probability of ultimate extinction of GLBDP.

Let p(0)=\lim_{t\to\infty}p(0,t) be the probability of ultimate extinction in GLBDP. We use the inverse function representation of H(u,t) to approximate the extinction probability p(0,t). Let {g}^{\prime}(u)=\frac{\mathrm{d}g(u)}{\mathrm{d}u}, where g is given in (2.12). If \epsilon is the least positive root of \psi(u) then g(\cdot) is a monotone function on [0,\epsilon] and g(\epsilon)=0. So, f(u)=g^{-1}(u) for all u\in[0,\epsilon] and f(0)=\epsilon. As

f(u)\displaystyle f(u) =ϵ+0uf(x)dx\displaystyle=\epsilon+\int_{0}^{u}f^{\prime}(x)\,\mathrm{d}x
=ϵ+0udxg(f(x))foru[0,δ],\displaystyle=\epsilon+\int_{0}^{u}\frac{\mathrm{d}x}{{g}^{\prime}(f(x))}\ \ \ \text{for}\ u\in[0,\delta],

where δ\delta is some positive real number. So, we have

H(u,t)=ϵ+0etg(u)dxg(f(x)).\displaystyle H(u,t)=\epsilon+\int_{0}^{e^{-t}g(u)}\frac{\mathrm{d}x}{{g}^{\prime}(f(x))}.

On taking u=0u=0, we get

p(0,t)=H(0,t)=ϵ+0etg(0)dxg(f(x)).p(0,t)=H(0,t)=\epsilon+\int_{0}^{e^{-t}g(0)}\frac{\mathrm{d}x}{{g}^{\prime}(f(x))}.

Thus, if the process runs for an infinitely long time then the extinction probability converges exponentially to \epsilon, that is,

p(0)=limtp(0,t)=ϵ.p(0)=\lim_{t\to\infty}p(0,t)=\epsilon. (2.14)
Remark 2.8.

In the case of LBDP, the above results reduce to the results given in Bailey (1964).

For k1=k2=1k_{1}=k_{2}=1, λ1=λ\lambda_{1}=\lambda and μ1=μ\mu_{1}=\mu such that λμ\lambda\neq\mu, we get the two distinct roots of ψ(u)\psi(u), that is, r1=μ/λr_{1}=\mu/\lambda and r2=1r_{2}=1. So, c1=1/(μλ)c_{1}=1/(\mu-\lambda) and c2=1/(λμ)c_{2}=1/(\lambda-\mu). From (2.12), we have

g(u)=(uμ/λu1)1/(λμ).g(u)=\left(\frac{u-\mu/\lambda}{u-1}\right)^{1/(\lambda-\mu)}.

Thus,

f(u)=g1(u)=μλuλμλ(1uλμ).f(u)=g^{-1}(u)=\frac{\mu-\lambda u^{\lambda-\mu}}{\lambda(1-u^{\lambda-\mu})}.

So, from (2.13), the pgf of LBDP is

H(u,t)=μ(u1)et(λμ)(λuμ)λ(u1)et(λμ)(λuμ).H(u,t)=\frac{\mu(u-1)-e^{-t(\lambda-\mu)}(\lambda u-\mu)}{\lambda(u-1)-e^{-t(\lambda-\mu)}(\lambda u-\mu)}.

Its extinction probability is given by (see Bailey (1964), p. 95)

p(0,t)=H(0,t)=μμet(λμ)λμet(λμ)p(0,t)=H(0,t)=\frac{\mu-\mu e^{-t(\lambda-\mu)}}{\lambda-\mu e^{-t(\lambda-\mu)}}

and its state probabilities are (see Bailey (1964), p. 95)

p(n,t)=(λμ)2et(λμ)(λλet(λμ))n1(λμet(λμ))n+1,n1.p(n,t)=(\lambda-\mu)^{2}e^{-t(\lambda-\mu)}\frac{(\lambda-\lambda e^{-t(\lambda-\mu)})^{n-1}}{(\lambda-\mu e^{-t(\lambda-\mu)})^{n+1}},\ n\geq 1.

From (2.14), the probability of ultimate extinction p(0)p(0) of LBDP is given by (see Bailey (1964), p. 96)

p(0)={μλ,λ>μ,1,λ<μ.p(0)=\begin{cases}\frac{\mu}{\lambda},\ \lambda>\mu,\vspace{0.1cm}\\ 1,\ \lambda<\mu.\end{cases}
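Since \epsilon in (2.14) is the least positive root of the polynomial \psi(u), it can be computed numerically. A small sketch assuming Python with NumPy is given below; the rates passed in are placeholders, and the first call checks the LBDP case against p(0)=\mu/\lambda above.

```python
import numpy as np

def ultimate_extinction_prob(lams, mus):
    """Least positive root of psi(u), cf. (2.14); lams[i-1] = lambda_i, mus[j-1] = mu_j."""
    k1, k2 = len(lams), len(mus)
    # psi(u) = sum_i lambda_i (u^{k2+i} - u^{k2}) + sum_j mu_j (u^{k2-j} - u^{k2})
    coeffs = np.zeros(k1 + k2 + 1)           # coeffs[m] is the coefficient of u^m
    for i, l in enumerate(lams, start=1):
        coeffs[k2 + i] += l
    for j, m in enumerate(mus, start=1):
        coeffs[k2 - j] += m
    coeffs[k2] -= sum(lams) + sum(mus)
    roots = np.roots(coeffs[::-1])           # np.roots expects the highest degree first
    return min(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 1e-12)

print(ultimate_extinction_prob([0.8], [0.5]))            # LBDP with lambda > mu: mu/lambda = 0.625
print(ultimate_extinction_prob([0.6, 0.2], [0.5, 0.1]))  # a k1 = k2 = 2 example with eta > 0
```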
Remark 2.9.

The moment generating function (mgf) M(u,t)H(eu,t)M(u,t)\coloneqq H(e^{u},t) of GLBDP solves

tM(u,t)=(i=1k1λi(eiu1)+j=1k2μj(eju1))uM(u,t),\frac{\partial}{\partial t}M(u,t)=\left(\sum_{i=1}^{k_{1}}\lambda_{i}(e^{iu}-1)+\sum_{j=1}^{k_{2}}\mu_{j}(e^{-ju}-1)\right)\frac{\partial}{\partial u}M(u,t),

with M(u,0)=euM(u,0)=e^{u}.

Next, we obtain the mean and variance of GLBDP.

Theorem 2.1.

Let η=i=1k1iλij=1k2jμj\eta=\sum_{i=1}^{k_{1}}i\lambda_{i}-\sum_{j=1}^{k_{2}}j\mu_{j}. For t0t\geq 0, the mean m(t)=𝔼(N(t))m(t)=\mathbb{E}(N(t)) and the variance v(t)=Var(N(t))v(t)=\mathrm{Var}(N(t)) of GLBDP are given by

m(t)=eηtandv(t)=i=1k1i2λi+j=1k2j2μjη(e2ηteηt),m(t)=e^{\eta t}\ \ \text{and}\ \ v(t)=\frac{\sum_{i=1}^{k_{1}}i^{2}\lambda_{i}+\sum_{j=1}^{k_{2}}j^{2}\mu_{j}}{\eta}\left(e^{2\eta t}-e^{\eta t}\right), (2.15)

respectively.

Proof.

From (2.5), we get m(0)=n=1np(n,0)=1m(0)=\sum_{n=1}^{\infty}np(n,0)=1. On multiplying nn on both sides of (2.4) and substituting n(ni)=(ni)2+i(ni)n(n-i)=(n-i)^{2}+i(n-i) and n(n+j)=(n+j)2j(n+j)n(n+j)=(n+j)^{2}-j(n+j) yields

ddtm(t)=(i=1k1λi+j=1k2μj)n2p(n,t)+i=1k1λi(ni)2p(ni,t)+j=1k2μj(n+j)2p(n+j,t)+i=1k1λii(ni)p(ni,t)j=1k2μjj(n+j)p(n+j,t).\frac{\mathrm{d}}{\mathrm{d}t}m(t)=-\left(\sum_{i=1}^{k_{1}}\lambda_{i}+\sum_{j=1}^{k_{2}}\mu_{j}\right)n^{2}p(n,t)+\sum_{i=1}^{k_{1}}\lambda_{i}(n-i)^{2}p(n-i,t)+\sum_{j=1}^{k_{2}}\mu_{j}(n+j)^{2}p(n+j,t)\\ +\sum_{i=1}^{k_{1}}\lambda_{i}i(n-i)p(n-i,t)-\sum_{j=1}^{k_{2}}\mu_{j}j(n+j)p(n+j,t).

Summing over n=k2,k2+1,n=-k_{2},-k_{2}+1,\ldots gives

ddtm(t)=(i=1k1iλij=1k2jμj)m(t).\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}m(t)=\left(\sum_{i=1}^{k_{1}}i\lambda_{i}-\sum_{j=1}^{k_{2}}j\mu_{j}\right)m(t). (2.16)

On solving (2.16) with initial condition m(0)=1m(0)=1, we get m(t)=exp(i=1k1iλij=1k2jμj)tm(t)=\exp{\big{(}\sum_{i=1}^{k_{1}}i\lambda_{i}-\sum_{j=1}^{k_{2}}j\mu_{j}\big{)}t}.

Let m2(t)=n=0n2p(n,t)m_{2}(t)=\sum_{n=0}^{\infty}n^{2}p(n,t). In order to obtain the variance, we multiply n2n^{2} on both sides of (2.4) and substitute n2(ni)=(ni)3+2i(ni)2+i2(ni)n^{2}(n-i)=(n-i)^{3}+2i(n-i)^{2}+i^{2}(n-i) and n2(n+j)=(n+j)32j(n+j)2+j2(n+j)n^{2}(n+j)=(n+j)^{3}-2j(n+j)^{2}+j^{2}(n+j), to get

ddtm2(t)\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}m_{2}(t) =(i=1k1λi+j=1k2μj)n3p(n,t)+i=1k1λi(ni)3p(ni,t)+2i=1k1λii(ni)2p(ni,t)\displaystyle=-\left(\sum_{i=1}^{k_{1}}\lambda_{i}+\sum_{j=1}^{k_{2}}\mu_{j}\right)n^{3}p(n,t)+\sum_{i=1}^{k_{1}}\lambda_{i}(n-i)^{3}p(n-i,t)+2\sum_{i=1}^{k_{1}}\lambda_{i}i(n-i)^{2}p(n-i,t)
+i=1k1λii2(ni)p(ni,t)+j=1k2μj(n+j)3p(n+j,t)2j=1k2μjj(n+j)2p(n+j,t)\displaystyle\quad+\sum_{i=1}^{k_{1}}\lambda_{i}i^{2}(n-i)p(n-i,t)+\sum_{j=1}^{k_{2}}\mu_{j}(n+j)^{3}p(n+j,t)-2\sum_{j=1}^{k_{2}}\mu_{j}j(n+j)^{2}p(n+j,t)
+j=1k2μjj2(n+j)p(n+j,t)\displaystyle\quad+\sum_{j=1}^{k_{2}}\mu_{j}j^{2}(n+j)p(n+j,t)
=2i=1k1λii(ni)2p(ni,t)+i=1k1λii2(ni)p(ni,t)2j=1k2μjj(n+j)2p(n+j,t)\displaystyle=2\sum_{i=1}^{k_{1}}\lambda_{i}i(n-i)^{2}p(n-i,t)+\sum_{i=1}^{k_{1}}\lambda_{i}i^{2}(n-i)p(n-i,t)-2\sum_{j=1}^{k_{2}}\mu_{j}j(n+j)^{2}p(n+j,t)
+j=1k2μjj2(n+j)p(n+j,t).\displaystyle\quad+\sum_{j=1}^{k_{2}}\mu_{j}j^{2}(n+j)p(n+j,t).

By summing over n=k2,k2+1,n=-k_{2},-k_{2}+1,\ldots, we get

ddtm2(t)=2(i=1k1iλij=1k2jμj)m2(t)+(i=1k1i2λi+j=1k2j2μj)m(t).\frac{\mathrm{d}}{\mathrm{d}t}m_{2}(t)=2\left(\sum_{i=1}^{k_{1}}i\lambda_{i}-\sum_{j=1}^{k_{2}}j\mu_{j}\right)m_{2}(t)+\left(\sum_{i=1}^{k_{1}}i^{2}\lambda_{i}+\sum_{j=1}^{k_{2}}j^{2}\mu_{j}\right)m(t). (2.17)

On solving (2.17) with initial condition m2(0)=1m_{2}(0)=1, we get

m2(t)=(1+i=1k1i2λi+j=1k2j2μji=1k1iλij=1k2jμj)m(t)2i=1k1i2λi+j=1k2j2μji=1k1iλij=1k2jμjm(t).m_{2}(t)=\left(1+\frac{\sum_{i=1}^{k_{1}}i^{2}\lambda_{i}+\sum_{j=1}^{k_{2}}j^{2}\mu_{j}}{\sum_{i=1}^{k_{1}}i\lambda_{i}-\sum_{j=1}^{k_{2}}j\mu_{j}}\right)m(t)^{2}-\frac{\sum_{i=1}^{k_{1}}i^{2}\lambda_{i}+\sum_{j=1}^{k_{2}}j^{2}\mu_{j}}{\sum_{i=1}^{k_{1}}i\lambda_{i}-\sum_{j=1}^{k_{2}}j\mu_{j}}m(t).

Hence,

v(t)\displaystyle v(t) =m2(t)m(t)2=(i=1k1i2λi+j=1k2j2μji=1k1iλij=1k2jμj)(m(t)2m(t)).\displaystyle=m_{2}(t)-m(t)^{2}=\left(\frac{\sum_{i=1}^{k_{1}}i^{2}\lambda_{i}+\sum_{j=1}^{k_{2}}j^{2}\mu_{j}}{\sum_{i=1}^{k_{1}}i\lambda_{i}-\sum_{j=1}^{k_{2}}j\mu_{j}}\right)(m(t)^{2}-m(t)).

This completes the proof. ∎
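The closed forms in (2.15) are straightforward to evaluate numerically. A small helper in Python is sketched below (the case \eta\neq 0 is assumed and the parameter values are illustrative); it can be checked against averages over simulated sample paths.

```python
import math

def glbdp_mean_var(lams, mus, t):
    """Mean and variance of GLBDP from (2.15); lams[i-1] = lambda_i, mus[j-1] = mu_j, eta != 0."""
    eta  = sum(i * l for i, l in enumerate(lams, 1)) - sum(j * m for j, m in enumerate(mus, 1))
    zeta = sum(i * i * l for i, l in enumerate(lams, 1)) + sum(j * j * m for j, m in enumerate(mus, 1))
    return math.exp(eta * t), zeta / eta * (math.exp(2 * eta * t) - math.exp(eta * t))

print(glbdp_mean_var([0.6, 0.2], [0.5, 0.1], t=1.0))   # here eta = 0.3 and zeta = 2.3
```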

2.1.1. Parameter estimation in GLBDP

Moran (1951) and Moran (1953) used the inter-event times to estimate the parameters in evolutive processes. Doubleday (1973) used a similar technique to obtain the maximum likelihood estimates (MLE) of the parameters in the multiple birth-death process. Here, we use the same method to obtain the MLE of the parameter \Lambda=\sum_{i=1}^{k_{1}}\lambda_{i}+\sum_{i=1}^{k_{2}}\mu_{i} in GLBDP.

Let \mathcal{E} be the total number of transitions made by N(t) up to some fixed time \mathcal{T}. Suppose t_{1},t_{2},\ldots,t_{\mathcal{E}} are the epochs at which these transitions occurred. We define \tau_{k}=t_{k}-t_{k-1}\ \text{for all}\ 1\leq k\leq\mathcal{E}\ \text{with}\ t_{0}=0, where \tau_{1},\tau_{2},\ldots,\tau_{\mathcal{E}} denote the inter-event times. Note that the \tau_{k}'s are independent and exponentially distributed with parameter \Lambda N(t_{k-1}), k=1,2,\ldots,\mathcal{E}. The likelihood function is given by

L(\Lambda;\tau_{1},\tau_{2},\ldots,\tau_{\mathcal{E}})=\Lambda^{\mathcal{E}}\prod_{k=1}^{\mathcal{E}}N(t_{k-1})e^{-\Lambda\sum_{k=1}^{\mathcal{E}}N(t_{k-1})\tau_{k}}. (2.18)

On taking the derivative of the log-likelihood function \ln L(\Lambda;\tau_{1},\tau_{2},\ldots,\tau_{\mathcal{E}}) with respect to \Lambda, we get

ddΛlnL(Λ;τ1,τ2,,τ)=Λk=1N(tk1)τk.\frac{\mathrm{d}}{\mathrm{d}\Lambda}\ln L(\Lambda;\tau_{1},\tau_{2},\ldots,\tau_{\mathcal{E}})=\frac{\mathcal{E}}{\Lambda}-\sum_{k=1}^{\mathcal{E}}N(t_{k-1})\tau_{k}. (2.19)

On equating (2.19) to zero, we get the MLE of Λ\Lambda as follows:

Λ^mle=k=1N(tk1)τk.\hat{\Lambda}_{mle}=\frac{\mathcal{E}}{\sum_{k=1}^{\mathcal{E}}N(t_{k-1})\tau_{k}}.

Note that 2\Lambda N(t_{k-1})\tau_{k} follows a \chi^{2} distribution with two degrees of freedom. So,

2\Lambda\sum_{k=1}^{\mathcal{E}}N(t_{k-1})\tau_{k}=2\Lambda\int_{0}^{\mathcal{T}}N(s)\,\mathrm{d}s

has a \chi^{2} distribution with 2\mathcal{E} degrees of freedom. Thus, \mathrm{E}\big(2\Lambda\int_{0}^{\mathcal{T}}N(s)\,\mathrm{d}s\big)=2\mathcal{E}. Hence, \tilde{\Lambda}={\mathcal{E}}^{-1}\int_{0}^{\mathcal{T}}N(s)\,\mathrm{d}s is an unbiased estimator of 1/\Lambda whose distribution is known. From (2.18), it follows by using the Fisher–Neyman factorization theorem that \tilde{\Lambda}={\mathcal{E}}^{-1}\sum_{k=1}^{\mathcal{E}}N(t_{k-1})\tau_{k} is a sufficient statistic for \Lambda. Indeed, it is a minimal sufficient statistic for \Lambda.
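To see the estimator at work, one can simulate a single GLBDP path and form \hat{\Lambda}_{mle} from the observed inter-event times. The sketch below (Python with NumPy, illustrative rates and horizon, the censored final interval simply dropped as in (2.18)) is only an assumption-laden illustration, not part of the estimation theory above.

```python
import numpy as np

rng = np.random.default_rng(7)

def mle_Lambda(lams, mus, n0=10, T=5.0):
    """Simulate one GLBDP path on [0, T] and return the MLE of Lambda = sum(lams) + sum(mus)."""
    k1, k2 = len(lams), len(mus)
    Lambda = sum(lams) + sum(mus)
    t, n = 0.0, n0
    events, weighted_time = 0, 0.0                 # record E and sum_k N(t_{k-1}) tau_k
    while n > 0:
        tau = rng.exponential(1.0 / (n * Lambda))  # waiting time in state n, cf. Remark 2.4
        if t + tau > T:                            # the censored last interval is not used in (2.18)
            break
        t += tau
        weighted_time += n * tau
        events += 1
        jumps = list(range(1, k1 + 1)) + [-j for j in range(1, k2 + 1)]
        n += rng.choice(jumps, p=np.array(lams + mus) / Lambda)
        n = max(n, 0)                              # guard against an over-sized death (simulation convention)
    return events / weighted_time if weighted_time > 0 else float("nan")

print(mle_Lambda([0.6, 0.2], [0.5, 0.1]))          # should be close to Lambda = 1.4
```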

2.2. GBDP with constant birth and death rates

Here, we obtain the explicit form of the state probabilities of GBDP in the case of constant birth and death rates, that is, \lambda_{(n)_{i}}=\lambda_{i}>0 for all n\geq 0 and \mu_{(n)_{j}}=\mu_{j}>0 for all n\geq 1. In this case, we denote the process by \{N^{*}(t)\}_{t\geq 0}.

From (2.2), the state probabilities q(n,t)=Pr{N(t)=n}q^{*}(n,t)=\mathrm{Pr}\{N^{*}(t)=n\} solve the following system of differential equations:

ddtq(n,t)=(i=1k1λi+j=1k2μj)q(n,t)+i=1k1λiq(ni,t)+j=1k2μjq(n+j,t),n0\frac{\mathrm{d}}{\mathrm{d}t}q^{*}(n,t)=-\bigg{(}\sum_{i=1}^{k_{1}}\lambda_{i}+\sum_{j=1}^{k_{2}}\mu_{j}\bigg{)}q^{*}(n,t)+\sum_{i=1}^{k_{1}}\lambda_{i}q^{*}(n-i,t)+\sum_{j=1}^{k_{2}}\mu_{j}q^{*}(n+j,t),\ n\geq 0 (2.20)

with initial conditions q(1,0)=1q^{*}(1,0)=1 and q(n,0)=0q^{*}(n,0)=0 for all n1n\neq 1.

Remark 2.10.

If there is no progenitor at t=0 and \lambda_{(n)_{i}}=\lambda_{i}>0, \mu_{(n)_{j}}=0 for all i, j and n\geq 0, then the GBDP reduces to the generalized counting process (GCP) (see Crescenzo et al. (2016)).

Remark 2.11.

The pgf H(u,t)=𝔼(uN(t)),|u|1H^{*}(u,t)=\mathbb{E}\left(u^{N^{*}(t)}\right),\ |u|\leq 1 is the solution of the following differential equation:

tH(u,t)=(i=1k1λi(ui1)+j=1k2μj(uj1))H(u,t)\frac{\partial}{\partial t}H^{*}(u,t)=\left(\sum_{i=1}^{k_{1}}\lambda_{i}\left(u^{i}-1\right)+\sum_{j=1}^{k_{2}}\mu_{j}\left(u^{-j}-1\right)\right)H^{*}(u,t)

with H(u,0)=uH^{*}(u,0)=u. It is given by

H^{*}(u,t)=u\exp\left(\left(\sum_{i=1}^{k_{1}}\lambda_{i}\left(u^{i}-1\right)+\sum_{j=1}^{k_{2}}\mu_{j}\left(u^{-j}-1\right)\right)t\right). (2.21)
Theorem 2.2.

The state probabilities q(n,t)q^{*}(n,t) of the process {N(t)}t0\{N^{*}(t)\}_{t\geq 0} are given by

q(n,t)=Ω(k1,k2,n)i=1k1j=1k2λixiμjyjtxi+yjxi!yj!eΛt,n0,q^{*}(n,t)=\sum_{\Omega^{*}(k_{1},k_{2},n)}\prod_{i=1}^{k_{1}}\prod_{j=1}^{k_{2}}\frac{\lambda_{i}^{x_{i}}\mu_{j}^{y_{j}}t^{x_{i}+y_{j}}}{x_{i}!y_{j}!}e^{-\Lambda t},\ n\geq 0,

where Ω(k1,k2,n)={(x1,x2,,xk1,y1,y2,,yk2)0k1+k2:i=1k1ixij=1k2jyj=n1}\Omega^{*}(k_{1},k_{2},n)=\bigg{\{}(x_{1},x_{2},\ldots,x_{k_{1}},y_{1},y_{2},\ldots,y_{k_{2}})\in\mathbb{N}_{0}^{k_{1}+k_{2}}:\sum_{i=1}^{k_{1}}ix_{i}-\sum_{j=1}^{k_{2}}jy_{j}=n-1\bigg{\}} and Λ=(i=1k1λi+j=1k2μj)\Lambda=\left(\sum_{i=1}^{k_{1}}\lambda_{i}+\sum_{j=1}^{k_{2}}\mu_{j}\right).

Proof.

From (2.21), we have

H(u,t)\displaystyle H^{*}(u,t) =ueΛtl=0(i=1k1λiui+j=1k2μjuj)ltll!\displaystyle=ue^{-\Lambda t}\sum_{l=0}^{\infty}\bigg{(}\sum_{i=1}^{k_{1}}\lambda_{i}u^{i}+\sum_{j=1}^{k_{2}}\mu_{j}u^{-j}\bigg{)}^{l}\frac{t^{l}}{l!}
=eΛtl=0S(k1,k2,l)i=1k1j=1k2λixiμjyjtxi+yjxi!yj!uixijyj+1,\displaystyle=e^{-\Lambda t}\sum_{l=0}^{\infty}\sum_{S(k_{1},k_{2},l)}\prod_{i=1}^{k_{1}}\prod_{j=1}^{k_{2}}\frac{\lambda_{i}^{x_{i}}\mu_{j}^{y_{j}}t^{x_{i}+y_{j}}}{x_{i}!y_{j}!}u^{ix_{i}-jy_{j}+1}, (2.22)

where

S(k1,k2,l)={(x1,x2,,xk1,y1,y2,,yk2):xi,yj{0,1,,l}andi=1k1xi+j=1k2yj=l}.S(k_{1},k_{2},l)=\bigg{\{}(x_{1},x_{2},\ldots,x_{k_{1}},y_{1},y_{2},\ldots,y_{k_{2}}):x_{i},y_{j}\in\{0,1,\ldots,l\}\ \text{and}\ \sum_{i=1}^{k_{1}}x_{i}+\sum_{j=1}^{k_{2}}y_{j}=l\bigg{\}}.

On rearranging the terms and extracting the coefficients of unu^{n} in (2.22), we get the required result. ∎
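Reading the product in Theorem 2.2 in the natural way, the summand is a product of independent Poisson probabilities for the number x_{i} of births of size i and y_{j} of deaths of size j, so q^{*}(n,t) can be evaluated by truncating each count. A brief sketch in Python (hypothetical truncation level and placeholder rates):

```python
import math
from itertools import product

def q_star(n, t, lams, mus, cutoff=8):
    """q*(n, t) of Theorem 2.2: sum over Omega*(k1, k2, n) with each count truncated at `cutoff`."""
    def pois(x, rate):                       # Poisson(rate * t) probability of x
        return math.exp(-rate * t) * (rate * t) ** x / math.factorial(x)
    k1, k2 = len(lams), len(mus)
    total = 0.0
    for xs in product(range(cutoff + 1), repeat=k1):
        for ys in product(range(cutoff + 1), repeat=k2):
            if sum(i * x for i, x in enumerate(xs, 1)) - sum(j * y for j, y in enumerate(ys, 1)) == n - 1:
                total += math.prod(pois(x, l) for x, l in zip(xs, lams)) \
                       * math.prod(pois(y, m) for y, m in zip(ys, mus))
    return total

lams, mus = [0.6, 0.2], [0.5, 0.1]
print([round(q_star(n, 1.0, lams, mus), 4) for n in range(5)])   # q*(0,1), ..., q*(4,1)
```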

Theorem 2.3.

For t0t\geq 0, the mean and the variance of N(t)N^{*}(t) are given by

𝔼(N(t))=1+ηtandVar(N(t))=(i=1k1i2λi+j=1k2j2μj)t,\mathbb{E}(N^{*}(t))=1+\eta t\ \ \text{and}\ \ \mathrm{Var}(N^{*}(t))=\Bigg{(}\sum_{i=1}^{k_{1}}i^{2}\lambda_{i}+\sum_{j=1}^{k_{2}}j^{2}\mu_{j}\Bigg{)}t,

respectively.

Proof.

On taking the derivatives with respect to uu on both sides of (2.21), we get

uH(u,t)=eΛte(i=1k1λiui+j=1k2μjuj)t(1+(i=1k1iλiuij=1k2jμjuj)t)\frac{\partial}{\partial u}H^{*}(u,t)=e^{-\Lambda t}e^{\left(\sum_{i=1}^{k_{1}}\lambda_{i}u^{i}+\sum_{j=1}^{k_{2}}\mu_{j}u^{-j}\right)t}\left(1+\bigg{(}\sum_{i=1}^{k_{1}}i\lambda_{i}u^{i}-\sum_{j=1}^{k_{2}}j\mu_{j}u^{-j}\bigg{)}t\right)

and

2u2H(u,t)\displaystyle\frac{\partial^{2}}{\partial u^{2}}H^{*}(u,t) =eΛte(i=1k1λiui+j=1k2μjuj)t(i=1k1iλiui1j=1k2jμjuj1)t\displaystyle=e^{-\Lambda t}e^{\left(\sum_{i=1}^{k_{1}}\lambda_{i}u^{i}+\sum_{j=1}^{k_{2}}\mu_{j}u^{-j}\right)t}\left(\sum_{i=1}^{k_{1}}i\lambda_{i}u^{i-1}-\sum_{j=1}^{k_{2}}j\mu_{j}u^{-j-1}\right)t
+(i=1k1i(i1)λiui2+j=1k2j(j+1)μjuj2)tH(u,t)\displaystyle\ \ +\left(\sum_{i=1}^{k_{1}}i(i-1)\lambda_{i}u^{i-2}+\sum_{j=1}^{k_{2}}j(j+1)\mu_{j}u^{-j-2}\right)tH^{*}(u,t)
+(i=1k1iλiui1j=1k2jμjuj1)tuH(u,t).\displaystyle\ \ +\left(\sum_{i=1}^{k_{1}}i\lambda_{i}u^{i-1}-\sum_{j=1}^{k_{2}}j\mu_{j}u^{-j-1}\right)t\frac{\partial}{\partial u}H^{*}(u,t).

So, 𝔼(N(t))=uH(u,t)|u=1=1+ηt\mathbb{E}(N^{*}(t))=\frac{\partial}{\partial u}H^{*}(u,t)\bigg{|}_{u=1}=1+\eta t and

Var(N(t))=2u2H(u,t)|u=1+𝔼(N(t))(𝔼(N(t)))2=(i=1k1i2λi+j=1k2j2μj)t.\mathrm{Var}(N^{*}(t))=\frac{\partial^{2}}{\partial u^{2}}H^{*}(u,t)\bigg{|}_{u=1}+\mathbb{E}(N^{*}(t))-(\mathbb{E}(N^{*}(t)))^{2}=\left(\sum_{i=1}^{k_{1}}i^{2}\lambda_{i}+\sum_{j=1}^{k_{2}}j^{2}\mu_{j}\right)t.

This completes the proof. ∎
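The pgf (2.21) is that of 1+\sum_{i=1}^{k_{1}}iX_{i}-\sum_{j=1}^{k_{2}}jY_{j} with independent X_{i}\sim\mathrm{Poisson}(\lambda_{i}t) and Y_{j}\sim\mathrm{Poisson}(\mu_{j}t), so the moment formulas in Theorem 2.3 are easy to check by Monte Carlo. A sketch assuming NumPy and placeholder rates:

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_N_star(lams, mus, t, size):
    """Samples with the pgf (2.21): 1 + sum_i i*Poisson(lambda_i t) - sum_j j*Poisson(mu_j t)."""
    out = np.ones(size)
    for i, l in enumerate(lams, start=1):
        out += i * rng.poisson(l * t, size)
    for j, m in enumerate(mus, start=1):
        out -= j * rng.poisson(m * t, size)
    return out

lams, mus, t = [0.6, 0.2], [0.5, 0.1], 2.0
eta  = sum(i * l for i, l in enumerate(lams, 1)) - sum(j * m for j, m in enumerate(mus, 1))
zeta = sum(i * i * l for i, l in enumerate(lams, 1)) + sum(j * j * m for j, m in enumerate(mus, 1))
x = sample_N_star(lams, mus, t, size=200_000)
print(x.mean(), 1 + eta * t)   # both close to 1.6
print(x.var(), zeta * t)       # both close to 4.6
```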

3. Cumulative births and deaths in GLBDP

Let B(t) denote the total number of individuals born by time t.

Proposition 3.1.

The joint pmf p(b,n,t)=Pr{B(t)=b,N(t)=n}p(b,n,t)=\mathrm{Pr}\{B(t)=b,{N}(t)=n\} solves the following system of differential equations:

ddtp(b,n,t)\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}p(b,n,t) =(i=1k1λi+j=1k2μj)np(b,n,t)+i=1k1(ni)λip(bi,ni,t)\displaystyle=-\left(\sum_{i=1}^{k_{1}}\lambda_{i}+\sum_{j=1}^{k_{2}}\mu_{j}\right)np(b,n,t)+\sum_{i=1}^{k_{1}}(n-i)\lambda_{i}p(b-i,n-i,t)
+j=1k2(n+j)μjp(b,n+j,t),b1,n0,\displaystyle\ \ +\sum_{j=1}^{k_{2}}(n+j)\mu_{j}p(b,n+j,t),\ b\geq 1,\,n\geq 0, (3.1)

with initial conditions

p(b,n,0)={1,b=n=1,0,otherwise,p(b,n,0)=\begin{cases}1,\ b=n=1,\\ 0,\ \text{otherwise},\end{cases} (3.2)

where p(b,n,t)=0p(b,n,t)=0 for all b<0b<0 or n<0n<0.

Proof.

In an infinitesimal time interval of length hh such that o(h)/h0o(h)/h\to 0 as h0h\to 0, we have

Pr{B(t+h)=b+j,N(t+h)=n+k|B(t)=b,N(t)=n}={1(i=1k1λi+i=1k2μi)nh+o(h),j=k=0,nλjh+o(h),j=k=1,2,,k1,nμkh+o(h),j=0,k=1,2,,k2,o(h),otherwise.\mathrm{Pr}\{B(t+h)=b+j,{N}(t+h)=n+k|\,B(t)=b,{N}(t)=n\}\\ =\begin{cases}1-\left(\sum_{i=1}^{k_{1}}\lambda_{i}+\sum_{i=1}^{k_{2}}\mu_{i}\right)nh+o(h),\ j=k=0,\vspace{0.1cm}\\ n\lambda_{j}h+o(h),\ j=k=1,2,\ldots,k_{1},\vspace{0.1cm}\\ n\mu_{-k}h+o(h),\ j=0,\,-k=1,2,\ldots,k_{2},\vspace{0.1cm}\\ o(h),\ \mathrm{otherwise}.\end{cases}

So,

p(b,n,t+h)\displaystyle p(b,n,t+h) =(1(i=1k1λi+j=1k2μj)nh)p(b,n,t)+i=1k1(ni)λihp(bi,ni,t)\displaystyle=\left(1-\Bigg{(}\sum_{i=1}^{k_{1}}\lambda_{i}+\sum_{j=1}^{k_{2}}\mu_{j}\Bigg{)}nh\right)p(b,n,t)+\sum_{i=1}^{k_{1}}(n-i)\lambda_{i}hp(b-i,n-i,t)
+j=1k2(n+j)μjhp(b,n+j,t)+o(h).\displaystyle\ \ +\sum_{j=1}^{k_{2}}(n+j)\mu_{j}hp(b,n+j,t)+o(h).

Equivalently,

p(b,n,t+h)p(b,n,t)h\displaystyle\frac{p(b,n,t+h)-p(b,n,t)}{h} =(i=1k1λi+j=1k2μj)np(b,n,t)+i=1k1(ni)λip(bi,ni,t)\displaystyle=-\left(\sum_{i=1}^{k_{1}}\lambda_{i}+\sum_{j=1}^{k_{2}}\mu_{j}\right)np(b,n,t)+\sum_{i=1}^{k_{1}}(n-i)\lambda_{i}p(b-i,n-i,t)
+j=1k2(n+j)μjp(b,n+j,t)+o(h)h.\displaystyle\ \ +\sum_{j=1}^{k_{2}}(n+j)\mu_{j}p(b,n+j,t)+\frac{o(h)}{h}.

On letting h0h\to 0, we get the required result. ∎

Proposition 3.2.

The pgf G(u,v,t)=𝔼(uB(t)vN(t))G(u,v,t)=\mathbb{E}\left(u^{B(t)}v^{{N}(t)}\right), |u|1|u|\leq 1, |v|1|v|\leq 1 solves

tG(u,v,t)=(i=1k1λiv((uv)i1)+j=1k2μjv(vj1))vG(u,v,t)\frac{\partial}{\partial t}G(u,v,t)=\left(\sum_{i=1}^{k_{1}}\lambda_{i}v((uv)^{i}-1)+\sum_{j=1}^{k_{2}}\mu_{j}v(v^{-j}-1)\right)\frac{\partial}{\partial v}G(u,v,t) (3.3)

with G(u,v,0)=uv.G(u,v,0)=uv.

Proof.

Using (3.2), we get G(u,v,0)=b=0n=0p(b,n,0)ubvn=uvG(u,v,0)=\sum_{b=0}^{\infty}\sum_{n=0}^{\infty}p(b,n,0)u^{b}v^{n}=uv. On multiplying with ubvnu^{b}v^{n} on both sides of (3.1) and by adjusting the terms, we get

tG(u,v,t)\displaystyle\frac{\partial}{\partial t}G(u,v,t) =(i=1k1λi+j=1k2μj)vvubvnp(b,n,t)+i=1k1λiuivi+1vubivnip(bi,ni,t)\displaystyle=-\left(\sum_{i=1}^{k_{1}}\lambda_{i}+\sum_{j=1}^{k_{2}}\mu_{j}\right)v\frac{\partial}{\partial v}u^{b}v^{n}p(b,n,t)+\sum_{i=1}^{k_{1}}\lambda_{i}u^{i}v^{i+1}\frac{\partial}{\partial v}u^{b-i}v^{n-i}p(b-i,n-i,t)
+j=1k2μjvj+1vubvn+jp(b,n+j,t).\displaystyle\ \ +\sum_{j=1}^{k_{2}}\mu_{j}v^{-j+1}\frac{\partial}{\partial v}u^{b}v^{n+j}p(b,n+j,t).

On summing over b=0,1,2,b=0,1,2,\dots and n=k2,k2+1,n=-k_{2},-k_{2}+1,\dots, we get the required result. ∎

Let θu0\theta_{u}\leq 0 and θv0\theta_{v}\leq 0. On taking u=eθuu=e^{\theta_{u}} and v=eθvv=e^{\theta_{v}} in (3.3), we get the cumulant generating function (cgf) K(θu,θv,t)=lnG(eθu,eθv,t)K(\theta_{u},\theta_{v},t)=\ln G(e^{\theta_{u}},e^{\theta_{v}},t) as a solution of the following differential equation:

tK(θu,θv,t)=(i=1k1λi(ei(θu+θv)1)+j=1k2μj(ejθv1))θvK(θu,θv,t)\frac{\partial}{\partial t}K(\theta_{u},\theta_{v},t)=\left(\sum_{i=1}^{k_{1}}\lambda_{i}\left(e^{i(\theta_{u}+\theta_{v})}-1\right)+\sum_{j=1}^{k_{2}}\mu_{j}\left(e^{-j\theta_{v}}-1\right)\right)\frac{\partial}{\partial\theta_{v}}K(\theta_{u},\theta_{v},t) (3.4)

with K(θu,θv,0)=θu+θv.K(\theta_{u},\theta_{v},0)={\theta_{u}+\theta_{v}}.

In (3.4), we use the following expanded form of the cgf (see Kendall (1948), Eq. (35)):

K(θu,θv,t)=θu𝔼(B(t))+θv𝔼(N(t))+θu2/2Var(B(t))+θv2/2Var(N(t))+θuθvCov(B(t),N(t))+.K(\theta_{u},\theta_{v},t)=\theta_{u}\mathbb{E}(B(t))+\theta_{v}\mathbb{E}(N(t))+\theta_{u}^{2}/2\mathrm{Var}(B(t))+\theta_{v}^{2}/2\mathrm{Var}(N(t))+\theta_{u}\theta_{v}\mathrm{Cov}(B(t),N(t))+\dots. (3.5)

to obtain

ddt(θu𝔼(B(t))\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}\bigg{(}\theta_{u}\mathbb{E}(B(t)) +θv𝔼(N(t))+θu22Var(B(t))+θv22Var(N(t))+θuθvCov(B(t),N(t))+)\displaystyle+\theta_{v}\mathbb{E}({N}(t))+\frac{\theta_{u}^{2}}{2}\mathrm{Var}(B(t))+\frac{\theta_{v}^{2}}{2}\mathrm{Var}({N}(t))+\theta_{u}\theta_{v}\mathrm{Cov}(B(t),{N}(t))+\cdots\bigg{)}
=l=1(i=1k1λi((θu+θv)i)ll!+j=1k2μj(θvj)ll!)(𝔼(N(t))+θvVar(N(t))\displaystyle=\sum_{l=1}^{\infty}\left(\sum_{i=1}^{k_{1}}\lambda_{i}\frac{((\theta_{u}+\theta_{v})i)^{l}}{l!}+\sum_{j=1}^{k_{2}}\mu_{j}\frac{(-\theta_{v}j)^{l}}{l!}\right)\big{(}\mathbb{E}({N}(t))+\theta_{v}\mathrm{Var}({N}(t))
+θuCov(B(t),N(t))+).\displaystyle\ \ +\theta_{u}\mathrm{Cov}(B(t),{N}(t))+\cdots\big{)}. (3.6)
Proposition 3.3.

For t0t\geq 0, we have
(i) 𝔼(N(t))={eηt,η0,1,η=0,\mathbb{E}({N}(t))=\begin{cases}e^{\eta t},\ \eta\neq 0,\vspace{0.1cm}\\ 1,\ \eta=0,\end{cases}
(ii) Var(N(t))={ζ(e2ηteηt)η,η0,ζt,η=0,\mathrm{Var}({N}(t))=\begin{cases}\frac{\zeta(e^{2\eta t}-e^{\eta t})}{\eta},\ \eta\neq 0,\vspace{0.1cm}\\ \zeta t,\ \eta=0,\end{cases}
(iii) Cov(B(t),N(t))={ζi=1k1iλiη2(e2ηteηt)ξηteηt,η0,i=1k1i2λit+ζi=1k1iλit22,η=0,\mathrm{Cov}(B(t),{N}(t))=\begin{cases}\frac{\zeta\sum_{i=1}^{k_{1}}i\lambda_{i}}{\eta^{2}}\left(e^{2\eta t}-e^{\eta t}\right)-\frac{\xi}{\eta}te^{\eta t},\ \eta\neq 0,\vspace{0.1cm}\\ \sum_{i=1}^{k_{1}}i^{2}\lambda_{i}t+\zeta\sum_{i=1}^{k_{1}}i\lambda_{i}\frac{t^{2}}{2},\ \eta=0,\end{cases}

(iv) 𝔼(B(t))={i=1k1iλiηeηtj=1k2jμjη,η0,i=1k1iλit+1,η=0,\mathbb{E}(B(t))=\begin{cases}\frac{\sum_{i=1}^{k_{1}}i\lambda_{i}}{\eta}e^{\eta t}-\frac{\sum_{j=1}^{k_{2}}j\mu_{j}}{\eta},\ \eta\neq 0,\vspace{0.1cm}\\ \sum_{i=1}^{k_{1}}i\lambda_{i}t+1,\ \eta=0,\end{cases}

(v) \mathrm{Var}(B(t))=\begin{cases}\frac{\sum_{i=1}^{k_{1}}i^{2}\lambda_{i}}{\eta}(e^{\eta t}-1)+2\sum_{i=1}^{k_{1}}i\lambda_{i}\bigg(\frac{\zeta\sum_{i=1}^{k_{1}}i\lambda_{i}}{2\eta^{3}}\left(e^{\eta t}-1\right)^{2}-\frac{\xi}{\eta^{3}}\left(\eta te^{\eta t}-e^{\eta t}+1\right)\bigg),\ \eta\neq 0,\\ \sum_{i=1}^{k_{1}}i^{2}\lambda_{i}t+\sum_{i=1}^{k_{1}}i\lambda_{i}\left(\sum_{i=1}^{k_{1}}i^{2}\lambda_{i}{t^{2}}+\zeta\sum_{i=1}^{k_{1}}i\lambda_{i}\frac{t^{3}}{3}\right),\ \eta=0,\end{cases} where \eta=\sum_{i=1}^{k_{1}}i\lambda_{i}-\sum_{j=1}^{k_{2}}j\mu_{j}, \zeta=\sum_{i=1}^{k_{1}}i^{2}\lambda_{i}+\sum_{j=1}^{k_{2}}j^{2}\mu_{j} and \xi=\sum_{i=1}^{k_{1}}i^{2}\lambda_{i}\sum_{j=1}^{k_{2}}j\mu_{j}+\sum_{i=1}^{k_{1}}i\lambda_{i}\sum_{j=1}^{k_{2}}j^{2}\mu_{j}.

Proof.

On comparing the coefficients on both sides of (3.6), we get

ddt𝔼(N(t))\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}\mathbb{E}({N}(t)) ={η𝔼(N(t)),η0,0,η=0.\displaystyle=\begin{cases}\eta\mathbb{E}({N}(t)),\ \eta\neq 0,\vspace{0.1cm}\\ 0,\ \eta=0.\end{cases}
ddtVar(N(t))\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}\mathrm{Var}({N}(t)) ={2ηVar(N(t))+ζ𝔼(N(t)),η0,ζ,η=0.\displaystyle=\begin{cases}2\eta\mathrm{Var}({N}(t))+\zeta\mathbb{E}({N}(t)),\ \eta\neq 0,\vspace{0.1cm}\\ \zeta,\ \eta=0.\end{cases}
ddtCov(B(t),N(t))\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}\mathrm{Cov}(B(t),{N}(t)) ={i=1k1i2λi𝔼(N(t))+i=1k1iλiVar(N(t))+ηCov(B(t),N(t)),η0,i=1k1i2λi+i=1k1iλiζt,η=0.\displaystyle=\begin{cases}\sum_{i=1}^{k_{1}}i^{2}\lambda_{i}\mathbb{E}({N}(t))+\sum_{i=1}^{k_{1}}i\lambda_{i}\mathrm{Var}({N}(t))+\eta\mathrm{Cov}(B(t),{N}(t)),\ \eta\neq 0,\vspace{0.1cm}\\ \sum_{i=1}^{k_{1}}i^{2}\lambda_{i}+\sum_{i=1}^{k_{1}}i\lambda_{i}\zeta t,\ \eta=0.\end{cases}
ddt𝔼(B(t))\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}\mathbb{E}(B(t)) ={i=1k1iλi𝔼(N(t)),η0,i=1k1iλi,η=0,\displaystyle=\begin{cases}\sum_{i=1}^{k_{1}}i\lambda_{i}\mathbb{E}({N}(t)),\ \eta\neq 0,\vspace{0.1cm}\\ \sum_{i=1}^{k_{1}}i\lambda_{i},\ \eta=0,\end{cases}

and

ddtVar(B(t))=i=1k1i2λi𝔼(N(t))+2i=1k1iλiCov(B(t),N(t)).\frac{\mathrm{d}}{\mathrm{d}t}\mathrm{Var}(B(t))=\sum_{i=1}^{k_{1}}i^{2}\lambda_{i}\mathbb{E}({N}(t))+2\sum_{i=1}^{k_{1}}i\lambda_{i}\mathrm{Cov}(B(t),{N}(t)).

On solving the above differential equations with initial conditions 𝔼(N(0))=1\mathbb{E}({N}(0))=1, Var(N(0))=0\mathrm{Var}({N}(0))=0, Cov(B(0),N(0))=0\mathrm{Cov}(B(0),{N}(0))=0, 𝔼(B(0))=1\mathbb{E}(B(0))=1 and Var(B(0))=0\mathrm{Var}(B(0))=0, respectively, we get the required results. ∎

On using (ii), (iii) and (v) of Proposition 3.3, we can obtain the correlation coefficient between B(t)B(t) and N(t)N(t) as follows:

R_{B}(t)=\mathrm{Corr}(B(t),N(t))=\frac{\mathrm{Cov}(B(t),N(t))}{\sqrt{\mathrm{Var}(B(t))\mathrm{Var}(N(t))}}.
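For \eta\neq 0, the closed forms in Proposition 3.3 give R_{B}(t) directly. A small evaluation sketch in Python with illustrative rates:

```python
import math

def corr_B_N(lams, mus, t):
    """R_B(t) = Corr(B(t), N(t)) from Proposition 3.3, case eta != 0."""
    L1 = sum(i * l for i, l in enumerate(lams, 1))       # sum_i i*lambda_i
    L2 = sum(i * i * l for i, l in enumerate(lams, 1))   # sum_i i^2*lambda_i
    M1 = sum(j * m for j, m in enumerate(mus, 1))        # sum_j j*mu_j
    M2 = sum(j * j * m for j, m in enumerate(mus, 1))    # sum_j j^2*mu_j
    eta, zeta, xi = L1 - M1, L2 + M2, L2 * M1 + L1 * M2
    e1, e2 = math.exp(eta * t), math.exp(2 * eta * t)
    var_n  = zeta * (e2 - e1) / eta
    cov_bn = zeta * L1 / eta**2 * (e2 - e1) - xi / eta * t * e1
    var_b  = L2 / eta * (e1 - 1) + 2 * L1 * (zeta * L1 / (2 * eta**3) * (e1 - 1)**2
                                             - xi / eta**3 * (eta * t * e1 - e1 + 1))
    return cov_bn / math.sqrt(var_b * var_n)

print(corr_B_N([0.6, 0.2], [0.5, 0.1], t=2.0))           # eta = 0.3 > 0: strongly positive
```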
Remark 3.1.

The mean and variance of GLBDP obtained in Proposition 3.3 coincide with (2.15). From the means of N(t) and B(t), we get the following limiting results:

limt𝔼(N(t))={0,η<0,1,η=0,,η>0,\lim_{t\to\infty}\mathbb{E}({N}(t))=\begin{cases}0,\ \eta<0,\vspace{0.1cm}\\ 1,\ \eta=0,\vspace{0.1cm}\\ \infty,\ \eta>0,\end{cases}

and

limt𝔼(B(t))={,η0,j=1k2jμjη,η<0,\lim_{t\to\infty}\mathbb{E}(B(t))=\begin{cases}\infty,\ \eta\geq 0,\\ -\frac{\sum_{j=1}^{k_{2}}j\mu_{j}}{\eta},\ \eta<0,\end{cases}

respectively. Thus, in the long run, if \eta>0 then the population is expected to grow beyond any bound; for \eta<0, the expected number of cumulative births remains finite and the population is expected to eventually go extinct; and for \eta=0, infinitely many individuals are expected to be born, yet all of them except one are expected to die out.

In the case of k1=k2=2k_{1}=k_{2}=2, some numerical examples of cumulative births are given in Figure 1.

Figure 1. Cumulative births in GLBDP for k_{1}=k_{2}=2 at t=1.

For different values of \eta, the variation of the correlation coefficient R_{B}(t) with respect to t is shown in Figure 2. It can be observed that if \eta>0 then the correlation is higher for larger values of \eta and, for significantly large t, there is an almost perfect positive correlation between N(t) and B(t). For \eta<0, R_{B}(t) decreases with time and there is no linear correlation between N(t) and B(t) for very large values of t.

Figure 2. Correlation between N(t) and B(t) for k_{1}=k_{2}=2.

Let D(t) denote the total number of deaths by time t.

Proposition 3.4.

The joint pmf q(d,n,t)=Pr{D(t)=d,N(t)=n}q(d,n,t)=\mathrm{Pr}\{D(t)=d,{N}(t)=n\} is the solution of the following system of differential equations:

ddtq(d,n,t)\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}q(d,n,t) =n(i=1k1λi+j=1k2μj)q(d,n,t)+i=1k1(ni)λiq(d,ni,t)\displaystyle=-n\left(\sum_{i=1}^{k_{1}}\lambda_{i}+\sum_{j=1}^{k_{2}}\mu_{j}\right)q(d,n,t)+\sum_{i=1}^{k_{1}}(n-i)\lambda_{i}q(d,n-i,t)
+j=1k2(n+j)μjq(dj,n+j,t),d0,n0,\displaystyle\ \ +\sum_{j=1}^{k_{2}}(n+j)\mu_{j}q(d-j,n+j,t),\ d\geq 0,\,n\geq 0, (3.7)

with initial conditions q(0,1,0)=1q(0,1,0)=1 and q(d,n,0)=0q(d,n,0)=0 for all d0d\neq 0 and n1n\neq 1, where q(d,n,t)=0q(d,n,t)=0 for all d<0d<0 or n<0n<0.

Proof.

In an infinitesimal interval of length h such that o(h)/h\to 0 as h\to 0, we have

Pr{D(t+h)=d+j,N(t+h)=n+k|D(t)=d,N(t)=n}={1(i=1k1λi+i=1k2μi)nh+o(h),j=k=0,nλkh+o(h),j=0,k=1,2,,k1,nμjh+o(h),k=j,j=1,2,,k2,o(h),otherwise.\mathrm{Pr}\{D(t+h)=d+j,{N}(t+h)=n+k|D(t)=d,{N}(t)=n\}\\ =\begin{cases}1-\bigg{(}\sum_{i=1}^{k_{1}}\lambda_{i}+\sum_{i=1}^{k_{2}}\mu_{i}\bigg{)}nh+o(h),\ j=k=0,\vspace{0.1cm}\\ n\lambda_{k}h+o(h),\ j=0,\,k=1,2,\ldots,k_{1},\vspace{0.1cm}\\ n\mu_{j}h+o(h),\ k=-j,\,j=1,2,\ldots,k_{2},\vspace{0.1cm}\\ o(h),\ \mathrm{otherwise}.\end{cases}

So,

q(d,n,t+h)\displaystyle q(d,n,t+h) =(1(i=1k1λi+j=1k2μj)nh)q(d,n,t)+i=1k1(ni)hλip(d,ni,t)\displaystyle=\left(1-\left(\sum_{i=1}^{k_{1}}\lambda_{i}+\sum_{j=1}^{k_{2}}\mu_{j}\right)nh\right)q(d,n,t)+\sum_{i=1}^{k_{1}}(n-i)h\lambda_{i}p(d,n-i,t)
+j=1k2(n+j)hμjq(dj,n+j,t)+o(h).\displaystyle\ \ +\sum_{j=1}^{k_{2}}(n+j)h\mu_{j}q(d-j,n+j,t)+o(h).

After rearranging the terms and on letting h0h\to 0, we get the required result. ∎

The proof of the next result is along similar lines to that of Proposition 3.2; thus, it is omitted.

Proposition 3.5.

The pgf 𝒢(u,v,t)=𝔼(uD(t)vN(t))\mathcal{G}(u,v,t)=\mathbb{E}\left(u^{D(t)}v^{{N}(t)}\right), |u|1|u|\leq 1, |v|1|v|\leq 1 solves

t𝒢(u,v,t)=(i=1k1λiv(vi1)+j=1k2μjv((uv1)j1))v𝒢(u,v,t)\frac{\partial}{\partial t}\mathcal{G}(u,v,t)=\left(\sum_{i=1}^{k_{1}}\lambda_{i}v(v^{i}-1)+\sum_{j=1}^{k_{2}}\mu_{j}v((uv^{-1})^{j}-1)\right)\frac{\partial}{\partial v}\mathcal{G}(u,v,t)

with 𝒢(u,v,0)=v.\mathcal{G}(u,v,0)=v.

Thus, the cgf 𝒦(θu,θv,t)=ln𝒢(eθu,eθv,t)\mathcal{K}(\theta_{u},\theta_{v},t)=\ln\mathcal{G}(e^{\theta_{u}},e^{\theta_{v}},t), θu0\theta_{u}\leq 0, θv0\theta_{v}\leq 0 solves

t𝒦(θu,θv,t)=(i=1k1λi(eiθv1)+j=1k2μj(ej(θuθv)1))θv𝒦(θu,θv,t)\frac{\partial}{\partial t}\mathcal{K}(\theta_{u},\theta_{v},t)=\left(\sum_{i=1}^{k_{1}}\lambda_{i}(e^{i\theta_{v}}-1)+\sum_{j=1}^{k_{2}}\mu_{j}(e^{j(\theta_{u}-\theta_{v})}-1)\right)\frac{\partial}{\partial\theta_{v}}\mathcal{K}(\theta_{u},\theta_{v},t) (3.8)

with 𝒦(θu,θv,0)=θv.\mathcal{K}(\theta_{u},\theta_{v},0)={\theta_{v}}.

On using (3.5), we get

ddt(θu𝔼(D(t))\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}\big{(}\theta_{u}\mathbb{E}(D(t)) +θv𝔼(N(t))+θu22Var(D(t))+θv22Var(N(t))+θuθvCov(D(t),N(t))+)\displaystyle+\theta_{v}\mathbb{E}({N}(t))+\frac{\theta_{u}^{2}}{2}\mathrm{Var}(D(t))+\frac{\theta_{v}^{2}}{2}\mathrm{Var}({N}(t))+\theta_{u}\theta_{v}\mathrm{Cov}(D(t),{N}(t))+\cdots\big{)}
=l=1(i=1k1λi(iθv)ll!+j=1k2μj(j(θuθv))ll!)(𝔼(N(t))+θvVar(N(t))\displaystyle=\sum_{l=1}^{\infty}\bigg{(}\sum_{i=1}^{k_{1}}\lambda_{i}\frac{(i\theta_{v})^{l}}{l!}+\sum_{j=1}^{k_{2}}\mu_{j}\frac{(j(\theta_{u}-\theta_{v}))^{l}}{l!}\bigg{)}\big{(}\mathbb{E}({N}(t))+\theta_{v}\mathrm{Var}({N}(t))
+θuCov(D(t),N(t))+)\displaystyle\ \ +\theta_{u}\mathrm{Cov}(D(t),{N}(t))+\cdots\big{)} (3.9)

On comparing the coefficients on both sides of (3.9), we get

ddt𝔼(D(t))\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}\mathbb{E}(D(t)) ={j=1k2jμj𝔼(N(t)),η0,j=1k2jμj,η=0,\displaystyle=\begin{cases}\sum_{j=1}^{k_{2}}j\mu_{j}\mathbb{E}(N(t)),\ \eta\neq 0,\vspace{0.1cm}\\ \sum_{j=1}^{k_{2}}j\mu_{j},\ \eta=0,\end{cases}
ddtCov(D(t),N(t))\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}\mathrm{Cov}(D(t),{N}(t)) ={j=1k2(j2μj𝔼(N(t))+jμjVar(N(t)))+ηCov(D(t),N(t)),η0,j=1k2(j2μj+jμj(i=1k1i2λi+j=1k2j2μj)t),η=0,\displaystyle=\begin{cases}\sum_{j=1}^{k_{2}}\left(-j^{2}\mu_{j}\mathbb{E}({N}(t))+j\mu_{j}\mathrm{Var}({N}(t))\right)+\eta\mathrm{Cov}(D(t),{N}(t)),\ \eta\neq 0,\vspace{0.1cm}\\ \sum_{j=1}^{k_{2}}\left(-j^{2}\mu_{j}+j\mu_{j}\left(\sum_{i=1}^{k_{1}}i^{2}\lambda_{i}+\sum_{j=1}^{k_{2}}j^{2}\mu_{j}\right)t\right),\ \eta=0,\end{cases}

and

ddtVar(D(t))=j=1k2j2μj𝔼(N(t))+2j=1k2jμjCov(D(t),N(t)).\frac{\mathrm{d}}{\mathrm{d}t}\mathrm{Var}(D(t))=\sum_{j=1}^{k_{2}}j^{2}\mu_{j}\mathbb{E}(N(t))+2\sum_{j=1}^{k_{2}}j\mu_{j}\mathrm{Cov}(D(t),N(t)).

On solving these differential equations with initial conditions 𝔼(D(0))=0\mathbb{E}(D(0))=0, Cov(D(0),N(0))=0\mathrm{Cov}(D(0),{N}(0))=0 and Var(D(0))=0\mathrm{Var}(D(0))=0, respectively, we get the following result.

Proposition 3.6.

For t0t\geq 0, we have

(i) 𝔼(D(t))={j=1k2jμjη(eηt1),η0,j=1k2jμjt,η=0,\mathbb{E}(D(t))=\begin{cases}\frac{\sum_{j=1}^{k_{2}}j\mu_{j}}{\eta}(e^{\eta t}-1),\ \eta\neq 0,\vspace{0.1cm}\\ \sum_{j=1}^{k_{2}}j\mu_{j}t,\ \eta=0,\end{cases}
(ii) Cov(D(t),N(t))={ζj=1k2jμjη2(e2ηteηt)ξηteηt,η0,j=1k2j2μjt+ζj=1k2jμjt22,η=0,\mathrm{Cov}(D(t),{N}(t))=\begin{cases}\frac{\zeta\sum_{j=1}^{k_{2}}j\mu_{j}}{\eta^{2}}\left(e^{2\eta t}-e^{\eta t}\right)-\frac{\xi}{\eta}te^{\eta t},\ \eta\neq 0,\vspace{0.1cm}\\ -\sum_{j=1}^{k_{2}}j^{2}\mu_{j}t+\zeta\sum_{j=1}^{k_{2}}j\mu_{j}\frac{t^{2}}{2},\ \eta=0,\end{cases}
(iii) Var(D(t))={j=1k2j2μjη(eηt1)+2j=1k2jμj(ζj=1k2jμj2η3(eηt1)2ξη3(ηteηteηt+1)),η0,j=1k2j2μjt+j=1k2jμj(j=1k2j2μjt2+ζj=1k2jμjt33),η=0.\mathrm{Var}(D(t))=\begin{cases}\frac{\sum_{j=1}^{k_{2}}j^{2}\mu_{j}}{\eta}(e^{\eta t}-1)+2\sum_{j=1}^{k_{2}}j\mu_{j}\bigg{(}\frac{\zeta\sum_{j=1}^{k_{2}}j\mu_{j}}{2\eta^{3}}\left(e^{\eta t}-1\right)^{2}-\frac{\xi}{\eta^{3}}\left(\eta te^{\eta t}-e^{\eta t}+1\right)\bigg{)},\ \eta\neq 0,\vspace{0.1cm}\\ \sum_{j=1}^{k_{2}}j^{2}\mu_{j}t+\sum_{j=1}^{k_{2}}j\mu_{j}\left(-\sum_{j=1}^{k_{2}}j^{2}\mu_{j}{t^{2}}+\zeta\sum_{j=1}^{k_{2}}j\mu_{j}\frac{t^{3}}{3}\right),\ \eta=0.\end{cases}

Here, again using the results of Proposition 3.3 and Proposition 3.6, we can calculate the correlation coefficient {R_{D}}(t)=\mathrm{Corr}(D(t),N(t)) between D(t) and N(t). For k_{1}=k_{2}=2, some numerical examples of cumulative deaths are given in Figure 3.

Figure 3. Cumulative deaths in GLBDP for k_{1}=k_{2}=2 at time t=1.

From Figure 4, we can observe that for \eta>0, as t increases, the correlation coefficient R_{D}(t) changes from negative to positive and it is strong for large values of t. For \eta<0, there is no correlation between N(t) and D(t).

Figure 4. Correlation between N(t) and D(t) for k_{1}=k_{2}=2.
Proposition 3.7.

The joint pmf p(d,b,n,t)=Pr{D(t)=d,B(t)=b,N(t)=n}p(d,b,n,t)=\mathrm{Pr}\{D(t)=d,B(t)=b,{N}(t)=n\} is the solution of the following system of differential equations:

ddtp(d,b,n,t)\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}p(d,b,n,t) =(i=1k1λi+j=1k2μj)np(d,b,n,t)+i=1k1(ni)λip(d,bi,ni,t)\displaystyle=-\bigg{(}\sum_{i=1}^{k_{1}}\lambda_{i}+\sum_{j=1}^{k_{2}}\mu_{j}\bigg{)}np(d,b,n,t)+\sum_{i=1}^{k_{1}}(n-i)\lambda_{i}p(d,b-i,n-i,t)
+j=1k2(n+j)μjp(dj,b,n+j,t),d0,b1,n0,\displaystyle\ \ +\sum_{j=1}^{k_{2}}(n+j)\mu_{j}p(d-j,b,n+j,t),\ d\geq 0,\,b\geq 1,\,n\geq 0, (3.10)

with initial conditions p(0,1,1,0)=1p(0,1,1,0)=1 and p(d,b,n,0)=0p(d,b,n,0)=0 for all d0d\neq 0, b1b\neq 1 and n1n\neq 1, where p(d,b,n,t)=0p(d,b,n,t)=0 for d<0d<0 or b<0b<0 or n<0n<0.

Proof.

Let hh be the length of an infinitesimal interval such that o(h)/h0o(h)/h\to 0 as h0h\to 0. Then,

Pr{D(t+h)=d+j,B(t+h)=b+k,N(t+h)=n+m|D(t)=d,B(t)=b,N(t)=n}={1(i=1k1λi+i=1k2μi)nh+o(h),j=k=m=0,nλkh+o(h),j=0,k=m=1,2,,k1,nμjh+o(h),k=0,m=j,j=1,2,,k2,o(h),otherwise.\mathrm{Pr}\{D(t+h)=d+j,B(t+h)=b+k,{N}(t+h)=n+m|D(t)=d,B(t)=b,{N}(t)=n\}\\ =\begin{cases}1-\left(\sum_{i=1}^{k_{1}}\lambda_{i}+\sum_{i=1}^{k_{2}}\mu_{i}\right)nh+o(h),\ j=k=m=0,\vspace{0.1cm}\\ n\lambda_{k}h+o(h),\ j=0,\,k=m=1,2,\ldots,k_{1},\vspace{0.1cm}\\ n\mu_{j}h+o(h),\ k=0,\,m=-j,\,j=1,2,\ldots,k_{2},\vspace{0.1cm}\\ o(h),\ \mathrm{otherwise}.\end{cases}

So,

p(d,b,n,t+h)\displaystyle p(d,b,n,t+h) =(1(i=1k1λi+j=1k2μj)nh)p(d,b,n,t)+i=1k1(ni)λihp(d,bi,ni,t)\displaystyle=\bigg{(}1-\bigg{(}\sum_{i=1}^{k_{1}}\lambda_{i}+\sum_{j=1}^{k_{2}}\mu_{j}\bigg{)}nh\bigg{)}p(d,b,n,t)+\sum_{i=1}^{k_{1}}(n-i)\lambda_{i}hp(d,b-i,n-i,t)
+j=1k2(n+j)μjhp(dj,b,n+j,t)+o(h).\displaystyle\ \ +\sum_{j=1}^{k_{2}}(n+j)\mu_{j}hp(d-j,b,n+j,t)+o(h).

After rearranging the terms and on letting h0h\to 0, we get the required result. ∎

The pgf U(u,v,w,t)=𝔼(uD(t)vB(t)wN(t))U(u,v,w,t)=\mathbb{E}(u^{D(t)}v^{B(t)}w^{{N}(t)}), |u|1|u|\leq 1, |v|1|v|\leq 1, |w|1|w|\leq 1 of (D(t),B(t),N(t))(D(t),B(t),N(t)) satisfies the following differential equation:

tU(u,v,w,t)=(i=1k1λiw((vw)i1)+j=1k2μjw((uw1)j1))wU(u,v,w,t)\frac{\partial}{\partial t}U(u,v,w,t)=\bigg{(}\sum_{i=1}^{k_{1}}\lambda_{i}w((vw)^{i}-1)+\sum_{j=1}^{k_{2}}\mu_{j}w((uw^{-1})^{j}-1)\bigg{)}\frac{\partial}{\partial w}U(u,v,w,t)

with initial condition U(u,v,w,0)=vw.U(u,v,w,0)=vw.

As N(t)=B(t)D(t)N(t)=B(t)-D(t), t0t\geq 0, we have Var(N(t))=Var(D(t))+Var(B(t))2Cov(D(t),B(t)).\mathrm{Var}(N(t))=\mathrm{Var}(D(t))+\mathrm{Var}(B(t))-2\mathrm{Cov}(D(t),B(t)). So, the covariance of B(t)B(t) and D(t)D(t), t0t\geq 0 is

Cov(D(t),B(t))\displaystyle\mathrm{Cov}(D(t),B(t)) =12(Var(D(t))+Var(B(t))Var(N(t)))\displaystyle=\frac{1}{2}(\mathrm{Var}(D(t))+\mathrm{Var}(B(t))-\mathrm{Var}(N(t)))
=12(ζη3((i=1k1iλi)2+(j=1k2jμj)2)(eηt1)2ζη(eηt1)2\displaystyle=\frac{1}{2}\Bigg{(}\frac{\zeta}{\eta^{3}}\Bigg{(}\bigg{(}\sum_{i=1}^{k_{1}}i\lambda_{i}\bigg{)}^{2}+\bigg{(}\sum_{j=1}^{k_{2}}j\mu_{j}\bigg{)}^{2}\Bigg{)}\left(e^{\eta t}-1\right)^{2}-\frac{\zeta}{\eta}\left(e^{\eta t}-1\right)^{2}
2ξη3(i=1k1iλi+j=1k2jμj)(ηteηteηt+1))\displaystyle\ \ -\frac{2\xi}{\eta^{3}}\Bigg{(}\sum_{i=1}^{k_{1}}i\lambda_{i}+\sum_{j=1}^{k_{2}}j\mu_{j}\Bigg{)}\left(\eta te^{\eta t}-e^{\eta t}+1\right)\Bigg{)}
\displaystyle=\frac{\zeta\sum_{i=1}^{k_{1}}\sum_{j=1}^{k_{2}}ij\lambda_{i}\mu_{j}}{\eta^{3}}\left(e^{\eta t}-1\right)^{2}-\frac{\xi}{\eta^{3}}\Bigg(\sum_{i=1}^{k_{1}}i\lambda_{i}+\sum_{j=1}^{k_{2}}j\mu_{j}\Bigg)\left(\eta te^{\eta t}-e^{\eta t}+1\right),\ \eta\neq 0,

which can further be used to calculate the correlation coefficient R(t)=Corr(D(t),B(t))R(t)=\mathrm{Corr}(D(t),B(t)).

Next, we carry out a similar study for \{N^{*}(t)\}_{t\geq 0}, that is, the GBDP with constant birth and death rates.

3.1. GBDP with constant birth and death rates

Let B(t)B^{*}(t) and D(t)D^{*}(t) be the total number of births and deaths up to time tt in the process {N(t)}t0\{N^{*}(t)\}_{t\geq 0}, respectively. Similar to the result of Proposition 3.7, the joint pmf p(d,b,n,t)=Pr{D(t)=d,B(t)=b,N(t)=n}p^{*}(d,b,n,t)=\mathrm{Pr}\{D^{*}(t)=d,B^{*}(t)=b,N^{*}(t)=n\} solves the following system of differential equations:

ddtp(d,b,n,t)\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}p^{*}(d,b,n,t) =(i=1k1λi+j=1k2μj)p(d,b,n,t)+i=1k1λip(d,bi,ni,t)\displaystyle=-\bigg{(}\sum_{i=1}^{k_{1}}\lambda_{i}+\sum_{j=1}^{k_{2}}\mu_{j}\bigg{)}p^{*}(d,b,n,t)+\sum_{i=1}^{k_{1}}\lambda_{i}p^{*}(d,b-i,n-i,t)
+j=1k2μjp(dj,b,n+j,t),d0,b1,n0\displaystyle\ \ +\sum_{j=1}^{k_{2}}\mu_{j}p^{*}(d-j,b,n+j,t),\ d\geq 0,\,b\geq 1,\,n\geq 0

with initial conditions p(0,1,1,0)=1p^{*}(0,1,1,0)=1 and p(d,b,n,0)=0p^{*}(d,b,n,0)=0 for all (d,b,n)(0,1,1)(d,b,n)\neq(0,1,1).

So, the pgf U(u,v,w,t)=𝔼(uD(t)vB(t)wN(t))U^{*}(u,v,w,t)=\mathbb{E}(u^{D^{*}(t)}v^{B^{*}(t)}w^{N^{*}(t)}), |u|1|u|\leq 1, |v|1|v|\leq 1, |w|1|w|\leq 1 is the solution of the following differential equation:

ddtU(u,v,w,t)=(i=1k1λi((vw)i1)+j=1k2μj((uw1)j1))U(u,v,w,t)\frac{\mathrm{d}}{\mathrm{d}t}U^{*}(u,v,w,t)=\left(\sum_{i=1}^{k_{1}}\lambda_{i}\left((vw)^{i}-1\right)+\sum_{j=1}^{k_{2}}\mu_{j}\left((uw^{-1})^{j}-1\right)\right)U^{*}(u,v,w,t)

with U(u,v,w,0)=vwU^{*}(u,v,w,0)=vw. It is given by

U(u,v,w,t)=vweΛtexp(i=1k1λi(vw)i+j=1k2μj(uw1)j)t.U^{*}(u,v,w,t)=vwe^{-\Lambda t}\exp{\left(\sum_{i=1}^{k_{1}}\lambda_{i}(vw)^{i}+\sum_{j=1}^{k_{2}}\mu_{j}(uw^{-1})^{j}\right)t}. (3.11)
Theorem 3.1.

The joint pmf p(d,b,n,t)=Pr{D(t)=d,B(t)=b,N(t)=n}p^{*}(d,b,n,t)=\mathrm{Pr}\{D^{*}(t)=d,B^{*}(t)=b,N^{*}(t)=n\} is given by

p(d,b,n,t)=Ω(k1,k2,d,b,n)i=1k1j=1k2λixiμjyjtxi+yjxi!yj!eΛt,d0,b1,n0,p^{*}(d,b,n,t)=\sum_{\Omega^{*}(k_{1},k_{2},d,b,n)}\prod_{i=1}^{k_{1}}\prod_{j=1}^{k_{2}}\frac{\lambda_{i}^{x_{i}}\mu_{j}^{y_{j}}t^{x_{i}+y_{j}}}{x_{i}!y_{j}!}e^{-\Lambda t},\ d\geq 0,\,b\geq 1,\,n\geq 0, (3.12)

where Ω(k1,k2,d,b,n)={(x1,,xk1,y1,,yk2)0k1+k2:j=1k2jyj=d,i=1k1ixi=b1,bd=n}.\Omega^{*}(k_{1},k_{2},d,b,n)=\bigg{\{}(x_{1},\ldots,x_{k_{1}},y_{1},\ldots,y_{k_{2}})\in\mathbb{N}_{0}^{k_{1}+k_{2}}:\sum_{j=1}^{k_{2}}jy_{j}=d,\,\sum_{i=1}^{k_{1}}ix_{i}=b-1,\,b-d=n\bigg{\}}.

Proof.

From (3.11), we have

U(u,v,w,t)=vweΛtl=0(i=1k1λi(vw)i+j=1k2μj(uw1)j)ltll!.U^{*}(u,v,w,t)=vwe^{-\Lambda t}\sum_{l=0}^{\infty}\bigg{(}\sum_{i=1}^{k_{1}}\lambda_{i}(vw)^{i}+\sum_{j=1}^{k_{2}}\mu_{j}(uw^{-1})^{j}\bigg{)}^{l}\frac{t^{l}}{l!}.

Equivalently,

U(u,v,w,t)=eΛtl=0S(k1,k2,l)i=1k1j=1k2λixiμjyjtxi+yjxi!yj!ujyjvixi+1wixijyj+1,U^{*}(u,v,w,t)=e^{-\Lambda t}\sum_{l=0}^{\infty}\sum_{S^{*}(k_{1},k_{2},l)}\prod_{i=1}^{k_{1}}\prod_{j=1}^{k_{2}}\frac{\lambda_{i}^{x_{i}}\mu_{j}^{y_{j}}t^{x_{i}+y_{j}}}{x_{i}!y_{j}!}u^{jy_{j}}v^{ix_{i}+1}w^{ix_{i}-jy_{j}+1},

where S(k1,k2,l)={(x1,,xk1,y1,,yk2):xi,yj{0,1,,l},i=1k1xi+j=1k2yj=l}.S^{*}(k_{1},k_{2},l)=\big{\{}(x_{1},\ldots,x_{k_{1}},y_{1},\ldots,y_{k_{2}}):x_{i},y_{j}\in\{0,1,\ldots,l\},\,\sum_{i=1}^{k_{1}}x_{i}+\sum_{j=1}^{k_{2}}y_{j}=l\big{\}}. So, the coefficient of udvbwnu^{d}v^{b}w^{n} in U(u,v,w,t)U^{*}(u,v,w,t) gives the required result. ∎
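For small values of k1 and k2, the pmf (3.12) can be evaluated by direct enumeration. The Python sketch below uses hypothetical rates and reads the factor e^{-Λt} in (3.12) as a single overall multiplicative factor; since the constraints on the x's and y's in Ω*(k1,k2,d,b,n) separate, the sum factorises into a birth part and a death part. Summing the pmf over a large truncation of (d,b) gives a total mass close to one, which serves as a basic sanity check.

import math
from itertools import product

# hypothetical constant rates (k1 = k2 = 2)
lam = [0.5, 0.3]
mu = [0.4, 0.1]
Lam = sum(lam) + sum(mu)

def jump_count_sum(rates, target, t):
    # sum over count vectors (c_1, ..., c_k) with sum_i i*c_i = target of
    # prod_i (rates[i-1] * t)**c_i / c_i!
    ranges = [range(target // i + 1) for i in range(1, len(rates) + 1)]
    total = 0.0
    for c in product(*ranges):
        if sum((i + 1) * ci for i, ci in enumerate(c)) == target:
            total += math.prod((rates[i] * t) ** ci / math.factorial(ci)
                               for i, ci in enumerate(c))
    return total

def p_star(d, b, n, t):
    # joint pmf (3.12); the constraints on the x's and y's separate, so the sum
    # over Omega* factorises into a birth part (b - 1) and a death part (d)
    if b < 1 or d < 0 or n != b - d:
        return 0.0
    return math.exp(-Lam * t) * jump_count_sum(lam, b - 1, t) * jump_count_sum(mu, d, t)

t = 0.8
mass = sum(p_star(d, b, b - d, t) for d in range(25) for b in range(1, 26))
print("total probability mass over the truncation:", mass)   # should be close to 1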

Remark 3.2.

From (3.12), we get the following joint pmfs

p(b,n,t)\displaystyle p^{*}(b,n,t) =Pr{B(t)=b,N(t)=n}=Ω(k1,k2,b,n)i=1k1j=1k2λixiμjyjtxi+yjxi!yj!eΛt,b1,n0,\displaystyle=\mathrm{Pr}\{B^{*}(t)=b,N^{*}(t)=n\}=\sum_{\Omega^{*}(k_{1},k_{2},b,n)}\prod_{i=1}^{k_{1}}\prod_{j=1}^{k_{2}}\frac{\lambda_{i}^{x_{i}}\mu_{j}^{y_{j}}t^{x_{i}+y_{j}}}{x_{i}!y_{j}!}e^{-\Lambda t},\ b\geq 1,\,n\geq 0,
q(d,n,t)\displaystyle q^{*}(d,n,t) =Pr{D(t)=d,N(t)=n}=Ω(k1,k2,d,n)i=1k1j=1k2λixiμjyjtxi+yjxi!yj!eΛt,d0,n0,\displaystyle=\mathrm{Pr}\{D^{*}(t)=d,N^{*}(t)=n\}=\sum_{\Omega^{**}(k_{1},k_{2},d,n)}\prod_{i=1}^{k_{1}}\prod_{j=1}^{k_{2}}\frac{\lambda_{i}^{x_{i}}\mu_{j}^{y_{j}}t^{x_{i}+y_{j}}}{x_{i}!y_{j}!}e^{-\Lambda t},\ d\geq 0,\,n\geq 0,

and

p~(d,b,t)=Pr{D(t)=d,B(t)=b}=Ω(k1,k2,d,b)i=1k1j=1k2λixiμjyjtxi+yjxi!yj!eΛt,d0,b1,\tilde{p}^{*}(d,b,t)=\mathrm{Pr}\{D^{*}(t)=d,B^{*}(t)=b\}=\sum_{\Omega^{***}(k_{1},k_{2},d,b)}\prod_{i=1}^{k_{1}}\prod_{j=1}^{k_{2}}\frac{\lambda_{i}^{x_{i}}\mu_{j}^{y_{j}}t^{x_{i}+y_{j}}}{x_{i}!y_{j}!}e^{-\Lambda t},\ d\geq 0,\,b\geq 1,

where

Ω(k1,k2,b,n)\displaystyle\Omega^{*}(k_{1},k_{2},b,n) ={(x1,,xk1,y1,,yk2)0k1+k2:i=1k1ixi=b1,i=1k1ixij=1k2jyj=n1},\displaystyle=\bigg{\{}(x_{1},\ldots,x_{k_{1}},y_{1},\ldots,y_{k_{2}})\in\mathbb{N}_{0}^{k_{1}+k_{2}}:\sum_{i=1}^{k_{1}}ix_{i}=b-1,\,\sum_{i=1}^{k_{1}}ix_{i}-\sum_{j=1}^{k_{2}}jy_{j}=n-1\bigg{\}},
Ω(k1,k2,d,n)\displaystyle\Omega^{**}(k_{1},k_{2},d,n) ={(x1,,xk1,y1,,yk2)0k1+k2:j=1k2jyj=d,i=1k1ixij=1k2jyj=n1},\displaystyle=\bigg{\{}(x_{1},\ldots,x_{k_{1}},y_{1},\ldots,y_{k_{2}})\in\mathbb{N}_{0}^{k_{1}+k_{2}}:\sum_{j=1}^{k_{2}}jy_{j}=d,\,\sum_{i=1}^{k_{1}}ix_{i}-\sum_{j=1}^{k_{2}}jy_{j}=n-1\bigg{\}},

and

Ω(k1,k2,d,b)={(x1,,xk1,y1,,yk2)0k1+k2:j=1k2jyj=d,i=1k1ixi=b1}.\Omega^{***}(k_{1},k_{2},d,b)=\bigg{\{}(x_{1},\ldots,x_{k_{1}},y_{1},\ldots,y_{k_{2}})\in\mathbb{N}_{0}^{k_{1}+k_{2}}:\sum_{j=1}^{k_{2}}jy_{j}=d,\,\sum_{i=1}^{k_{1}}ix_{i}=b-1\bigg{\}}.
Remark 3.3.

From (3.12), we obtain the following marginal pmfs:

p~(d,t)=Pr{D(t)=d}=Ω(k2,d)j=1k2(μjt)yjyj!ej=1k2μjt,d0,\tilde{p}^{*}(d,t)=\mathrm{Pr}\{D^{*}(t)=d\}=\sum_{\Omega^{*}(k_{2},d)}\prod_{j=1}^{k_{2}}\frac{(\mu_{j}t)^{y_{j}}}{y_{j}!}e^{-\sum_{j=1}^{k_{2}}\mu_{j}t},\ d\geq 0,

and

p(b,t)=Pr{B(t)=b}=Ω(k1,b)i=1k1(λit)xixi!ei=1k1λit,b1,{p}^{*}(b,t)=\mathrm{Pr}\{B^{*}(t)=b\}=\sum_{\Omega^{**}(k_{1},b)}\prod_{i=1}^{k_{1}}\frac{(\lambda_{i}t)^{x_{i}}}{x_{i}!}e^{-\sum_{i=1}^{k_{1}}\lambda_{i}t},\ b\geq 1,

where Ω(k2,d)={(y1,,yk2)0k2:j=1k2jyj=d}\Omega^{*}(k_{2},d)=\bigg{\{}(y_{1},\ldots,y_{k_{2}})\in\mathbb{N}_{0}^{k_{2}}:\sum_{j=1}^{k_{2}}jy_{j}=d\bigg{\}} and Ω(k1,b)={(x1,,xk1)0k1:i=1k1ixi=b1}.\Omega^{**}(k_{1},b)=\bigg{\{}(x_{1},\ldots,x_{k_{1}})\in\mathbb{N}_{0}^{k_{1}}:\sum_{i=1}^{k_{1}}ix_{i}=b-1\bigg{\}}.

Remark 3.4.

Note that the joint pmf of (D(t),B(t))(D^{*}(t),B^{*}(t)) is the product of their marginal pmfs. Hence, {D(t)}t0{\{D^{*}(t)\}}_{t\geq 0} and {B(t)}t0{\{B^{*}(t)\}}_{t\geq 0} are two independent random processes.

Proposition 3.8.

For t0t\geq 0, we have
(i) 𝔼(B(t))=1+i=1k1iλit\mathbb{E}(B^{*}(t))=1+\sum_{i=1}^{k_{1}}i\lambda_{i}t,
(ii) 𝔼(D(t))=j=1k2jμjt\mathbb{E}(D^{*}(t))=\sum_{j=1}^{k_{2}}j\mu_{j}t,
(iii) Var(B(t))=i=1k1i2λit\mathrm{Var}(B^{*}(t))=\sum_{i=1}^{k_{1}}i^{2}\lambda_{i}t,
(iv) Var(D(t))=j=1k2j2μjt\mathrm{Var}(D^{*}(t))=\sum_{j=1}^{k_{2}}j^{2}\mu_{j}t.

Proof.

On taking derivatives of (3.11) with respect to uu and vv, we get

uU(u,v,w,t)\displaystyle\frac{\partial}{\partial u}U^{*}(u,v,w,t) =vweΛte(i=1k1λi(vw)i+j=1k2μj(uw1)j)tj=1k2jμjuj1wjt,\displaystyle=vwe^{-\Lambda t}e^{\left(\sum_{i=1}^{k_{1}}\lambda_{i}(vw)^{i}+\sum_{j=1}^{k_{2}}\mu_{j}(uw^{-1})^{j}\right)t}\sum_{j=1}^{k_{2}}j\mu_{j}u^{j-1}w^{-j}t,
2u2U(u,v,w,t)\displaystyle\frac{\partial^{2}}{\partial u^{2}}U^{*}(u,v,w,t) =vweΛte(i=1k1λi(vw)i+j=1k2μj(uw1)j)t((j=1k2jμjuj1wj)2t2\displaystyle=vwe^{-\Lambda t}e^{\left(\sum_{i=1}^{k_{1}}\lambda_{i}(vw)^{i}+\sum_{j=1}^{k_{2}}\mu_{j}(uw^{-1})^{j}\right)t}\Bigg{(}\Bigg{(}\sum_{j=1}^{k_{2}}j\mu_{j}u^{j-1}w^{-j}\Bigg{)}^{2}t^{2}
+j=1k2j(j1)μjuj2wjt),\displaystyle\ \ +\sum_{j=1}^{k_{2}}j(j-1)\mu_{j}u^{j-2}w^{-j}t\Bigg{)},
vU(u,v,w,t)\displaystyle\frac{\partial}{\partial v}U^{*}(u,v,w,t) =weΛte(i=1k1λi(vw)i+j=1k2μj(uw1)j)t(1+i=1k1iλi(vw)it),\displaystyle=we^{-\Lambda t}e^{\left(\sum_{i=1}^{k_{1}}\lambda_{i}(vw)^{i}+\sum_{j=1}^{k_{2}}\mu_{j}(uw^{-1})^{j}\right)t}\left(1+\sum_{i=1}^{k_{1}}i\lambda_{i}(vw)^{i}t\right),

and

2v2U(u,v,w,t)\displaystyle\frac{\partial^{2}}{\partial v^{2}}U^{*}(u,v,w,t) =weΛte(i=1k1λi(vw)i+j=1k2μj(uw1)j)t(i=1k1i2λivi1wit\displaystyle=we^{-\Lambda t}e^{\left(\sum_{i=1}^{k_{1}}\lambda_{i}(vw)^{i}+\sum_{j=1}^{k_{2}}\mu_{j}(uw^{-1})^{j}\right)t}\Bigg{(}\sum_{i=1}^{k_{1}}i^{2}\lambda_{i}v^{i-1}w^{i}t
+(1+i=1k1iλi(vw)it)i=1k1iλivi1wit).\displaystyle\ \ +\Bigg{(}1+\sum_{i=1}^{k_{1}}i\lambda_{i}(vw)^{i}t\Bigg{)}\sum_{i=1}^{k_{1}}i\lambda_{i}v^{i-1}w^{i}t\Bigg{)}.

So,

𝔼(D(t))\displaystyle\mathbb{E}(D^{*}(t)) =uU(u,v,w,t)|u=1,v=1,w=1=j=1k2jμjt,\displaystyle=\frac{\partial}{\partial u}U^{*}(u,v,w,t)\bigg{|}_{u=1,v=1,w=1}=\sum_{j=1}^{k_{2}}j\mu_{j}t,
𝔼(B(t))\displaystyle\mathbb{E}(B^{*}(t)) =vU(u,v,w,t)|u=1,v=1,w=1=1+i=1k1iλit,\displaystyle=\frac{\partial}{\partial v}U^{*}(u,v,w,t)\bigg{|}_{u=1,v=1,w=1}=1+\sum_{i=1}^{k_{1}}i\lambda_{i}t,
Var(D(t))\displaystyle\mathrm{Var}(D^{*}(t)) =2u2U(u,v,w,t)|u=1,v=1,w=1+𝔼(D(t))(𝔼(D(t)))2=j=1k2j2μjt,\displaystyle=\frac{\partial^{2}}{\partial u^{2}}U^{*}(u,v,w,t)\bigg{|}_{u=1,v=1,w=1}+\mathbb{E}(D^{*}(t))-(\mathbb{E}(D^{*}(t)))^{2}=\sum_{j=1}^{k_{2}}j^{2}\mu_{j}t,

and

Var(B(t))=2v2U(u,v,w,t)|u=1,v=1,w=1+𝔼(B(t))(𝔼(B(t)))2=i=1k1i2λit.\mathrm{Var}(B^{*}(t))=\frac{\partial^{2}}{\partial v^{2}}U^{*}(u,v,w,t)\bigg{|}_{u=1,v=1,w=1}+\mathbb{E}(B^{*}(t))-(\mathbb{E}(B^{*}(t)))^{2}=\sum_{i=1}^{k_{1}}i^{2}\lambda_{i}t.

This completes the proof. ∎
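Since, with constant rates, the numbers of birth jumps of size i and of death jumps of size j in (0,t] form independent Poisson streams, the moments of Proposition 3.8 are easy to verify numerically. A minimal simulation sketch with hypothetical rates:

import numpy as np

rng = np.random.default_rng(2)
lam = np.array([0.5, 0.3])    # hypothetical birth rates, k1 = 2
mu = np.array([0.4, 0.1])     # hypothetical death rates, k2 = 2
t, runs = 2.0, 200000

# with constant rates, the numbers of birth jumps of size i and of death jumps
# of size j in (0, t] are independent Poisson(lam_i t) and Poisson(mu_j t)
i = np.arange(1, len(lam) + 1)
j = np.arange(1, len(mu) + 1)
X = rng.poisson(lam * t, size=(runs, len(lam)))
Y = rng.poisson(mu * t, size=(runs, len(mu)))
B = 1 + X @ i                 # B*(t) = 1 + sum_i i X_i
D = Y @ j                     # D*(t) = sum_j j Y_j

print("E(B*):", B.mean(), "vs", 1 + (i * lam).sum() * t)
print("Var(B*):", B.var(), "vs", (i ** 2 * lam).sum() * t)
print("E(D*):", D.mean(), "vs", (j * mu).sum() * t)
print("Var(D*):", D.var(), "vs", (j ** 2 * mu).sum() * t)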

Proposition 3.9.

The variance-covariance matrix Σ\Sigma of (D(t),B(t),N(t))(D^{*}(t),B^{*}(t),N^{*}(t)) is given by

Σ=(j=1k2j2μjt0j=1k2j2μjt0i=1k1i2λiti=1k1i2λitj=1k2j2μjti=1k1i2λit(i=1k1i2λi+j=1k2j2μj)t)\Sigma=\begin{pmatrix}\sum_{j=1}^{k_{2}}j^{2}\mu_{j}t&0&-\sum_{j=1}^{k_{2}}j^{2}\mu_{j}t\vspace{0.2cm}\\ 0&\sum_{i=1}^{k_{1}}i^{2}\lambda_{i}t&\sum_{i=1}^{k_{1}}i^{2}\lambda_{i}t\vspace{0.2cm}\\ -\sum_{j=1}^{k_{2}}j^{2}\mu_{j}t&\sum_{i=1}^{k_{1}}i^{2}\lambda_{i}t&\left(\sum_{i=1}^{k_{1}}i^{2}\lambda_{i}+\sum_{j=1}^{k_{2}}j^{2}\mu_{j}\right)t\end{pmatrix}

for all t0t\geq 0.

Proof.

On taking derivatives of (3.11) with respect to u,vu,\,v and ww, we get

2uvU(u,v,w,t)\displaystyle\frac{\partial^{2}}{\partial u\partial v}U^{*}(u,v,w,t) =weΛte(i=1k1λi(vw)i+j=1k2μj(uw1)j)t(1+i=1k1iλi(vw)it)j=1k2jμjuj1wjt,\displaystyle=we^{-\Lambda t}e^{\left(\sum_{i=1}^{k_{1}}\lambda_{i}(vw)^{i}+\sum_{j=1}^{k_{2}}\mu_{j}(uw^{-1})^{j}\right)t}\left(1+\sum_{i=1}^{k_{1}}i\lambda_{i}(vw)^{i}t\right)\sum_{j=1}^{k_{2}}j\mu_{j}u^{j-1}w^{-j}t,
2uwU(u,v,w,t)\displaystyle\frac{\partial^{2}}{\partial u\partial w}U^{*}(u,v,w,t) =veΛte(i=1k1λi(vw)i+j=1k2μj(uw1)j)t(j=1k2jμjuj1wjt(i=1k1iλi(vw)i\displaystyle=ve^{-\Lambda t}e^{\left(\sum_{i=1}^{k_{1}}\lambda_{i}(vw)^{i}+\sum_{j=1}^{k_{2}}\mu_{j}(uw^{-1})^{j}\right)t}\bigg{(}\sum_{j=1}^{k_{2}}j\mu_{j}u^{j-1}w^{-j}t\bigg{(}\sum_{i=1}^{k_{1}}i\lambda_{i}(vw)^{i}
j=1k2jμj(uw1)j)tj=1k2j(j1)μjuj1wjt),\displaystyle\ \ -\sum_{j=1}^{k_{2}}j\mu_{j}(uw^{-1})^{j}\bigg{)}t-\sum_{j=1}^{k_{2}}j(j-1)\mu_{j}u^{j-1}w^{-j}t\bigg{)},
2vwU(u,v,w,t)\displaystyle\frac{\partial^{2}}{\partial v\partial w}U^{*}(u,v,w,t) =eΛte(i=1k1λi(vw)i+j=1k2μj(uw1)j)t((1+i=1k1i(i+1)λi(vw)it)\displaystyle=e^{-\Lambda t}e^{\left(\sum_{i=1}^{k_{1}}\lambda_{i}(vw)^{i}+\sum_{j=1}^{k_{2}}\mu_{j}(uw^{-1})^{j}\right)t}\Bigg{(}\Bigg{(}1+\sum_{i=1}^{k_{1}}i(i+1)\lambda_{i}(vw)^{i}t\Bigg{)}
+(1+i=1k1iλi(vw)it)(i=1k1iλi(vw)ij=1k2jμj(uw1)j)t).\displaystyle\ \ +\Bigg{(}1+\sum_{i=1}^{k_{1}}i\lambda_{i}(vw)^{i}t\Bigg{)}\Bigg{(}\sum_{i=1}^{k_{1}}i\lambda_{i}(vw)^{i}-\sum_{j=1}^{k_{2}}j\mu_{j}(uw^{-1})^{j}\Bigg{)}t\Bigg{)}.

So,

Cov(D(t),B(t))\displaystyle\mathrm{Cov}(D^{*}(t),B^{*}(t)) =2uvU(u,v,w,t)|u=1,v=1,w=1𝔼(D(t))𝔼(B(t))\displaystyle=\frac{\partial^{2}}{\partial u\partial v}U^{*}(u,v,w,t)\bigg{|}_{u=1,v=1,w=1}-\mathbb{E}(D^{*}(t))\mathbb{E}(B^{*}(t))
=(1+i=1k1iλit)j=1k2jμjt(1+i=1k1iλit)j=1k2jμjt=0,\displaystyle=\left(1+\sum_{i=1}^{k_{1}}i\lambda_{i}t\right)\sum_{j=1}^{k_{2}}j\mu_{j}t-\left(1+\sum_{i=1}^{k_{1}}i\lambda_{i}t\right)\sum_{j=1}^{k_{2}}j\mu_{j}t=0,
Cov(D(t),N(t))\displaystyle\mathrm{Cov}(D^{*}(t),N^{*}(t)) =2uwU(u,v,w,t)|u=1,v=1,w=1𝔼(D(t))𝔼(N(t))\displaystyle=\frac{\partial^{2}}{\partial u\partial w}U^{*}(u,v,w,t)\bigg{|}_{u=1,v=1,w=1}-\mathbb{E}(D^{*}(t))\mathbb{E}(N^{*}(t))
=j=1k2jμjt(i=1k1iλij=1k2jμj)tj=1k2j(j1)μjt\displaystyle=\sum_{j=1}^{k_{2}}j\mu_{j}t\left(\sum_{i=1}^{k_{1}}i\lambda_{i}-\sum_{j=1}^{k_{2}}j\mu_{j}\right)t-\sum_{j=1}^{k_{2}}j(j-1)\mu_{j}t
j=1k2jμjt(1+(i=1k1iλij=1k2jμj)t)\displaystyle\ \ -\sum_{j=1}^{k_{2}}j\mu_{j}t\left(1+\left(\sum_{i=1}^{k_{1}}i\lambda_{i}-\sum_{j=1}^{k_{2}}j\mu_{j}\right)t\right)
=j=1k2j2μjt,\displaystyle=-\sum_{j=1}^{k_{2}}j^{2}\mu_{j}t,

and

Cov(B(t),N(t))\displaystyle\mathrm{Cov}(B^{*}(t),N^{*}(t)) =2vwU(u,v,w,t)|u=1,v=1,w=1𝔼(B(t))𝔼(N(t))\displaystyle=\frac{\partial^{2}}{\partial v\partial w}U^{*}(u,v,w,t)\bigg{|}_{u=1,v=1,w=1}-\mathbb{E}(B^{*}(t))\mathbb{E}(N^{*}(t))
=1+i=1k1i(i+1)λit+(1+i=1k1iλit)(i=1k1iλij=1k2jμj)t\displaystyle=1+\sum_{i=1}^{k_{1}}i(i+1)\lambda_{i}t+\left(1+\sum_{i=1}^{k_{1}}i\lambda_{i}t\right)\left(\sum_{i=1}^{k_{1}}i\lambda_{i}-\sum_{j=1}^{k_{2}}j\mu_{j}\right)t
(1+i=1k1iλit)(1+(i=1k1iλij=1k2jμj)t)\displaystyle\ \ -\left(1+\sum_{i=1}^{k_{1}}i\lambda_{i}t\right)\left(1+\left(\sum_{i=1}^{k_{1}}i\lambda_{i}-\sum_{j=1}^{k_{2}}j\mu_{j}\right)t\right)
=i=1k1i2λit.\displaystyle=\sum_{i=1}^{k_{1}}i^{2}\lambda_{i}t.

This completes the proof. ∎

4. Stochastic integral of GBDP

Puri (1966) and Puri (1968) studied the joint distribution of the linear birth-death process and its integral. McNeil (1970) obtained the distribution of the integral functional of BDP. Moreover, Gani and McNeil (1971) studied the joint distributions of random variables and their integrals for certain birth-death and diffusion processes. Doubleday (1973) carried out a similar study for the linear birth-death process with multiple births.

In GBDP, let 𝒩(0)=k\mathcal{N}(0)=k, that is, there are kk individuals present in the population at time t=0t=0. The hitting time of {𝒩(t)}t0\{\mathcal{N}(t)\}_{t\geq 0} to state 0 given that 𝒩(0)=k\mathcal{N}(0)=k is defined as

Zk=inf{t0:𝒩(t)=0}.Z_{k}=\mathrm{inf}\{t\geq 0:\mathcal{N}(t)=0\}. (4.1)

Let pk(t)=Pr{𝒩(t)=0|𝒩(0)=k}.p_{k}(t)=\mathrm{Pr}\{\mathcal{N}(t)=0|\mathcal{N}(0)=k\}. Suppose that there is no possibility of immigration, that is, λ(0)i=0\lambda_{(0)_{i}}=0 for all i=1,2,,k1i=1,2,\ldots,k_{1}. Then, Pr{Zks}=pk(s),s0.\mathrm{Pr}\{Z_{k}\leq s\}=p_{k}(s),\ s\geq 0.

Let us consider an infinitesimal time interval of length hh. Then, we have

pk(t+h)\displaystyle p_{k}(t+h) =Pr{𝒩(h)=k|𝒩(0)=k}pk(t)+i=1k1Pr{𝒩(h)=k+i|𝒩(0)=k}pk+i(t)\displaystyle=\mathrm{Pr}\{\mathcal{N}(h)=k|\mathcal{N}(0)=k\}p_{k}(t)+\sum_{i=1}^{k_{1}}\mathrm{Pr}\{\mathcal{N}(h)=k+i|\mathcal{N}(0)=k\}p_{k+i}(t)
+j=1k2Pr{𝒩(h)=kj|𝒩(0)=k}pkj(t).\displaystyle\ \ +\sum_{j=1}^{k_{2}}\mathrm{Pr}\{\mathcal{N}(h)=k-j|\mathcal{N}(0)=k\}p_{k-j}(t).

On using (2.1), we get

pk(t+h)\displaystyle p_{k}(t+h) =(1(i=1k1λ(k)i+j=1k2μ(k)j)h)pk(t)+i=1k1λ(k)ihpk+i(t)+j=1k2μ(k)jhpkj(t)+o(h),\displaystyle=\bigg{(}1-\bigg{(}\sum_{i=1}^{k_{1}}\lambda_{(k)_{i}}+\sum_{j=1}^{k_{2}}\mu_{(k)_{j}}\bigg{)}h\bigg{)}p_{k}(t)+\sum_{i=1}^{k_{1}}\lambda_{(k)_{i}}hp_{k+i}(t)+\sum_{j=1}^{k_{2}}\mu_{(k)_{j}}hp_{k-j}(t)+o(h),

which on rearranging the terms and letting h0h\to 0 reduces to

ddtpk(t)=(i=1k1λ(k)i+j=1k2μ(k)j)pk(t)+i=1k1λ(k)ipk+i(t)+j=1k2μ(k)jpkj(t).\frac{\mathrm{d}}{\mathrm{d}t}p_{k}(t)=-\bigg{(}\sum_{i=1}^{k_{1}}\lambda_{(k)_{i}}+\sum_{j=1}^{k_{2}}\mu_{(k)_{j}}\bigg{)}p_{k}(t)+\sum_{i=1}^{k_{1}}\lambda_{(k)_{i}}p_{k+i}(t)+\sum_{j=1}^{k_{2}}\mu_{(k)_{j}}p_{k-j}(t). (4.2)

Let

𝒲k=0Zkg(𝒩(t))dt,\mathcal{W}_{k}=\int_{0}^{Z_{k}}g(\mathcal{N}(t))\,\mathrm{d}t, (4.3)

where g()g(\cdot) is a non-negative real valued function on the state-space of GBDP. Observe that if g(u)=1g(u)=1 then 𝒲k=Zk\mathcal{W}_{k}=Z_{k} and if g(u)=ug(u)=u then 𝒲k\mathcal{W}_{k} is the area under 𝒩(t)\mathcal{N}(t) up to the time of first extinction of the process given that it starts from state kk.

Now, let us define 𝒲k(θ)=𝔼(eθ𝒲k),θ0.\mathcal{W}_{k}(\theta)=\mathbb{E}(e^{-\theta\mathcal{W}_{k}}),\ \theta\geq 0. Recall that TsT_{s} denotes the total time spent by the process {𝒩(t)}t0\{\mathcal{N}(t)\}_{t\geq 0} in state ss before the first transition takes place given that it starts from state ss. Let BiB_{i} denote the event that the first transition consists of ii births and DjD_{j} denote the event that the first transition consists of jj deaths given that the process starts from state kk. Then, we have

𝒲k(θ)\displaystyle\mathcal{W}_{k}(\theta) =i=1k1𝔼(eθ𝒲k|Bi)Pr(Bi)+j=1k2𝔼(eθ𝒲k|Dj)Pr(Dj)\displaystyle=\sum_{i=1}^{k_{1}}\mathbb{E}(e^{-\theta\mathcal{W}_{k}}|B_{i})\mathrm{Pr}(B_{i})+\sum_{j=1}^{k_{2}}\mathbb{E}(e^{-\theta\mathcal{W}_{k}}|D_{j})\mathrm{Pr}(D_{j})
=i=1k1𝔼(eθ(𝒲k+i+Tkg(k)))Pr(Bi)+j=1k2𝔼(eθ(𝒲kj+Tkg(k)))Pr(Dj)\displaystyle=\sum_{i=1}^{k_{1}}\mathbb{E}(e^{-\theta(\mathcal{W}_{k+i}+T_{k}g(k))})\mathrm{Pr}(B_{i})+\sum_{j=1}^{k_{2}}\mathbb{E}(e^{-\theta(\mathcal{W}_{k-j}+T_{k}g(k))})\mathrm{Pr}(D_{j})
=i=1k1𝒲k+i(θ)𝔼(eθTkg(k))Pr(Bi)+j=1k2𝒲kj(θ)𝔼(eθTkg(k))Pr(Dj),\displaystyle=\sum_{i=1}^{k_{1}}\mathcal{W}_{k+i}(\theta)\mathbb{E}(e^{-\theta T_{k}g(k)})\mathrm{Pr}(B_{i})+\sum_{j=1}^{k_{2}}\mathcal{W}_{k-j}(\theta)\mathbb{E}(e^{-\theta T_{k}g(k)})\mathrm{Pr}(D_{j}),

where we used the strong Markov property in the last two steps. From the Markov property of GBDP, we have Pr(Bi)=λ(k)i/ρ\mathrm{Pr}(B_{i})={\lambda_{(k)_{i}}}/{\rho} and Pr(Dj)=μ(k)j/ρ,\mathrm{Pr}(D_{j})={\mu_{(k)_{j}}}/{\rho}, where ρ=i=1k1λ(k)i+j=1k2μ(k)j\rho=\sum_{i=1}^{k_{1}}\lambda_{(k)_{i}}+\sum_{j=1}^{k_{2}}\mu_{(k)_{j}}. On using Remark 2.2, we get

𝒲k(θ)\displaystyle\mathcal{W}_{k}(\theta) =(i=1k1𝒲k+i(θ)λ(k)i+j=1k2𝒲kj(θ)μ(k)j)0eθsg(k)eρsds\displaystyle=\bigg{(}\sum_{i=1}^{k_{1}}\mathcal{W}_{k+i}(\theta)\lambda_{(k)_{i}}+\sum_{j=1}^{k_{2}}\mathcal{W}_{k-j}(\theta)\mu_{(k)_{j}}\bigg{)}\int_{0}^{\infty}e^{-\theta sg(k)}e^{-\rho s}\,\mathrm{d}s
=(i=1k1𝒲k+i(θ)λ(k)i+j=1k2𝒲kj(θ)μ(k)j)1θg(k)+ρ.\displaystyle=\bigg{(}\sum_{i=1}^{k_{1}}\mathcal{W}_{k+i}(\theta)\lambda_{(k)_{i}}+\sum_{j=1}^{k_{2}}\mathcal{W}_{k-j}(\theta)\mu_{(k)_{j}}\bigg{)}\frac{1}{\theta g(k)+\rho}.

Hence,

θ𝒲k(θ)=i=1k1λik𝒲k+i(θ)+j=1k2μjk𝒲kj(θ)(i=1k1λik+j=1k2μjk)𝒲k(θ)\theta\mathcal{W}_{k}(\theta)=\sum_{i=1}^{k_{1}}\lambda_{ik}^{*}\mathcal{W}_{k+i}(\theta)+\sum_{j=1}^{k_{2}}\mu_{jk}^{*}\mathcal{W}_{k-j}(\theta)-\bigg{(}\sum_{i=1}^{k_{1}}\lambda_{ik}^{*}+\sum_{j=1}^{k_{2}}\mu_{jk}^{*}\bigg{)}\mathcal{W}_{k}(\theta) (4.4)

with 𝒲0=0Z0g(𝒩(t))dt=0\mathcal{W}_{0}=\int_{0}^{Z_{0}}g(\mathcal{N}(t))\,\mathrm{d}t=0 and 𝒲0(θ)=1.\mathcal{W}_{0}(\theta)=1. Here, λik=λ(k)i/g(k)\lambda_{ik}^{*}={\lambda_{(k)_{i}}}/{g(k)} and μjk=μ(k)j/g(k).\mu_{jk}^{*}={\mu_{(k)_{j}}}/{g(k)}.

As McNeil (1970) pointed out, Eq. (4.4) corresponds to the Laplace transform of (4.2) with parameters λik\lambda_{ik}^{*} and μjk\mu_{jk}^{*} in place of λ(k)i\lambda_{(k)_{i}} and μ(k)j\mu_{(k)_{j}}, respectively. That is, on taking the Laplace transform of (4.2), we get

θk(θ)=i=1k1λ(k)ik+i(θ)+j=1k2μ(k)jkj(θ)(i=1k1λ(k)i+j=1k2μ(k)j)k(θ),\theta\mathcal{L}_{k}(\theta)=\sum_{i=1}^{k_{1}}\lambda_{(k)_{i}}\mathcal{L}_{k+i}(\theta)+\sum_{j=1}^{k_{2}}\mu_{(k)_{j}}\mathcal{L}_{k-j}(\theta)-\bigg{(}\sum_{i=1}^{k_{1}}\lambda_{(k)_{i}}+\sum_{j=1}^{k_{2}}\mu_{(k)_{j}}\bigg{)}\mathcal{L}_{k}(\theta), (4.5)

where

k(θ)=0eθt(ddtpk(t))dt=𝔼(eθZk)\mathcal{L}_{k}(\theta)=\int_{0}^{\infty}e^{-\theta t}\bigg{(}\frac{\mathrm{d}}{\mathrm{d}t}p_{k}(t)\bigg{)}\,\mathrm{d}t=\mathbb{E}(e^{-\theta Z_{k}})

with 0(θ)=1\mathcal{L}_{0}(\theta)=1. Hence, Eq. (4.4) is nothing but the backward equation for GBDP with scaled parameters λik\lambda_{ik}^{*} and μjk\mu_{jk}^{*}, and the distribution of 𝒲k\mathcal{W}_{k} is known whenever the distribution of ZkZ_{k} is known.
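The Laplace transform 𝒲_k(θ) can also be estimated directly by Monte Carlo, which gives a numerical companion to (4.4). The sketch below uses hypothetical subcritical linear rates with k1=k2=1 and g(u)=u, together with a large time cap as a safeguard; it simulates the GLBDP until extinction, accumulates the path integral along the way and averages e^{-θ𝒲_k}. In principle, the estimate can be compared with the solution of the recursion (4.4).

import numpy as np

rng = np.random.default_rng(3)

# hypothetical subcritical linear rates with k1 = k2 = 1, so extinction is certain
lam = np.array([0.4])
mu = np.array([0.9])

def g(n):
    return n          # g(u) = u: W_k is the area under the path up to extinction

def sample_W(k, t_cap=500.0):
    # one sample of W_k for the GLBDP (rates n*lam_i and n*mu_j); t_cap is a safeguard
    t, n, w = 0.0, k, 0.0
    while n > 0 and t < t_cap:
        rates = np.concatenate((n * lam, n * mu))
        total = rates.sum()
        tau = rng.exponential(1.0 / total)
        w += g(n) * tau                # g(N(t)) is constant between jumps
        t += tau
        idx = rng.choice(len(rates), p=rates / total)
        n += idx + 1 if idx < len(lam) else -(idx - len(lam) + 1)
    return w

theta, k, runs = 0.5, 3, 20000
W = np.array([sample_W(k) for _ in range(runs)])
print("Monte Carlo estimate of W_k(theta):", np.exp(-theta * W).mean())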

Remark 4.1.

For k1=k2=1k_{1}=k_{2}=1, (4.4) reduces to

θ𝒲k(θ)=λ1k𝒲k+1(θ)+μ1k𝒲k1(θ)(λ1k+μ1k)𝒲k(θ),\theta\mathcal{W}_{k}(\theta)=\lambda_{1k}^{*}\mathcal{W}_{k+1}(\theta)+\mu_{1k}^{*}\mathcal{W}_{k-1}(\theta)-(\lambda_{1k}^{*}+\mu_{1k}^{*})\mathcal{W}_{k}(\theta),

with 𝒲0(θ)=1\mathcal{W}_{0}(\theta)=1, which coincides with the result obtained in McNeil (1970).

Remark 4.2.

In the case of GLBDP, that is, λ(k)i=kλi\lambda_{(k)_{i}}=k\lambda_{i} and μ(k)j=kμj\mu_{(k)_{j}}=k\mu_{j}, (4.4) reduces to

θ𝒲k(θ)=i=1k1λ~ik𝒲k+i(θ)+j=1k2μ~jk𝒲kj(θ)(i=1k1λ~ik+j=1k2μ~jk)𝒲k(θ),\theta\mathcal{W}_{k}(\theta)=\sum_{i=1}^{k_{1}}\tilde{\lambda}_{ik}^{*}\mathcal{W}_{k+i}(\theta)+\sum_{j=1}^{k_{2}}\tilde{\mu}_{jk}^{*}\mathcal{W}_{k-j}(\theta)-\bigg{(}\sum_{i=1}^{k_{1}}\tilde{\lambda}_{ik}^{*}+\sum_{j=1}^{k_{2}}\tilde{\mu}_{jk}^{*}\bigg{)}\mathcal{W}_{k}(\theta),

where λ~ik=kλi/g(k)\tilde{\lambda}_{ik}^{*}={k\lambda_{i}}/{g(k)} and μ~jk=kμj/g(k).\tilde{\mu}_{jk}^{*}={k\mu_{j}}/{g(k)}.

4.1. Stochastic path integral of GBDP

Let us consider the stochastic path integral 𝒳(t)=0tg(𝒩(s))ds\mathcal{X}(t)=\int_{0}^{t}g(\mathcal{N}(s))\,\mathrm{d}s of GBDP, where g()g(\cdot) is a non-negative real valued function. Then, conditional on the event {𝒩(0)=m}\{\mathcal{N}(0)=m\}, the joint distribution function qm(n,x,t)=Pr{𝒩(t)=n,𝒳(t)x|𝒩(0)=m}q_{m}(n,x,t)=\mathrm{Pr}\{\mathcal{N}(t)=n,\mathcal{X}(t)\leq x|\mathcal{N}(0)=m\}, t0t\geq 0 of 𝒩(t)\mathcal{N}(t) and 𝒳(t)\mathcal{X}(t) solves

tqm(n,x,t)+g(n)xqm(n,x,t)\displaystyle\frac{\partial}{\partial t}q_{m}(n,x,t)+g(n)\frac{\partial}{\partial x}q_{m}(n,x,t) =(i=1k1λ(n)i+j=1k2μ(n)j)qm(n,x,t)+i=1k1λ(ni)iqm(ni,x,t)\displaystyle=-\left(\sum_{i=1}^{k_{1}}\lambda_{(n)_{i}}+\sum_{j=1}^{k_{2}}\mu_{(n)_{j}}\right)q_{m}(n,x,t)+\sum_{i=1}^{k_{1}}\lambda_{(n-i)_{i}}q_{m}(n-i,x,t)
+j=1k2μ(n+j)jqm(n+j,x,t),n0,\displaystyle\ \ +\sum_{j=1}^{k_{2}}\mu_{(n+j)_{j}}q_{m}(n+j,x,t),\ n\geq 0, (4.6)

where qm(ni,x,t)=0q_{m}(n-i,x,t)=0 for all i>ni>n. This follows from the transition probabilities of GBDP and its Markov property.

Remark 4.3.

Let 𝒯k(t)\mathcal{T}_{k}(t) be the total time in (0,t)(0,t) during which the population size is kk. Then, for g(x)=xg(x)=x, the stochastic path integral of GBDP can be written as

X(t)=0t𝒩(s)ds=k=1k𝒯k(t).X(t)=\int_{0}^{t}\mathcal{N}(s)\,\mathrm{d}s=\sum_{k=1}^{\infty}k\mathcal{T}_{k}(t).

Next, we consider the stochastic path integral of GLBDP.

4.1.1. A linear case of GBDP

Here, we obtain some results related to the stochastic path integral of GLBDP. Let X(t)X(t) be the stochastic path integral of GLBDP defined as

X(t)=0tN(s)ds,t0.X(t)=\int_{0}^{t}N(s)\,\mathrm{d}s,\ t\geq 0.

From (4.6), the system of differential equations that governs the joint distribution pm(n,x,t)=Pr{N(t)=n,X(t)<x|N(0)=m}p_{m}(n,x,t)=\mathrm{Pr}\{N(t)=n,X(t)<x|N(0)=m\}, t0t\geq 0 of N(t)N(t) and X(t)X(t) is given by

tpm(n,x,t)+nxpm(n,x,t)\displaystyle\frac{\partial}{\partial t}p_{m}(n,x,t)+n\frac{\partial}{\partial x}p_{m}(n,x,t) =n(i=1k1λi+j=1k2μj)pm(n,x,t)+i=1k1(ni)λipm(ni,x,t)\displaystyle=-n\left(\sum_{i=1}^{k_{1}}\lambda_{i}+\sum_{j=1}^{k_{2}}\mu_{j}\right)p_{m}(n,x,t)+\sum_{i=1}^{k_{1}}(n-i)\lambda_{i}p_{m}(n-i,x,t)
+j=1k2(n+j)μjpm(n+j,x,t),n0.\displaystyle\ \ +\sum_{j=1}^{k_{2}}(n+j)\mu_{j}p_{m}(n+j,x,t),\ n\geq 0. (4.7)

Thus, their joint pgf G~(u,v,t)=0n=0unvxpm(n,x,t)dx\tilde{G}(u,v,t)=\int_{0}^{\infty}\sum_{n=0}^{\infty}u^{n}v^{x}p_{m}(n,x,t)\,\mathrm{d}x, |u|1|u|\leq 1, |v|1|v|\leq 1 solves

tG~(u,v,t)=(i=1k1λi(ui1)+j=1k2μj(uj1)+lnv)uuG~(u,v,t)\displaystyle\frac{\partial}{\partial t}\tilde{G}(u,v,t)=\left(\sum_{i=1}^{k_{1}}\lambda_{i}(u^{i}-1)+\sum_{j=1}^{k_{2}}\mu_{j}(u^{-j}-1)+\ln v\right)u\frac{\partial}{\partial u}\tilde{G}(u,v,t)

with initial condition G~(u,v,0)=um\tilde{G}(u,v,0)=u^{m}. Hence, the cgf K~(θu,θv,t)=lnG~(eθu,eθv,t)\tilde{K}(\theta_{u},\theta_{v},t)=\ln\tilde{G}(e^{\theta_{u}},e^{\theta_{v}},t) solves

tK~(θu,θv,t)=(i=1k1λi(eiθu1)+j=1k2μj(ejθu1)+θv)θuK~(θu,θv,t)\frac{\partial}{\partial t}\tilde{K}(\theta_{u},\theta_{v},t)=\left(\sum_{i=1}^{k_{1}}\lambda_{i}(e^{i\theta_{u}}-1)+\sum_{j=1}^{k_{2}}\mu_{j}(e^{-j\theta_{u}}-1)+\theta_{v}\right)\frac{\partial}{\partial\theta_{u}}\tilde{K}(\theta_{u},\theta_{v},t) (4.8)

with K~(θu,θv,0)=mθu.\tilde{K}(\theta_{u},\theta_{v},0)=m\theta_{u}.

Next, we obtain the mean and variance of the stochastic path integral of GLBDP and its covariance with N(t)N(t).

Proposition 4.1.

For m=1m=1 and t0t\geq 0, we have
(i) 𝔼(X(t))={(eηt1)η,η0,t,η=0,\mathbb{E}(X(t))=\begin{cases}\frac{(e^{\eta t}-1)}{\eta},\ \eta\neq 0,\vspace{0.1cm}\\ t,\ \eta=0,\end{cases}
(ii) Cov(N(t),X(t))={ζη2(e2ηteηtηteηt),η0,ζt22,η=0,\mathrm{Cov}(N(t),X(t))=\begin{cases}\frac{\zeta}{\eta^{2}}(e^{2\eta t}-e^{\eta t}-\eta te^{\eta t}),\ \eta\neq 0,\vspace{0.1cm}\\ \zeta\frac{t^{2}}{2},\ \eta=0,\end{cases}
(iii) Var(X(t))={2ζη2(e2ηt12ηteηt),η0,ζt33,η=0.\mathrm{Var}(X(t))=\begin{cases}\frac{2\zeta}{\eta^{2}}\left(\frac{e^{2\eta t}-1}{2\eta}-te^{\eta t}\right),\ \eta\neq 0,\vspace{0.1cm}\\ \zeta\frac{t^{3}}{3},\ \eta=0.\end{cases}

Proof.

On using (3.5) in (4.8), we have

t(θu𝔼(N(t))\displaystyle\frac{\partial}{\partial t}(\theta_{u}\mathbb{E}(N(t)) +θv𝔼(X(t))+θu22Var(N(t))+θv22Var(X(t))+θuθvCov(N(t),X(t))+)\displaystyle+\theta_{v}\mathbb{E}(X(t))+\frac{\theta_{u}^{2}}{2}\mathrm{Var}(N(t))+\frac{\theta_{v}^{2}}{2}\mathrm{Var}(X(t))+\theta_{u}\theta_{v}\mathrm{Cov}(N(t),X(t))+\dots)
θvl=0(θu)ll!(𝔼(N(t))+θuVar(N(t))+θvCov(N(t),X(t))+)\displaystyle-\theta_{v}\sum_{l=0}^{\infty}\frac{(-\theta_{u})^{l}}{l!}(\mathbb{E}(N(t))+\theta_{u}\mathrm{Var}(N(t))+\theta_{v}\mathrm{Cov}(N(t),X(t))+\dots)
=l=1(i=1k1λi(iθu)ll!+j=1k2μj(jθu)ll!).\displaystyle=\sum_{l=1}^{\infty}\left(\sum_{i=1}^{k_{1}}\lambda_{i}\frac{(i\theta_{u})^{l}}{l!}+\sum_{j=1}^{k_{2}}\mu_{j}\frac{(-j\theta_{u})^{l}}{l!}\right). (4.9)

On comparing the coefficients on both sides of (4.9), we get

ddt𝔼(X(t))\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}\mathbb{E}(X(t)) =𝔼(N(t)),\displaystyle=\mathbb{E}(N(t)),
ddtCov(N(t),X(t))\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}\mathrm{Cov}(N(t),X(t)) =Var(N(t))+ηCov(N(t),X(t))\displaystyle=\mathrm{Var}(N(t))+\eta\mathrm{Cov}(N(t),X(t))

and

ddtVar(X(t))=2Cov(N(t),X(t)).\frac{\mathrm{d}}{\mathrm{d}t}\mathrm{Var}(X(t))=2\mathrm{Cov}(N(t),X(t)).

On solving the above differential equations with initial conditions 𝔼(X(0))=0\mathbb{E}(X(0))=0, Cov(N(0),X(0))=0\mathrm{Cov}(N(0),X(0))=0 and Var(X(0))=0\mathrm{Var}(X(0))=0, respectively, we get the required results. ∎
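The closed-form expressions of Proposition 4.1 are straightforward to evaluate numerically. In the following sketch the rates are hypothetical and ζ is assumed to be the quantity Σ_{i=1}^{k1}i²λ_i + Σ_{j=1}^{k2}j²μ_j appearing in Var(N(t)):

import numpy as np

lam = np.array([0.6, 0.2])    # hypothetical rates with k1 = k2 = 2
mu = np.array([0.3, 0.1])
i, j = np.arange(1, 3), np.arange(1, 3)
eta = (i * lam).sum() - (j * mu).sum()
# zeta is assumed here to be the jump-variance rate sum_i i^2 lam_i + sum_j j^2 mu_j
zeta = (i ** 2 * lam).sum() + (j ** 2 * mu).sum()

def path_integral_moments(t):
    # closed forms of Proposition 4.1 for m = 1 and eta != 0
    EX = (np.exp(eta * t) - 1) / eta
    cov = zeta / eta ** 2 * (np.exp(2 * eta * t) - np.exp(eta * t) - eta * t * np.exp(eta * t))
    var = 2 * zeta / eta ** 2 * ((np.exp(2 * eta * t) - 1) / (2 * eta) - t * np.exp(eta * t))
    return EX, cov, var

for t in (0.5, 1.0, 2.0):
    print(t, path_integral_moments(t))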

Remark 4.4.

The limiting behavior of the mean of X(t)X(t) is given as follows:

limt𝔼(X(t))={1η,η<0,,η0.\lim_{t\to\infty}\mathbb{E}(X(t))=\begin{cases}-\frac{1}{\eta},\ \eta<0,\vspace{0.1cm}\\ \infty,\ \eta\geq 0.\end{cases}

The correlation coefficient RX(1)=Corr(N(1),X(1))R_{X}(1)=\mathrm{Corr}(N(1),X(1)) for different values of the parameters is illustrated in Figure 5.

Figure 5. Values of stochastic path integral of GLBDP at t=1t=1.

The variation of the correlation coefficient between N(t)N(t) and its stochastic path integral with time is shown in Figure 6. For η>0\eta>0, N(t)N(t) and X(t)X(t) are strongly positively correlated, and as tt increases the correlation converges to +1+1. For η<0\eta<0, the correlation is positive but it weakens with time.

Figure 6. Correlation plot of N(t)N(t) and X(t)X(t) for k1=k2=2k_{1}=k_{2}=2.

4.2. Stochastic path integral of GBDP in the case of constant birth and death rates

Let X(t)=0tN(s)dsX^{*}(t)=\int_{0}^{t}N^{*}(s)\,\mathrm{d}s be the stochastic path integral of the process {N(t)}t0\{N^{*}(t)\}_{t\geq 0} defined in Section 2.2. From (4.6), the joint distribution p1(n,x,t)=Pr{N(t)=n,X(t)x|N(0)=1}p^{*}_{1}(n,x,t)=\mathrm{Pr}\{N^{*}(t)=n,X^{*}(t)\leq x|N^{*}(0)=1\} solves

tp1(n,x,t)+nxp1(n,x,t)\displaystyle\frac{\partial}{\partial t}p_{1}^{*}(n,x,t)+n\frac{\partial}{\partial x}p_{1}^{*}(n,x,t) =(i=1k1λi+j=1k2μj)p1(n,x,t)+i=1k1λip1(ni,x,t)\displaystyle=-\left(\sum_{i=1}^{k_{1}}\lambda_{i}+\sum_{j=1}^{k_{2}}\mu_{j}\right)p_{1}^{*}(n,x,t)+\sum_{i=1}^{k_{1}}\lambda_{i}p_{1}^{*}(n-i,x,t)
+j=1k2μjp1(n+j,x,t),n0\displaystyle\ \ +\sum_{j=1}^{k_{2}}\mu_{j}p_{1}^{*}(n+j,x,t),\ n\geq 0 (4.10)

with p1(1,0,0)=1p_{1}^{*}(1,0,0)=1, where p1(n,x,t)=0p_{1}^{*}(n,x,t)=0 for all n<0n<0.

The pgf G~(u,v,t)=0n=0unvxp1(n,x,t)dx\tilde{G}^{*}(u,v,t)=\int_{0}^{\infty}\sum_{n=0}^{\infty}u^{n}v^{x}{p}_{1}^{*}(n,x,t)\,\mathrm{d}x, 0<u<1, 0<v<10<u<1,\ 0<v<1 of (N(t),X(t))(N^{*}(t),X^{*}(t)) solves the following differential equation:

tG~(u,v,t)lnvuG~(u,v,t)=(i=1k1λi(ui1)+j=1k2μj(uj1))G~(u,v,t)\frac{\partial}{\partial t}\tilde{G}^{*}(u,v,t)-\ln v\frac{\partial}{\partial u}\tilde{G}^{*}(u,v,t)=\left(\sum_{i=1}^{k_{1}}\lambda_{i}(u^{i}-1)+\sum_{j=1}^{k_{2}}\mu_{j}(u^{-j}-1)\right)\tilde{G}^{*}(u,v,t) (4.11)

with initial condition G~(u,v,0)=u\tilde{G}^{*}(u,v,0)=u. Its cgf K~(θu,θv,t)=lnG~(eθu,eθv,t)\tilde{K}^{*}(\theta_{u},\theta_{v},t)=\ln\tilde{G}^{*}(e^{\theta_{u}},e^{\theta_{v}},t) solves

tK~(θu,θv,t)θveθuθuK~(θu,θv,t)=i=1k1λi(eiθu1)+j=1k2μj(ejθu1)\frac{\partial}{\partial t}\tilde{K}^{*}(\theta_{u},\theta_{v},t)-\theta_{v}e^{-\theta_{u}}\frac{\partial}{\partial\theta_{u}}\tilde{K}^{*}(\theta_{u},\theta_{v},t)=\sum_{i=1}^{k_{1}}\lambda_{i}(e^{i\theta_{u}}-1)+\sum_{j=1}^{k_{2}}\mu_{j}(e^{-j\theta_{u}}-1) (4.12)

with K~(θu,θv,0)=θu.\tilde{K}^{*}(\theta_{u},\theta_{v},0)=\theta_{u}. On following similar lines to the proof of Proposition 4.1, we get

𝔼(X(t))\displaystyle\mathbb{E}(X^{*}(t)) =t+ηt22,\displaystyle=t+\frac{\eta t^{2}}{2},
Cov(N(t),X(t))\displaystyle\mathrm{Cov}(N^{*}(t),X^{*}(t)) =t+(ηi=1k1i2λi+j=1k2j2μj)t22\displaystyle=t+\left(\eta-\sum_{i=1}^{k_{1}}i^{2}\lambda_{i}+\sum_{j=1}^{k_{2}}j^{2}\mu_{j}\right)\frac{t^{2}}{2}

and

Var(X(t))=t2+(ηi=1k1i2λi+j=1k2j2μj)t33.\mathrm{Var}(X^{*}(t))=t^{2}+\left(\eta-\sum_{i=1}^{k_{1}}i^{2}\lambda_{i}+\sum_{j=1}^{k_{2}}j^{2}\mu_{j}\right)\frac{t^{3}}{3}.
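Since {N*(t)}_{t≥0} is driven by independent Poisson jump streams, its path integral is easy to simulate, which gives a quick numerical check of the mean obtained above. A minimal sketch with hypothetical constant rates:

import numpy as np

rng = np.random.default_rng(4)
lam = np.array([0.5, 0.3])    # hypothetical constant rates, k1 = k2 = 2
mu = np.array([0.4, 0.1])
i, j = np.arange(1, 3), np.arange(1, 3)
eta = (i * lam).sum() - (j * mu).sum()
T, runs = 2.0, 50000

def sample_path_integral():
    # X*(T) = int_0^T N*(s) ds, with N* driven by independent Poisson jump streams
    events = []
    for rate, size in list(zip(lam, i)) + list(zip(mu, -j)):
        times = rng.uniform(0.0, T, rng.poisson(rate * T))
        events += [(float(s), int(size)) for s in times]
    events.sort()
    x, n, last = 0.0, 1, 0.0
    for s, size in events:
        x += n * (s - last)
        n += size
        last = s
    return x + n * (T - last)

X = np.array([sample_path_integral() for _ in range(runs)])
print("Monte Carlo E(X*(T)):", X.mean(), "  closed form:", T + eta * T ** 2 / 2)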

Next, we solve the differential equation (4.11) to obtain the joint pgf of N(t)N^{*}(t) and X(t)X^{*}(t).

The subsidiary equation corresponding to (4.11) is as follows:

t1=ulnv=1i=1k1λi(ui1)+j=1k2μj(uj1)G~G~(u,v,t).\frac{\partial t}{1}=\frac{\partial u}{-\ln v}=\frac{1}{\sum_{i=1}^{k_{1}}\lambda_{i}(u^{i}-1)+\sum_{j=1}^{k_{2}}\mu_{j}(u^{-j}-1)}\frac{\partial\tilde{G}^{*}}{\tilde{G}^{*}(u,v,t)}. (4.13)

So, we get

tlnv+u=constantt\ln v+u=\text{constant}

and

u(i=1k1λi(uii+11)+j=1k2μj(uj1j1))+lnvlnG~(u,v,t)=constant.u\left(\sum_{i=1}^{k_{1}}\lambda_{i}\left(\frac{u^{i}}{i+1}-1\right)+\sum_{j=1}^{k_{2}}\mu_{j}\left(\frac{u^{-j}}{1-j}-1\right)\right)+\ln v\ln\tilde{G}^{*}(u,v,t)=\text{constant}.

Thus, the general solution of (4.11) is given by

u(i=1k1λi(uii+11)+j=1k2μj(uj1j1))+lnvlnG~(u,v,t)=Φ(tlnv+u),u\left(\sum_{i=1}^{k_{1}}\lambda_{i}\left(\frac{u^{i}}{i+1}-1\right)+\sum_{j=1}^{k_{2}}\mu_{j}\left(\frac{u^{-j}}{1-j}-1\right)\right)+\ln v\ln\tilde{G}^{*}(u,v,t)=\Phi(t\ln v+u),

where Φ\Phi is a real valued function. By using the initial condition G~(u,v,0)=u\tilde{G}^{*}(u,v,0)=u, we obtain

Φ(u)=u(i=1k1λi(uii+11)+j=1k2μj(uj1j1))+lnvlnu.\Phi(u)=u\left(\sum_{i=1}^{k_{1}}\lambda_{i}\left(\frac{u^{i}}{i+1}-1\right)+\sum_{j=1}^{k_{2}}\mu_{j}\left(\frac{u^{-j}}{1-j}-1\right)\right)+\ln v\ln u.

Thus,

G~(u,v,t)=exp(Φ(tlnv+u)lnvulnv(i=1k1λi(uii+11)+j=1k2μj(uj1j1))).\tilde{G}^{*}(u,v,t)=\exp\left(\frac{\Phi(t\ln v+u)}{\ln v}-\frac{u}{\ln v}\left(\sum_{i=1}^{k_{1}}\lambda_{i}\left(\frac{u^{i}}{i+1}-1\right)+\sum_{j=1}^{k_{2}}\mu_{j}\left(\frac{u^{-j}}{1-j}-1\right)\right)\right).

5. Immigration effect in GLBDP

In GLBDP, the state n=0n=0 is the absorbing state of the process, that is, once the population gets extinct it cannot revive again. Here, we consider the possibility that the process can revive once it reaches state n=0n=0. To capture the immigration effect in GLBDP, we introduce a linear version of GBDP, namely, the generalized linear birth-death process with immigration (GLBDPwI).

Let ν>0\nu>0 be the rate of population growth due to immigration. Here, we consider two different cases of immigration as follows:

5.1. Immigration effect at state n=0n=0 only

In this case, the immigration happens only when no living individual is present in the population. So, the immigration rates are λ(0)i=ν\lambda_{(0)_{i}}=\nu, i{1,2,,k1}i\in\{1,2,\dots,k_{1}\}, and once the immigration happens, the birth and death rates are λ(n)i=nλi\lambda_{(n)_{i}}=n\lambda_{i}, i{1,2,,k1}i\in\{1,2,\dots,k_{1}\} and μ(n)j=nμj\mu_{(n)_{j}}=n\mu_{j}, j{1,2,,k2}j\in\{1,2,\dots,k_{2}\}, respectively, for all n1n\geq 1. From (2.2), the state probabilities p(n,t)p(n,t), n0n\geq 0 of GLBDPwI solve the following system of differential equations:

ddtp(0,t)\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}p(0,t) =k1νp(0,t)+j=1k2jμjp(j,t),\displaystyle=-k_{1}\nu p(0,t)+\sum_{j=1}^{k_{2}}j\mu_{j}p(j,t),
ddtp(n,t)\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}p(n,t) =(i=1k1nλi+j=1k2nμj)p(n,t)+νp(0,t)+i=1k1(ni)λip(ni,t)\displaystyle=-\left(\sum_{i=1}^{k_{1}}n\lambda_{i}+\sum_{j=1}^{k_{2}}n\mu_{j}\right)p(n,t)+\nu p(0,t)+\sum_{i=1}^{k_{1}}(n-i)\lambda_{i}p(n-i,t)
+j=1k2(n+j)μjp(n+j,t), 1nk1\displaystyle\ \ +\sum_{j=1}^{k_{2}}(n+j)\mu_{j}p(n+j,t),\ 1\leq n\leq k_{1}

and

ddtp(n,t)=(i=1k1nλi+j=1k2nμj)p(n,t)+i=1k1(ni)λip(ni,t)+j=1k2(n+j)μjp(n+j,t),n>k1,\frac{\mathrm{d}}{\mathrm{d}t}p(n,t)=-\left(\sum_{i=1}^{k_{1}}n\lambda_{i}+\sum_{j=1}^{k_{2}}n\mu_{j}\right)p(n,t)+\sum_{i=1}^{k_{1}}(n-i)\lambda_{i}p(n-i,t)+\sum_{j=1}^{k_{2}}(n+j)\mu_{j}p(n+j,t),\ n>k_{1},

with p(1,0)=1p(1,0)=1 and p(n,0)=0p(n,0)=0 for all n1n\neq 1. Here, p(n,t)=0p(n,t)=0 for all n<0n<0. Its pgf (u,t)=n=0unp(n,t)\mathcal{H}(u,t)=\sum_{n=0}^{\infty}u^{n}p(n,t) solves

t(u,t)=(i=1k1λiu(ui1)+j=1k2μju(uj1))u(u,t)+i=1k1ν(ui1)p(0,t)\frac{\partial}{\partial t}\mathcal{H}(u,t)=\left(\sum_{i=1}^{k_{1}}\lambda_{i}u(u^{i}-1)+\sum_{j=1}^{k_{2}}\mu_{j}u(u^{-j}-1)\right)\frac{\partial}{\partial u}\mathcal{H}(u,t)+\sum_{i=1}^{k_{1}}\nu(u^{i}-1)p(0,t) (5.1)

with (u,0)=u\mathcal{H}(u,0)=u.

On taking the derivative of (5.1) with respect to uu, we get

2ut(u,t)\displaystyle\frac{\partial^{2}}{\partial u\partial t}\mathcal{H}(u,t) =(i=1k1λiu(ui1)+j=1k2μju(uj1))2u2(u,t)\displaystyle=\left(\sum_{i=1}^{k_{1}}\lambda_{i}u(u^{i}-1)+\sum_{j=1}^{k_{2}}\mu_{j}u(u^{-j}-1)\right)\frac{\partial^{2}}{\partial u^{2}}\mathcal{H}(u,t)
+(i=1k1λi((i+1)ui1)+j=1k2μj((1j)uj1))u(u,t)+i=1k1iνui1p(0,t).\displaystyle\ \ +\left(\sum_{i=1}^{k_{1}}\lambda_{i}((i+1)u^{i}-1)+\sum_{j=1}^{k_{2}}\mu_{j}((1-j)u^{-j}-1)\right)\frac{\partial}{\partial u}\mathcal{H}(u,t)+\sum_{i=1}^{k_{1}}i\nu u^{i-1}p(0,t). (5.2)

On taking u=1u=1 in (5.2), we get the following differential equation that governs the mean (t)=n=0np(n,t)\mathscr{M}(t)=\sum_{n=0}^{\infty}np(n,t) of GLBDPwI:

ddt(t)=η(t)+k1(k1+1)2νp(0,t)\frac{\mathrm{d}}{\mathrm{d}t}\mathscr{M}(t)=\eta\mathscr{M}(t)+\frac{k_{1}(k_{1}+1)}{2}\nu p(0,t)

with (0)=1\mathscr{M}(0)=1, where η=i=1k1iλij=1k2jμj.\eta=\sum_{i=1}^{k_{1}}i\lambda_{i}-\sum_{j=1}^{k_{2}}j\mu_{j}. Thus,

(t)=eηt(1+νk1(k1+1)20teηsp(0,s)ds).\mathscr{M}(t)=e^{\eta t}\left(1+\nu\frac{k_{1}(k_{1}+1)}{2}\int_{0}^{t}e^{-\eta s}p(0,s)\,\mathrm{d}s\right).
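The mean ℳ(t) involves p(0,s), which is not available in closed form; in practice it can be obtained by numerically integrating a truncated version of the above system of differential equations. The following sketch uses hypothetical rates with k1=2 and k2=1, a finite truncation of the state space and explicit Euler stepping, and computes ℳ(t) both directly from the state probabilities and from the integral formula:

import numpy as np

# hypothetical rates with k1 = 2, k2 = 1; immigration only from state 0
lam = np.array([0.3, 0.1])
mu = np.array([0.7])
nu = 0.8
k1, k2 = len(lam), len(mu)
eta = (np.arange(1, k1 + 1) * lam).sum() - (np.arange(1, k2 + 1) * mu).sum()

N_max = 200                                       # truncation of the state space
Q = np.zeros((N_max + 1, N_max + 1))              # Q[a, b] = rate of jumping from a to b
for i in range(1, k1 + 1):
    Q[0, i] = nu                                  # immigration jump of size i from state 0
for n in range(1, N_max + 1):
    for i in range(1, k1 + 1):
        if n + i <= N_max:
            Q[n, n + i] = n * lam[i - 1]
    for j in range(1, k2 + 1):
        if n - j >= 0:
            Q[n, n - j] = n * mu[j - 1]
np.fill_diagonal(Q, -Q.sum(axis=1))

p = np.zeros(N_max + 1)
p[1] = 1.0                                        # N(0) = 1
dt, T = 1e-3, 3.0
times = np.arange(0.0, T, dt)
p0_path, mean_path = [], []
for _ in times:
    p0_path.append(p[0])
    mean_path.append(p @ np.arange(N_max + 1))
    p = p + dt * (p @ Q)                          # explicit Euler step of dp/dt = pQ

integral = dt * np.cumsum(np.exp(-eta * times) * np.array(p0_path))
M = np.exp(eta * times) * (1 + nu * k1 * (k1 + 1) / 2 * integral)
print("mean from the ODE system:", mean_path[-1], "  mean from the formula:", M[-1])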

5.2. Immigration effect at any state

In this case, the immigration effect is always present in the population at a constant rate. So, the birth and death rates are λ(n)i=nλi+ν\lambda_{(n)_{i}}=n\lambda_{i}+\nu, i{1,2,,k1}i\in\{1,2,\dots,k_{1}\} and μ(n)j=nμj\mu_{(n)_{j}}=n\mu_{j}, j{1,2,,k2}j\in\{1,2,\dots,k_{2}\}, respectively, for all n0n\geq 0. From (2.2), the state probabilities p(n,t)p(n,t) of GLBDPwI solve

ddtp(n,t)\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}p(n,t) =(k1ν+ni=1k1λi+nj=1k2μj)p(n,t)+i=1k1(ν+(ni)λi)p(ni,t)\displaystyle=-\left(k_{1}\nu+n\sum_{i=1}^{k_{1}}\lambda_{i}+n\sum_{j=1}^{k_{2}}\mu_{j}\right)p(n,t)+\sum_{i=1}^{k_{1}}(\nu+(n-i)\lambda_{i})p(n-i,t)
+j=1k2(n+j)μjp(n+j,t),n0,\displaystyle\ \ +\sum_{j=1}^{k_{2}}(n+j)\mu_{j}p(n+j,t),\ n\geq 0,

where p(1,0)=1p(1,0)=1 and p(n,0)=0p(n,0)=0 for all n1n\neq 1. So, the differential equation that governs the pgf (u,t)=n=0unp(n,t)\mathcal{H}(u,t)=\sum_{n=0}^{\infty}u^{n}p(n,t) of GLBDPwI is given by

t(u,t)=(i=1k1λiu(ui1)+j=1k2μju(uj1))u(u,t)+(i=1k1ν(ui1))(u,t)\frac{\partial}{\partial t}\mathcal{H}(u,t)=\left(\sum_{i=1}^{k_{1}}\lambda_{i}u(u^{i}-1)+\sum_{j=1}^{k_{2}}\mu_{j}u(u^{-j}-1)\right)\frac{\partial}{\partial u}\mathcal{H}(u,t)+\left(\sum_{i=1}^{k_{1}}\nu(u^{i}-1)\right)\mathcal{H}(u,t) (5.3)

with (u,0)=u\mathcal{H}(u,0)=u.

On taking the derivative with respect to uu on both sides of (5.3), we have

tu(u,t)\displaystyle\frac{\partial}{\partial t}\frac{\partial}{\partial u}\mathcal{H}(u,t) =(i=1k1λiu(ui1)+j=1k2μju(uj1))2u2(u,t)\displaystyle=\left(\sum_{i=1}^{k_{1}}\lambda_{i}u(u^{i}-1)+\sum_{j=1}^{k_{2}}\mu_{j}u(u^{-j}-1)\right)\frac{\partial^{2}}{\partial u^{2}}\mathcal{H}(u,t)
+(i=1k1λi((i+1)ui1)+j=1k2μj((j+1)uj1))u(u,t)+i=1k1iνui1(u,t)\displaystyle\ \ +\left(\sum_{i=1}^{k_{1}}\lambda_{i}((i+1)u^{i}-1)+\sum_{j=1}^{k_{2}}\mu_{j}((-j+1)u^{-j}-1)\right)\frac{\partial}{\partial u}\mathcal{H}(u,t)+\sum_{i=1}^{k_{1}}i\nu u^{i-1}\mathcal{H}(u,t)
+i=1k1ν(ui1)u(u,t).\displaystyle\ \ +\sum_{i=1}^{k_{1}}\nu(u^{i}-1)\frac{\partial}{\partial u}\mathcal{H}(u,t). (5.4)

On taking u=1u=1 in (5.4), we get the mean (t)\mathscr{M}(t) of GLBDPwI as a solution of the following differential equation:

t(t)=η(t)+k1(k1+1)ν2\frac{\partial}{\partial t}\mathscr{M}(t)=\eta\mathscr{M}(t)+\frac{k_{1}(k_{1}+1)\nu}{2}

with initial condition (0)=1\mathscr{M}(0)=1. It is given by

(t)={eηt+k1(k1+1)ν2η(eηt1),η0,1+k1(k1+1)νt2,η=0,\mathscr{M}(t)=\begin{cases}e^{\eta t}+\frac{k_{1}(k_{1}+1)\nu}{2\eta}(e^{\eta t}-1),\ \eta\neq 0,\vspace{0.1cm}\\ 1+\frac{k_{1}(k_{1}+1)\nu t}{2},\ \eta=0,\end{cases}

where η=i=1k1iλij=1k2jμj\eta=\sum_{i=1}^{k_{1}}i\lambda_{i}-\sum_{j=1}^{k_{2}}j\mu_{j}.
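The closed-form mean is immediate to evaluate. The sketch below, with hypothetical rates for which η<0, prints ℳ(t) for a few time points together with its limiting value:

import numpy as np

lam = np.array([0.3, 0.1])    # hypothetical rates with k1 = 2
mu = np.array([0.7])          # k2 = 1
nu = 0.8
k1 = len(lam)
eta = (np.arange(1, k1 + 1) * lam).sum() - (np.arange(1, len(mu) + 1) * mu).sum()

def mean_glbdpwi(t):
    # mean population size when immigration acts in every state
    if abs(eta) < 1e-12:
        return 1 + k1 * (k1 + 1) * nu * t / 2
    return np.exp(eta * t) + k1 * (k1 + 1) * nu / (2 * eta) * (np.exp(eta * t) - 1)

for t in (1.0, 5.0, 50.0):
    print(t, mean_glbdpwi(t))
print("limiting mean for eta < 0:", -k1 * (k1 + 1) * nu / (2 * eta))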

Remark 5.1.

For ν=0\nu=0, the mean of GLBDPwI reduces to the mean of GLBDP given in (2.15).

For different values of η\eta, the variation of the expected values of GLBDP and GLBDPwI with time is shown in Figure 7.

Figure 7. Expected population size in GLBDP and GLBDPwI for k1=k2=2k_{1}=k_{2}=2.

In GLBDPwI, for η>0\eta>0, the expected population size grows exponentially fast for large tt, and for η<0\eta<0, the limiting mean value for large tt is k1(k1+1)ν/2η-k_{1}(k_{1}+1)\nu/2\eta, that is,

limt(t)={,η0,k1(k1+1)ν2η,η<0.\lim_{t\to\infty}\mathscr{M}(t)=\begin{cases}\infty,\ \eta\geq 0,\vspace{0.1cm}\\ -\frac{k_{1}(k_{1}+1)\nu}{2\eta},\ \eta<0.\end{cases}

Thus, as expected, in GLBDPwI the population never dies out permanently.

Remark 5.2.

On comparing (2.6) and (5.3), we note that they differ in the term i=1k1ν(ui1)(u,t)\sum_{i=1}^{k_{1}}\nu(u^{i}-1)\mathcal{H}(u,t). It is a generalized counting process component with constant transition rates. Suppose that the birth rates in GLBDPwI tend to zero, that is, λi0\lambda_{i}\to 0 for all i{1,2,,k1}i\in\{1,2,\dots,k_{1}\}. Then, the process can be used to study fluctuations in the number of particles enclosed inside a very small volume. Here, the immigration of particles from the surroundings happens as a GCP with constant rates, and the emigration of finitely many particles from the closed volume is considered as death whose rates are linearly proportional to the number of particles present.

6. Application to vehicles parking management system

Suppose a vehicle parking lot has a maximum capacity of KK parking spots. Assume that the gate of the parking area is wide enough that multiple vehicles can arrive or depart together, and that the chance of simultaneous arrival and departure of vehicles is negligible. Let N(t)N(t) denote the total number of vehicles parked at time tt. Then, we can use a linear version of GBDP to model this problem, where {N(t)}t0\{N(t)\}_{t\geq 0} is a GBDP with parameters

λ(n)i=(Kn)λi,i{1,2,,K1}andμ(n)j=nμj,j{1,2,,K2}\lambda_{(n)_{i}}=(K-n)\lambda_{i},\ i\in\{1,2,\ldots,K_{1}\}\ \text{and}\ \mu_{(n)_{j}}=n\mu_{j},\ j\in\{1,2,\ldots,K_{2}\}

for all 0nK0\leq n\leq K, K1<KK_{1}<K and K2<KK_{2}<K. Thus, the rates of arrival and departure of vehicles are proportional to the number of vacant spots and the total number of vehicles parked in the parking lot, respectively. Let A(t)A(t) and D(t)D(t) denote the number of vehicles that have arrived and departed by time tt, respectively. Here, we are interested in the quantities 𝔼(N(t))\mathbb{E}(N(t)), 𝔼(A(t))\mathbb{E}(A(t)) and 𝔼(D(t))\mathbb{E}(D(t)). Let us assume that there are no vehicles parked at time t=0t=0. As obtained in the case of GLBDP, it can be shown that the pmf p(n,t)=Pr{N(t)=n}p(n,t)=\mathrm{Pr}\{N(t)=n\} solves the following system of differential equations:

ddtp(n,t)\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}p(n,t) =(i=1K1(Kn)λi+j=1K2nμj)p(n,t)+i=1K1(Kn+i)λip(ni,t)\displaystyle=-\bigg{(}\sum_{i=1}^{K_{1}}(K-n)\lambda_{i}+\sum_{j=1}^{K_{2}}n\mu_{j}\bigg{)}p(n,t)+\sum_{i=1}^{K_{1}}(K-n+i)\lambda_{i}p(n-i,t)
+j=1K2(n+j)μjp(n+j,t), 0nK,\displaystyle\ \ +\sum_{j=1}^{K_{2}}(n+j)\mu_{j}p(n+j,t),\ 0\leq n\leq K,

with initial condition p(0,0)=1p(0,0)=1. Here, p(n,t)=0p(n,t)=0 for all n<0n<0 and n>Kn>K. Hence, the differential equation that governs its mean is given by

ddt𝔼(N(t))=(i=1K1iλi+j=1K2jμj)𝔼(N(t))+Ki=1K1iλi\frac{\mathrm{d}}{\mathrm{d}t}\mathbb{E}(N(t))=-\left(\sum_{i=1}^{K_{1}}i\lambda_{i}+\sum_{j=1}^{K_{2}}j\mu_{j}\right)\mathbb{E}(N(t))+K\sum_{i=1}^{K_{1}}i\lambda_{i} (6.1)

with initial condition 𝔼(N(0))=0\mathbb{E}(N(0))=0. On solving (6.1), we get

𝔼(N(t))=Ki=1K1iλiβ(1eβt),\mathbb{E}(N(t))=\frac{K\sum_{i=1}^{K_{1}}i\lambda_{i}}{\beta}\left(1-e^{-\beta t}\right),

where β=i=1K1iλi+j=1K2jμj\beta=\sum_{i=1}^{K_{1}}i\lambda_{i}+\sum_{j=1}^{K_{2}}j\mu_{j}. In the long run, the expected number of vehicles parked is given by

limt𝔼(N(t))=Ki=1K1iλiβ<K.\lim_{t\to\infty}\mathbb{E}(N(t))=\frac{K\sum_{i=1}^{K_{1}}i\lambda_{i}}{\beta}<K.
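The closed form of 𝔼(N(t)) can be checked against a numerical solution of the above finite system of state-probability equations. In the following sketch the parameters are hypothetical, the chain is kept on {0,1,…,K} by dropping group jumps that would overshoot the boundaries, so the comparison is only approximate near the boundaries, and the equations are integrated with an explicit Euler step:

import numpy as np

K = 50                            # hypothetical lot size
lam = np.array([0.10, 0.04])      # arrival rates per vacant spot, K1 = 2
mu = np.array([0.08, 0.02])       # departure rates per parked vehicle, K2 = 2
beta = (np.arange(1, 3) * lam).sum() + (np.arange(1, 3) * mu).sum()

# generator of the finite chain on {0, 1, ..., K}; group jumps that would
# overshoot the boundaries are dropped, so the comparison is only approximate
Q = np.zeros((K + 1, K + 1))
for n in range(K + 1):
    for i in range(1, len(lam) + 1):
        if n + i <= K:
            Q[n, n + i] = (K - n) * lam[i - 1]
    for j in range(1, len(mu) + 1):
        if n - j >= 0:
            Q[n, n - j] = n * mu[j - 1]
np.fill_diagonal(Q, -Q.sum(axis=1))

p = np.zeros(K + 1)
p[0] = 1.0                        # empty lot at t = 0
dt, T = 1e-3, 30.0
for _ in np.arange(0.0, T, dt):
    p = p + dt * (p @ Q)          # explicit Euler step of dp/dt = pQ

mean_ode = p @ np.arange(K + 1)
mean_formula = K * (np.arange(1, 3) * lam).sum() / beta * (1 - np.exp(-beta * T))
print("E(N(T)) from the ODEs:", mean_ode, "  from the closed form:", mean_formula)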

The joint pmf p(a,n,t)=Pr{A(t)=a,N(t)=n}p(a,n,t)=\mathrm{Pr}\{A(t)=a,N(t)=n\} solves

ddtp(a,n,t)\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}p(a,n,t) =(i=1K1(Kn)λi+j=1K2nμj)p(a,n,t)+i=1K1(Kn+i)λip(ai,ni,t)\displaystyle=-\left(\sum_{i=1}^{K_{1}}(K-n)\lambda_{i}+\sum_{j=1}^{K_{2}}n\mu_{j}\right)p(a,n,t)+\sum_{i=1}^{K_{1}}(K-n+i)\lambda_{i}p(a-i,n-i,t)
+j=1K2(n+j)μjp(a,n+j,t),a0, 0nK\displaystyle\ \ +\sum_{j=1}^{K_{2}}(n+j)\mu_{j}p(a,n+j,t),\ a\geq 0,\,0\leq n\leq K

with p(0,0,0)=1p(0,0,0)=1. Its proof follows similar lines to that of Proposition 3.1.

The joint cgf K(θu,θv,t)=ln𝔼(eθuA(t)+θvN(t))K(\theta_{u},\theta_{v},t)=\ln\mathbb{E}(e^{\theta_{u}A(t)+\theta_{v}N(t)}) satisfies the following differential equation:

tK(θu,θv,t)+(i=1K1λi(ei(θu+θv)1)j=1K2μj(ejθv1))θvK(θu,θv,t)=Ki=1K1λi(ei(θu+θv)1)\frac{\partial}{\partial t}K(\theta_{u},\theta_{v},t)+\left(\sum_{i=1}^{K_{1}}\lambda_{i}(e^{i(\theta_{u}+\theta_{v})}-1)-\sum_{j=1}^{K_{2}}\mu_{j}(e^{-j\theta_{v}}-1)\right)\frac{\partial}{\partial\theta_{v}}K(\theta_{u},\theta_{v},t)=K\sum_{i=1}^{K_{1}}\lambda_{i}(e^{i(\theta_{u}+\theta_{v})}-1)

with initial condition K(θu,θv,0)=0K(\theta_{u},\theta_{v},0)=0. On using (3.5), we get

ddt𝔼(A(t))=i=1K1iλi(K𝔼(N(t)))\frac{\mathrm{d}}{\mathrm{d}t}\mathbb{E}(A(t))=\sum_{i=1}^{K_{1}}i\lambda_{i}(K-\mathbb{E}(N(t)))

with 𝔼(A(0))=0\mathbb{E}(A(0))=0. Thus,

𝔼(A(t))=i=1K1iλi(KtKi=1K1iλiβ(t+etβ1β)).\mathbb{E}(A(t))=\sum_{i=1}^{K_{1}}i\lambda_{i}\left(Kt-\frac{K\sum_{i=1}^{K_{1}}i\lambda_{i}}{\beta}\bigg{(}t+\frac{e^{-t\beta}-1}{\beta}\bigg{)}\right).

Similarly, the joint pmf q(d,n,t)=Pr{D(t)=d,N(t)=n}q(d,n,t)=\mathrm{Pr}\{D(t)=d,N(t)=n\} is the solution of the following system of differential equations:

ddtq(d,n,t)\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}q(d,n,t) =(i=1K1(Kn)λi+j=1K2nμj)q(d,n,t)+i=1K1(Kn+i)λiq(d,ni,t)\displaystyle=-\left(\sum_{i=1}^{K_{1}}(K-n)\lambda_{i}+\sum_{j=1}^{K_{2}}n\mu_{j}\right)q(d,n,t)+\sum_{i=1}^{K_{1}}(K-n+i)\lambda_{i}q(d,n-i,t)
+j=1K2(n+j)μjq(dj,n+j,t),d0, 0nK.\displaystyle\ \ +\sum_{j=1}^{K_{2}}(n+j)\mu_{j}q(d-j,n+j,t),\ d\geq 0,\,0\leq n\leq K.

Hence, the cgf 𝒦(θu,θv,t)=ln𝔼(eθuD(t)+θvN(t))\mathcal{K}(\theta_{u},\theta_{v},t)=\ln\mathbb{E}(e^{\theta_{u}D(t)+\theta_{v}N(t)}) solves

t𝒦(θu,θv,t)+(i=1K1λi(eiθv1)+j=1K2μj(ej(θuθv)1))θv𝒦(θu,θv,t)=Ki=1K1λi(eiθv1)\frac{\partial}{\partial t}\mathcal{K}(\theta_{u},\theta_{v},t)+\left(\sum_{i=1}^{K_{1}}\lambda_{i}(e^{i\theta_{v}}-1)+\sum_{j=1}^{K_{2}}\mu_{j}(e^{j(\theta_{u}-\theta_{v})}-1)\right)\frac{\partial}{\partial\theta_{v}}\mathcal{K}(\theta_{u},\theta_{v},t)=K\sum_{i=1}^{K_{1}}\lambda_{i}(e^{i\theta_{v}}-1)

with 𝒦(θu,θv,0)=0\mathcal{K}(\theta_{u},\theta_{v},0)=0. Thus, we get

𝔼(D(t))=Ki=1K1iλij=1K2jμj(tβ+eβt1β2).\mathbb{E}(D(t))=K\sum_{i=1}^{K_{1}}i\lambda_{i}\sum_{j=1}^{K_{2}}j\mu_{j}\left(\frac{t}{\beta}+\frac{e^{-\beta t}-1}{\beta^{2}}\right).

Let us consider the stochastic path integral X(t)=0tN(s)dsX(t)=\int_{0}^{t}N(s)\,\mathrm{d}s. Then, we can find the average occupancy O(t)O(t) of the parking lot at any given time t>0t>0, which we define as O(t)=t1𝔼(X(t))O(t)=t^{-1}\mathbb{E}(X(t)).

From (4.6), the system of differential equations that governs the joint distribution p0(n,x,t)p_{0}(n,x,t) is given by

tp0(n,x,t)+nxp0(n,x,t)\displaystyle\frac{\partial}{\partial t}p_{0}(n,x,t)+n\frac{\partial}{\partial x}p_{0}(n,x,t) =(i=1K1(Kn)λi+j=1K2nμj)p0(n,x,t)\displaystyle=-\left(\sum_{i=1}^{K_{1}}(K-n)\lambda_{i}+\sum_{j=1}^{K_{2}}n\mu_{j}\right)p_{0}(n,x,t)
+i=1K1(Kn+i)λip0(ni,x,t)\displaystyle\ \ +\sum_{i=1}^{K_{1}}(K-n+i)\lambda_{i}p_{0}(n-i,x,t)
+j=1K2(n+j)μjp0(n+j,x,t), 0nK,\displaystyle\ \ +\sum_{j=1}^{K_{2}}(n+j)\mu_{j}p_{0}(n+j,x,t),\ 0\leq n\leq K,

where p0(n,x,t)=0p_{0}(n,x,t)=0 for all n<0n<0 and n>Kn>K.

So, the joint cgf K~(θu,θv,t)=ln𝔼(eθuN(t)+θvX(t))\tilde{K}(\theta_{u},\theta_{v},t)=\ln\mathbb{E}(e^{\theta_{u}N(t)+\theta_{v}X(t)}) solves

tK~(θu,θv,t)+(i=1K1λi(eiθu1)j=1K2μj(ejθu1)θv)θuK~(θu,θv,t)=Ki=1K1λi(eiθu1)\frac{\partial}{\partial t}\tilde{K}(\theta_{u},\theta_{v},t)+\left(\sum_{i=1}^{K_{1}}\lambda_{i}(e^{i\theta_{u}}-1)-\sum_{j=1}^{K_{2}}\mu_{j}(e^{-j\theta_{u}}-1)-\theta_{v}\right)\frac{\partial}{\partial\theta_{u}}\tilde{K}(\theta_{u},\theta_{v},t)=K\sum_{i=1}^{K_{1}}\lambda_{i}(e^{i\theta_{u}}-1) (6.2)

with K~(θu,θv,0)=0\tilde{K}(\theta_{u},\theta_{v},0)=0.

On using (3.5), we get the following differential equation:

ddt𝔼(X(t))=𝔼(N(t))\frac{\mathrm{d}}{\mathrm{d}t}\mathbb{E}(X(t))=\mathbb{E}(N(t))

with 𝔼(X(0))=0\mathbb{E}(X(0))=0. Hence,

𝔼(X(t))=Ki=1K1iλiβ(t+eβt1β).\mathbb{E}(X(t))=\frac{K\sum_{i=1}^{K_{1}}i\lambda_{i}}{\beta}\left(t+\frac{e^{-\beta t}-1}{\beta}\right).

Thus, average occupancy is

O(t)=Ki=1K1iλiβ(1+eβt1βt).O(t)=\frac{K\sum_{i=1}^{K_{1}}i\lambda_{i}}{\beta}\left(1+\frac{e^{-\beta t}-1}{\beta t}\right).
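Putting the formulas of this section together, the expected number of parked vehicles, the expected cumulative arrivals and departures, and the average occupancy can be tabulated over a day. The following sketch evaluates these closed forms for a hypothetical parking lot:

import numpy as np

K = 50                            # hypothetical lot size
lam = np.array([0.10, 0.04])      # arrival rates per vacant spot, K1 = 2
mu = np.array([0.08, 0.02])       # departure rates per parked vehicle, K2 = 2
i, j = np.arange(1, 3), np.arange(1, 3)
beta = (i * lam).sum() + (j * mu).sum()
c = K * (i * lam).sum() / beta    # long-run expected number of parked vehicles

def summary(t):
    # E(N(t)), E(A(t)), E(D(t)) and the average occupancy O(t) from the closed forms
    decay = (np.exp(-beta * t) - 1) / beta
    EN = c * (1 - np.exp(-beta * t))
    EA = (i * lam).sum() * (K * t - c * (t + decay))
    ED = c * (j * mu).sum() * (t + decay)
    O = c * (t + decay) / t
    return EN, EA, ED, O

for t in (1.0, 8.0, 24.0):
    print(t, summary(t))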

References

  • [1] Bailey, N.T.J., 1964. The Elements of Stochastic Processes with Applications to the Natural Sciences. Wiley, New York.
  • [2] Doubleday, W.G., 1973. On linear birth-death processes with multiple births. Math. Biosci. 17, 43-56.
  • [3] Di Crescenzo, A., Martinucci, B. and Meoli, A. 2016. A fractional counting process and its connection with the Poisson process. ALEA Lat. Am. J. Probab. Math. Stat. 13(1), 291-307.
  • [4] Feller, W., 1968. An Introduction to Probability Theory and Its Applications, Vol. 1, 3rd ed. Wiley, New York.
  • [5] Gani, J., McNeil, D.R., 1971. Joint distributions of random variables and their integrals for certain birth-death and diffusion processes. Advances Appl. Prob. 3, 339-352.
  • [6] Kendall, D.G., 1948. On the generalized “birth-and-death” process. Ann. Math. Statist. 19(1), 1-15.
  • [7] Karlin, S., Taylor, H.M., 1975. A First Course in Stochastic Processes. Academic Press.
  • [8] Moran, P.A.P., 1951. Estimation methods for evolutive processes. J. R. Statist. Soc. B 13, 141-146.
  • [9] Moran, P.A.P., 1953. Estimation of parameters of a birth and death process. J. R. Statist. Soc. B 15, 241-245.
  • [10] McNeil, D.R., 1970. Integral functions of birth and death processes and related limiting distributions. Ann. Math. Stat. 41, 480-485.
  • [11] Puri, P.S., 1966. On the homogeneous birth and death process and its integral. Biometrika. 53, 61-67.
  • [12] Puri, P.S., 1968. Some further results on the birth-and-death process and its integral. Proc. Camb. Phil. Soc. 67, 141-154.