
Parameter estimation and long-range dependence of the fractional binomial process

Meena Sanjay Babulal$^{a}$, Sunil Kumar Gauttam$^{b}$ and Aditya Maheshwari$^{\#,c}$
$^{a}$[email protected], $^{b}$[email protected], $^{c}$[email protected]
Department of Mathematics, The LNM Institute of Information Technology, Rupa ki Nangal, Post-Sumel, Via-Jamdoli, Jaipur 302031, Rajasthan, India.
$^{\#}$Operations Management and Quantitative Techniques Area, Indian Institute of Management Indore, Indore 453556, Madhya Pradesh, India.
Abstract.

In 1990, Jakeman (see [21]) defined the binomial process as a special case of the classical birth-death process, where the probability of birth is proportional to the difference between a fixed number and the number of individuals present. Later, Cahoy and Polito (2012) (see [7]) studied a fractional generalization of the binomial process, which they called the fractional binomial process (FBP). In this paper, we study the second-order properties of the FBP and the long-range behavior of the FBP and its noise process. We also estimate the parameters of the FBP using the method of moments. Finally, we present a simulation algorithm for the FBP along with simulated sample paths.

Key words and phrases:
long-range dependence; fractional binomial process; linear birth-death process; fractional calculus; Mittag–Leffler functions.
MSC 2020 Mathematics Subject Classification:
60G22, 60G55.

1. Introduction

A linear birth and death process, introduced by Feller (see [18]), is widely used to model population dynamics (see [2, 3, 38]), queuing systems (see [19]), and other phenomena (see [11, 35, 43]) in which entities enter and exit a system over time. In the population model, each individual gives birth to a new individual at rate $\lambda>0$ and dies at rate $\mu>0$, independently of the others. Several researchers have studied its statistical properties (see [10, 14, 25, 26, 42]), and there are numerous domains in which it finds use, including biology (see [39, 45]), ecology (see [2, 11, 12]), and finance (see [27, 28, 41]).

Jakeman (see [21]) studied a linear birth and death process in which the birth rate is proportional to $N-n$, where $N$ is a fixed large number and $n$ is the present population, while the mortality rate stays linear in $n$. Moreover, it is demonstrated that an equilibrium with a binomial distribution is attained as time tends to infinity, and therefore it is called the binomial process. The behavior of the binomial process differs from that of the traditional linear birth-death process: since the birth rate is proportional to $N-n$, the chance of a birth becomes zero whenever the population size $n$ reaches $N$. Therefore, the population never exceeds $N$ in the binomial process, whereas the classical linear birth and death process imposes no such upper bound on the population size. The binomial process has found application in several areas, such as telegraph wave models (see [1, 23]), quantum optics (see [15, 24, 13]), etc.

Recently, the fractional binomial process (FBP) $\{\mathcal{N}^{\nu}(t)\}_{t\geq 0}$, with birth rate $\lambda>0$ and death rate $\mu>0$, was introduced by Cahoy and Polito (see [7]). It is obtained by replacing the integer-order derivative in the governing differential equation of the binomial process with a fractional-order derivative. They also showed that the one-dimensional distribution of the FBP coincides with that of the binomial process subordinated by an inverse $\nu$-stable subordinator, $0<\nu<1$. It preserves the binomial limit as time tends to infinity, which makes it appealing for applications in areas such as quantum optics (see [22]) and several other disciplines (see [20, 33, 44]).

Cahoy and Polito (see [7]) studied several statistical properties of the FBP, such as the mean, variance, extinction probability and state probabilities. The second-order properties of the FBP remain to be investigated; one of them is the long-range dependence (LRD) property. The LRD property of a stochastic model or process refers to the long-term persistence of autocorrelation over time. More specifically, it means that the autocorrelation function of the process decays slowly, so that observations far apart in time remain correlated. This is in contrast to the short-range dependence (SRD) property, where correlations decay quickly as the lag between observations increases. The LRD property is useful in several tasks such as modeling, prediction, and risk management. One can find its applications in various fields, including finance (see [31, 33]), climate science (see [44]), biomedical engineering (see [20, 37]), econometrics (see [36]), etc.

In this paper, we prove that the FBP has the LRD property. Let $\delta>0$ be fixed; the increments of the FBP are defined as

Z^{\delta}_{\nu}(t)=\mathcal{N}^{\nu}(t+\delta)-\mathcal{N}^{\nu}(t),\qquad t\geq 0,

which we call the fractional binomial noise (FBN). We prove that the FBN has the SRD property.

Parameter estimation is a fundamental aspect of data analysis, where the goal is to determine the values of unknown parameters in a model or system based on observed data. Parameter estimation for the FBP is not available in the literature, and in this paper we address it using the method of moments. Simulated sample paths of the FBP give a visual idea of the evolution of the process, and in this paper we present simulated sample paths for the FBP.

The subsequent sections of the paper are structured as follows. In Section 2, we state some preliminary results regarding the binomial process and the FBP. Section 3 deals with the LRD property of the FBP and the SRD property of the FBN. Section 4 presents a simulation algorithm for generating sample paths of the FBP. Finally, in Section 5, we study parameter estimation for the FBP.

2. Preliminaries

In this section, we introduce some notations, definitions and results that will be used later. A linear birth-and-death (LBD) process is a continuous-time Markov chain (CTMC) $\{Y(t):t\geq 0\}$, defined on the countable state space $S=\{0,1,2,3,\ldots\}$, in which transitions are permitted only to nearest neighbours. The state probability $p_{n}(t)=\mathbb{P}\{Y(t)=n\,|\,Y(0)=M\}$ of the LBD process satisfies the following Cauchy problem (see [3])

(2.1) \left\{\begin{array}{ll}\dfrac{d}{dt}p_{n}(t)=\mu(n+1)p_{n+1}(t)-\mu np_{n}(t)-\lambda np_{n}(t)+\lambda(n-1)p_{n-1}(t),\\ p_{n}(0)=\left\{\begin{array}{ll}1&n=M,\\ 0&n\neq M,\end{array}\right.\end{array}\right.

where $M\geq 1$ is the initial population. The LBD process has applications in several areas, such as modelling population dynamics (see [2, 3, 38]), queuing systems (see [19]) and biological systems (see [39, 45, 2, 11, 12]). Jakeman (see [21]) studied a linear birth and death process with some modifications and obtained the binomial process. We next present some preliminary results on the binomial process that will be needed later in this paper.

2.1. Binomial process

The classical binomial process $\{\mathcal{N}(t)\}_{t\geq 0}$ with birth rate $\lambda>0$ and death rate $\mu>0$, introduced by Jakeman (see [21]), has state probabilities $p_{n}(t)$ governed by the initial value problem

(2.2) \left\{\begin{array}{ll}\dfrac{d}{dt}p_{n}(t)=\mu(n+1)p_{n+1}(t)-\mu np_{n}(t)-\lambda(N-n)p_{n}(t)+\lambda(N-n+1)p_{n-1}(t),\quad\mbox{if $0\leq n\leq N$,}\\ p_{n}(0)=\left\{\begin{array}{ll}1&n=M,\\ 0&n\neq M,\end{array}\right.\end{array}\right.

where $p_{n}(t)=\mathbb{P}\{\mathcal{N}(t)=n\,|\,\mathcal{N}(0)=M\}$, $M$ is the initial population and $N$ is the maximum attainable population. The state space of the binomial process is $\{0,\ldots,N\}$. The generating function of $\mathcal{N}(t)$ is defined as

Q(u,t)=\sum_{n=0}^{N}(1-u)^{n}p_{n}(t),

and it satisfies the following partial differential equation (pde)

(2.3) \left\{\begin{array}{ll}\dfrac{\partial}{\partial t}Q(u,t)=-\mu u\dfrac{\partial}{\partial u}Q(u,t)-\lambda u(1-u)\dfrac{\partial}{\partial u}Q(u,t)-\lambda NuQ(u,t),\\ Q(u,0)=(1-u)^{M},\qquad|1-u|\leq 1,\end{array}\right.

where $M\geq 1$ is the initial number of individuals and $N\geq M$. The solution of the above equation (2.3) is given by

(2.4) Q(u,t)=[1-(1-\theta)\xi u]^{N}\left(\frac{1-[(1-\theta)\xi+\theta]u}{1-(1-\theta)\xi u}\right)^{M},

where $\xi=\frac{\lambda}{\lambda+\mu}$ and $\theta(t)=\exp[-(\mu+\lambda)t]$. The joint probability generating function of the binomial process is given by (see [21])

(2.5) Q(u,u^{\prime},t)=\sum_{n=0}^{N}(1-u)^{n}P_{n}Q_{n}(u^{\prime},t),

where $Q_{n}(u^{\prime},t)$ is given by (2.4) and the subscript $n$ denotes the initial population. The probability $P_{n}$ of finding $n$ individuals is given by (see [21])

(2.6) P_{n}=\left\{\begin{array}{cc}\binom{N}{n}\xi^{n}(1-\xi)^{N-n}&n\leq N,\\ 0&n>N.\end{array}\right.

Moreover, it is observed in [21] that as time tends to infinity, the evolving population follows a binomial distribution with parameters $N$ and $\lambda/(\lambda+\mu)$. We now state some preliminary results of the FBP that will be needed later.
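As a quick illustration of this limiting behaviour before moving to the FBP, the following minimal Python sketch (with arbitrary, illustrative parameter values) simulates the binomial process by stepping through its exponential sojourn times, with birth rate $\lambda(N-n)$ and death rate $\mu n$ in state $n$, and compares the empirical distribution of $\mathcal{N}(T)$ at a large time $T$ with the limiting binomial distribution with parameters $N$ and $\xi=\lambda/(\lambda+\mu)$.

```python
import random
from math import comb

def simulate_binomial_process(lam, mu, N, M, T, rng):
    """Simulate the classical binomial process and return its state at time T.

    In state n the birth rate is lam*(N - n) and the death rate is mu*n,
    so the sojourn time is exponential with total rate lam*(N - n) + mu*n.
    """
    t, n = 0.0, M
    while True:
        rate = lam * (N - n) + mu * n        # total transition rate (positive when lam, mu > 0)
        t += rng.expovariate(rate)           # exponential sojourn time in state n
        if t > T:
            return n
        if rng.random() < lam * (N - n) / rate:
            n += 1                           # birth
        else:
            n -= 1                           # death

if __name__ == "__main__":
    lam, mu, N, M, T = 0.015, 0.05, 50, 30, 500.0    # illustrative values only
    rng = random.Random(1)
    xi = lam / (lam + mu)
    samples = [simulate_binomial_process(lam, mu, N, M, T, rng) for _ in range(5000)]
    print("empirical mean at T:", sum(samples) / len(samples), "   N*xi =", N * xi)
    for k in (int(N * xi) - 1, int(N * xi), int(N * xi) + 1):
        emp = samples.count(k) / len(samples)
        theo = comb(N, k) * xi**k * (1 - xi) ** (N - k)
        print(f"P(n = {k}): empirical {emp:.3f},  Binomial(N, xi) limit {theo:.3f}")
```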

2.2. Fractional binomial process

The FBP (see [7]) $\{\mathcal{N}^{\nu}(t)\}_{t\geq 0}$ is obtained by replacing the integer-order derivative in the governing differential equation (2.2) of the binomial process with a fractional-order derivative. The governing differential equation of the FBP $\{\mathcal{N}^{\nu}(t)\}_{t\geq 0}$ with birth rate $\lambda>0$ and death rate $\mu>0$ is given by

(2.7) \left\{\begin{array}{ll}\dfrac{d^{\nu}}{dt^{\nu}}p_{n}^{\nu}(t)=\mu(n+1)p_{n+1}^{\nu}(t)-\mu np_{n}^{\nu}(t)-\lambda(N-n)p_{n}^{\nu}(t)+\lambda(N-n+1)p_{n-1}^{\nu}(t),&0\leq n\leq N,\\ p_{n}^{\nu}(0)=\left\{\begin{array}{ll}1&n=M,\\ 0&n\neq M.\end{array}\right.\end{array}\right.

The inverse $\nu$-stable subordinator is defined as the right-continuous inverse of the $\nu$-stable subordinator $\{D_{\nu}(t)\}_{t\geq 0}$ (see [6, 34]), i.e.,

E_{\nu}(t)=\inf\{x>0 : D_{\nu}(x)>t\},\quad 0<\nu<1,\quad t\geq 0.

It is observed (see [7]) that the one-dimensional distribution of the FBP $\{\mathcal{N}^{\nu}(t)\}_{t\geq 0}$ can be written as that of the binomial process $\{\mathcal{N}(t)\}_{t\geq 0}$ time-changed by an independent inverse $\nu$-stable subordinator $E_{\nu}(t)$, i.e.,

(2.8) \mathcal{N}^{\nu}(t)\overset{d}{=}\mathcal{N}(E_{\nu}(t)),

where $t\geq 0$ and $\nu\in(0,1)$. It is known (see [7]) that the generating function of the FBP, $Q^{\nu}(u,t)=\sum_{n=0}^{N}(1-u)^{n}p_{n}^{\nu}(t)$, solves the following differential equation

(2.9) \left\{\begin{array}{ll}\dfrac{\partial^{\nu}}{\partial t^{\nu}}Q^{\nu}(u,t)=-\mu u\dfrac{\partial}{\partial u}Q^{\nu}(u,t)-\lambda u(1-u)\dfrac{\partial}{\partial u}Q^{\nu}(u,t)-\lambda NuQ^{\nu}(u,t),\\ Q^{\nu}(u,0)=(1-u)^{M},\qquad|1-u|\leq 1,\end{array}\right.

where the initial number of individuals is $M\geq 1$, and $N\geq M$.

Definition 2.1.

Let $f(x)$ and $g(x)$ be two functions. They are called asymptotically equivalent, denoted by $f(x)\sim g(x)$, if

\lim_{x\rightarrow\infty}\frac{f(x)}{g(x)}=1.

The Mittag–Leffler function can be defined as (see [17])

E_{\alpha}(z)=\sum_{r=0}^{\infty}\dfrac{z^{r}}{\Gamma(\alpha r+1)},\quad\alpha,z\in\mathbb{C},\quad\Re(\alpha)>0.

Now, using the expansion $E_{\nu}(-x)=\tfrac{1}{\pi}\sum_{n=0}^{\infty}\tfrac{a_{n}(\nu)}{x^{n+1}}$, where $0<\nu<1$ (see [5]), we show that $E_{\nu}(-x)$ is asymptotically equivalent to $\tfrac{a_{0}(\nu)}{\pi x}$, that is,

E_{\nu}(-x) =\dfrac{1}{x\pi}\left[a_{0}(\nu)+\dfrac{a_{1}(\nu)}{x}+\dfrac{a_{2}(\nu)}{x^{2}}+\cdots\right]
=\dfrac{1}{\pi}\left[\dfrac{a_{0}(\nu)}{x}+\dfrac{a_{1}(\nu)}{x^{2}}+\dfrac{a_{2}(\nu)}{x^{3}}+\cdots\right]
(2.10) =\dfrac{1}{\pi}\left[\dfrac{a_{0}(\nu)}{x}+O\left(\dfrac{1}{x^{2}}\right)\right]\sim\dfrac{a_{0}(\nu)}{\pi x}.
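The relation (2.10) can be checked numerically with the minimal sketch below (an illustration only; the truncated power series used here for $E_{\nu}(-x)$ is reliable in double precision only for moderate $x$). Matching (2.10) with the standard large-$x$ expansion of $E_{\nu}(-x)$ suggests $a_{0}(\nu)/\pi=1/\Gamma(1-\nu)$, which is the constant used for the leading term.

```python
from math import gamma

def ml_neg(nu, x, terms=200):
    """Truncated power series for E_nu(-x); adequate only for moderate x."""
    return sum((-x) ** r / gamma(nu * r + 1) for r in range(terms))

nu = 0.7
for x in (2.0, 5.0, 10.0):
    series = ml_neg(nu, x)
    leading = 1.0 / (x * gamma(1 - nu))   # a_0(nu)/(pi*x), assuming a_0(nu)/pi = 1/Gamma(1-nu)
    print(f"x = {x:4.1f}:  series E_nu(-x) = {series:.5f},  leading term = {leading:.5f}")
```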

The mean and variance of the FBP $\{\mathcal{N}^{\nu}(t)\}_{t\geq 0}$ are given by (see [7])

(2.11) \mathbb{E}[\mathcal{N}^{\nu}(t)] =\left(M-\dfrac{N\lambda}{\lambda+\mu}\right)E_{\nu}(-(\lambda+\mu)t^{\nu})+\dfrac{N\lambda}{\lambda+\mu}
\mbox{Var}[\mathcal{N}^{\nu}(t)] =\left(\dfrac{\lambda^{2}N(N-1)}{(\lambda+\mu)^{2}}-\dfrac{2\lambda M(N-1)}{\lambda+\mu}+M(M-1)\right)E_{\nu}(-2(\lambda+\mu)t^{\nu})
\qquad+\left(\dfrac{2\lambda^{2}N}{(\lambda+\mu)^{2}}-\dfrac{\lambda}{\lambda+\mu}(N+2M)+M\right)E_{\nu}(-(\lambda+\mu)t^{\nu})
(2.12) \qquad-\left(M-N\dfrac{\lambda}{\lambda+\mu}\right)^{2}E_{\nu}(-(\lambda+\mu)t^{\nu})^{2}+\dfrac{N\lambda\mu}{(\lambda+\mu)^{2}},

where $E_{\alpha}(z)=\sum_{r=0}^{\infty}\tfrac{z^{r}}{\Gamma(\alpha r+1)}$ is the Mittag–Leffler function.
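For reference, the mean (2.11) and variance (2.12) can be evaluated directly, as in the minimal Python sketch below (illustrative only; the parameter values are arbitrary, and the truncated Mittag–Leffler series restricts $t$ to moderate values). Since $E_{\nu}(-(\lambda+\mu)t^{\nu})\to 0$ as $t\to\infty$, both quantities approach the binomial limits $N\lambda/(\lambda+\mu)$ and $N\lambda\mu/(\lambda+\mu)^{2}$.

```python
from math import gamma

def ml_neg(nu, x, terms=200):
    """Truncated power series for E_nu(-x); adequate only for moderate x."""
    return sum((-x) ** r / gamma(nu * r + 1) for r in range(terms))

def fbp_mean(t, lam, mu, N, M, nu):
    """Mean of the FBP, equation (2.11)."""
    e1 = ml_neg(nu, (lam + mu) * t**nu)
    return (M - N * lam / (lam + mu)) * e1 + N * lam / (lam + mu)

def fbp_var(t, lam, mu, N, M, nu):
    """Variance of the FBP, equation (2.12)."""
    c = lam + mu
    e1 = ml_neg(nu, c * t**nu)
    e2 = ml_neg(nu, 2 * c * t**nu)
    return ((lam**2 * N * (N - 1) / c**2 - 2 * lam * M * (N - 1) / c + M * (M - 1)) * e2
            + (2 * lam**2 * N / c**2 - lam * (N + 2 * M) / c + M) * e1
            - (M - N * lam / c) ** 2 * e1**2
            + N * lam * mu / c**2)

lam, mu, N, M, nu = 0.03, 0.05, 50, 30, 0.8      # illustrative values only
for t in (10.0, 50.0, 200.0):
    print(f"t = {t:6.1f}:  mean = {fbp_mean(t, lam, mu, N, M, nu):.4f},  var = {fbp_var(t, lam, mu, N, M, nu):.4f}")
print("limits:", N * lam / (lam + mu), N * lam * mu / (lam + mu) ** 2)
```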

2.3. The long and short range dependence

In the literature, there are various definitions of the LRD and SRD characteristics of a stochastic process. However, for the purpose of this paper, we will utilize the following definition (see [4, 16, 32]) for a non-stationary process.

Definition 2.2.

Let $d>0$ and let $\{X(t)\}_{t\geq 0}$ be a stochastic process whose correlation function has the asymptotic behaviour

\lim_{t\rightarrow\infty}\frac{\mbox{Corr}\left[X(s),X(t)\right]}{t^{-d}}=c(s),\qquad 0<s<t,

for fixed $s$ and some $c(s)>0$. Then we say that the stochastic process $\{X(t)\}_{t\geq 0}$ has the LRD property if $d\in(0,1)$, and it is said to have the SRD property if $d\in(1,2)$.

3. Dependence structure for the FBP

The aim of this section is to examine the LRD property of the FBP $\{\mathcal{N}^{\nu}(t)\}_{t\geq 0}$ and the SRD property of the fractional binomial noise (FBN) $\{Z^{\delta}_{\nu}(t)\}_{t\geq 0}$. We first derive some auxiliary results, starting with a recurrence relation for the joint probability generating function (pgf) of the FBP.

Lemma 3.1.

The joint pgf of the FBP satisfies the following relationship

(3.1) Q^{\nu}(u,u^{\prime})=\sum_{n=0}^{N}(1-u)^{n}P_{n}Q^{\nu}_{n}(u^{\prime},t),

where $P_{n}$ is given by (2.6) and $Q_{n}^{\nu}$ is the solution of (2.9) with initial population $n$.

Proof.

Let $P^{\nu}_{nn^{\prime}}$ denote the probability of finding $n$ individuals at time $t_{0}$ and $n^{\prime}$ individuals at time $t_{0}+t$; it is given by

P^{\nu}_{nn^{\prime}} =\mathbb{P}\left(\mathcal{N}^{\nu}(t_{0}+t)=n^{\prime},\mathcal{N}^{\nu}(t_{0})=n\right)
(3.2) =\mathbb{P}\left(\mathcal{N}^{\nu}(t_{0}+t)=n^{\prime}\,|\,\mathcal{N}^{\nu}(t_{0})=n\right)\mathbb{P}(\mathcal{N}^{\nu}(t_{0})=n).

Now, we have the generating function for the FBP as

Q^{\nu}(u,u^{\prime};t) =\sum_{n,n^{\prime}=0}^{N}(1-u)^{n}(1-u^{\prime})^{n^{\prime}}P^{\nu}_{nn^{\prime}}
=\sum_{n,n^{\prime}=0}^{N}(1-u)^{n}(1-u^{\prime})^{n^{\prime}}\mathbb{P}\left(\mathcal{N}^{\nu}(t_{0}+t)=n^{\prime}\,|\,\mathcal{N}^{\nu}(t_{0})=n\right)\mathbb{P}(\mathcal{N}^{\nu}(t_{0})=n)\quad\mbox{(using (3.2))}
=\sum_{n=0}^{N}(1-u)^{n}\mathbb{P}(\mathcal{N}^{\nu}(t_{0})=n)\sum_{n^{\prime}=0}^{N}(1-u^{\prime})^{n^{\prime}}\mathbb{P}\left(\mathcal{N}^{\nu}(t_{0}+t)=n^{\prime}\,|\,\mathcal{N}^{\nu}(t_{0})=n\right)
=\sum_{n=0}^{N}(1-u)^{n}\mathbb{P}(\mathcal{N}^{\nu}(t_{0})=n)Q^{\nu}_{n}(u^{\prime},t),

analogously to (2.5). Now, using $\lim_{t_{0}\rightarrow\infty}\mathbb{P}(\mathcal{N}^{\nu}(t_{0})=n)=P_{n}$, we have

(3.3) Q^{\nu}(u,u^{\prime};t) =\sum_{n=0}^{N}(1-u)^{n}P_{n}Q^{\nu}_{n}(u^{\prime},t).\qed

We next evaluate $\mathbb{E}[\mathcal{N}^{\nu}(s)\mathcal{N}^{\nu}(t)]$ for the FBP.

Theorem 3.2.

Let $0<\nu<1$ and $\{\mathcal{N}^{\nu}(t)\}_{t\geq 0}$ be the FBP, then

(3.4) \mathbb{E}[\mathcal{N}^{\nu}(s)\mathcal{N}^{\nu}(t)]=(N\xi)^{2}-N\xi(\xi-1)E_{\nu}(-(\lambda+\mu)(t-s)^{\nu}).
Proof.

Using equation (3.1), we get

\dfrac{\partial}{\partial u^{\prime}}Q^{\nu}(u,u^{\prime};t-s) =\dfrac{\partial}{\partial u^{\prime}}\left(\sum_{n=0}^{N}(1-u)^{n}P_{n}Q^{\nu}_{n}(u^{\prime},t-s)\right)
=\sum_{n=0}^{N}(1-u)^{n}P_{n}\dfrac{\partial}{\partial u^{\prime}}Q^{\nu}_{n}(u^{\prime},t-s).

We have that

\dfrac{\partial^{2}}{\partial u\partial u^{\prime}}Q^{\nu}(u,u^{\prime};t-s) =\dfrac{\partial}{\partial u}\left[\sum_{n=0}^{N}(1-u)^{n}P_{n}\dfrac{\partial}{\partial u^{\prime}}Q^{\nu}_{n}(u^{\prime},t-s)\right]
=\sum_{n=1}^{N}-n(1-u)^{n-1}P_{n}\dfrac{\partial}{\partial u^{\prime}}Q^{\nu}_{n}(u^{\prime},t-s)
\dfrac{\partial^{2}}{\partial u\partial u^{\prime}}Q^{\nu}(u,u^{\prime})\big{|}_{u=0,u^{\prime}=0} =\sum_{n=1}^{N}nP_{n}\,\mathbb{E}[\mathcal{N}^{\nu}(t-s)\,|\,\mathcal{N}^{\nu}(0)=n]
=\sum_{n=1}^{N}n[(n-N\xi)E_{\nu}(-(\lambda+\mu)(t-s)^{\nu})+N\xi]P_{n}
(3.5) =\sum_{n=1}^{N}n^{2}E_{\nu}(-(\lambda+\mu)(t-s)^{\nu})P_{n}-\sum_{n=1}^{N}nN\xi(E_{\nu}(-(\lambda+\mu)(t-s)^{\nu})-1)P_{n}.

We evaluate the two sums in (3.5) separately. For the first sum, we have

\sum_{n=1}^{N}n^{2}E_{\nu}(-(\lambda+\mu)(t-s)^{\nu})P_{n} =E_{\nu}(-(\lambda+\mu)(t-s)^{\nu})\sum_{n=1}^{N}n^{2}\binom{N}{n}\xi^{n}(1-\xi)^{N-n}
=NE_{\nu}(-(\lambda+\mu)(t-s)^{\nu})\xi\sum_{n=1}^{N}\dfrac{(N-1)!}{(N-n)!(n-1)!}n\xi^{n-1}(1-\xi)^{N-n}
=N(N-1)E_{\nu}(-(\lambda+\mu)(t-s)^{\nu})\xi^{2}\sum_{n=2}^{N}\dfrac{(N-2)!}{(N-n)!(n-2)!}\xi^{n-2}(1-\xi)^{N-n}
\qquad+NE_{\nu}(-(\lambda+\mu)(t-s)^{\nu})\xi\sum_{n=1}^{N}\dfrac{(N-1)!}{(N-n)!(n-1)!}\xi^{n-1}(1-\xi)^{N-n}
(3.6) =N(N-1)E_{\nu}(-(\lambda+\mu)(t-s)^{\nu})\xi^{2}+NE_{\nu}(-(\lambda+\mu)(t-s)^{\nu})\xi.

For the second sum, we have

\sum_{n=1}^{N}nN\xi(E_{\nu}(-(\lambda+\mu)(t-s)^{\nu})-1)P_{n} =N\xi(E_{\nu}(-(\lambda+\mu)(t-s)^{\nu})-1)\sum_{n=1}^{N}n\binom{N}{n}\xi^{n}(1-\xi)^{N-n}
=N\xi(E_{\nu}(-(\lambda+\mu)(t-s)^{\nu})-1)\,N\xi\sum_{n=1}^{N}\binom{N-1}{n-1}\xi^{n-1}(1-\xi)^{N-n}
(3.7) =(N\xi)^{2}\left(E_{\nu}(-(\lambda+\mu)(t-s)^{\nu})-1\right).

Using equations (3.5), (3.6) and (3.7), we get

\dfrac{\partial^{2}}{\partial u\partial u^{\prime}}Q^{\nu}(u,u^{\prime})\big{|}_{u=0,u^{\prime}=0} =N(N-1)E_{\nu}(-(\lambda+\mu)(t-s)^{\nu})\xi^{2}+NE_{\nu}(-(\lambda+\mu)(t-s)^{\nu})\xi
\qquad-(N\xi)^{2}\left(E_{\nu}(-(\lambda+\mu)(t-s)^{\nu})-1\right)
=(N\xi)^{2}E_{\nu}(-(\lambda+\mu)(t-s)^{\nu})-N\xi^{2}E_{\nu}(-(\lambda+\mu)(t-s)^{\nu})
\qquad+N\xi E_{\nu}(-(\lambda+\mu)(t-s)^{\nu})-(N\xi)^{2}E_{\nu}(-(\lambda+\mu)(t-s)^{\nu})+(N\xi)^{2}
=(N\xi)^{2}+N\xi(1-\xi)E_{\nu}(-(\lambda+\mu)(t-s)^{\nu}).

Hence, we have

\mathbb{E}[\mathcal{N}^{\nu}(s)\mathcal{N}^{\nu}(t)] =\dfrac{\partial^{2}}{\partial u\partial u^{\prime}}Q^{\nu}(u,u^{\prime})\big{|}_{u=0,u^{\prime}=0}=(N\xi)^{2}+N\xi(1-\xi)E_{\nu}(-(\lambda+\mu)(t-s)^{\nu}).\qquad\qed

Next, we compute the autocovariance function of the FBP.

Theorem 3.3.

The autocovariance function of the FBP $\{\mathcal{N}^{\nu}(t)\}_{t\geq 0}$ is given by

\mbox{Cov}[\mathcal{N}^{\nu}(s),\mathcal{N}^{\nu}(t)]=N\xi(1-\xi)E_{\nu}(-(\lambda+\mu)(t-s)^{\nu})-(M^{2}-2MN\xi+N^{2}\xi^{2})
(3.8) \qquad\times E_{\nu}(-(\lambda+\mu)s^{\nu})E_{\nu}(-(\lambda+\mu)t^{\nu})-(M-N\xi)N\xi\left[E_{\nu}(-(\lambda+\mu)s^{\nu})+E_{\nu}(-(\lambda+\mu)t^{\nu})\right].
Proof.

Using (2.11), we obtain the following expression

\mathbb{E}[\mathcal{N}^{\nu}(s)]\mathbb{E}[\mathcal{N}^{\nu}(t)] =(M^{2}-2MN\xi+N^{2}\xi^{2})E_{\nu}(-(\lambda+\mu)s^{\nu})E_{\nu}(-(\lambda+\mu)t^{\nu})
\qquad+(M-N\xi)N\xi\left[E_{\nu}(-(\lambda+\mu)s^{\nu})+E_{\nu}(-(\lambda+\mu)t^{\nu})\right]+N^{2}\xi^{2}.

Using equation (3.4), we get

\mbox{Cov}[\mathcal{N}^{\nu}(s),\mathcal{N}^{\nu}(t)]=\mathbb{E}[\mathcal{N}^{\nu}(s)\mathcal{N}^{\nu}(t)]-\mathbb{E}[\mathcal{N}^{\nu}(s)]\mathbb{E}[\mathcal{N}^{\nu}(t)]
=(N\xi)^{2}+N\xi(1-\xi)E_{\nu}(-(\lambda+\mu)(t-s)^{\nu})-(M^{2}-2MN\xi+N^{2}\xi^{2})E_{\nu}(-(\lambda+\mu)s^{\nu})E_{\nu}(-(\lambda+\mu)t^{\nu})
\qquad-(M-N\xi)N\xi\left[E_{\nu}(-(\lambda+\mu)s^{\nu})+E_{\nu}(-(\lambda+\mu)t^{\nu})\right]-N^{2}\xi^{2}
=N\xi(1-\xi)E_{\nu}(-(\lambda+\mu)(t-s)^{\nu})-(M^{2}-2MN\xi+N^{2}\xi^{2})E_{\nu}(-(\lambda+\mu)s^{\nu})E_{\nu}(-(\lambda+\mu)t^{\nu})
\qquad-(M-N\xi)N\xi\left[E_{\nu}(-(\lambda+\mu)s^{\nu})+E_{\nu}(-(\lambda+\mu)t^{\nu})\right].\qed

We next present the asymptotic behavior of the variance and covariance functions of the FBP.

Theorem 3.4.

The variance and covariance functions of the FBP are asymptotically equivalent to

\mbox{Var}[\mathcal{N}^{\nu}(t)]\sim\dfrac{a_{0}(\nu)}{\pi(\lambda+\mu)t^{\nu}}\left[\dfrac{\xi^{2}N(N-1)-2\xi M(N-1)+M(M-1)}{2}+2\xi^{2}N-\xi(N+2M)+M\right],
\mbox{Cov}[\mathcal{N}^{\nu}(s),\mathcal{N}^{\nu}(t)]\sim\dfrac{a_{0}(\nu)}{\pi(\lambda+\mu)t^{\nu}}\left[N\xi(1-\xi)-(M-N\xi)^{2}E_{\nu}(-(\lambda+\mu)s^{\nu})-(M-N\xi)N\xi\right],

as $t\to\infty$, where $0<\nu<1$, $0<s<t<\infty$ and $s$ is fixed.

Proof.

Using (2.12) and (2.10), we have

\mbox{Var}[\mathcal{N}^{\nu}(t)]=\left(\xi^{2}N(N-1)-2\xi M(N-1)+M(M-1)\right)E_{\nu}(-2(\lambda+\mu)t^{\nu})
\qquad+\left(2\xi^{2}N-\xi(N+2M)+M\right)E_{\nu}(-(\lambda+\mu)t^{\nu})-(M-N\xi)^{2}E_{\nu}(-(\lambda+\mu)t^{\nu})^{2}+N\xi\dfrac{\mu}{\mu+\lambda}
\sim\left(\xi^{2}N(N-1)-2\xi M(N-1)+M(M-1)\right)\dfrac{a_{0}(\nu)}{2\pi(\lambda+\mu)t^{\nu}}
\qquad+\left(2\xi^{2}N-\xi(N+2M)+M\right)\dfrac{a_{0}(\nu)}{\pi(\lambda+\mu)t^{\nu}}-(M-N\xi)^{2}\left(\dfrac{a_{0}(\nu)}{\pi(\lambda+\mu)t^{\nu}}\right)^{2}+N\xi\dfrac{\mu}{\mu+\lambda}
\sim\dfrac{a_{0}(\nu)}{\pi(\lambda+\mu)t^{\nu}}\left[\dfrac{\xi^{2}N(N-1)-2\xi M(N-1)+M(M-1)}{2}+2\xi^{2}N-\xi(N+2M)+M-(M-N\xi)^{2}\dfrac{a_{0}(\nu)}{\pi(\lambda+\mu)t^{\nu}}\right]
(3.9) \sim\dfrac{a_{0}(\nu)}{\pi(\lambda+\mu)t^{\nu}}\left[\dfrac{\xi^{2}N(N-1)-2\xi M(N-1)+M(M-1)}{2}+2\xi^{2}N-\xi(N+2M)+M\right].

Using (3.8) and (2.10), we have

\mbox{Cov}[\mathcal{N}^{\nu}(s),\mathcal{N}^{\nu}(t)] =N\xi(1-\xi)E_{\nu}(-(\lambda+\mu)(t-s)^{\nu})-(M^{2}-2MN\xi+N^{2}\xi^{2})E_{\nu}(-(\lambda+\mu)s^{\nu})
\qquad\times E_{\nu}(-(\lambda+\mu)t^{\nu})-(M-N\xi)N\xi\left[E_{\nu}(-(\lambda+\mu)s^{\nu})+E_{\nu}(-(\lambda+\mu)t^{\nu})\right]
\sim N\xi(1-\xi)\dfrac{a_{0}(\nu)}{\pi(\lambda+\mu)(t-s)^{\nu}}-(M-N\xi)^{2}E_{\nu}(-(\lambda+\mu)s^{\nu})\dfrac{a_{0}(\nu)}{\pi(\lambda+\mu)t^{\nu}}
\qquad-(M-N\xi)N\xi\left(E_{\nu}(-(\lambda+\mu)s^{\nu})+\dfrac{a_{0}(\nu)}{\pi(\lambda+\mu)t^{\nu}}\right)
\sim N\xi(1-\xi)\dfrac{a_{0}(\nu)}{\pi(\lambda+\mu)(t-s)^{\nu}}-(M-N\xi)^{2}E_{\nu}(-(\lambda+\mu)s^{\nu})\dfrac{a_{0}(\nu)}{\pi(\lambda+\mu)t^{\nu}}-(M-N\xi)N\xi\dfrac{a_{0}(\nu)}{\pi(\lambda+\mu)t^{\nu}}
\sim\dfrac{a_{0}(\nu)}{\pi(\lambda+\mu)t^{\nu}}\left[\dfrac{N\xi(1-\xi)}{(1-s/t)^{\nu}}-(M-N\xi)^{2}E_{\nu}(-(\lambda+\mu)s^{\nu})-(M-N\xi)N\xi\right]
(3.10) \sim\dfrac{a_{0}(\nu)}{\pi(\lambda+\mu)t^{\nu}}\left[N\xi(1-\xi)-(M-N\xi)^{2}E_{\nu}(-(\lambda+\mu)s^{\nu})-(M-N\xi)N\xi\right].\qed

We now prove the main result of this section.

Theorem 3.5.

The FBP $\{\mathcal{N}^{\nu}(t)\}_{t\geq 0}$ exhibits the LRD property.

Proof.

Let $0<s<t$ with $s$ fixed. Using (3.9) and (3.10), we get

\mbox{Corr}[\mathcal{N}^{\nu}(s),\mathcal{N}^{\nu}(t)]=\dfrac{\mbox{Cov}[\mathcal{N}^{\nu}(s),\mathcal{N}^{\nu}(t)]}{(\mbox{Var}[\mathcal{N}^{\nu}(s)]\mbox{Var}[\mathcal{N}^{\nu}(t)])^{1/2}}
\sim\dfrac{\left(\dfrac{a_{0}(\nu)}{\pi(\lambda+\mu)t^{\nu}}\right)^{1/2}\left[N\xi(1-\xi)-(M-N\xi)^{2}E_{\nu}(-(\lambda+\mu)s^{\nu})-(M-N\xi)N\xi\right]}{\left\{\mbox{Var}[\mathcal{N}^{\nu}(s)]\left[\dfrac{\xi^{2}N(N-1)-2\xi M(N-1)+M(M-1)}{2}+2\xi^{2}N-\xi(N+2M)+M\right]\right\}^{1/2}}
\sim\dfrac{c(s)}{t^{\nu/2}},
where
c(s) =\dfrac{\left(\dfrac{a_{0}(\nu)}{\pi(\lambda+\mu)}\right)^{1/2}\left[N\xi(1-\xi)-(M-N\xi)^{2}E_{\nu}(-(\lambda+\mu)s^{\nu})-(M-N\xi)N\xi\right]}{\left\{\mbox{Var}[\mathcal{N}^{\nu}(s)]\left[\dfrac{\xi^{2}N(N-1)-2\xi M(N-1)+M(M-1)}{2}+2\xi^{2}N-\xi(N+2M)+M\right]\right\}^{1/2}}.

Since $\nu\in(0,1)$, the decay exponent $d=\nu/2\in(0,1)$, and hence the FBP has the LRD property. ∎

Definition 3.6 (Fractional Binomial Noise).

Let $\delta>0$ be fixed. The increments of the fractional binomial process, called the fractional binomial noise (FBN), are defined as

Z^{\delta}_{\nu}(t)=\mathcal{N}^{\nu}(t+\delta)-\mathcal{N}^{\nu}(t),\qquad t\geq 0.

Noise processes find applications in sonar communication (see [40]), vehicular communications (see [30]), wireless sensor networks (see [29]) and many other fields where signals are transmitted through noise. We now explore the dependence structure of the fractional binomial noise (FBN) $\{Z^{\delta}_{\nu}(t)\}_{t\geq 0}$.

Theorem 3.7.

The FBN $\{Z^{\delta}_{\nu}(t)\}_{t\geq 0}$ has the SRD property.

Proof.

Let $s,\delta\geq 0$ be fixed, and $0\leq s+\delta\leq t$. We begin with

\mbox{Cov}[Z^{\delta}_{\nu}(s),Z^{\delta}_{\nu}(t)] =\mbox{Cov}[\mathcal{N}^{\nu}(s+\delta)-\mathcal{N}^{\nu}(s),\mathcal{N}^{\nu}(t+\delta)-\mathcal{N}^{\nu}(t)]
=\mbox{Cov}[\mathcal{N}^{\nu}(s+\delta),\mathcal{N}^{\nu}(t+\delta)]+\mbox{Cov}[\mathcal{N}^{\nu}(s),\mathcal{N}^{\nu}(t)]-\mbox{Cov}[\mathcal{N}^{\nu}(s+\delta),\mathcal{N}^{\nu}(t)]
(3.11) \qquad-\mbox{Cov}[\mathcal{N}^{\nu}(s),\mathcal{N}^{\nu}(t+\delta)].

From (3.10), we have

\mbox{Cov}[\mathcal{N}^{\nu}(s),\mathcal{N}^{\nu}(t)]\sim\dfrac{r}{t^{\nu}}\left[N\xi(1-\xi)-(M-N\xi)^{2}E_{\nu}(-(\lambda+\mu)s^{\nu})-(M-N\xi)N\xi\right],

where $r=\frac{a_{0}(\nu)}{\pi(\mu+\lambda)}$. Using the above equation in (3.11), we get

\mbox{Cov}[Z^{\delta}_{\nu}(s),Z^{\delta}_{\nu}(t)] \sim\dfrac{r}{(t+\delta)^{\nu}}\left[N\xi(1-\xi)-(M-N\xi)^{2}E_{\nu}(-(\lambda+\mu)(s+\delta)^{\nu})-(M-N\xi)N\xi\right]
\qquad+\dfrac{r}{t^{\nu}}\left[N\xi(1-\xi)-(M-N\xi)^{2}E_{\nu}(-(\lambda+\mu)s^{\nu})-(M-N\xi)N\xi\right]
\qquad-\dfrac{r}{t^{\nu}}\left[N\xi(1-\xi)-(M-N\xi)^{2}E_{\nu}(-(\lambda+\mu)(s+\delta)^{\nu})-(M-N\xi)N\xi\right]
\qquad-\dfrac{r}{(t+\delta)^{\nu}}\left[N\xi(1-\xi)-(M-N\xi)^{2}E_{\nu}(-(\lambda+\mu)s^{\nu})-(M-N\xi)N\xi\right]
\sim r(M-N\xi)^{2}\left[-\dfrac{E_{\nu}(-(\lambda+\mu)(s+\delta)^{\nu})}{(t+\delta)^{\nu}}-\dfrac{E_{\nu}(-(\lambda+\mu)s^{\nu})}{t^{\nu}}+\dfrac{E_{\nu}(-(\lambda+\mu)(s+\delta)^{\nu})}{t^{\nu}}+\dfrac{E_{\nu}(-(\lambda+\mu)s^{\nu})}{(t+\delta)^{\nu}}\right]
\sim r(M-N\xi)^{2}\left(\dfrac{1}{(t+\delta)^{\nu}}-\dfrac{1}{t^{\nu}}\right)\left(E_{\nu}(-(\lambda+\mu)s^{\nu})-E_{\nu}(-(\lambda+\mu)(s+\delta)^{\nu})\right)
\sim\dfrac{r(M-N\xi)^{2}}{t^{\nu}}\left(\dfrac{-\nu\delta}{t}\right)\left(E_{\nu}(-(\lambda+\mu)s^{\nu})-E_{\nu}(-(\lambda+\mu)(s+\delta)^{\nu})\right)
\sim r(M-N\xi)^{2}\left(\dfrac{-\nu\delta}{t^{1+\nu}}\right)\left(E_{\nu}(-(\lambda+\mu)s^{\nu})-E_{\nu}(-(\lambda+\mu)(s+\delta)^{\nu})\right).

Observe that

\mbox{Var}[Z^{\delta}_{\nu}(t)] =\mbox{Var}[\mathcal{N}^{\nu}(t+\delta)]+\mbox{Var}[\mathcal{N}^{\nu}(t)]-2\mbox{Cov}[\mathcal{N}^{\nu}(t+\delta),\mathcal{N}^{\nu}(t)]
\mbox{Var}[\mathcal{N}^{\nu}(t)] \sim\dfrac{r}{t^{\nu}}\left[\dfrac{\xi^{2}N(N-1)-2\xi M(N-1)+M(M-1)}{2}+2\xi^{2}N-\xi(N+2M)+M\right]
\mbox{Cov}[\mathcal{N}^{\nu}(t+\delta),\mathcal{N}^{\nu}(t)] =N\xi(1-\xi)E_{\nu}(-(\lambda+\mu)\delta^{\nu})
\qquad-(M^{2}-2MN\xi+N^{2}\xi^{2})E_{\nu}(-(\lambda+\mu)(t+\delta)^{\nu})E_{\nu}(-(\lambda+\mu)t^{\nu})
\qquad-(M-N\xi)N\xi\left[E_{\nu}(-(\lambda+\mu)(t+\delta)^{\nu})+E_{\nu}(-(\lambda+\mu)t^{\nu})\right]
\sim-(M^{2}-2MN\xi+N^{2}\xi^{2})r^{2}\dfrac{1}{(t(t+\delta))^{\nu}}-(M-N\xi)N\xi r\left[\dfrac{1}{(t+\delta)^{\nu}}+\dfrac{1}{t^{\nu}}\right].

Using the above equations, we get

\mbox{Var}[Z^{\delta}_{\nu}(t)]=\mbox{Var}[\mathcal{N}^{\nu}(t+\delta)]+\mbox{Var}[\mathcal{N}^{\nu}(t)]-2\mbox{Cov}[\mathcal{N}^{\nu}(t+\delta),\mathcal{N}^{\nu}(t)]
\sim\dfrac{r}{t^{\nu}}\left[\dfrac{\xi^{2}N(N-1)-2\xi M(N-1)+M(M-1)}{2}+2\xi^{2}N-\xi(N+2M)+M\right]
\qquad+\dfrac{r}{(t+\delta)^{\nu}}\left[\dfrac{\xi^{2}N(N-1)-2\xi M(N-1)+M(M-1)}{2}+2\xi^{2}N-\xi(N+2M)+M\right]
\qquad+2(M^{2}-2MN\xi+N^{2}\xi^{2})r^{2}\dfrac{1}{(t(t+\delta))^{\nu}}
\qquad+2(M-N\xi)N\xi\left[\dfrac{r}{(t+\delta)^{\nu}}+\dfrac{r}{t^{\nu}}\right]
\sim\dfrac{r}{t^{\nu}}\left(\left[\dfrac{\xi^{2}N(N-1)-2\xi M(N-1)+M(M-1)}{2}+2\xi^{2}N-\xi(N+2M)+M\right]\left(1+\dfrac{1}{(1+\delta/t)^{\nu}}\right)+2(M-N\xi)N\xi\left(1+\dfrac{1}{(1+\delta/t)^{\nu}}\right)\right)
\sim\dfrac{2r}{t^{\nu}}\left[\dfrac{\xi^{2}N(N-1)-2\xi M(N-1)+M(M-1)}{2}+2\xi^{2}N-\xi(N+2M)+M+2(M-N\xi)N\xi\right].

Now, we calculate the correlation function

\mbox{Corr}[Z^{\delta}_{\nu}(s),Z^{\delta}_{\nu}(t)]=\dfrac{\mbox{Cov}[Z^{\delta}_{\nu}(s),Z^{\delta}_{\nu}(t)]}{(\mbox{Var}[Z^{\delta}_{\nu}(s)]\mbox{Var}[Z^{\delta}_{\nu}(t)])^{1/2}}
\sim\dfrac{r(M-N\xi)^{2}\left(\dfrac{-\nu\delta}{t^{1+\nu}}\right)\left(E_{\nu}(-(\lambda+\mu)s^{\nu})-E_{\nu}(-(\lambda+\mu)(s+\delta)^{\nu})\right)}{\left\{\dfrac{2r}{t^{\nu}}\left[\dfrac{\xi^{2}N(N-1)-2\xi M(N-1)+M(M-1)}{2}+2\xi^{2}N-\xi(N+2M)+M+2(M-N\xi)N\xi\right]\mbox{Var}[Z^{\delta}_{\nu}(s)]\right\}^{1/2}}
\sim\dfrac{1}{t^{1+\frac{\nu}{2}}}\cdot\dfrac{-\nu\delta\sqrt{r}(M-N\xi)^{2}\left(E_{\nu}(-(\lambda+\mu)s^{\nu})-E_{\nu}(-(\lambda+\mu)(s+\delta)^{\nu})\right)}{\left\{2\left[\dfrac{\xi^{2}N(N-1)-2\xi M(N-1)+M(M-1)}{2}+2\xi^{2}N-\xi(N+2M)+M+2(M-N\xi)N\xi\right]\mbox{Var}[Z^{\delta}_{\nu}(s)]\right\}^{1/2}}
\sim\dfrac{c(s)}{t^{1+\frac{\nu}{2}}},\quad\mbox{where}
c(s)=\dfrac{-\nu\delta\sqrt{r}(M-N\xi)^{2}\left(E_{\nu}(-(\lambda+\mu)s^{\nu})-E_{\nu}(-(\lambda+\mu)(s+\delta)^{\nu})\right)}{\left\{2\left[\dfrac{\xi^{2}N(N-1)-2\xi M(N-1)+M(M-1)}{2}+2\xi^{2}N-\xi(N+2M)+M+2(M-N\xi)N\xi\right]\mbox{Var}[Z^{\delta}_{\nu}(s)]\right\}^{1/2}}.

Since $\nu\in(0,1)$, the decay exponent $1+\nu/2\in(1,2)$, and hence the FBN has the SRD property. ∎

4. Simulation

In this section, we provide an algorithm to simulate the FBP, which we will use in Section 5 for parameter estimation of the FBP. The sojourn time $S_{k}$ of the process $\{\mathcal{N}(t)\}_{t\geq 0}$ is defined as the duration for which it remains in the current state $k$. The distribution of the sojourn (inter-arrival) time $S_{k}$ is given by (see [38, Chapter VI, Section 3.2])

\mathbb{P}\left\{S_{k}\geq t\right\}=\exp[-(\lambda(N-k)+\mu k)t],

and thus the pdf of the sojourn time $S_{k}$ is given by

f_{S_{k}}(t)=(\lambda(N-k)+\mu k)\exp[-(\lambda(N-k)+\mu k)t],\quad t\geq 0.

Using (2.8), we obtain the sojourn time $S_{k}^{\nu}$ of the FBP $\{\mathcal{N}^{\nu}(t)\}_{t\geq 0}$ as

\mathbb{P}\left\{S_{k}^{\nu}\geq t\right\}=E_{\nu}[-(\lambda(N-k)+\mu k)t^{\nu}].

This implies that the FBP changes state from $k$ to $k+1$ or $k-1$ with probability $\frac{\lambda(N-k)}{\lambda(N-k)+\mu k}$ or $\frac{\mu k}{\lambda(N-k)+\mu k}$, respectively. It can now be simulated using the following procedure.

 

Algorithm 1 Simulation of the fractional binomial process
Require: $N$, $M$, birth rate $\lambda$, death rate $\mu$, fractional index $\nu$, and the desired number of transitions $K$.
Ensure: $\mathcal{N}^{\nu}$, a simulated sample path of the fractional binomial process.
Initialisation: set the present population $n=M$ (with $0\leq n\leq N$) and the current time $t_{0}=0$.
1: for $k=1:K$ do
2: generate an exponential random variable $\xi_{k}$ with rate $\lambda(N-n)+\mu n$, a one-sided $\nu$-stable random variable $V_{\nu}$, and a uniform random variable $U_{k}$ on $(0,1)$.
3: set the sojourn time $S_{k}^{\nu}\overset{d}{=}\xi_{k}^{1/\nu}V_{\nu}$ and the transition epoch $t_{k}=t_{k-1}+S_{k}^{\nu}$.
4: if $U_{k}\leq\dfrac{\lambda(N-n)}{\lambda(N-n)+\mu n}$ then
5: set $n=n+1$,
6: otherwise set $n=n-1$,
7: end if
8: set $\mathcal{N}^{\nu}(t_{k})=n$.
9: end for
10: return $\mathcal{N}^{\nu}$.
In Figures 1 and 2 below, the sample paths are generated with $N=500$ and $M=300$.
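A runnable Python counterpart of Algorithm 1 is sketched below (an illustrative implementation, not the original code used for the figures). The one-sided $\nu$-stable variable $V_{\nu}$ is drawn with a standard Kanter-type representation, and the Mittag–Leffler sojourn time is obtained as $\xi_{k}^{1/\nu}V_{\nu}$, where $\xi_{k}$ is exponential with rate $\lambda(N-n)+\mu n$.

```python
import math
import random

def one_sided_stable(nu, rng):
    """One-sided nu-stable variable with Laplace transform exp(-s**nu) (Kanter-type formula)."""
    u = rng.random()
    while u == 0.0:                       # avoid sin(0) in the denominator
        u = rng.random()
    u *= math.pi
    w = rng.expovariate(1.0)              # standard exponential
    num = math.sin(nu * u) * math.sin((1 - nu) * u) ** (1 / nu - 1)
    den = math.sin(u) ** (1 / nu)
    return (num / den) * w ** (1 - 1 / nu)

def simulate_fbp(lam, mu, N, M, nu, K, rng):
    """Simulate K transitions of the FBP; returns the transition epochs and states."""
    t, n = 0.0, M
    times, states = [0.0], [M]
    for _ in range(K):
        rate = lam * (N - n) + mu * n                     # total rate lam*(N-n) + mu*n in state n
        xi = rng.expovariate(rate)                        # exponential with that rate
        t += xi ** (1 / nu) * one_sided_stable(nu, rng)   # Mittag-Leffler sojourn time
        if rng.random() <= lam * (N - n) / rate:
            n += 1                                        # birth
        else:
            n -= 1                                        # death
        times.append(t)
        states.append(n)
    return times, states

if __name__ == "__main__":
    rng = random.Random(7)
    times, states = simulate_fbp(lam=0.015, mu=0.05, N=500, M=300, nu=0.8, K=2000, rng=rng)
    print("final time:", times[-1], "   final state:", states[-1])
```

A sample path is obtained by plotting `states` against `times` as a step function; repeating the call with different seeds gives multiple trajectories analogous to those shown in Figures 1 and 2.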
Figure 1. Five simulated sample paths of the binomial and fractional binomial process: (a) $\nu=1$, $\lambda=0.015$, $\mu=0.05$, $N=500$, $M=300$; (b) $\nu=0.8$, $\lambda=0.015$, $\mu=0.05$, $N=500$, $M=300$.
Figure 2. Five simulated sample paths of the fractional binomial process: (a) $\nu=0.8$, $\lambda=0.05$, $\mu=0.015$, $N=500$, $M=300$; (b) $\nu=0.8$, $\lambda=0.015$, $\mu=0.015$, $N=500$, $M=300$.

Interpretation of the sample paths

We observe from Figure 1 that, when the binomial process is compared with the FBP, sudden population bursts (negative bursts here, due to the higher death rate) are visible in the FBP; that is, the burst frequency increases as the value of $\nu$ decreases from $1$ towards $0$. The sample paths of the FBP in Figures 1-2 keep fluctuating around their theoretical mean.

5. Parameter estimation of the FBP

The method of moments (MoM) is a statistical technique used for estimating the parameters of the distribution of a population. The moments are summary statistics that describe various aspects of the distribution, such as the mean, variance, skewness, and kurtosis. Here, we do not use the maximum likelihood technique, as no explicit expression for the density of the FBP is available. We also cannot use the linear regression approach to parameter estimation employed by Cahoy and Polito (see [8, 9]), as the corresponding regression model in our case is non-linear.

Let $T$ be a fixed time and let $X_{1},X_{2},\ldots,X_{J}$ denote the values obtained from $J$ simulated sample paths of the FBP at time $T$. Then, using $X_{1},X_{2},\ldots,X_{J}$, we evaluate the sample mean ($m_{1}$) and the sample second moment ($m_{2}$) as follows

(5.1) m_{1}=\dfrac{1}{J}\sum_{n=1}^{J}X_{n}\quad\mbox{ and }\quad m_{2}=\dfrac{1}{J}\sum_{n=1}^{J}X_{n}^{2}.

We denote the population first moment by $\mu_{1}^{\prime}(\lambda,\nu)$ (as a function of $\lambda$ and $\nu$) and the population second moment by $\mu^{\prime}_{2}(\lambda,\nu)$. Then, using (2.11) and (2.12), we get

(5.2) \mu_{1}^{\prime}(\lambda,\nu)=\left(M-\dfrac{N\lambda}{\lambda+\mu}\right)E_{\nu}(-(\lambda+\mu)t^{\nu})+\dfrac{N\lambda}{\lambda+\mu}
\mu^{\prime}_{2}(\lambda,\nu)=\left(\dfrac{\lambda^{2}N(N-1)}{(\lambda+\mu)^{2}}-\dfrac{2\lambda M(N-1)}{\lambda+\mu}+M(M-1)\right)E_{\nu}(-2(\lambda+\mu)t^{\nu})
\qquad+\left(\dfrac{2\lambda^{2}N}{(\lambda+\mu)^{2}}-\dfrac{\lambda(N+2M)}{\lambda+\mu}+M\right)E_{\nu}(-(\lambda+\mu)t^{\nu})-\left(M-N\dfrac{\lambda}{\lambda+\mu}\right)^{2}E_{\nu}(-(\lambda+\mu)t^{\nu})^{2}
(5.3) \qquad+\dfrac{N\lambda\mu}{(\lambda+\mu)^{2}}+\left\{\left(M-\dfrac{N\lambda}{\lambda+\mu}\right)E_{\nu}(-(\lambda+\mu)t^{\nu})+\dfrac{N\lambda}{\lambda+\mu}\right\}^{2}.

To estimate the parameters $\lambda$ and $\nu$, we equate the sample moments (5.1) of the FBP with the population moments (5.2)-(5.3) and numerically solve the following system of equations

m_{1} =\mu_{1}^{\prime}(\lambda,\nu)
(5.4) m_{2} =\mu_{2}^{\prime}(\lambda,\nu).

We take $J=500$ and repeat this procedure $K$ times; that is, we generate samples $X_{1},X_{2},\ldots,X_{500}$ of the FBP for different numbers of repetitions $K=100$, $1{,}000$ and $10{,}000$. For each $i=1,2,\ldots,K$, we evaluate the sample mean ($m_{1,i}$) and the sample second moment ($m_{2,i}$) using (5.1) and solve (5.4), which gives $K$ estimates each of $\lambda$ and $\nu$; subsequently, we take the average of the $K$ estimates of $\lambda$ and $\nu$ to obtain $\hat{\lambda}$ and $\hat{\nu}$.

We solve the system (5.4) numerically, since it is easy to observe from (5.2)-(5.3) that the moment equations have a complicated form and are hard to solve analytically. The FBP data were simulated for five distinct pairs of values of $\lambda$ and $\nu$, and the tables below display the resulting estimates together with the associated MAD (mean absolute deviation) and MSE (mean squared error).
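A possible implementation of this moment-matching step is sketched below (illustrative only; it assumes scipy is available and uses a truncated Mittag–Leffler series, so $T$ should be kept moderate; the sample moments `m1` and `m2` would come from the simulated values of $\mathcal{N}^{\nu}(T)$ as in (5.1)).

```python
import numpy as np
from math import gamma
from scipy.optimize import least_squares

def ml_neg(nu, x, terms=200):
    """Truncated power series for E_nu(-x); adequate only for moderate x."""
    return sum((-x) ** r / gamma(nu * r + 1) for r in range(terms))

def pop_moments(lam, nu, mu, N, M, t):
    """Population moments mu_1'(lam, nu) and mu_2'(lam, nu) from (5.2)-(5.3)."""
    c = lam + mu
    e1 = ml_neg(nu, c * t**nu)
    e2 = ml_neg(nu, 2 * c * t**nu)
    mean = (M - N * lam / c) * e1 + N * lam / c
    var = ((lam**2 * N * (N - 1) / c**2 - 2 * lam * M * (N - 1) / c + M * (M - 1)) * e2
           + (2 * lam**2 * N / c**2 - lam * (N + 2 * M) / c + M) * e1
           - (M - N * lam / c) ** 2 * e1**2
           + N * lam * mu / c**2)
    return mean, var + mean**2            # second moment = variance + mean**2

def estimate_lam_nu(m1, m2, mu, N, M, t, lam0=0.5, nu0=0.5):
    """Solve m1 = mu_1'(lam, nu) and m2 = mu_2'(lam, nu) for (lam, nu)."""
    def residual(p):
        lam, nu = p
        mom1, mom2 = pop_moments(lam, nu, mu, N, M, t)
        return [mom1 - m1, mom2 - m2]
    sol = least_squares(residual, x0=[lam0, nu0],
                        bounds=([1e-6, 1e-3], [np.inf, 0.999]))
    return tuple(sol.x)                   # (lam_hat, nu_hat)

# usage sketch: with m1, m2 computed from J simulated values of N^nu(T) via (5.1),
# lam_hat, nu_hat = estimate_lam_nu(m1, m2, mu=0.5, N=500, M=30, t=T)
```

Averaging the $K$ pairs of estimates returned by such a routine gives $\hat{\lambda}$ and $\hat{\nu}$ as described above.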

Table 1. Parameter estimates and their dispersion for the FBP with $\lambda=0.3$ and $\nu=0.8$, where $\mu=0.5$, $M=30$ and $N=500$.
                        K=100                        K=1,000                      K=10,000
                  Mean     MAD      MSE        Mean     MAD      MSE        Mean     MAD      MSE
$\hat{\lambda}$   0.3045   0.0071   0.000073   0.3036   0.0046   0.00003    0.3016   0.0018   0.00001
$\hat{\nu}$       0.8652   0.1853   0.0482     0.8588   0.1820   0.0466     0.8358   0.0547   0.0042
Table 2. Parameter estimates and their dispersion for the FBP with $\lambda=0.5$ and $\nu=0.4$, where $\mu=0.5$, $M=30$ and $N=500$.
                        K=100                        K=1,000                      K=10,000
                  Mean     MAD      MSE        Mean     MAD      MSE        Mean     MAD      MSE
$\hat{\lambda}$   0.4942   0.0177   0.00049    0.4962   0.0060   0.00006    0.4975   0.0078   0.00009
$\hat{\nu}$       0.4395   0.0754   0.0090     0.4206   0.0266   0.0011     0.4175   0.0311   0.0017
Table 3. Parameter estimates and their dispersion for the FBP with $\lambda=0.6$ and $\nu=0.9$, where $\mu=0.5$, $M=30$ and $N=500$.
                        K=100                        K=1,000                      K=10,000
                  Mean     MAD      MSE        Mean     MAD      MSE        Mean     MAD      MSE
$\hat{\lambda}$   0.5982   0.0141   0.00005    0.5985   0.0042   0.00003    0.5989   0.0012   0.00001
$\hat{\nu}$       0.9062   0.0934   0.0156     0.8930   0.0290   0.0013     0.8938   0.0071   0.00008
Table 4. Parameter estimates and their dispersion for the FBP with $\lambda=0.7$ and $\nu=0.2$, where $\mu=0.5$, $M=30$ and $N=500$.
                        K=100                        K=1,000                      K=10,000
                  Mean     MAD      MSE        Mean     MAD      MSE        Mean     MAD      MSE
$\hat{\lambda}$   0.6845   0.0157   0.00031    0.6874   0.0029   0.000012   0.6804   0.0009   0.00001
$\hat{\nu}$       0.1405   0.0595   0.00025    0.1479   0.0521   0.000012   0.1501   0.0020   0.000005
Table 5. Parameter estimates and their dispersion for the FBP with $\lambda=0.9$ and $\nu=0.5$, where $\mu=0.5$, $M=30$ and $N=500$.
                        K=100                        K=1,000                      K=10,000
                  Mean     MAD      MSE        Mean     MAD      MSE        Mean     MAD      MSE
$\hat{\lambda}$   0.8877   0.0254   0.0010     0.8896   0.0095   0.00014    0.8902   0.0028   0.00001
$\hat{\nu}$       0.5383   0.0954   0.0095     0.5221   0.0207   0.00008    0.5209   0.0070   0.00007

The estimation results in Tables 1-5 demonstrate that the estimates of $\lambda$ and $\nu$ keep approaching the true values as the sample size increases. We also observe that the estimated parameters are very close to the true parameter values, with the variation between them being, in most cases, less than $5$ percent. It is important to keep in mind that typical sample sizes $K$ in many real-world applications, including network traffic data, are in the millions or more. Given the context and the calculations done, we claim that our results show robust and accurate parameter estimation. Table 6 reports the percent bias and the coefficient of variation (CV) of the estimates for each value of $K$, where

\mbox{Percent bias} =\frac{|\mbox{average of the estimates}-\mbox{true parameter value}|}{\mbox{true parameter value}}\times 100,
\mbox{CV} =\frac{\mbox{standard deviation of the estimates}}{\mbox{average of the estimates}}\times 100.
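For completeness, these two diagnostics can be computed from a collection of $K$ estimates with a small helper such as the following (illustrative sketch):

```python
from statistics import mean, pstdev

def percent_bias_and_cv(estimates, true_value):
    """Percent bias and coefficient of variation (both in percent) of a list of estimates."""
    avg = mean(estimates)
    bias = abs(avg - true_value) / true_value * 100
    cv = pstdev(estimates) / avg * 100
    return bias, cv

# e.g. percent_bias_and_cv([0.31, 0.29, 0.305], true_value=0.3)
```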
Table 6. Percent bias and coefficient of variation for the parameters $\lambda$ and $\nu$.
                       K=100               K=1,000             K=10,000
($\lambda,\nu$)     Bias      CV        Bias      CV        Bias      CV
$\lambda=0.3$      1.5146    2.8224    1.2027    1.8942    0.7977    0.7985
$\nu=0.8$          8.1469   25.8868    7.3469   23.4176    4.4778    7.8188
$\lambda=0.5$      1.1597    5.0131    0.7068    4.5306    0.7011    1.5336
$\nu=0.4$          9.8809   23.6730    9.4664   21.4996    5.1618    8.0970
$\lambda=0.6$      0.3976    2.8965    0.3026    0.8610    0.2382    0.2425
$\nu=0.9$          4.0488   13.8720    0.7369    4.1093    0.6852    1.0077
$\lambda=0.7$      2.2198    2.7068    1.8022    0.5726    1.3710    0.1674
$\nu=0.2$         29.7280   11.8007   26.0687    2.8390   25.7242    1.6509
$\lambda=0.9$      1.3624    3.5978    1.1550    1.3553    1.1537    0.3964
$\nu=0.5$         13.2784   18.2131   12.9863    5.112    10.2784    1.7249

Concluding Remarks

We have shown that the FBP has the LRD property and that its increment process exhibits the SRD property. We have derived the distribution of the sojourn times of the FBP and used it, together with the one-dimensional distributions of the FBP, to simulate sample trajectories of the process. We have used the MoM estimation technique to estimate the parameters of the FBP. Comparing the generated sample paths in Figure 1, we can see that the time until the next birth or death shortens and population bursts occur. This behaviour makes the FBP more applicable in practice, since such incidences occur in real life; for example, during Covid-19 the demand for masks, sanitizers, oxygen cylinders and many other items saw sudden bursts.

Acknowledgment

The first author would like to acknowledge the Centre for Mathematical & Financial Computing and the DST-FIST grant for the infrastructure support of the computing lab facility under the scheme FIST (File No: SR/FST/MS-I/2018/24) at the LNMIIT, Jaipur.

References

  • [1] David Middleton. An Introduction to Statistical Communication Theory. McGraw-Hill, 1961.
  • [2] Jean Avan, Nicolas Grosjean, and Thierry Huillet. Did the ever dead outnumber the living and when? A birth-and-death approach. Physica A: Statistical Mechanics and its Applications, 2015.
  • [3] Norman T. J. Bailey. The elements of stochastic processes with applications to the natural sciences. John Wiley & Sons, Inc., New York-London-Sydney, 1964.
  • [4] Luisa Beghin and Costantino Ricciuti. Lévy processes linked to the lower-incomplete gamma function. Fractal and Fractional, 5(3), 2021.
  • [5] Mario N Berberan-Santos. Properties of the mittag-leffler relaxation function. Journal of Mathematical Chemistry, 38(4):629–635, 2005.
  • [6] N. H. Bingham. Limit theorems for occupation times of Markov processes. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 17:1–22, 1971.
  • [7] Dexter O Cahoy and Federico Polito. On a fractional binomial process. Journal of Statistical Physics, 146:646–662, 2012.
  • [8] Dexter O. Cahoy and Federico Polito. Parameter estimation for fractional birth and fractional death processes. Stat. Comput., 24(2):211–222, 2014.
  • [9] Dexter O Cahoy, Federico Polito, and Vir Phoha. Transient behavior of fractional queues and related processes. Methodology and Computing in Applied Probability, 17:739–759, 2015.
  • [10] Rui Chen and Ollivier Hyrien. Quasi- and pseudo-maximum likelihood estimators for discretely observed continuous-time Markov branching processes. Journal of Statistical Planning and Inference, 2011.
  • [11] Cláudia Codeço, Subhash Lele, Mercedes Pascual, Menno Bouma, and Albert Ko. A stochastic model for ecological systems with strong nonlinear response to environmental drivers: Application to two water-borne diseases. Journal of the Royal Society, Interface / the Royal Society, 5:247–52, 08 2007.
  • [12] Miklós Csűrös and István Miklós. A probabilistic model for gene content evolution with duplication, loss, and horizontal transfer. 10 2005.
  • [13] Wilbur B. Davenport and William L. Root. An introduction to the theory of random signals and noise. 1958.
  • [14] Anthony C. Davison, Sophie Hautphenne, and Andrea Kraus. Parameter estimation for discretely observed linear birth-and-death processes. Biometrics, 2020.
  • [15] Paul Diament and Malvin C. Teich. Evolution of the statistical properties of photons passed through a traveling-wave laser amplifier. IEEE Journal of Quantum Electronics, 1992.
  • [16] Mirko D’Ovidio and Erkan Nane. Time dependent random fields on spherical non-homogeneous surfaces. Stochastic Processes and their Applications, 124(6):2098–2131, 2014.
  • [17] Arthur Erdelyi, Wilhelm Magnus, Fritz Oberhettinger, and Francesco G. Tricomi, editors. Higher transcendental functions: Vol 2. California Institute of Technology Bateman manuscript project. McGraw-Hill, New York, 1953.
  • [18] William Feller. Die Grundlagen der Volterraschen Theorie des Kampfes ums Dasein in wahrscheinlichkeitstheoretischer Behandlung. Acta Biotheoretica, 1939.
  • [19] William Feller. An Introduction to Probability Theory and its Applications, Vol. 2. John Wiley & Sons, 2008.
  • [20] Plamen Ch Ivanov, Luis A Nunes Amaral, Ary L Goldberger, Shlomo Havlin, Michael G Rosenblum, Zbigniew R Struzik, and H Eugene Stanley. Multifractality in human heartbeat dynamics. Nature, 399(6735):461–465, 1999.
  • [21] E Jakeman. Statistics of binomial number fluctuations. Journal of Physics A: Mathematical and General, 23(13):2815, 1990.
  • [22] E. Jakeman. Non-Gaussian statistical models for scattering calculations. Waves in Random Media, 1991.
  • [23] E. Jakeman and R. Loudon. Fluctuations in quantum and classical populations. Journal of Physics A, 1991.
  • [24] Eric Jakeman, Sean Phayre, and Eric Renshaw. The evolution and measurement of a population of pairs. Journal of Applied Probability, 1995.
  • [25] Niels Keiding. Maximum likelihood estimation in the birth-and-death process. Advances in Applied Probability, 1974.
  • [26] David G. Kendall. Stochastic Processes and Population Growth. Journal of the Royal Statistical Society: Series B (Methodological), 11(2):230–264, 12 2018.
  • [27] S. C. Kou and S. C. Kou. Modeling growth stocks via birth-death processes. Advances in Applied Probability, 2003.
  • [28] Samuel Kou and Steven Kou. Modeling growth stocks via size distribution. SSRN Electronic Journal, 11 2001.
  • [29] Iratxe Landa, Aitor Blazquez, Manuel Velez, and Amaia Arrinda. Indoor measurements of iot wireless systems interfered by impulsive noise from fluorescent lamps. pages 2080–2083, 03 2017.
  • [30] Sicong Liu, Fang Yang, Wenbo Ding, and Jian Song. Double kill: Compressive-sensing-based narrow-band interference and impulsive noise mitigation for vehicular communications. IEEE Transactions on Vehicular Technology, 65:1–1, 01 2015.
  • [31] Thomas Lux. The stable paretian hypothesis and the frequency of large returns: an examination of major german stocks. Applied financial economics, 6(6):463–475, 1996.
  • [32] A. Maheshwari and P. Vellaisamy. On the long-range dependence of fractional poisson and negative binomial processes. Journal of Applied Probability, 53(4):989–1000, 2016.
  • [33] Benoit Mandelbrot. The variation of certain speculative prices. J. Bus., 36(4):394, January 1963.
  • [34] M. M. Meerschaert and P. Straka. Inverse stable subordinators. Math. Model. Nat. Phenom., 8(2):1–16, 2013.
  • [35] Sean Nee. Birth-death models in macroevolution. Annual Review of Ecology, Evolution, and Systematics, 37:1–17, 2006.
  • [36] Adrian Pagan. The econometrics of financial markets. Journal of empirical finance, 3(1):15–102, 1996.
  • [37] C-K Peng, Shlomo Havlin, H Eugene Stanley, and Ary L Goldberger. Quantification of scaling exponents and crossover phenomena in nonstationary heartbeat time series. Chaos: an interdisciplinary journal of nonlinear science, 5(1):82–87, 1995.
  • [38] Mark Pinsky and Samuel Karlin. An introduction to stochastic modeling. Academic press, 2010.
  • [39] Mary Sehl, Hua Zhou, Janet S. Sinsheimer, and Kenneth L. Lange. Extinction models for cancer stem cell therapy. Mathematical Biosciences, 234(2):132–146, 2011.
  • [40] Milica Stojanovic and J. Preisig. Underwater acoustic communication channels: Propagation models and statistical characterization. Communications Magazine, IEEE, 47:84 – 89, 02 2009.
  • [41] Hideyuki Takada, Ushio Sumita, and Hui Jin. Development of computational algorithms for evaluating option prices associated with square-root volatility processes. Methodology and Computing in Applied Probability, 2009.
  • [42] Simon Tavaré. The linear birth–death process: an inferential retrospective. Advances in Applied Probability, 2018.
  • [43] Jeffrey Thorne, Hirohisa Kishino, and J. Felsenstein. Erratum: An evolutionary model for maximum likelihood alignment of dna sequences (j mol evol (1991) 33 (114-124)). Journal of Molecular Evolution, 34:91–92, 01 1992.
  • [44] C. Varotsos and D. Kirk-Davidoff. Long-memory processes in ozone and temperature variations at the region 60°S–60°N. Atmospheric Chemistry and Physics, 6(12):4093–4100, 2006.
  • [45] Jason Xu, Peter Guttorp, Midori Kato-Maeda, and Vladimir N. Minin. Likelihood-based inference for discretely observed birth–death-shift processes, with applications to evolution of mobile genetic elements. Biometrics, 71(4):1009–1021, 2015.