
Least squares estimators for reflected Ornstein–Uhlenbeck processes

Han Yuecai [email protected] Zhang Dingwen [email protected] School of Mathematics, Jilin University, Changchun, China
Abstract

In this paper, we investigate the parameter estimation problem for reflected Ornstein–Uhlenbeck processes. Estimators based on continuously observed processes and on discretely observed processes are both considered. Explicit formulas for the estimators are derived using the least squares method. Under some regularity conditions, we obtain the consistency and establish the asymptotic normality of the estimators. Numerical results show that the proposed estimators perform well with moderate sample sizes.

keywords:
Least squares estimator, Reflected Ornstein–Uhlenbeck process, Ergodicity, Continuously observed processes, Discretely observed processes
journal: Journal of Statistical Planning and Inference

1 Introduction

Consider a filtered probability space $(\Omega,\mathcal{F},\{\mathcal{F}_{t}\}_{t\geq 0},\mathbb{P})$ where the filtration $\{\mathcal{F}_{t}\}_{t\geq 0}$ satisfies the usual conditions. Let $W=\{W_{t}\}_{t\geq 0}$ be a standard Brownian motion adapted to $\{\mathcal{F}_{t}\}_{t\geq 0}$. The reflected Ornstein–Uhlenbeck (OU) process reflected at 0 is described by the following stochastic differential equation (SDE)

\left\{\begin{aligned} &\mathrm{d}X_{t}=-\theta X_{t}\mathrm{d}t+\sigma\mathrm{d}W_{t}+\mathrm{d}L_{t},\\ &X_{t}\geq 0\quad\text{for all}\quad t\geq 0,\\ &X_{0}=x,\end{aligned}\right. \qquad (1.1)

where $\theta\in(0,\infty)$ is the unknown parameter, $\sigma\in(0,\infty)$ is a constant and $L=\{L_{t}\}_{t\geq 0}$ is the minimal continuous increasing process which ensures that $X_{t}\geq 0$ for all $t\geq 0$. The process $L$ increases only when $X$ hits the boundary 0, so that

\int_{[0,\infty)}I(X_{t}>0)\mathrm{d}L_{t}=0,

where $I(\cdot)$ is the indicator function.
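As an illustration of Eq. (1.1) (not part of the original analysis), a reflected OU path can be generated by a projected Euler scheme: take an unconstrained Euler step and, whenever it would go negative, push the state back to 0 and accumulate the push into $L$. This is a minimal sketch with hypothetical names; Lépingle (1995), cited in Section 4, gives a scheme with better boundary accuracy.

```python
import numpy as np

def simulate_reflected_ou(theta, sigma, x0, h, n, rng):
    """Projected Euler sketch for dX = -theta*X dt + sigma dW + dL with X >= 0."""
    X = np.empty(n + 1)
    L = np.zeros(n + 1)
    X[0] = x0
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(h))
        y = X[k] - theta * X[k] * h + sigma * dW  # unconstrained Euler step
        X[k + 1] = max(y, 0.0)                    # reflect at the barrier 0
        L[k + 1] = L[k] + max(-y, 0.0)            # push needed to keep X >= 0
    return X, L

theta, sigma, h, n = 0.5, 0.2, 0.01, 100_000
X, L = simulate_reflected_ou(theta, sigma, x0=1.0, h=h, n=n,
                             rng=np.random.default_rng(0))
```

The path stays non-negative, and $L$ is non-decreasing and increases only on the steps where the unconstrained update would cross 0.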

The reflected OU process behaves like a standard OU process in the interior of its domain $(0,\infty)$. Benefiting from its reflecting barrier, the reflected OU process has been widely used in many areas such as queueing systems (Ward and Glynn, 2005), financial engineering (Bo et al., 2010) and mathematical biology (Ricciardi and Sacerdote, 1987). The reflecting barrier is set to 0 because state processes such as queue lengths, stock prices and interest rates are physically restricted to non-negative values. For more details on reflected OU processes and their broad applications, one can refer to Harrison (1985) and Whitt (2002).

The parameter estimation problem for reflected OU processes has gained much attention in recent years due to their increased application in broad fields. In many real-world applications, the parameters characterizing the reflected OU process must be estimated from data. As far as we know, the maximum likelihood estimator (MLE) for the drift parameter $\theta$ was studied in Bo et al. (2011); they obtained the strong consistency and asymptotic normality of their estimator, but not an explicit form of the asymptotic variance. The sequential MLE based on processes observed continuously over a random time interval $[0,\tau]$, where $\tau$ is a stopping time, was studied in Lee et al. (2012). The main tool used in both papers is Girsanov's theorem for reflected Brownian motion. On the other hand, an ergodic-type estimator for $\theta$ based on discrete observations was studied in Hu et al. (2015). Recently, moment estimators for all parameters $(\theta,\sigma)$ based on the ergodic theorem were studied in Hu and Xi (2021). However, there is only limited literature on least squares estimators (LSEs) for the drift parameter of a reflected OU process.

In this paper, we propose two types of LSEs for the drift parameter $\theta$, based on continuously observed processes and discretely observed processes respectively. The continuous-type LSE is motivated by minimizing

\int_{0}^{T}\left|\dot{X}_{t}+\theta X_{t}-\dot{L}_{t}\right|^{2}\mathrm{d}t.

This is a quadratic function of $\theta$, even though $\dot{X}_{t}$ and $\dot{L}_{t}$ do not exist in the usual sense. The minimum is achieved at

\hat{\theta}_{T}=-\frac{\int_{0}^{T}X_{t}\mathrm{d}X_{t}-\int_{0}^{T}X_{t}\mathrm{d}L_{t}}{\int_{0}^{T}X_{t}^{2}\mathrm{d}t}.

Assume that $h\rightarrow 0$ and $nh\rightarrow\infty$ as $n\rightarrow\infty$. When the process is observed at the discrete time instants $\{t_{k}=kh,\,k=0,1,\cdots,n\}$, the discrete-type LSE is motivated by minimizing the following contrast function

\sum_{k=0}^{n-1}\left|X_{t_{k+1}}-X_{t_{k}}+\theta X_{t_{k}}h-\vartriangle_{k}L\right|^{2},

where $\vartriangle_{k}L=L_{t_{k+1}}-L_{t_{k}}$. The minimum is achieved at

\tilde{\theta}_{n}=-\frac{\sum_{k=0}^{n-1}X_{t_{k}}(X_{t_{k+1}}-X_{t_{k}}-\vartriangle_{k}L)}{\sum_{k=0}^{n-1}X_{t_{k}}^{2}h}.
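To make the estimator concrete, here is a sketch (all names mine) that computes the discrete-type LSE $\tilde{\theta}_{n}$ from a path produced by a simple projected Euler scheme, using the simulated increments of $L$ in place of $\vartriangle_{k}L$:

```python
import numpy as np

def lse_discrete(X, L, h):
    """Discrete-type LSE minimizing sum_k |X_{k+1} - X_k + theta*X_k*h - dL_k|^2."""
    dX = np.diff(X)
    dL = np.diff(L)
    return -np.sum(X[:-1] * (dX - dL)) / (h * np.sum(X[:-1] ** 2))

# Simulate a path with a projected Euler step (a rough stand-in for exact data).
theta, sigma, h, n = 0.5, 0.2, 0.01, 200_000
rng = np.random.default_rng(1)
X = np.empty(n + 1); L = np.zeros(n + 1); X[0] = 1.0
for k in range(n):
    y = X[k] - theta * X[k] * h + sigma * rng.normal(0.0, np.sqrt(h))
    X[k + 1] = max(y, 0.0)          # reflect at 0
    L[k + 1] = L[k] + max(-y, 0.0)  # accumulated reflection

theta_tilde = lse_discrete(X, L, h)  # close to the true theta = 0.5
```

With this long horizon ($nh=2000$) the estimate lands near the true drift, consistent with the asymptotics established in Section 3.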

The remainder of this paper is organized as follows. In Section 2, we describe some preliminary results related to our context. Section 3 is devoted to obtaining the asymptotic behavior of the two estimators. Section 4 presents some numerical results and Section 5 concludes.

2 Preliminaries

In this section, we first introduce some basic facts. Throughout this paper, we shall use the notation “$\stackrel{P}{\longrightarrow}$” to denote “convergence in probability” and the notation “$\sim$” to denote “convergence in distribution”.

With the previous results (Hu et al., 2015; Linetsky, 2005; Ward and Glynn, 2003), we know that the unique invariant density of $\{X_{t}\}_{t\geq 0}$ is

p(x)=2\sqrt{\frac{2\theta}{\sigma^{2}}}\,\phi\left(\sqrt{\frac{2\theta}{\sigma^{2}}}\,x\right),\quad x\in[0,\infty), \qquad (2.1)

where $\phi(u)=(2\pi)^{-1/2}e^{-u^{2}/2}$ is the standard Gaussian density function.
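The density in Eq. (2.1), with the rate parameter read as the drift parameter $\theta$, is a half-normal density with scale $\sigma/\sqrt{2\theta}$. A short numerical sanity check (parameter values chosen arbitrarily) confirms that it integrates to one and has second moment $\sigma^{2}/(2\theta)$, in line with Eq. (2.2) below:

```python
import numpy as np

theta, sigma = 0.5, 0.2

def p(x):
    """Invariant density of Eq. (2.1): half-normal with scale sigma/sqrt(2*theta)."""
    c = np.sqrt(2.0 * theta) / sigma
    return 2.0 * c * np.exp(-0.5 * (c * x) ** 2) / np.sqrt(2.0 * np.pi)

# Midpoint rule on [0, 2]; essentially all mass lies there for these parameters.
dx = 1e-4
xs = np.arange(0.0, 2.0, dx) + dx / 2.0
mass = np.sum(p(xs)) * dx            # ~ 1
m2 = np.sum(xs ** 2 * p(xs)) * dx    # ~ sigma^2/(2*theta) = 0.04
```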

Based on the basic stability theories of Markov processes, we have the following ergodic lemma.

Lemma 1

For any $x\in\mathbb{R}_{+}$ and any $f\in L^{1}(\mathbb{R}_{+},\mathcal{B}(\mathbb{R}_{+}))$, we have

a. The continuously observed process $\{X_{t}\}_{t\geq 0}$ is ergodic:

\lim_{T\rightarrow\infty}\frac{1}{T}\int_{0}^{T}f(X_{t})\mathrm{d}t=\mathbb{E}[f(X_{\infty})]=\int_{0}^{\infty}f(x)p(x)\mathrm{d}x.

b. The discretely observed process $\{X_{t_{k}},k=0,1,\cdots,n\}$ is ergodic:

\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{k=1}^{n}f(X_{t_{k}})=\mathbb{E}[f(X_{\infty})]=\int_{0}^{\infty}f(x)p(x)\mathrm{d}x.

Proof of Lemma 1. One can see Han et al. (2016) and Hu et al. (2015) for a proof. \square

By Lemma 1 and the unique invariant density Eq. (2.1), we obtain the formula of the second-order moment as follows

\lim_{T\rightarrow\infty}\frac{1}{T}\int_{0}^{T}X^{2}_{t}\mathrm{d}t=\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{k=1}^{n}X_{t_{k}}^{2}=\mathbb{E}|X_{\infty}|^{2}=\int_{0}^{\infty}x^{2}p(x)\mathrm{d}x=\frac{\sigma^{2}}{2\theta}. \qquad (2.2)

3 Asymptotic behavior of the least squares estimators

In this section, we consider the asymptotic behavior of the LSEs for the drift parameter $\theta$. By Eq. (1.1), we provide two useful and crucial alternative expressions for $\hat{\theta}_{T}$ and $\tilde{\theta}_{n}$:

\hat{\theta}_{T}=\theta-\sigma\frac{\int_{0}^{T}X_{t}\mathrm{d}W_{t}}{\int_{0}^{T}X_{t}^{2}\mathrm{d}t}, \qquad (3.1)

and

\tilde{\theta}_{n}=\theta+\frac{\sum_{k=0}^{n-1}X_{t_{k}}\big(\theta\int_{t_{k}}^{t_{k+1}}(X_{t}-X_{t_{k}})\mathrm{d}t-\sigma\vartriangle_{k}W\big)}{\sum_{k=0}^{n-1}X^{2}_{t_{k}}h}, \qquad (3.2)

where $\vartriangle_{k}W=W_{t_{k+1}}-W_{t_{k}}$.

The following theorem proves the consistency of the continuous-type LSE.

Theorem 2

The continuous-type LSE $\hat{\theta}_{T}$ of $\theta$ is consistent, i.e.,

\hat{\theta}_{T}\stackrel{P}{\longrightarrow}\theta,

as $T$ tends to infinity.

Proof of Theorem 2. From the alternative expression Eq. (3.1), we have

\hat{\theta}_{T}-\theta=-\sigma\frac{\frac{1}{T}\int_{0}^{T}X_{t}\mathrm{d}W_{t}}{\frac{1}{T}\int_{0}^{T}X_{t}^{2}\mathrm{d}t}.

By Lemma 1 and Eq. (2.2), we have

\lim_{T\rightarrow\infty}\frac{1}{T}\int_{0}^{T}X_{t}^{2}\mathrm{d}t=\frac{\sigma^{2}}{2\theta}. \qquad (3.3)

Note that the process $\{\int_{0}^{t}X_{s}\mathrm{d}W_{s},t\geq 0\}$ is a martingale with quadratic variation $\int_{0}^{t}X_{s}^{2}\mathrm{d}s$. Then

\mathbb{E}\bigg[\frac{1}{T}\int_{0}^{T}X_{t}\mathrm{d}W_{t}\bigg]=0,

and

\mathbb{E}\bigg[\Big(\frac{1}{T}\int_{0}^{T}X_{t}\mathrm{d}W_{t}\Big)^{2}\bigg]=\frac{1}{T^{2}}\,\mathbb{E}\bigg[\int_{0}^{T}X_{t}^{2}\mathrm{d}t\bigg]=O(T^{-1}),\quad\text{as }T\rightarrow\infty.

By Chebyshev’s inequality, we have

\frac{1}{T}\int_{0}^{T}X_{t}\mathrm{d}W_{t}\stackrel{P}{\longrightarrow}0,\quad\text{as }T\rightarrow\infty. \qquad (3.4)

Combining Eq. (3.3) and (3.4), we obtain the desired results. \square

We establish the asymptotic normality of the continuous-type LSE in the following theorem. The convergence rate is comparable to that of the MLE-based approach (Bo et al., 2011), and we obtain an explicit formula for the asymptotic variance.

Theorem 3

The continuous-type LSE $\hat{\theta}_{T}$ of $\theta$ admits asymptotic normality, i.e.,

\sqrt{T}(\hat{\theta}_{T}-\theta)\sim\mathcal{N}(0,2\theta),

as $T$ tends to infinity.

Proof of Theorem 3. Note that

\sqrt{T}(\hat{\theta}_{T}-\theta)=-\sigma\sqrt{T}\frac{\int_{0}^{T}X_{t}\mathrm{d}W_{t}}{\int_{0}^{T}X_{t}^{2}\mathrm{d}t}=-\frac{\frac{\sigma}{\sqrt{T}}\int_{0}^{T}X_{t}\mathrm{d}W_{t}}{\frac{1}{T}\int_{0}^{T}X_{t}^{2}\mathrm{d}t}.

From Eq. (2.2), we have that $\frac{1}{T}\int_{0}^{T}X_{t}^{2}\mathrm{d}t$ converges to $\frac{\sigma^{2}}{2\theta}$ almost surely as $T$ tends to infinity. It is therefore sufficient to show that $\frac{\sigma}{\sqrt{T}}\int_{0}^{T}X_{t}\mathrm{d}W_{t}$ converges in law to a centered normal distribution. The process $\{\int_{0}^{t}X_{s}\mathrm{d}W_{s},t\geq 0\}$ is a continuous martingale whose normalized quadratic variation $\frac{1}{T}\int_{0}^{T}X_{t}^{2}\mathrm{d}t$ converges almost surely to $\frac{\sigma^{2}}{2\theta}$, so the central limit theorem for martingales yields

\frac{\sigma}{\sqrt{T}}\int_{0}^{T}X_{t}\mathrm{d}W_{t}\sim\mathcal{N}\Big(0,\frac{\sigma^{4}}{2\theta}\Big). \qquad (3.5)

By Slutsky’s theorem and Eq. (3.5), we have

\frac{\frac{\sigma}{\sqrt{T}}\int_{0}^{T}X_{t}\mathrm{d}W_{t}}{\frac{1}{T}\int_{0}^{T}X_{t}^{2}\mathrm{d}t}\sim\mathcal{N}(0,2\theta),

which completes the proof. \square

The following theorem proves the consistency of the discrete-type LSE.

Theorem 4

The discrete-type LSE $\tilde{\theta}_{n}$ is consistent, i.e.,

\tilde{\theta}_{n}\stackrel{P}{\longrightarrow}\theta,

as $n$ tends to infinity.

Proof of Theorem 4. From the alternative expression Eq. (3.2), we have

\tilde{\theta}_{n}-\theta=\frac{\frac{1}{nh}\sum_{k=0}^{n-1}X_{t_{k}}\big(\theta\int_{t_{k}}^{t_{k+1}}(X_{t}-X_{t_{k}})\mathrm{d}t-\sigma\vartriangle_{k}W\big)}{\frac{1}{nh}\sum_{k=0}^{n-1}X^{2}_{t_{k}}h}.

We first estimate $\sup_{t_{k}\leq t\leq t_{k+1}}|X_{t}-X_{t_{k}}|$. For $t_{k}\leq t\leq t_{k+1}$, we have

\begin{aligned}|X_{t}-X_{t_{k}}|&=\Big|-\theta\int_{t_{k}}^{t}X_{u}\mathrm{d}u+\sigma(W_{t}-W_{t_{k}})+(L_{t}-L_{t_{k}})\Big|\\ &=\Big|-\theta\int_{t_{k}}^{t}(X_{u}-X_{t_{k}})\mathrm{d}u-\theta X_{t_{k}}(t-t_{k})+\sigma(W_{t}-W_{t_{k}})+(L_{t}-L_{t_{k}})\Big|\\ &\leq\theta|X_{t_{k}}|h+\sup_{t_{k}\leq t\leq t_{k+1}}\big(\sigma|W_{t}-W_{t_{k}}|+(L_{t}-L_{t_{k}})\big)+\theta\int_{t_{k}}^{t}|X_{u}-X_{t_{k}}|\mathrm{d}u.\end{aligned}

By Gronwall’s inequality, we have

|X_{t}-X_{t_{k}}|\leq\bigg(\theta|X_{t_{k}}|h+\sup_{t_{k}\leq t\leq t_{k+1}}\big(\sigma|W_{t}-W_{t_{k}}|+(L_{t}-L_{t_{k}})\big)\bigg)e^{\theta(t-t_{k})}.

It follows that

\sup_{t_{k}\leq t\leq t_{k+1}}|X_{t}-X_{t_{k}}|\leq\bigg(\theta|X_{t_{k}}|h+\sup_{t_{k}\leq t\leq t_{k+1}}\big(\sigma|W_{t}-W_{t_{k}}|+(L_{t}-L_{t_{k}})\big)\bigg)e^{\theta h}.

By the properties of the process $L$, we have

L_{t_{k+1}}-L_{t_{k}}=\max\big(0,\,A_{t_{k}}-X_{t_{k}}\big),

where $A_{t_{k}}=\sup_{t_{k}\leq t\leq t_{k+1}}\big\{\theta X_{t_{k}}(t-t_{k})-\sigma(W_{t}-W_{t_{k}})\big\}$. Since almost all paths of Brownian motion are $\alpha$-Hölder continuous for $\alpha\in(0,\frac{1}{2})$, we have

\sup_{t_{k}\leq t\leq t_{k+1}}|X_{t}-X_{t_{k}}|\leq Ch^{\alpha}e^{\theta h}=O(h^{\alpha}),

where $C$ is a constant. Then

\frac{1}{nh}\sum_{k=0}^{n-1}X_{t_{k}}\int_{t_{k}}^{t_{k+1}}\theta(X_{t}-X_{t_{k}})\mathrm{d}t\leq\frac{\theta}{nh}\sum_{k=0}^{n-1}X_{t_{k}}\sup_{t_{k}\leq t\leq t_{k+1}}|X_{t}-X_{t_{k}}|\,h=O(h^{\alpha}), \qquad (3.6)

which goes to 0 as $h\rightarrow 0$.

Let $\phi_{k}(t)=X_{t_{k}}I_{\{t\in[t_{k},t_{k+1})\}}(t)$. Then we have

\sum_{k=0}^{n-1}X_{t_{k}}\vartriangle_{k}W=\int_{0}^{nh}\sum_{k=0}^{n-1}\phi_{k}(t)\mathrm{d}W_{t}. \qquad (3.7)

By arguments similar to those in the proof of Theorem 2, we have

\frac{1}{nh}\sum_{k=0}^{n-1}\sigma X_{t_{k}}\vartriangle_{k}W\stackrel{P}{\longrightarrow}0,\quad\text{as }n\rightarrow\infty. \qquad (3.8)

Combining Eq. (2.2), (3.6) and (3.8), we obtain the desired results. \square

The following theorem establishes the asymptotic normality of the discrete-type LSE.

Theorem 5

Assume that $nh^{1+2\alpha}\rightarrow 0$ for some $\alpha\in(0,1/2)$, as $n$ tends to infinity. The discrete-type LSE $\tilde{\theta}_{n}$ of $\theta$ admits asymptotic normality, i.e.,

\sqrt{nh}(\tilde{\theta}_{n}-\theta)\sim\mathcal{N}(0,2\theta),

as $n$ tends to infinity.

Proof of Theorem 5. Note that

\sqrt{nh}(\tilde{\theta}_{n}-\theta)=\frac{\frac{1}{\sqrt{nh}}\sum_{k=0}^{n-1}X_{t_{k}}\big(\theta\int_{t_{k}}^{t_{k+1}}(X_{t}-X_{t_{k}})\mathrm{d}t-\sigma\vartriangle_{k}W\big)}{\frac{1}{nh}\sum_{k=0}^{n-1}X^{2}_{t_{k}}h}.

By Eq. (3.6), we have

\frac{1}{\sqrt{nh}}\sum_{k=0}^{n-1}X_{t_{k}}\int_{t_{k}}^{t_{k+1}}\theta(X_{t}-X_{t_{k}})\mathrm{d}t\leq O(\sqrt{nh^{1+2\alpha}}), \qquad (3.9)

which goes to 0 as $n$ tends to infinity. By arguments similar to those in the proof of Theorem 3, we have

\frac{\sigma}{\sqrt{nh}}\int_{0}^{nh}\sum_{k=0}^{n-1}\phi_{k}(t)\mathrm{d}W_{t}\sim\mathcal{N}\Big(0,\frac{\sigma^{4}}{2\theta}\Big).

By Eq. (3.7), we have

\frac{\sigma}{\sqrt{nh}}\sum_{k=0}^{n-1}X_{t_{k}}\vartriangle_{k}W\sim\mathcal{N}\Big(0,\frac{\sigma^{4}}{2\theta}\Big).

By Eq. (2.2) and Slutsky’s theorem, we obtain the desired results. \square

Remark 6

Our method can be applied to reflected OU processes with two-sided reflecting barriers $(0,b)$, where $b\in(0,\infty)$. The two types of LSEs for a two-sided reflected OU process are

\hat{\theta}_{T}=-\frac{\int_{0}^{T}X_{t}\mathrm{d}X_{t}-\int_{0}^{T}X_{t}\mathrm{d}L_{t}+\int_{0}^{T}X_{t}\mathrm{d}R_{t}}{\int_{0}^{T}X_{t}^{2}\mathrm{d}t},

and

\tilde{\theta}_{n}=-\frac{\sum_{k=0}^{n-1}X_{t_{k}}(X_{t_{k+1}}-X_{t_{k}}-\vartriangle_{k}L+\vartriangle_{k}R)}{\sum_{k=0}^{n-1}X_{t_{k}}^{2}h},

where $R$ is the minimal continuous increasing process such that $X_{t}\leq b$ for all $t\geq 0$. The unique invariant density is given by (Linetsky, 2005)

p(x)=\frac{\sqrt{2\theta}}{\sigma}\,\frac{\phi\left(\frac{\sqrt{2\theta}}{\sigma}x\right)}{\Phi\left(\frac{\sqrt{2\theta}}{\sigma}b\right)-\frac{1}{2}},\quad x\in[0,b],

where $\Phi(y)=\int_{-\infty}^{y}\phi(u)\mathrm{d}u$. Hence the consistency and asymptotic distributions of the two estimators can be obtained; the proofs are similar to those of Theorems 2–5 and are omitted.
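The two-sided case can be simulated in the same spirit by clipping each Euler step into $[0,b]$ and accumulating the clipped amounts into $L$ and $R$. This is a rough sketch with hypothetical names, not Lépingle's scheme:

```python
import numpy as np

def simulate_two_sided_ou(theta, sigma, b, x0, h, n, rng):
    """Clipped Euler sketch for the OU process reflected at 0 (via L) and b (via R)."""
    X = np.empty(n + 1)
    L = np.zeros(n + 1)
    R = np.zeros(n + 1)
    X[0] = x0
    for k in range(n):
        y = X[k] - theta * X[k] * h + sigma * rng.normal(0.0, np.sqrt(h))
        X[k + 1] = min(max(y, 0.0), b)     # clip into [0, b]
        L[k + 1] = L[k] + max(-y, 0.0)     # push up at the lower barrier 0
        R[k + 1] = R[k] + max(y - b, 0.0)  # push down at the upper barrier b
    return X, L, R

b = 1.0
X, L, R = simulate_two_sided_ou(theta=0.5, sigma=0.5, b=b, x0=0.5, h=0.01,
                                n=50_000, rng=np.random.default_rng(2))
```

The simulated increments of $L$ and $R$ can then be plugged into the discrete-type estimator above in place of $\vartriangle_{k}L$ and $\vartriangle_{k}R$.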

4 Numerical results

In this section, we present some numerical results. For a Monte Carlo simulation of the reflected OU process, one can refer to Lépingle (1995), which is known to yield the same rate of convergence as the usual Euler–Maruyama scheme.

Let the time step between discretization points be $h=0.01$. We perform $N=1000$ Monte Carlo simulations of the sample paths generated by the model under different settings. The parameter estimates are evaluated by the bias, standard deviation (Std.dev) and mean squared error (MSE). We also report the asymptotic variance (Asy.var) of $\sqrt{nh}(\tilde{\theta}_{n}-\theta)$. The results are presented in Table 1.
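A reduced-scale sketch of this Monte Carlo experiment (names mine; a simple projection step stands in for Lépingle's scheme, and $N$ and $n$ are smaller than in Table 1, so the numbers are rougher):

```python
import numpy as np

def mc_discrete_lse(theta, sigma, h, n, N, seed=0):
    """Simulate N reflected-OU paths in parallel and apply the discrete-type LSE.

    Returns the bias, standard deviation and empirical asymptotic variance
    nh * Var(theta_tilde - theta) over the N replications.
    """
    rng = np.random.default_rng(seed)
    X = np.full(N, 1.0)
    sxx = np.zeros(N)  # running sum of X_k^2 * h
    sxy = np.zeros(N)  # running sum of X_k * (X_{k+1} - X_k - dL_k)
    for _ in range(n):
        y = X - theta * X * h + sigma * rng.normal(0.0, np.sqrt(h), size=N)
        Xn = np.maximum(y, 0.0)   # reflected state
        dL = np.maximum(-y, 0.0)  # reflection increment
        sxx += X ** 2 * h
        sxy += X * (Xn - X - dL)
        X = Xn
    err = -sxy / sxx - theta
    return err.mean(), err.std(), n * h * err.var()

bias, std, asy_var = mc_discrete_lse(theta=0.5, sigma=0.2, h=0.01, n=2000, N=200)
# asy_var should be of the order of 2*theta = 1 (inflated at short horizons,
# as the n = 10^3 column of Table 1 also shows).
```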

We emphasize that the empirical asymptotic variance stays close to $2\theta$ across the different settings of $\theta$, which verifies the explicit closed-form formula established in Theorems 3 and 5.

Table 1 summarizes the main findings over 1000 simulations. As the sample size increases, the bias decreases and remains small, and the empirical and model-based standard errors agree reasonably well. The overall performance improves with larger sample sizes.

The distributions of the proposed estimator under two different settings are illustrated as histograms in Figures 1 and 2. In each figure, the standard normal density is overlaid as a solid curve, and the histograms approach it as the horizon grows. Thus, the LSEs work well whether $\theta$ is large ($\theta=1$) or small ($\theta=0.5$), and whether the time horizon is fairly short ($T=10$) or long ($T=1000$).

Table 1: Simulation results

True parameter           n=10^3     n=10^4     n=10^5
θ=0.5,       Bias        0.2006     0.0129    -0.0076
σ=0.2        Std.dev     0.4380     0.1040     0.0310
             Asy.var     1.9100     1.0800     0.9620
             MSE         0.2320     0.0110     0.0010
θ=0.5,       Bias        0.1890     0.0171    -0.0084
σ=0.5        Std.dev     0.4550     0.1080     0.0315
             Asy.var     2.0700     1.1700     0.9900
             MSE         0.2430     0.0120     0.0011
θ=1,         Bias        0.1410    -0.0124    -0.0255
σ=1          Std.dev     0.5030     0.1450     0.0439
             Asy.var     2.5300     2.1200     1.9300
             MSE         0.2730     0.0213     0.0026
Figure 1: Histogram of $\sqrt{T}(\hat{\theta}_{T}-\theta)$ with $T=100$, $h=0.01$, $\theta=0.5$ and $\sigma=0.2$.
Figure 2: Histogram of $\sqrt{T}(\hat{\theta}_{T}-\theta)$ with $T=1000$, $h=0.01$, $\theta=0.2$ and $\sigma=0.2$.

5 Conclusion

In this paper, we present two types of least squares estimators for the reflected Ornstein–Uhlenbeck process, based on continuously observed processes and discretely observed processes respectively. Their consistency and asymptotic normality have been established. Moreover, we derive the explicit formula of the asymptotic variance, which is $2\theta$. Numerical results show that the least squares estimators work well under different settings. Further research may include statistical inference for other reflected diffusions.

References

  • Bo, L., Wang, Y., Yang, X., 2010. Some integral functionals of reflected SDEs and their applications in finance. Quant. Financ. 11, 343–348.
  • Bo, L., Wang, Y., Yang, X., Zhang, G., 2011. Maximum likelihood estimation for reflected Ornstein–Uhlenbeck processes. J. Stat. Plan. Infer. 141, 588–596.
  • Han, Z., Hu, Y., Lee, C., 2016. Optimal pricing barriers in a regulated market using reflected diffusion processes. Quant. Financ. 16, 639–647.
  • Harrison, J.M., 1985. Brownian Motion and Stochastic Flow Systems. Wiley, New York.
  • Hu, Y., Lee, C., Lee, M.H., Song, J., 2015. Parameter estimation for reflected Ornstein–Uhlenbeck processes with discrete observations. Stat. Infer. Stoch. Proc. 18, 279–291.
  • Hu, Y., Xi, Y., 2021. Estimation of all parameters in the reflected Ornstein–Uhlenbeck process from discrete observations. Stat. Probab. Lett. 174.
  • Lee, C., Bishwal, J.P.N., Lee, M.H., 2012. Sequential maximum likelihood estimation for reflected Ornstein–Uhlenbeck processes. J. Stat. Plan. Infer. 142, 1234–1242.
  • Lépingle, D., 1995. Euler scheme for reflected stochastic differential equations. Math. Comput. Simul. 38, 119–126.
  • Linetsky, V., 2005. On the transition densities for reflected diffusions. Adv. Appl. Probab. 37, 435–460.
  • Ricciardi, L.M., Sacerdote, L., 1987. On the probability densities of an Ornstein–Uhlenbeck process with a reflecting boundary. J. Appl. Probab. 24, 355–369.
  • Ward, A.R., Glynn, P.W., 2003. Properties of the reflected Ornstein–Uhlenbeck process. Queueing Syst. 44, 109–123.
  • Ward, A.R., Glynn, P.W., 2005. A diffusion approximation for a GI/GI/1 queue with balking or reneging. Queueing Syst. 50, 371–400.
  • Whitt, W., 2002. Stochastic-Process Limits. Springer, New York.