
Estimation of the Parameters of Vector Autoregressive (VAR) Time Series Model with Symmetric Stable Noise

Aastha M. Sathe and N. S. Upadhye
Email address: [email protected], [email protected]
Centre of Excellence in Computational Mathematics and Data Science,
Department of Mathematics, Indian Institute of Technology Madras, Chennai-600036, INDIA
(January 21, 2025)
Abstract

In this article, we propose a fractional lower order covariance (FLOC) based method for estimating the parameters of the vector autoregressive (VAR) process of order $p$, $p\geq 1$, with symmetric stable noise. Further, we demonstrate the efficiency, accuracy and simplicity of the proposed method through Monte Carlo simulation.

Keywords: VAR model, stable distributions, parameter estimation, simulation

1 Introduction

The univariate stationary time series models, namely, the autoregressive (AR) models, moving average (MA) models and the general autoregressive moving average (ARMA) models, are popular tools in the statistical analysis of univariate time series data (see [1]-[5]). On the other hand, for the analysis of multivariate time series data, one of the most successful, flexible, and easy to use models is the vector autoregressive (VAR) model, which is especially useful for describing the joint dynamic behaviour of the component series. In the classical setting, the above mentioned models are assumed to be second-order processes, since the noise term has a finite second moment. However, these models fail to capture the heavy tails present in many data sets. This motivates us to use the family of stable distributions to model the data. Some of the significant and attractive features of stable distributions, apart from stability, are heavy tails, leptokurtic shape, domains of attraction, infinite second moment (with the exception of the Gaussian case) and skewness. For more details on stable distributions, see [12]. Hence, there is a need to explore the behaviour of the above mentioned models with stable noise for effective modelling of time series data, which in turn requires estimation of the parameters of these models. However, because of the infinite variance, only a handful of estimation techniques are available for models based on stable noise (see [13]-[20]).

The structure of dependence for the stable-based models cannot be described by the covariance or correlation functions (univariate case) and cross-covariance or cross-correlation (multivariate case). However, one can find alternative measures of dependence that can replace the classical ones in the case of infinite variance. Some of them are: codifference, covariation and fractional lower order covariance (FLOC) for one-dimensional models and cross-codifference, cross-covariation and cross-FLOC for multidimensional models (see [21]-[29]).

In this article, we develop a method for estimating the parameters of the multidimensional VAR model of order $p$, $p\geq 1$, with symmetric stable noise for $\alpha\in[1.5,2]$. The proposed method employs the FLOC, which can be viewed as an extension of the covariance function to the stable case and has several interesting applications. The method is reasonable and effective from both the theoretical and practical points of view. The efficiency of the method is demonstrated on simulated data by comparing it with the classical least squares and Yule-Walker methods for VAR models.

The paper is organized as follows. Section 2 gives a brief introduction to stable distributions and the multidimensional VAR($p$) model along with the necessary definitions and notations. Section 3 discusses the FLOC based parameter estimation method. Section 4 deals with simulations and a comparative analysis of the proposed method with the classical least squares and Yule-Walker methods. Section 5 deals with an application to financial data. Finally, Section 6 gives some concluding remarks on the proposed method.

2 Preliminaries and Notations

2.1 Stable Distributions

These distributions form a rich class of heavy-tailed distributions, introduced by Paul Lévy [30] in his study of the Generalized Central Limit Theorem. In the one-dimensional case, each distribution in this class is characterized by four parameters, namely $\alpha$, $\beta$, $\sigma$, and $\delta$, which, respectively, denote the index of stability, skewness, scale and shift of the distribution. Their respective ranges are given by $\alpha\in(0,2]$, $\beta\in[-1,1]$, $\sigma>0$ and $\delta\in\mathbb{R}$. For more details, see [31]. The characteristic function representation for a univariate stable random variable $Z$ is given by [12]

\phi(t)=\begin{cases}\exp\left\{-(\sigma|t|)^{\alpha}\left[1-\iota\beta\,\mathrm{sign}(t)\tan\left(\frac{\pi\alpha}{2}\right)\right]+\iota\delta t\right\}, & \alpha\neq 1,\\ \exp\left\{-\sigma|t|\left[1+\iota\beta\frac{2}{\pi}\mathrm{sign}(t)\ln|t|\right]+\iota\delta t\right\}, & \alpha=1.\end{cases}    (1)

In this paper, we deal with symmetric stable distributions. We say that the distribution is symmetric around zero if and only if $\beta=\delta=0$ in (1), i.e., if the characteristic function is

\phi(t)=\mathbb{E}\left(\exp\{\iota tZ\}\right)=\exp\left\{-(\sigma|t|)^{\alpha}\right\}.    (2)

Note that when $\alpha=2$, $\beta=0$ and $\delta=0$, the S$\alpha$S distribution reduces to a Gaussian distribution. Also, except in the Gaussian case $\alpha=2$, the variance of $Z$ is infinite.
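
For simulation purposes, S$\alpha$S samples with characteristic function (2) can be generated by the Chambers-Mallows-Stuck method. The following is a minimal base-R sketch (the function name rsas is ours; Section 4 instead relies on the varima.sim routine of the "portes" package to generate whole VAR paths):

    # Chambers-Mallows-Stuck generator for symmetric alpha-stable draws with
    # characteristic function exp{-(sigma*|t|)^alpha}, as in (2).
    rsas <- function(n, alpha, sigma = 1) {
      U <- runif(n, -pi / 2, pi / 2)             # uniform angle on (-pi/2, pi/2)
      W <- rexp(n)                               # standard exponential
      X <- if (alpha == 1) {
        tan(U)                                   # alpha = 1, beta = 0: standard Cauchy
      } else {
        sin(alpha * U) / cos(U)^(1 / alpha) *
          (cos((1 - alpha) * U) / W)^((1 - alpha) / alpha)
      }
      sigma * X                                  # for alpha = 2 this is N(0, 2*sigma^2), matching (2)
    }
    z <- rsas(1e4, alpha = 1.6)                  # heavy-tailed sample: its empirical variance is unstable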

In the multi-dimensional case, the characteristic function of a symmetric stable vector $\mathbf{Z}=(Z_{1},Z_{2},\cdots,Z_{r})$ takes the following form [12]

\mathbb{E}\left(\exp\{\iota\langle\boldsymbol{t},\boldsymbol{Z}\rangle\}\right)=\exp\Big\{-\int_{S_{r}}|\langle\boldsymbol{t},\boldsymbol{s}\rangle|^{\alpha}\,\Gamma(ds)\Big\}    (3)

where $\Gamma(\cdot)$ is a finite symmetric spectral measure on the unit sphere $S_{r}$ in $\mathbb{R}^{r}$ and $\langle\cdot,\cdot\rangle$ is the inner product. Thus, a necessary and sufficient condition for a stable vector to be symmetric is that the shift vector is $\boldsymbol{0}$ and $\Gamma(\cdot)$ is a finite symmetric spectral measure on $S_{r}$. Note that the information about the skewness and scale of a multi-dimensional stable distribution is contained in the spectral measure $\Gamma(\cdot)$.

2.2 Multidimensional VAR($p$) model with symmetric stable noise

Next, we discuss some important definitions and notations required for the parameter estimation of the multidimensional vector autoregressive model of order $p$ (VAR($p$)) with symmetric stable noise. We begin our discussion with the classical definitions of second-order white noise and of the general multidimensional VAR($p$) model, which are later extended and modified to incorporate infinite-variance noise instead of the classical finite-variance white noise.

The multidimensional time series $\{\mathbf{Z}_{t}=(Z_{1t},\cdots,Z_{rt})^{T}:t\in\mathbb{Z}\}$ is said to be a white noise process with mean $\boldsymbol{0}$ and covariance matrix $\Sigma$ if $\{\mathbf{Z}_{t}\}$ is weak-sense stationary with mean vector $\boldsymbol{0}$ and covariance matrix function given by [4]

\gamma(h)=\mathbb{E}[\mathbf{Z}_{t+h}\mathbf{Z}_{t}^{T}]=\begin{cases}\Sigma, & h=0,\\ 0, & \text{otherwise}.\end{cases}    (4)

The multidimensional time series $\{\mathbf{X}_{t}=(X_{1t},\cdots,X_{rt})^{T}:t\in\mathbb{Z}\}$ (mean-corrected) is said to be a causal VAR($p$) process if it is weak-sense stationary and for all $t\in\mathbb{Z}$ satisfies the following equation ([4], [21])

\mathbf{X}_{t}=A_{1}\mathbf{X}_{t-1}+\cdots+A_{p}\mathbf{X}_{t-p}+\mathbf{Z}_{t}    (6)

where $\{\mathbf{Z}_{t}\}$ is a multidimensional white noise and $A_{1},\cdots,A_{p}$ are $r\times r$ coefficient matrices. Moreover,

\det(I-A_{1}z-\cdots-A_{p}z^{p})\neq 0    (7)

for all $z\in\mathbb{C}$ such that $|z|\leq 1$, where $I$ denotes the identity matrix.
Equivalently, the process is causal if there exist matrices $\{\Psi_{j}\}$ with absolutely summable components such that for all $t\in\mathbb{Z}$

\mathbf{X}_{t}=\sum_{j=0}^{\infty}\Psi_{j}\mathbf{Z}_{t-j}    (8)

where the matrices $\{\Psi_{j}\}$ are found recursively as

\Psi_{j}=\Phi_{j}+\sum_{k=1}^{\infty}A_{k}\Psi_{j-k},\quad j=0,1,2,\cdots    (9)

where $\Phi_{0}=I$, $\Phi_{j}=0$ for $j>0$, $A_{j}=0$ for $j>p$, and $\Psi_{j}=0$ for $j<0$.
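
As an illustration, condition (7) can be checked numerically through the companion (block) matrix of $A_{1},\cdots,A_{p}$, whose spectral radius is strictly less than one exactly when (7) holds, and the recursion (9) can be run for as many $\Psi_{j}$ as needed. Below is a minimal base-R sketch; the function names is_causal and psi_matrices are ours:

    # Causality check for (7) via the companion matrix, and the Psi_j recursion (9).
    is_causal <- function(A) {                   # A: list of the r x r matrices A_1, ..., A_p
      r <- nrow(A[[1]]); p <- length(A)
      comp <- matrix(0, r * p, r * p)
      comp[1:r, ] <- do.call(cbind, A)           # first block row: A_1 ... A_p
      if (p > 1) comp[(r + 1):(r * p), 1:(r * (p - 1))] <- diag(r * (p - 1))
      max(Mod(eigen(comp, only.values = TRUE)$values)) < 1
    }
    psi_matrices <- function(A, m = 10) {        # returns Psi_0, ..., Psi_m
      r <- nrow(A[[1]]); p <- length(A)
      Psi <- vector("list", m + 1)
      Psi[[1]] <- diag(r)                        # Psi_0 = Phi_0 = I
      for (j in seq_len(m)) {
        S <- matrix(0, r, r)                     # Phi_j = 0 for j > 0
        for (k in 1:min(j, p)) S <- S + A[[k]] %*% Psi[[j - k + 1]]
        Psi[[j + 1]] <- S                        # Psi_j
      }
      Psi
    }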

The multidimensional time series $\{\mathbf{X}_{t}=(X_{1t},\cdots,X_{rt})^{T}:t\in\mathbb{Z}\}$ (mean-corrected) is said to be a causal VAR($p$) process with symmetric stable noise if for all $t\in\mathbb{Z}$ it satisfies the following equation [21]

\mathbf{X}_{t}=A_{1}\mathbf{X}_{t-1}+\cdots+A_{p}\mathbf{X}_{t-p}+\mathbf{Z}_{t}    (10)

where the multidimensional noise $\{\mathbf{Z}_{t}\}$ is a symmetric stable vector in $\mathbb{R}^{r}$ with the characteristic function defined in (3) and $A_{1},\cdots,A_{p}$ are $r\times r$ coefficient matrices. Additionally, we assume that $\mathbf{Z}_{t}$ is independent of $\mathbf{Z}_{t+h}$ for all $h\neq 0$. Note that the causality of the model is defined in the same way as for the classical VAR($p$) process defined above.

2.3 Measures of dependence for stable processes

Note that, since the covariance is not defined when $\alpha<2$, the classical dependence measures such as the autocovariance or autocorrelation function cannot be used as tools for developing parameter estimation methods for the process defined in (10). In such cases, alternative measures of dependence available in the literature can replace the classical ones. A few known choices are: the normalized autocovariation, the autocodifference and the fractional lower order covariance (FLOC). For details, see ([21], Subsection 2.1). Amongst these choices, for the estimation of the parameters of the process defined in (10), we prefer to use the FLOC due to its simple formulation and accuracy. Next, we present the definitions of the FLOC and of the cross-FLOC estimator.

2.3.1 Fractional lower order covariance-FLOC

The fractional lower order covariance for a bidimensional symmetric stable random vector $(X,Y)$ is defined as follows [32]

\mathrm{FLOC}(X,Y,A,B)=\mathbb{E}\left[X^{<A>}Y^{<B>}\right]    (11)

such that $A,B\geq 0$ satisfy $A+B<\alpha$, where $x^{<a>}:=|x|^{a}\,\mathrm{sign}(x)$ denotes the signed power.

Note that the quantity defined in (11) depends on the choice of $A$ and $B$, and in the Gaussian case it reduces to the classical covariance when $A=B=1$. The above measure is applicable to any symmetric stable vector, even with $0<\alpha<1$.

Also, we observe that when $A=1$ and $B=q-1$, where $q\in[1,\alpha)$ and $\alpha\in(1,2)$, the following relation holds between the FLOC and the covariation function $\mathrm{CV}(X,Y)$ [21]

\mathrm{FLOC}(X,Y,1,q-1)=\frac{\mathrm{CV}(X,Y)\,\mathbb{E}(|Y|^{q})}{q\,\sigma_{Y}^{\alpha}},    (12)

where

\mathrm{CV}(X,Y)=q\,\frac{\mathbb{E}(XY^{<q-1>})}{\mathbb{E}(|Y|^{q})}\,\sigma_{Y}^{\alpha},\quad q\in[1,\alpha).
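
Relation (12) is immediate from these definitions: since $x^{<1>}=x$, taking $A=1$ and $B=q-1$ in (11) gives

\mathrm{FLOC}(X,Y,1,q-1)=\mathbb{E}\big(X^{<1>}Y^{<q-1>}\big)=\mathbb{E}\big(XY^{<q-1>}\big)=\frac{\mathrm{CV}(X,Y)\,\mathbb{E}(|Y|^{q})}{q\,\sigma_{Y}^{\alpha}},

where the last equality is the definition of $\mathrm{CV}(X,Y)$ above rearranged for the mixed moment $\mathbb{E}(XY^{<q-1>})$.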
Remark.

Both measures of dependence for stable processes, namely $\mathrm{CV}(X,Y)$ and $\mathrm{FLOC}(X,Y,A,B)$, are not symmetric in their arguments, as opposed to the classical covariance function $\mathrm{Cov}(X,Y)$.

The FLOC is used as a measure of the spatio-temporal dependence of the multidimensional process $\{\mathbf{X}_{t}=(X_{1t},\cdots,X_{rt})^{T}:t\in\mathbb{Z}\}$. The estimator of the cross-FLOC is defined as follows [21]

\widehat{\mathrm{FLOC}}(X_{it},X_{j(t-k)},A,B)=\frac{\sum_{n=L_{1}}^{L_{2}}|x_{in}|^{A}|x_{j(n-k)}|^{B}\,\mathrm{sign}(x_{in}x_{j(n-k)})}{L_{2}-L_{1}},\quad i,j=1,\cdots,r    (13)

where $\{x_{i1},x_{i2},\cdots,x_{iN}\}$ and $\{x_{j1},x_{j2},\cdots,x_{jN}\}$ are sample trajectories of length $N$ corresponding to the multivariate process $(X_{1t},\cdots,X_{rt})$, $L_{1}=\max(0,k)$ and $L_{2}=\min(N,N+k)$. Note that replacing $k$ with $-k$ in (13) yields $\widehat{\mathrm{FLOC}}(X_{it},X_{j(t+k)},A,B)$.
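
For concreteness, a direct base-R transcription of (13) is given below; the function name floc_hat is ours, and the index range is chosen so that both $x_{in}$ and $x_{j(n-k)}$ stay inside $1,\cdots,N$, which leaves $L_{2}-L_{1}=N-|k|$ terms:

    # Empirical cross-FLOC of two equal-length sample paths x and y at lag k,
    # i.e. an estimate of FLOC(X_t, Y_{t-k}, A, B) following (13).
    floc_hat <- function(x, y, k, A = 1, B) {
      N <- length(x)
      n <- if (k >= 0) (k + 1):N else 1:(N + k)  # valid time indices for lag k
      sum(abs(x[n])^A * abs(y[n - k])^B * sign(x[n] * y[n - k])) / (N - abs(k))
    }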

3 FLOC based parameter estimation of the multidimensional VAR($p$) process with symmetric stable noise

In this section, we propose a new method for estimating the coefficients of the $r\times r$ matrices $A_{1},\cdots,A_{p}$ of a causal multidimensional VAR($p$) process $\{\mathbf{X}_{t}=(X_{1t},\cdots,X_{rt})^{T}:t\in\mathbb{Z}\}$ with symmetric stable noise. The proposed method is based on the FLOC introduced in the previous section. The FLOC is well defined for stable distributions and can be used as a substitute for the covariance function, especially when the second moment is infinite. The proposed method is discussed below.

  • Let $\{\mathbf{X}_{t}\}$ be a multidimensional causal VAR($p$) process with symmetric stable noise given by

    \mathbf{X}_{t}=A_{1}\mathbf{X}_{t-1}+\cdots+A_{p}\mathbf{X}_{t-p}+\mathbf{Z}_{t}    (14)

    where $\{\mathbf{Z}_{t}\}$ is a multidimensional symmetric stable noise with $\alpha>1$.

  • Multiplying both sides of (14) by $\mathbf{X}_{t-l}^{<B>^{T}}=(X_{1(t-l)}^{<B>},\cdots,X_{r(t-l)}^{<B>})$, $l\geq 1$, and taking expectations, we obtain the following expression (the term $\mathbb{E}[\mathbf{Z}_{t}\mathbf{X}_{t-l}^{<B>^{T}}]$ vanishes, since by causality $\mathbf{X}_{t-l}$ is independent of $\mathbf{Z}_{t}$, which has zero mean for $\alpha>1$)

    \mathbb{E}[\mathbf{X}_{t}\mathbf{X}_{t-l}^{<B>^{T}}]=A_{1}\mathbb{E}[\mathbf{X}_{t-1}\mathbf{X}_{t-l}^{<B>^{T}}]+\cdots+A_{p}\mathbb{E}[\mathbf{X}_{t-p}\mathbf{X}_{t-l}^{<B>^{T}}]    (15)
  • Next, define

    \mathbf{\Gamma}_{l}=\mathrm{FLOC}(\mathbf{X}_{t},\mathbf{X}_{t-l},1,q-1)=\mathbb{E}[\mathbf{X}_{t}\mathbf{X}_{t-l}^{<q-1>^{T}}],\quad l=0,\pm 1,\pm 2,\cdots    (16)

    where $1\leq q<\alpha$ and $1<\alpha<2$, so that $B=q-1$ in (15). Thus, $\mathbf{\Gamma}_{l}$ represents the $r\times r$ lag-$l$ cross-FLOC matrix given as

    \mathbf{\Gamma}_{l}=\begin{bmatrix}\mathbb{E}[X_{1t}X_{1(t-l)}^{<q-1>}]&\dots&\mathbb{E}[X_{1t}X_{r(t-l)}^{<q-1>}]\\ \vdots&\ddots&\vdots\\ \mathbb{E}[X_{rt}X_{1(t-l)}^{<q-1>}]&\dots&\mathbb{E}[X_{rt}X_{r(t-l)}^{<q-1>}]\end{bmatrix}
  • Thus, (15) takes the following form

    \mathbf{\Gamma}_{l}=A_{1}\mathbf{\Gamma}_{l-1}+\cdots+A_{p}\mathbf{\Gamma}_{l-p},\quad l=1,\cdots,p    (17)

    and we get the following system of matrix equations

    [\mathbf{\Gamma}_{1}\cdots\mathbf{\Gamma}_{p}]=[A_{1}\cdots A_{p}]\begin{bmatrix}\mathbf{\Gamma}_{0}&\mathbf{\Gamma}_{1}&\dots&\mathbf{\Gamma}_{p-1}\\ \mathbf{\Gamma}_{-1}&\mathbf{\Gamma}_{0}&\dots&\mathbf{\Gamma}_{p-2}\\ \vdots&\vdots&\ddots&\vdots\\ \mathbf{\Gamma}_{1-p}&\mathbf{\Gamma}_{2-p}&\cdots&\mathbf{\Gamma}_{0}\end{bmatrix}
  • Finally, the estimates of the coefficients of the $r\times r$ matrices $A_{1},\cdots,A_{p}$ are obtained from the expression given below, with each $\mathbf{\Gamma}_{l}$ replaced by its empirical counterpart $\widehat{\mathbf{\Gamma}}_{l}$ built from the cross-FLOC estimator defined in (13); a base-R sketch of this step is given after the list.

    [\widehat{A}_{1}\cdots\widehat{A}_{p}]=[\widehat{\mathbf{\Gamma}}_{1}\cdots\widehat{\mathbf{\Gamma}}_{p}]\begin{bmatrix}\widehat{\mathbf{\Gamma}}_{0}&\widehat{\mathbf{\Gamma}}_{1}&\dots&\widehat{\mathbf{\Gamma}}_{p-1}\\ \widehat{\mathbf{\Gamma}}_{-1}&\widehat{\mathbf{\Gamma}}_{0}&\dots&\widehat{\mathbf{\Gamma}}_{p-2}\\ \vdots&\vdots&\ddots&\vdots\\ \widehat{\mathbf{\Gamma}}_{1-p}&\widehat{\mathbf{\Gamma}}_{2-p}&\cdots&\widehat{\mathbf{\Gamma}}_{0}\end{bmatrix}^{-1}
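
The whole procedure can be condensed into a few lines of base R. The sketch below follows the notation above (the names var_floc and gamma_l are ours, X is an $N\times r$ matrix holding a mean-corrected sample path, and B plays the role of $q-1$, so it should satisfy $1+B<\alpha$); it is written for clarity rather than numerical efficiency:

    # FLOC based estimation of A_1, ..., A_p from an N x r data matrix X.
    var_floc <- function(X, p, B) {
      N <- nrow(X); r <- ncol(X)
      gamma_l <- function(l) {                   # empirical Gamma_l, cf. (13) and (16)
        n <- if (l >= 0) (l + 1):N else 1:(N + l)
        G <- matrix(0, r, r)
        for (i in 1:r) {
          for (j in 1:r) {
            # x^{<1>} = x, so |x_i| sign(x_i x_j) |x_j|^B = x_i * |x_j|^B * sign(x_j)
            G[i, j] <- sum(X[n, i] * abs(X[n - l, j])^B * sign(X[n - l, j])) / (N - abs(l))
          }
        }
        G
      }
      lhs <- do.call(cbind, lapply(1:p, gamma_l))          # [Gamma_1 ... Gamma_p]
      rhs <- do.call(rbind, lapply(1:p, function(row)      # block matrix of Gamma_{col-row}
        do.call(cbind, lapply(1:p, function(col) gamma_l(col - row)))))
      Ahat <- lhs %*% solve(rhs)                           # [A_1 ... A_p]
      lapply(1:p, function(k) Ahat[, ((k - 1) * r + 1):(k * r)])
    }

For a bidimensional sample stored in a two-column matrix x, for instance, var_floc(x, p = 2, B = 0.55) returns estimates of $A_{1}$ and $A_{2}$ of the kind studied in Section 4.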

4 Simulation and Comparative Analysis

In this section, we investigate the performance of the proposed FLOC based parameter estimation method for bidimensional VAR($p$) models with symmetric stable noise through Monte Carlo simulations. Further, focussing on the practical aspect, we compare the results of the proposed method with the classical least squares (LS) and Yule-Walker (Y-W) [9] methods to emphasize the difference between the two approaches. Note that $A=1$ is fixed throughout.

To test the proposed estimation procedure, we consider the bidimensional VAR(2) model $\{\mathbf{X}_{t}=(X_{1t},X_{2t})^{T}:t\in\mathbb{Z}\}$ with bidimensional independent symmetric stable noise $\{\mathbf{Z}_{t}=(Z_{1t},Z_{2t})^{T}:t\in\mathbb{Z}\}$ given below

\mathbf{X}_{t}=A_{1}\mathbf{X}_{t-1}+A_{2}\mathbf{X}_{t-2}+\mathbf{Z}_{t}    (18)

where $A_{1}=\begin{bmatrix}a_{1}&a_{3}\\ a_{2}&a_{4}\end{bmatrix}$ and $A_{2}=\begin{bmatrix}a_{5}&a_{7}\\ a_{6}&a_{8}\end{bmatrix}$.
Next, for different values of the sample size $n$, $\alpha\in[1.5,2]$, $A_{1}=\begin{bmatrix}0.1&0.3\\ 0.2&0.1\end{bmatrix}$ and $A_{2}=\begin{bmatrix}0.2&0.2\\ 0.05&0.1\end{bmatrix}$, simulations are run in which 500 realisations of the model in (18) are generated using the function varima.sim available in the “portes” package in R. For each realisation, we calculate $\widehat{A}_{1}$ and $\widehat{A}_{2}$ for different values of $B$. From Tables 1, 2, 3 and 4, we observe that for all considered $n$, $\alpha$ and values of $B$, the root mean squared errors (RMSE) get smaller as $B$ approaches $\alpha-1$.
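
The following base-R sketch reproduces one cell of this experiment; it reuses rsas() (Section 2.1 sketch) and var_floc() (Section 3 sketch), and writes out the VAR(2) recursion directly instead of the varima.sim call used in the paper:

    # Monte Carlo check of the FLOC estimator for the VAR(2) model (18).
    set.seed(1)
    A1 <- matrix(c(0.1, 0.2, 0.3, 0.1), 2, 2)              # A1 = [0.1 0.3; 0.2 0.1]
    A2 <- matrix(c(0.2, 0.05, 0.2, 0.1), 2, 2)             # A2 = [0.2 0.2; 0.05 0.1]
    alpha <- 1.6; B <- 0.55; n <- 700
    sim_var2 <- function(n, A1, A2, alpha, burn = 200) {
      Z <- cbind(rsas(n + burn, alpha), rsas(n + burn, alpha))   # independent SaS coordinates
      X <- matrix(0, n + burn, 2)
      for (t in 3:(n + burn)) X[t, ] <- A1 %*% X[t - 1, ] + A2 %*% X[t - 2, ] + Z[t, ]
      X[(burn + 1):(burn + n), ]
    }
    est <- replicate(500, {
      fit <- var_floc(sim_var2(n, A1, A2, alpha), p = 2, B = B)
      c(fit[[1]], fit[[2]])                                # (a1, ..., a8) in column order
    })
    rowMeans(est)                                          # compare with the "Mean" rows
    sqrt(rowMeans((est - c(A1, A2))^2))                    # compare with the "RMSE" rows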

Comparison of classical LS and Y-W with FLOC for VAR models

For the comparison of our proposed method with the classical LS and Y-W methods, we use the VAR.est function available in the “VAR.etp” package and the marfit function available in the “TSSS” package in R, respectively. Next, simulations are run in which 500 realisations of the model in (18), for different values of $n$ and $\alpha$, are generated using the function varima.sim available in the “portes” package in R, where $A_{1}=\begin{bmatrix}0.1&0.3\\ 0.2&0.1\end{bmatrix}$ and $A_{2}=\begin{bmatrix}0.3&0.2\\ 0.4&0.1\end{bmatrix}$.
We then estimate the coefficients of the matrices $A_{1}$ and $A_{2}$ using our proposed method and the classical LS and Y-W methods. For the FLOC-based estimators, we choose the parameter $B$ close to $\alpha-1$, as suggested by Tables 1, 2, 3 and 4; more specifically, $B=\alpha-1.05$. From Tables 5, 6 and 7, we observe that the RMSE of the estimates obtained via the classical LS and Y-W methods is larger than that of our proposed FLOC method for the considered values of $n$ and $\alpha$. Summarising, our proposed method performs better than the classical LS and Y-W methods.

B 0.00 0.11 0.22 0.33 0.44 0.55
$a_{1}=0.1$
Mean 0.1068 0.0973 0.1037 0.0983 0.1007 0.0984
RMSE 0.1634 0.1244 0.1156 0.0923 0.0850 0.0856
$a_{2}=0.2$
Mean 0.02057 0.1735 0.1983 0.1862 0.2016 0.2017
RMSE 0.3383 0.3950 0.2358 0.2846 0.1297 0.1749
$a_{3}=0.3$
Mean 0.3129 0.3046 0.3067 0.3047 0.3056 0.2925
RMSE 0.1980 0.1887 0.1768 0.1754 0.1320 0.2206
$a_{4}=0.1$
Mean 0.1288 0.1141 0.1173 0.1198 0.1216 0.982
RMSE 0.1654 0.1343 0.1144 0.1642 0.0912 0.4188
$a_{5}=0.2$
Mean 0.1750 0.1841 0.1834 0.1912 0.1945 0.1937
RMSE 0.1529 0.1258 0.1141 0.0994 0.0866 0.0828
$a_{6}=0.05$
Mean 0.0356 0.0558 0.0451 0.0443 0.0408 0.0442
RMSE 0.2655 0.1969 0.1767 0.1966 0.1020 0.1524
$a_{7}=0.2$
Mean 0.1947 0.2007 0.1959 0.2062 0.1913 0.1998
RMSE 0.1848 0.1687 0.1503 0.1373 0.1156 0.1292
$a_{8}=0.1$
Mean 0.0947 0.1010 0.0866 0.1053 0.0901 0.1062
RMSE 0.2515 0.2027 0.1393 0.1292 0.0890 0.1679
Table 1: Mean and RMSE of 500 estimates of the coefficients of $A_{1}$ and $A_{2}$ for $\alpha=1.6$ and $n=200$
B 0.00 0.11 0.22 0.33 0.44 0.55
$a_{1}=0.1$
Mean 0.0893 0.0983 0.0959 0.1006 0.1036 0.0989
RMSE 0.1479 0.1000 0.0772 0.0578 0.0500 0.0393
$a_{2}=0.2$
Mean 0.2410 0.1896 0.2022 0.1951 0.2034 0.2021
RMSE 0.9466 0.4107 0.1085 0.1200 0.0882 0.0892
$a_{3}=0.3$
Mean 0.3457 0.3097 0.3013 0.2970 0.3008 0.3020
RMSE 0.6879 0.1852 0.1230 0.0791 0.1711 0.0768
$a_{4}=0.1$
Mean 0.1374 0.1153 0.1199 0.1241 0.1203 0.1270
RMSE 0.5535 0.0876 0.0670 0.0719 0.0880 0.0708
$a_{5}=0.2$
Mean 0.1834 0.1950 0.1958 0.1981 0.1964 0.1981
RMSE 0.2322 0.1022 0.0743 0.0596 0.0587 0.0429
$a_{6}=0.05$
Mean 0.0164 0.0480 0.0416 0.0424 0.0404 0.0403
RMSE 0.6018 0.2008 0.0808 0.0785 0.0662 0.0669
$a_{7}=0.2$
Mean 0.1905 0.1948 0.2033 0.1970 0.2067 0.1974
RMSE 0.3064 0.1265 0.1024 0.0720 0.0850 0.0575
$a_{8}=0.1$
Mean 0.0660 0.0996 0.0972 0.1003 0.0977 0.0950
RMSE 0.5399 0.1922 0.0738 0.0688 0.0600 0.0507
Table 2: Mean and RMSE of 500 estimates of the coefficients of $A_{1}$ and $A_{2}$ for $\alpha=1.6$ and $n=700$
B 0.12 0.24 0.36 0.48 0.60 0.72
$a_{1}=0.1$
Mean 0.1023 0.0965 0.1019 0.0978 0.0988 0.0987
RMSE 0.1066 0.0902 0.0853 0.0755 0.0701 0.0760
$a_{2}=0.2$
Mean 0.2026 0.1873 0.1993 0.1930 0.2007 0.1996
RMSE 0.1461 0.1499 0.1215 0.1464 0.0909 0.1037
$a_{3}=0.3$
Mean 0.3040 0.3038 0.2968 0.3017 0.3026 0.2918
RMSE 0.1281 0.1127 0.1153 0.1089 0.0959 0.1151
$a_{4}=0.1$
Mean 0.1348 0.1287 0.1332 0.1352 0.1330 0.1285
RMSE 0.1109 0.0996 0.0924 0.1072 0.0805 0.1611
$a_{5}=0.2$
Mean 0.1855 0.1886 0.1892 0.1922 0.1965 0.1940
RMSE 0.1016 0.0893 0.0848 0.0794 0.0717 0.0675
$a_{6}=0.05$
Mean 0.0371 0.0473 0.0374 0.0380 0.0388 0.0407
RMSE 0.01315 0.1044 0.1102 0.1152 0.0804 0.0951
$a_{7}=0.2$
Mean 0.1951 0.2009 0.1952 0.2042 0.1930 0.2002
RMSE 0.1223 0.1182 0.1069 0.1035 0.0917 0.0892
$a_{8}=0.1$
Mean 0.0924 0.0937 0.0895 0.0993 0.0875 0.0985
RMSE 0.1221 0.1090 0.0927 0.0849 0.0721 0.0836
Table 3: Mean and RMSE of 500 estimates of the coefficients of $A_{1}$ and $A_{2}$ for $\alpha=1.75$ and $n=200$
B 0.12 0.24 0.36 0.48 0.60 0.72
$a_{1}=0.1$
Mean 0.0956 0.0979 0.0966 0.0987 0.1023 0.0989
RMSE 0.0618 0.0578 0.0527 0.0438 0.0411 0.0342
$a_{2}=0.2$
Mean 0.2031 0.1948 0.1996 0.1977 0.2014 0.2012
RMSE 0.2535 0.1811 0.0661 0.0587 0.0547 0.0543
$a_{3}=0.3$
Mean 0.3080 0.3032 0.2982 0.2979 0.3011 0.3026
RMSE 0.1766 0.0810 0.0664 0.0529 0.0919 0.0512
$a_{4}=0.1$
Mean 0.1319 0.1308 0.1336 0.1382 0.1344 0.1380
RMSE 0.0729 0.0644 0.0575 0.0615 0.0707 0.0593
$a_{5}=0.2$
Mean 0.1951 0.1973 0.1990 0.1984 0.1975 0.1974
RMSE 0.0828 0.0585 0.0486 0.0442 0.0423 0.0362
$a_{6}=0.05$
Mean 0.0405 0.0424 0.0404 0.0388 0.0387 0.0382
RMSE 0.1281 0.1090 0.0557 0.0521 0.0502 0.0493
$a_{7}=0.2$
Mean 0.2045 0.1979 0.2027 0.1970 0.2040 0.1972
RMSE 0.0877 0.0679 0.0614 0.0500 0.0565 0.0449
$a_{8}=0.1$
Mean 0.0898 0.0945 0.0951 0.0978 0.0956 0.0923
RMSE 0.1254 0.0843 0.0488 0.0444 0.0432 0.0396
Table 4: Mean and RMSE of 500 estimates of the coefficients of $A_{1}$ and $A_{2}$ for $\alpha=1.75$ and $n=700$
True Values FLOC LS Y-W
$(n=100,\ \alpha=2,\ B=0.95)$ Mean RMSE Mean RMSE Mean RMSE
$a_{1}=0.1$ 0.088 0.092 0.072 0.099 0.079 0.095
$a_{2}=0.2$ 0.192 0.104 0.191 0.119 0.198 0.100
$a_{3}=0.3$ 0.297 0.091 0.290 0.087 0.297 0.095
$a_{4}=0.1$ 0.089 0.084 0.077 0.011 0.091 0.096
$a_{5}=0.3$ 0.277 0.091 0.251 0.102 0.251 0.102
$a_{6}=0.4$ 0.397 0.097 0.386 0.090 0.378 0.094
$a_{7}=0.2$ 0.204 0.096 0.193 0.093 0.192 0.098
$a_{8}=0.1$ 0.092 0.081 0.078 0.103 0.063 0.102
$(n=800,\ \alpha=2,\ B=0.95)$ Mean RMSE Mean RMSE Mean RMSE
$a_{1}=0.1$ 0.096 0.032 0.096 0.033 0.095 0.033
$a_{2}=0.2$ 0.198 0.036 0.203 0.033 0.200 0.035
$a_{3}=0.3$ 0.300 0.032 0.296 0.029 0.300 0.032
$a_{4}=0.1$ 0.098 0.028 0.094 0.030 0.096 0.032
$a_{5}=0.3$ 0.298 0.032 0.299 0.032 0.296 0.033
$a_{6}=0.4$ 0.400 0.032 0.401 0.025 0.399 0.032
$a_{7}=0.2$ 0.201 0.033 0.202 0.029 0.200 0.034
$a_{8}=0.1$ 0.098 0.027 0.095 0.034 0.095 0.033
Table 5: Comparison of mean and RMSE of 500 estimates of the coefficients of $A_{1}$ and $A_{2}$ using the FLOC, LS and Y-W methods
True Values FLOC LS Y-W
$(n=200,\ \alpha=1.85,\ B=0.8)$ Mean RMSE Mean RMSE Mean RMSE
$a_{1}=0.1$ 0.098 0.068 0.092 0.066 0.096 0.066
$a_{2}=0.2$ 0.195 0.081 0.192 0.076 0.195 0.078
$a_{3}=0.3$ 0.300 0.073 0.0296 0.070 0.299 0.076
$a_{4}=0.1$ 0.087 0.069 0.098 0.060 0.100 0.060
$a_{5}=0.3$ 0.286 0.069 0.279 0.070 0.272 0.074
$a_{6}=0.4$ 0.403 0.073 0.396 0.067 0.391 0.070
$a_{7}=0.2$ 0.196 0.073 0.194 0.069 0.190 0.071
$a_{8}=0.1$ 0.098 0.059 0.085 0.066 0.081 0.068
$(n=700,\ \alpha=1.85,\ B=0.8)$ Mean RMSE Mean RMSE Mean RMSE
$a_{1}=0.1$ 0.098 0.035 0.097 0.034 0.098 0.034
$a_{2}=0.2$ 0.198 0.067 0.197 0.044 0.197 0.045
$a_{3}=0.3$ 0.299 0.039 0.298 0.038 0.302 0.060
$a_{4}=0.1$ 0.085 0.059 0.100 0.031 0.100 0.032
$a_{5}=0.3$ 0.296 0.035 0.294 0.035 0.292 0.036
$a_{6}=0.4$ 0.404 0.052 0.399 0.037 0.397 0.032
$a_{7}=0.2$ 0.202 0.043 0.201 0.043 0.200 0.047
$a_{8}=0.1$ 0.103 0.042 0.094 0.035 0.093 0.036
Table 6: Comparison of mean and RMSE of 500 estimates of the coefficients of $A_{1}$ and $A_{2}$ using the FLOC, LS and Y-W methods
True Values FLOC LS Y-W
$(n=300,\ \alpha=1.65,\ B=0.6)$ Mean RMSE Mean RMSE Mean RMSE
$a_{1}=0.1$ 0.098 0.059 0.095 0.081 0.095 0.051
$a_{2}=0.2$ 0.200 0.109 0.199 0.073 0.199 0.071
$a_{3}=0.3$ 0.298 0.073 0.295 0.078 0.295 0.079
$a_{4}=0.1$ 0.056 0.118 0.093 0.053 0.094 0.038
$a_{5}=0.3$ 0.293 0.062 0.292 0.054 0.287 0.059
$a_{6}=0.4$ 0.410 0.093 0.395 0.073 0.389 0.093
$a_{7}=0.2$ 0.195 0.122 0.190 0.181 0.183 0.0184
$a_{8}=0.1$ 0.116 0.077 0.092 0.052 0.089 0.055
$(n=600,\ \alpha=1.65,\ B=0.6)$ Mean RMSE Mean RMSE Mean RMSE
$a_{1}=0.1$ 0.101 0.041 0.097 0.034 0.099 0.035
$a_{2}=0.2$ 0.201 0.083 0.200 0.094 0.198 0.058
$a_{3}=0.3$ 0.301 0.047 0.301 0.041 0.300 0.045
$a_{4}=0.1$ 0.053 0.107 0.100 0.033 0.100 0.034
$a_{5}=0.3$ 0.297 0.040 0.296 0.032 0.294 0.037
$a_{6}=0.4$ 0.412 0.064 0.397 0.065 0.396 0.043
$a_{7}=0.2$ 0.197 0.048 0.195 0.040 0.196 0.037
$a_{8}=0.1$ 0.116 0.066 0.095 0.043 0.095 0.036
Table 7: Comparison of mean and RMSE of 500 estimates of the coefficients of $A_{1}$ and $A_{2}$ using the FLOC, LS and Y-W methods

5 Application to Financial Data

In this section, we give an empirical analysis of real bivariate financial data using the vector autoregressive model with symmetric stable noise. We consider the monthly simple returns of the stock of International Business Machines (IBM) and of the S&P Composite index (SP) from January 1961 to December 2011, with 612 observations. The datasets are available in the “MTS” package in R.

To determine whether the considered time series are stationary, we apply the Augmented Dickey-Fuller (ADF) test “adf.test” available in the “tseries” package in R. The $p$-values obtained are 0.01, which indicates that both series are stationary. Next, we fit our proposed FLOC based bidimensional autoregressive model with $p=2$ to the data. Since $A=1$ is fixed, we need to estimate $\alpha$ from the bivariate data to find the value of $B$. We use the hybrid method [31] to obtain the estimate of $\alpha$. Thus, $\hat{\alpha}=1.86$ and $B=0.8$, a value close to $\hat{\alpha}-1$, as suggested by the simulation study. The estimates of the coefficient matrices are as follows: $\widehat{A}_{1}=\begin{bmatrix}0.003&0.069\\ 0.014&0.023\end{bmatrix}$ and $\widehat{A}_{2}=\begin{bmatrix}-0.040&0.031\\ 0.021&0.020\end{bmatrix}$.
Next, in order to check whether the fitted model is appropriate, we analyse the residuals of the time series $\{X_{1t}\}$ and $\{X_{2t}\}$. In our model, we assume that each noise series $\{Z_{1t}\}$ and $\{Z_{2t}\}$ is a sample of independent, stable distributed random variables. Thus, we fit a stable distribution, using the hybrid method [31], to the residual time series $\{Z_{1t}\}$ and $\{Z_{2t}\}$. The estimates obtained are $(\hat{\alpha},\hat{\beta},\hat{\sigma},\hat{\delta})=(1.900,-0.708,0.027,0.008)$ and $(1.851,0.148,0.035,0.010)$, respectively, for $\{Z_{1t}\}$ and $\{Z_{2t}\}$.

In order to check whether the residuals can be considered an independent sample, we make use of the empirical auto-FLOC function defined in [34] instead of the classical autocovariance (autocorrelation) function. From the residual auto-FLOC plots in Figures 1 and 2, one can observe that the dependence in the residual series is almost unidentifiable. Thus, we assume that $\{Z_{1t}\}$ and $\{Z_{2t}\}$ are independent for all values of $t$. Note that for the empirical auto-FLOC function of $\{Z_{1t}\}$, $A=1$ and $B=0.85$, while for $\{Z_{2t}\}$, $A=1$ and $B=0.8$. Additionally, the QQ plots shown in Figure 3 compare the quantiles of the fitted stable distributions, with parameters estimated from the residuals, against the empirical quantiles of the residuals. We observe that the distribution of both residual series is close to stable. Finally, we perform the Kolmogorov-Smirnov (KS) test as discussed in [33] with the hypothesis that $\{Z_{1t}\}$ and $\{Z_{2t}\}$ have a stable distribution. Since the obtained $p$-values, calculated on the basis of 100 Monte Carlo repetitions, are 0.60 and 0.7, respectively, for $\{Z_{1t}\}$ and $\{Z_{2t}\}$, we fail to reject the hypothesis at the 0.05 significance level.
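
As a sketch of this residual analysis, reusing floc_hat() from the Section 2.3.1 sketch and assuming that X holds the mean-corrected $612\times 2$ return series and that A1hat and A2hat are the estimated coefficient matrices reported above:

    # Residuals of the fitted VAR(2) model and the empirical auto-FLOC of Z_{1t}.
    N <- nrow(X)
    Z <- X[3:N, ] - t(A1hat %*% t(X[2:(N - 1), ])) - t(A2hat %*% t(X[1:(N - 2), ]))
    lags <- 1:20
    af1 <- sapply(lags, function(k) floc_hat(Z[, 1], Z[, 1], k, A = 1, B = 0.85))
    plot(lags, af1, type = "h", xlab = "lag", ylab = "auto-FLOC")   # cf. Figure 1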

Figure 1: Residuals and Auto-FLOC Plot of $Z_{1t}$
Figure 2: Residuals and Auto-FLOC Plot of $Z_{2t}$
Figure 3: QQ Plots of $Z_{1t}$ and $Z_{2t}$

6 Conclusion

To conclude, we make the following observations in relation to our proposed method.

  • In this article, a new estimation method for the multidimensional VAR($p$), $p\geq 1$, model with symmetric stable noise ($\alpha\in[1.5,2]$) is introduced.

  • The proposed method is an extension of the classical Y-W method which is based on the covariance function of the underlying process.

  • The simulation study reflects the effectiveness of the proposed method in comparison to the classical LS and Y-W methods. The application of the FLOC measure is justified from the theoretical point of view in the considered case (the theoretical covariance does not exist); moreover, the simulation study shows that it is also reasonable to use the new technique from the practical point of view.

  • Finally, we fit our proposed model to the bivariate financial data.

References

  • [1] Hong-Zhi, A., Zhao-Guo, C., Hannan, E.J.: A note on ARMA estimation. J. Time Ser. Anal. 4(1), 9–17 (1983).
  • [2] McKenzie, E.: A note on the derivation of theoretical autocovariances for ARMA models. J. Stat. Comput. Simul. 24, 159–162 (1986)
  • [3] Chan, H., Chinipardaz, R., Cox, T.: Discrimination of AR, MA and ARMA time series models. Commun. Stat. Theory Methods 25(6), 1247–1260 (1996)
  • [4] Brockwell, P.J., Davis, R.A.: Introduction to Time Series and Forecasting. Springer, New York (2002)
  • [5] Tsai, H., Chan, K.S.: A note on non-negative ARMA processes. J. Time Ser. Anal. 28, 350–360 (2007)
  • [6] Ansley, C.F.: Computation of the theoretical autocovariance function for a vector arma process. J. Stat. Comput. Simul. 12(1), 15–24 (1980)
  • [7] Ansley, C.F., Kohn, R.: A note on reparameterizing a vector autoregressive moving average model to enforce stationarity. J. Stat. Comput. Simul. 24(2), 99–106 (1986)
  • [8] Mauricio, J.A.: Exact maximum likelihood estimation of stationary vector ARMA models. J. Am. Stat. Assoc. 90(429), 282–291 (1995)
  • [9] Luetkepohl, H.: Forecasting Cointegrated VARMA Processes, p. 373. Humboldt Universitaet Berlin, Sonderforschungsbereich (2007)
  • [10] Boubacar Mainassara, Y.: Selection of weak VARMA models by modified Akaike’s information criteria. J. Time Ser. Anal. 33, 121–130 (2012)
  • [11] Niglio, M., Vitale, C.D.: Threshold vector ARMA models. Commun. Stat. Theory Methods 44(14), 2911–2923 (2015)
  • [12] Samorodnitsky, G. and Taqqu, M. S.: Stable Non-Gaussian Random Processes, Stochastic Models with Infinite Variance. Chapman and Hall, New York, 632 pages (1994)
  • [13] Mikosch, T., Gadrich, T., Kluppelberg, C., Adler, R.J.: Parameter estimation for ARMA models with infinite variance innovations. Ann. Stat. 23(1), 305–326 (1995)
  • [14] Anderson, P.L., Meerschaert, M.M.: Modeling river flows with heavy tails. Water Resour. Res. 34(9), 2271–2280 (1998)
  • [15] Adler, R.J., Feldman, R.E., Taqqu, M.S. (eds.): A Practical Guide to Heavy Tails: Statistical Techniques and Applications. Birkhäuser, Cambridge (1998)
  • [16] Thavaneswaran, A., Peiris, S.: Smoothed estimates for models with random coefficients and infinite variance innovations. Math. Comput. Model. 39, 363–372 (2004)
  • [17] Gallagher, C.M.: A method for fitting stable autoregressive models using the autocovariation function. Stat. Probab. Lett. 53(4), 381–390 (2001)
  • [18] Mikosch, T., Straumann, D.: Whittle estimation in a heavy-tailed GARCH(1,1) model. Stoch. Process. Appl. 100(1), 187–222 (2002)
  • [19] Rachev, S. (ed.): Handbook of Heavy Tailed Distributions in Finance. North-Holland, Amsterdam (Netherlands) (2003)
  • [20] Hill, J.B.: Robust estimation and inference for heavy tailed GARCH. Bernoulli 21(3), 1629–1669 (2015)
  • [21] Grzesiek, A., Sundar, S. and Wyłomańska, A. Fractional lower order covariance-based estimator for bidimensional AR(1) model with stable distribution. Int J Adv Eng Sci Appl Math 11, 217–229 (2019). https://doi.org/10.1007/s12572-019-00250-9
  • [22] Zografos, K.: On a measure of dependence based on Fisher’s information matrix. Commun. Stat. Theory Methods 27(7), 1715–1728 (1998)
  • [23] Resnick, S., Den Berg, V.: Sample correlation behavior for the heavy tailed general bilinear process. Commun. Stat. Stoch. Models 16, 233–258 (1999)
  • [24] Gallagher, C.M.: Testing for linear dependence in heavy-tailed data. Commun. Stat. Theory Methods 31(4), 611–623 (2002)
  • [25] Resnick, S.: The extremal dependence measure and asymptotic independence. Stoch. Models 20(2), 205–227 (2004)
  • [26] Rosadi, D.: Order identification for gaussian moving averages using the codifference function. J. Stat. Comput. Simul. 76, 553–559 (2007)
  • [27] Rosadi, D., Deistler, M.: Estimating the codifference function of linear time series models with infinite variance. Metrika 73(3), 395–429 (2011)
  • [28] Grahovac, D., Jia, M., Leonenko, N., Taufer, E.: Asymptotic properties of the partition function and applications in tail index inference of heavy-tailed data. Statistics 49(6), 1221–1242 (2015)
  • [29] Rosadi, D.: Measuring dependence of random variables with finite and infinite variance using the codifference and the generalized codifference function. AIP Conf. Proc. 1755(1), 120004 (2016)
  • [30] Lévy, P.: Théorie des erreurs. La loi de Gauss et les lois exceptionnelles. Bulletin de la Société Mathématique de France 52, 49–85 (1924)
  • [31] Sathe, Aastha M. and Upadhye, Neelesh S.: Estimation of the Parameters of Multivariate Stable Distributions. Communications in Statistics- Simulation and Computation, 1-18 (2019)
  • [32] Ma, X., Nikias, C.L.: Joint estimation of time delay and frequency delay in impulsive noise using fractional lower order statistics. IEEE Trans. Signal Process. 44(11), 2669–2687 (1996)
  • [33] Kruczek, P., Wylomanska, A., Teuerle, M., and Gajda, J. (2017). The modified Yule-Walker method for α\alpha-stable time series models. Physica A: Statistical Mechanics and Its Applications 469, 588-603.
  • [34] Kruczek, P., Żuławiński, W., Pagacz, P. and Wyłomańska, A. (2019). Fractional lower order covariance based-estimator for Ornstein-Uhlenbeck process with stable distribution. Mathematica Applicanda, 47(2).