
2023

Estimation problems for some perturbations of the independence copula

Martial Longla (1) (ORCID: 0000-0002-9432-0640), [email protected]
Mous-Abou Hamadou (2), [email protected]

(1) Department of Mathematics, University of Mississippi, University Ave, University, 38677, Mississippi, US
(2) Department of Mathematics, University of Maroua, Cameroon
Abstract

This work provides a study of parameter estimators based on functions of Markov chains generated by some perturbations of the independence copula. We provide asymptotic distributions of maximum likelihood estimators and confidence intervals for copula parameters of several families of copulas introduced in Longla (2023). Another set of moment-like estimators is proposed, along with a multivariate central limit theorem that provides their asymptotic distributions. We investigate the particular case of Markov chains generated by sine copulas, sine-cosine copulas and the extended Farlie-Gumbel-Morgenstern copula family. Some tests of independence are proposed. A simulation study is provided for the three copula families of interest. This simulation study compares the two introduced estimators with the robust estimator of Longla and Peligrad (2021), showing the advantages of the proposed approach.

keywords: Estimation of copula parameters; Robust estimation; Square integrable copulas; Reversible Markov chains; Confidence intervals for copula parameters; Estimation of long memory processes

MSC Classification: 62G08; 62M02; 60J35

1 Introduction

Over the past decades, many authors have worked on building copulas with various properties, such as prescribed diagonal sections, known association properties, perturbations and mixing. Some of these constructions can be found in Nelsen (2006). Arakelian and Karlis (2014) studied mixtures of copulas, which were investigated for mixing by Longla (2015). Longla (2014) constructed copulas based on prescribed ρ-mixing properties, obtained by extending results of Beare (2010) on mixing for copula-based Markov chains. Longla et al. (2022, 2022b) constructed copulas based on perturbations. In these two works, two perturbation methods were considered. Our current work is based on perturbations of the independence copula. These perturbations are of the form C_D(u,v) = C(u,v) + D(u,v), where C(u,v) is the copula that undergoes the perturbation and C_D(u,v) is the perturbed copula. This method of perturbation was presented in Komornik et al. (2017), where the authors mention that, in the case of the independence copula, it allows one to introduce some level of dependence. Several other authors have considered extensions of existing copula families via other methods that are not the focus of this work; see, for instance, Morillas (2005), Aas et al. (2009), Klement et al. (2017) and Chesneau (2021), among others.

We should mention that the trigonometric copulas investigated in Chesneau (2021) are very different from the perturbations considered in this paper. This can be seen at the level of the dependence structure and the functional form of the copulas. This paper is concerned with the important questions of estimation, confidence intervals and testing for copula-based Markov chains generated by the copula families of Longla (2023). These families were introduced, but not studied for the mentioned questions. Their mixing properties were examined, making it possible to explore central limit theorems. We base our study of the MLE on the work of Billingsley (1961) for stationary Markov chains and their asymptotic distributions. Billingsley (1961) has been used in the setup of Markov chains by several authors over the years. Recently, it has been used by Hossain and Anam (2012) to model rainfall in Bangladesh via Markov chains and a logistic regression model. Zhang et al. (2023) used it to study asymptotic normality of estimators in an AR(1) binomial model. Kuljus and Ranneby (2021) considered maximum spacing as an approximation of relative entropy for semi-Markov processes and continuous-time Markov chains. A good resource for the use of copulas in estimation problems is Chen and Fan (2006).

Our paper is dedicated to the analysis of three copula families. This analysis relates mixing coefficients defined by Bradley (2007), in the context of copulas, to the study of Markov chains. Copulas are functions that naturally represent the effect of dependence in multivariate distributions. This fact is due to Sklar's theorem (see Sklar (1959)). Copulas are obtained by scaling out the effect of marginal distributions from joint distributions. In the case of continuous marginal distributions, most mixing coefficients of Markov chains are not affected by the marginal distributions (see Longla (2015)). Combining these facts with the special nature of the copula families of interest in this paper, Longla (2023) showed that these copulas generate ψ-mixing. It is worth mentioning that the copula coefficients are eigenvalues of the copula operators. They are associated with the eigenfunctions φ_k(x) that appear in the copula formula of Longla (2023).

The first family that we study has a density based on orthogonal cosine functions. The second is based on orthogonal sine and cosine functions, and the third family is based on Legendre polynomials. Orthogonality here is in the sense of L²(0,1). This last family is an extension of the Farlie-Gumbel-Morgenstern (FGM) copula family. It introduces a second dependence coefficient λ₂. The FGM family has been extensively studied in the literature, with several extensions (see Hurlimann (2017) or Ebaid et al. (2022)).

In general, a multivariate central limit theorem is provided in our paper for parameter estimators of the copulas of Longla (2023). This central limit theorem is based on Kipnis and Varadhan (1986), who showed that for functions of a stationary reversible Markov chain with finite mean and variance, when the variance of partial sums is asymptotically linear in n, the central limit theorem holds.

A general theorem of Billingsley (1961) is used to establish the limiting distribution of the MLE. We show that the MLE exists and is unique when the true parameters are inside the support region. Billingsley proved that under some regularity conditions (which are checked for each of the copula families), the MLE is asymptotically normal with variance given by a form of the Fisher information. For practical reasons, given that the Fisher information cannot be computed in closed form, an estimate was used. The computation time of this procedure makes the MLE less attractive than the estimator that we propose. Based on properties of the coefficients of the construction, our proposed parameter estimators have closed-form asymptotic variances.

1.1 Definitions and comments

We call a copula, in general, a bivariate copula, which is defined as a bivariate joint cumulative distribution function with uniform marginals on [0,1]. An equivalent definition can be found in Nelsen (2006). A stationary Markov chain can be represented by a copula and its stationary or marginal distribution. The copula of a stationary Markov chain is that of any pair of its consecutive variables. We say that a stationary sequence (Y₁,⋯,Yₙ) with finite mean μ and variance σ² satisfies the central limit theorem if √n(Ȳ−μ) → N(0,σ₀²) for some positive number σ₀². Kipnis and Varadhan (1986) showed that when the variance of partial sums of a function of the Markov chain satisfies σ_f² = lim_{n→∞} n⁻¹ var(Sₙ), where Sₙ = ∑_{i=1}^n f(X_i), Y = f(X) and μ = 𝔼(f(X)), the central limit theorem holds. This theorem is combined with the Cramér-Wold device to prove a multivariate central limit theorem for parameter estimators in this paper. In doing so, we use the n-fold product of copulas to compute the limiting variance. The n-fold product of a copula uses partial derivatives to construct a new copula. The notation C_{,i}(u,v) stands for the derivative of C(u,v) with respect to the i-th variable. The n-fold product of a copula is defined in Nelsen (2006) (or by Darsow et al. (1992)) by C¹(u,v) = C(u,v),

\text{for } n>1,\quad C^{n}(u,v)=C^{n-1}*C(u,v)=\int_{0}^{1}C^{n-1}_{,2}(u,t)\,C_{,1}(t,v)\,dt.

Given that we are dealing with Markov chains, we recall that the fold product for n=2 is the joint cumulative distribution function of (X₁,X₃) when (X₁,X₂,X₃) is a stationary Markov chain with copula C(u,v) and the uniform marginal distribution (see Darsow et al. (1992)).

In general, the copula Cⁿ(u,v) is the joint distribution of (X₀,Xₙ) when X₀,⋯,Xₙ is a stationary Markov chain generated by C(u,v) and the uniform distribution. These definitions can be found in different forms in various papers, but are equivalent when the study is concerned with absolutely continuous copulas and continuous marginal distributions. This is because Sklar's theorem guarantees uniqueness of the representation of the joint distribution of a vector through its copula and marginal distributions (see Sklar (1959)). Copulas simplify the study of mixing properties of Markov chains because, for continuous random variables, the marginal distributions are scaled out in the computation; the formulas reduce to equivalent forms in terms of the uniform marginal distributions. This is one of the advantages of dealing with copulas while studying mixing.
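To make the fold product concrete, the short R sketch below checks it numerically for a sine copula of Section 2 with two terms: for densities of the form (1), orthonormality of the φ_k implies that the coefficients of C² are the squares λ_k², so the defining integral can be compared with this closed form at any point. The helper names C1 and C2v, and the parameter values, are our own choices for illustration.

    # Numeric check (R) of the 2-fold product for a two-term sine copula.
    l1 <- 0.28; l2 <- -0.15
    # C_{,1}(u,v): derivative of C(u,v) with respect to the first argument
    C1  <- function(u, v) v + (2/pi)*l1*cos(pi*u)*sin(pi*v) +
                           (1/pi)*l2*cos(2*pi*u)*sin(2*pi*v)
    # C_{,2}(u,t): derivative with respect to the second argument
    C2v <- function(u, t) u + (2/pi)*l1*sin(pi*u)*cos(pi*t) +
                           (1/pi)*l2*sin(2*pi*u)*cos(2*pi*t)
    u <- 0.3; v <- 0.7
    fold   <- integrate(function(t) C2v(u, t) * C1(t, v), 0, 1)$value
    closed <- u*v + (2/pi^2)*l1^2*sin(pi*u)*sin(pi*v) +
              (1/(2*pi^2))*l2^2*sin(2*pi*u)*sin(2*pi*v)
    c(fold = fold, closed = closed)   # the two values agree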

The mixing coefficients that we refer to are defined in Bradley (2007), and were used in Longla (2023) to establish properties of Markov chains for the copulas that we study in this work. This work is not concerned with mixing in general, but rather uses the fact that mixing was established in an earlier work. This justifies the limitation of the background to the provided references. The mixing rate ensures convergence of the variance of partial sums of functions of the Markov chains, and implies the ergodicity needed in the theorem of Kipnis and Varadhan (1986). This theorem is in turn used to establish central limit theorems for copula parameter estimators. In the simulation part of this work, we use a result of Longla and Peligrad (2021) to derive some estimators of copula parameters. These estimators are compared to the MLE and to a second class of estimators that is derived based on the fact that the parameters are eigenvalues of the operator induced by the copula of the Markov chain on L²(0,1).

1.2 Structure of the paper and main results

We propose several estimators of copula parameters for the copulas created in Longla (2023). A large-sample study of these estimators is provided in general via the derivation of a multivariate central limit theorem. These results are applied to three concrete families of copulas. We have conducted a comparative simulation study in which four different estimators are used. For each of the examples, we perform maximum likelihood estimation and show that the R codes for our proposed estimators take less time to run than the MLE and can be preferable in large samples.

The rest of the paper is organized as follows. In Section 2 we present the copula families of interest in this work, along with some graphs of copula densities and level curves. In Section 3 we present parameter estimation procedures for Markov chains generated by the copula families proposed by Longla (2023). This section includes our proposed estimators, central limit theorems for the estimators and χ² tests of independence of observations. A multivariate central limit theorem is provided here as well. These estimators and central limit theorems are results of this paper. They are applied to each of the three examples that are proposed in Section 2. In Section 4 we provide a simulation study of parameter estimators for the considered copula families. For each of the proposed copulas, we prove existence of the MLE and derive its asymptotic distribution. A comparative study is provided at the end of that section to show the advantages of our proposed estimators. In Section 5 we provide comments and conclusions. An appendix of proofs concludes the paper.

2 Copula families of interest

Copulas in general are defined as multivariate distributions with uniform marginals on (0,1) (see Nelsen (2006), Durante and Sempi (2016) or Sklar (1959) for more on the topic). Several authors have used this tool over the years to model dependence in various applied fields of research. In this paper, we are interested in copulas that were proposed by Longla (2023). Due to their novelty, these copulas have not yet been considered for estimation problems. There is no work that suggests how to use them, how to estimate their parameters or how to test any related claims. Longla (2023) has shown that for any sequences λ_k and φ_k(x) such that:

  • The sequence |λ_k| takes finitely many values or converges to 0.

  • The set {φ_k(x), k ∈ ℕ} is an orthonormal basis of L²(0,1).

The function

c(u,v)=\sum_{k=1}^{\infty}\lambda_{k}\varphi_{k}(u)\varphi_{k}(v) (1)

is a copula density, and the series converges pointwise and uniformly on (0,1)², if

1+\sum_{k=1}^{\infty}\lambda_{k}\alpha_{k}\geq 0,\quad\text{where}\quad\alpha_{k}=\begin{cases}\max\varphi^{2}_{k}&\text{if }\lambda_{k}<0\\ \min\varphi_{k}\max\varphi_{k}&\text{if }\lambda_{k}>0.\end{cases} (2)

Among examples of copula families of the form (1) that satisfy condition (2), Longla (2023) introduced the following copulas.

  • The sine-cosine copulas as perturbations of Π(u,v)

Longla (2023) considered the trigonometric orthonormal basis of L²(0,1) that consists of the functions {1, √2 cos 2πkx, √2 sin 2πkx, k ∈ ℕ*}. He showed that

C(u,v)=uv+\frac{1}{2\pi^{2}}\sum_{k=1}^{\infty}\frac{\lambda_{k}}{k^{2}}\sin 2\pi ku\,\sin 2\pi kv+\frac{1}{2\pi^{2}}\sum_{k=1}^{\infty}\frac{\mu_{k}}{k^{2}}(1-\cos 2\pi ku)(1-\cos 2\pi kv) (3)

is a copula when ∑_{k=1}^∞ (|λ_k| + |μ_k|) ≤ 1/2. This copula was called the sine-cosine copula. It is a perturbation of the independence copula.

Figure 1: Sine-cosine copula: λ₁=0.14, μ₁=−0.11, λ₂=0.13 and μ₂=−0.12.
  • The sine copulas as perturbations of Π(u,v)

Sine copulas were obtained using the basis of L²(0,1) given by {1, √2 cos kπx, k>0, k ∈ ℕ}. Their formula is

Figure 2: Sine copula with λ₁=0.28 and λ₂=−0.15.

C(u,v)=uv+\frac{2}{\pi^{2}}\sum_{k=1}^{\infty}\frac{\lambda_{k}}{k^{2}}\sin k\pi u\,\sin k\pi v,\quad\text{for}\quad\sum_{k=1}^{\infty}\lvert\lambda_{k}\rvert\leq 1/2. (4)

Any finite sum of terms from (4) that includes the first term uv is a copula. Sine copulas and sine-cosine copulas are not part of the trigonometric copulas of Chesneau (2021). These copulas are based on fundamental properties of the linear operators induced by copulas on the Hilbert space L²(0,1). Their coefficients are eigenvalues, and the functions φ_k(x) are eigenfunctions of the operator (see Longla (2023)).

  • The extended Farlie-Gumbel-Morgenstern copulas, or Legendre copulas

They were obtained using shifted Legendre polynomials, which form an orthonormal basis {1, φ_k, k>0, k ∈ ℕ} of L²(0,1). Legendre polynomials are defined by Rodrigues' formula (see Rodrigues (1816))

P_{k}(x)=\frac{1}{2^{k}k!}\frac{d^{k}}{dx^{k}}(x^{2}-1)^{k}. (5)

Figure 3: Legendre copula with λ₁=0.15 and λ₂=0.1.

The orthonormal basis of shifted Legendre polynomials is

\varphi_{k}(x)=\sqrt{2k+1}\,P_{k}(2x-1). (6)

For k>0, φ_k(1) = √(2k+1) = max φ_k; for odd k, min φ_k = −√(2k+1) = φ_k(0); and for even k, 0 > min φ_k > −√(2k+1). This minimum is achieved at at least one point. Therefore, when (1) contains φ_k(x) alone, the range of λ_k is

\frac{-1}{2k+1}\leq\lambda_{k}\leq\min\left(1,\frac{1}{(2k+1)\lvert\min P_{k}(x)\rvert}\right).

Using this basis in formula (1), we obtain what Longla (2023) called the extended FGM copula family, for any sum of terms that contains the first two functions. For more on Legendre polynomials, see Belousov (1962), Szegö (1975) and the references therein.

3 Parameter estimation

The constructed copula families include the extended Farlie-Gumbel-Morgenstern copula family. The FGM copula family has been extensively studied in recent years. Earlier studies include Morgenstern (1956), Farlie (1960) and Gumbel (1960). These authors discussed the bivariate FGM copula family. Johnson and Kotz (1975) formulated the FGM copula as a multivariate distribution on [0,1]^d for any positive integer d>1. The multivariate FGM copula family is just as valuable as the multivariate normal distribution thanks to its simplicity. It has been applied to statistical modeling in various research fields such as economics (see Patton (2006)), finance (see Cossette et al. (2013)) and reliability engineering (see Navarro and Durante (2017) and, recently, Ota and Kimura (2021)).

Suppose that we have a Markov chain generated by a copula and the uniform distribution as stationary distribution. Assuming that this copula has the representation (1), we have the following result on estimators of the copula coefficients.

Theorem 3.1.

For any copula-based Markov chain (U₀,U₁,⋯,Uₙ) generated by the copula (1) and the uniform marginal distribution,

\hat{\lambda}_{k}=\frac{1}{n}\sum_{i=1}^{n}\varphi_{k}(U_{i})\varphi_{k}(U_{i-1}) (7)

is an unbiased and consistent estimator of λ_k, provided that

\sum_{z}\lvert\lambda_{z}\rvert\left(\int_{0}^{1}\varphi_{z}(x)\varphi_{k}^{2}(x)\,dx\right)^{2}<\infty\quad\text{and}\quad\int_{0}^{1}\varphi_{k}^{4}(x)\,dx<\infty.

Moreover, \sqrt{n}(\hat{\lambda}_{k}-\lambda_{k})\to N(0,\sigma^{2}), where the positive real number σ² is given by

\sigma^{2}=1-\lambda_{k}^{2}+\sum_{z}\lambda_{z}\left(\int_{0}^{1}\varphi_{z}(x)\varphi_{k}^{2}(x)\,dx\right)^{2}+2\lambda_{k}^{2}\left(\int_{0}^{1}\varphi^{4}_{k}(x)\,dx-1\right)+2\sum_{m=2}^{\infty}\lambda_{k}^{2}\sum_{z}\lambda^{m-1}_{z}\left(\int_{0}^{1}\varphi_{z}(x)\varphi_{k}^{2}(x)\,dx\right)^{2}.
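For intuition, formula (7) is straightforward to compute from a simulated chain. The short R sketch below does this for the sine-copula basis φ_k(x) = √2 cos(kπx); the function name lambda_hat is ours, and the basis choice is only an example.

    # Sketch (R): the estimator (7) for the sine-copula basis
    # phi_k(x) = sqrt(2) * cos(k*pi*x); U is a chain (U_0, ..., U_n).
    lambda_hat <- function(U, k) {
      phi <- sqrt(2) * cos(k * pi * U)
      n <- length(U) - 1
      mean(phi[-1] * phi[-(n + 1)])   # (1/n) sum phi(U_i) * phi(U_{i-1})
    }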

Note that when the φ_k(x) are uniformly bounded, the conditions on λ_k reduce to ∑|λ_k| < ∞, which is satisfied under condition (2). Based on Theorem 3.1, we can construct confidence intervals for each of the values of λ_k separately for every case of φ_k(x). This requires estimation of the variance, as shown in the examples below. However, to tackle any problem involving the parameters jointly, we need to establish a result on the joint limiting distribution of these estimators of the coefficients of our copula. These distributions can be valuable in model selection for the number of parameters to include in the estimator of the copula. We need to find the distribution of Λ̂ = (λ̂_{k₁},⋯,λ̂_{k_s}) for any increasing sequence of integers k₁,⋯,k_s.

Theorem 3.2.

Assume (U₀,U₁,⋯,Uₙ) is a copula-based Markov chain generated by a copula of the form (1). Assume \int_{0}^{1}\varphi^{2}_{k_{i}}(x)\varphi^{2}_{k_{j}}(x)\,dx<\infty,

\sum_{z=1}^{\infty}\lvert\lambda_{z}\rvert\left(\int_{0}^{1}\varphi_{z}(x)\varphi_{k_{i}}(x)\varphi_{k_{j}}(x)\,dx\right)^{2}<\infty\quad\text{and}

\sum_{z=1}^{\infty}\lvert\lambda_{z}\rvert\left(\int_{0}^{1}\varphi_{z}(x)\varphi_{k_{i}}^{2}(x)\,dx\right)\left(\int_{0}^{1}\varphi_{z}(x)\varphi_{k_{j}}^{2}(x)\,dx\right)<\infty;

then for some positive definite matrix Σ we have

\sqrt{n}(\hat{\Lambda}-\Lambda)\to N_{s}({\bf 0},\Sigma).
Remark 1.

For an orthonormal basis of L²(0,1) made of uniformly bounded and symmetric functions, the conditions of Theorem 3.2 are automatically satisfied, because the integrals are uniformly bounded and ∑_{i=1}^∞ |λ_i| < ∞.

Proposition 3.3.

Under the assumptions of Theorem 3.2, the off-diagonal elements of the matrix Σ are given by

\Sigma_{k,j}=\sum_{z=1}^{s}\lambda_{z}\left(\int_{0}^{1}\varphi_{z}(x)\varphi_{k}(x)\varphi_{j}(x)\,dx\right)^{2}+\lambda_{k}\lambda_{j}\left(2\left(\int_{0}^{1}\varphi^{2}_{k}(x)\varphi_{j}^{2}(x)\,dx-1\right)+2\sum_{z=1}^{s}\sum_{m=2}^{\infty}\lambda_{z}^{m-1}\left(\int_{0}^{1}\varphi_{z}(x)\varphi_{k}^{2}(x)\,dx\right)\left(\int_{0}^{1}\varphi_{z}(x)\varphi_{j}^{2}(x)\,dx\right)-1\right),\quad k\neq j. (8)

3.1 Applications to some examples

For the classes of copulas given above, Spearman's ρ(C) is estimated by λ̂₁.

  1. Sine-cosine copulas. Theorem 3.1 implies that for copulas based on the sine-cosine basis,

    \sigma^{2}_{k}=1+\frac{\lambda_{2k}}{2}+\frac{\lambda_{k}^{2}\lambda_{2k}}{1-\lambda_{2k}},\quad\text{for}\quad\hat{\lambda}_{k}=\frac{1}{n}\sum_{i=1}^{n}2\cos 2\pi kU_{i}\cos 2\pi kU_{i-1},

    \text{and}\quad\sigma^{2}_{k}=1+\frac{\lambda_{2k}}{2}+\frac{\mu_{k}^{2}\lambda_{2k}}{1-\lambda_{2k}},\quad\text{for}\quad\hat{\mu}_{k}=\frac{1}{n}\sum_{i=1}^{n}2\sin 2\pi kU_{i}\sin 2\pi kU_{i-1}.

    It is also clear that for sine-cosine copulas, with i ≠ j, Σ_{i,j} is given by the following table:

    φ_{k_i} \ φ_{k_j}   |  √2 cos(2πk_j x)                                  |  √2 sin(2πk_j x)
    √2 cos(2πk_i x)     |  ½(λ_{k_i+k_j} + λ_{|k_i−k_j|}) − λ_{k_i}λ_{k_j}  |  ½(μ_{k_i+k_j} + μ_{|k_i−k_j|}) − λ_{k_i}μ_{k_j}
    √2 sin(2πk_i x)     |  ½(μ_{k_i+k_j} + μ_{|k_i−k_j|}) − μ_{k_i}λ_{k_j}  |  ½(λ_{k_i+k_j} + λ_{|k_i−k_j|}) − μ_{k_i}μ_{k_j}

    Therefore, the multivariate central limit theorem holds with the Σ_{i,j} as entries of the covariance matrix Σ. A test of independence can be built from this limit theorem; an R sketch of such a test is given after this list. Under the hypothesis of independence of the observations, against the alternative of a Markov chain with copula C(u,v), all coefficients are equal to 0, and the estimators are asymptotically independent with σ² = 1. We formulate this in the following proposition.

    Proposition 3.4.

    For any square integrable copula with a representation in the sine-cosine series, under the assumption that all parameters are equal to zero, any vector of estimators with finitely many components satisfies the √n-central limit theorem with variance-covariance matrix Σ = 𝕀_s, where s is the length of the vector and 𝕀_s is the identity matrix of dimension s. Moreover, if a sequence W_{n_i} takes on the values of λ̂_k or μ̂_k for the Markov chain U₀,⋯,Uₙ, then

    n\sum_{i=1}^{s}W_{n_{i}}^{2}\to\chi^{2}(s)\quad\text{as}\quad n\to\infty,\quad\text{and}\quad\frac{1}{\sqrt{2s}}\left(n\sum_{i=1}^{s}W_{n_{i}}^{2}-s\right)\to N(0,1)\quad\text{as}\quad n\to\infty,\ \text{then}\ s\to\infty.
  2. Sine copulas. Considering sine copulas, we have \sigma^{2}_{k}=1+\frac{\lambda_{2k}}{2}+\frac{\lambda_{k}^{2}\lambda_{2k}}{1-\lambda_{2k}}. The estimators are

    \hat{\lambda}_{k}=\frac{1}{n}\sum_{i=1}^{n}2\cos\pi kU_{i}\cos\pi kU_{i-1}.

    Therefore, Proposition 3.4 holds under exactly the same conditions.
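The following R sketch illustrates the χ²(s) independence test of Proposition 3.4 for the sine basis. It is a sketch only: lambda_hat is the helper sketched after Theorem 3.1, and the choice of components ks is an assumption of the example.

    # Sketch (R): chi-square test of independence from Proposition 3.4.
    indep_test <- function(U, ks = 1:2, alpha = 0.05) {
      n <- length(U) - 1
      W <- sapply(ks, function(k) lambda_hat(U, k))  # estimators under H0
      stat <- n * sum(W^2)                 # -> chi^2(s) under independence
      pval <- pchisq(stat, df = length(ks), lower.tail = FALSE)
      list(statistic = stat, p.value = pval, reject = (pval < alpha))
    }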

4 Simulation study for the three examples

In this section, we compare the proposed estimators to other known estimators: the maximum likelihood estimator and the robust estimator of Longla and Peligrad (2021), the latter in two forms. The performance is studied on Markov chains simulated using three different copulas from the families described above.

4.1 The specific copula examples

We consider three copulas with densities of the form (1): a sine copula, a sine-cosine copula and a Legendre copula. The first two exhibit several peaks and troughs, while the third is an extension of the FGM copula family. This third copula can be used to test departure from the FGM copula family in modelling. They are defined as follows.

  1. Sine copulas

    C(u,v)=uv+\dfrac{2}{\pi^{2}}\lambda_{1}\sin(\pi u)\sin(\pi v)+\dfrac{1}{2\pi^{2}}\lambda_{2}\sin(2\pi u)\sin(2\pi v), (9)

    for |λ₁| + |λ₂| ≤ 0.5, with density

    c(u,v)=1+2\lambda_{1}\cos(\pi u)\cos(\pi v)+2\lambda_{2}\cos(2\pi u)\cos(2\pi v).

  2. Sine-cosine copulas

    C(u,v)=uv+\dfrac{1}{2\pi^{2}}\left[\lambda_{1}\sin(2\pi u)\sin(2\pi v)+\mu_{1}(1-\cos(2\pi u))(1-\cos(2\pi v))\right]+\dfrac{1}{8\pi^{2}}\left[\lambda_{2}\sin(4\pi u)\sin(4\pi v)+\mu_{2}(1-\cos(4\pi u))(1-\cos(4\pi v))\right],

    for |λ₁| + |λ₂| + |μ₁| + |μ₂| ≤ 0.5, with density

    c(u,v)=1+2\lambda_{1}\cos(2\pi u)\cos(2\pi v)+2\mu_{1}\sin(2\pi u)\sin(2\pi v)+2\lambda_{2}\cos(4\pi u)\cos(4\pi v)+2\mu_{2}\sin(4\pi u)\sin(4\pi v). (10)

  3. Legendre copulas

    If φ_k ∈ {√3(2x−1), √5(6x²−6x+1)} and 3|λ₁| + 5|λ₂| ≤ 1, we obtain the following Legendre copula:

    C(u,v)=uv+3\lambda_{1}(u^{2}-u)(v^{2}-v)+5\lambda_{2}(2u^{3}-3u^{2}+u)(2v^{3}-3v^{2}+v), (11)

    with density

    c(u,v)=1+3\lambda_{1}(2u-1)(2v-1)+5\lambda_{2}(6u^{2}-6u+1)(6v^{2}-6v+1). (12)

Each of these copulas is a perturbation of the independence copula and can be used to capture some level of dependence in the data. We generate Markov chains using copulas from these families for known values of the parameters. Then we use them to build confidence intervals and compute coverage probabilities based on the asymptotic distributions of the proposed estimators.

4.2 Algorithm for simulation of the Markov chains

As shown in Darsow et al. (1992), the transition kernel of a copula-based Markov chain is given by the derivative of the copula with respect to the first variable: P(x,[0,y]) = C_{,1}(x,y). Therefore, to generate a realization of a Markov chain (U₀,⋯,Uₙ) using each of the three considered copulas and the uniform marginal distribution, we use the following algorithm.

  1. Generate U₀ ∼ Unif(0,1);

  2. Set U[1] = U₀;

  3. For 2 ≤ i ≤ n+1:

    a) generate w ∼ Unif(0,1);

    b) take U[i] as the unique root on [0,1] of the equation C_{,1}(U[i−1], U[i]) − w = 0.

For each of the copulas used, the last step relies on the R function uniroot. For n = 999, we obtained the graphs in Figure 4, displaying the simulated time series for the sine copula, the sine-cosine copula and the Legendre copula, respectively.
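A minimal R implementation of this algorithm for the sine copula (9) is sketched below; the function name sim_chain is ours, and the closed form of C_{,1} follows by differentiating (9) with respect to u.

    # Sketch (R): simulate (U_0, ..., U_n) from the sine copula (9).
    # C_{,1}(u, v) = P(U_i <= v | U_{i-1} = u), obtained from (9).
    sim_chain <- function(n, l1, l2) {
      C1 <- function(u, v) v + (2/pi)*l1*cos(pi*u)*sin(pi*v) +
                            (1/pi)*l2*cos(2*pi*u)*sin(2*pi*v)
      U <- numeric(n + 1)
      U[1] <- runif(1)                        # step 1: U_0 ~ Unif(0,1)
      for (i in 2:(n + 1)) {
        w <- runif(1)                         # step 3a
        # step 3b: invert the conditional CDF with uniroot
        U[i] <- uniroot(function(v) C1(U[i-1], v) - w, c(0, 1))$root
      }
      U
    }
    set.seed(1)
    U <- sim_chain(999, l1 = 0.28, l2 = -0.15)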

Figure 4: Data generated under each of the copulas for n = 999.

4.3 Study of the proposed estimator for our examples

Assume a Markov chain (U₀,⋯,Uₙ) is generated using a copula of the form (1). The estimator of λ_k is given by

\hat{\lambda}_{k}=\dfrac{1}{n}\sum_{i=1}^{n}\varphi_{k}(U_{i})\varphi_{k}(U_{i-1}),

and Λ̂ = (λ̂₁,…,λ̂_s) satisfies \sqrt{n}(\hat{\Lambda}-\Lambda)\longrightarrow N({\bf 0},\Sigma), where Σ is the variance-covariance matrix given in Proposition 4.1 below.

Proposition 4.1.

The copulas of the examples are constructed by means of uniformly bounded functions φ_k(x), and the sum of the absolute values of the parameters is bounded. Therefore, Theorem 3.2 holds with the following Σ.

  1. For the sine copula of formula (9), with Λ̂ = (λ̂₁, λ̂₂):

    \Sigma=\left(\begin{matrix}1+\dfrac{\lambda_{2}}{2}+\dfrac{\lambda_{1}^{2}\lambda_{2}}{1-\lambda_{2}}&\lambda_{1}\left(\dfrac{1}{2}-\lambda_{2}\right)\\ \lambda_{1}\left(\dfrac{1}{2}-\lambda_{2}\right)&1\end{matrix}\right). (13)

  2. For the Legendre copula of formula (11), with Λ̂ = (λ̂₁, λ̂₂):

    \Sigma=\left(\begin{matrix}1+\dfrac{4}{5}\lambda_{2}+\lambda_{1}^{2}\dfrac{3+5\lambda_{2}}{5(1-\lambda_{2})}&\dfrac{4}{5}\lambda_{1}+\lambda_{1}\lambda_{2}\dfrac{1+7\lambda_{2}}{7(1-\lambda_{2})}\\ \dfrac{4}{5}\lambda_{1}+\lambda_{1}\lambda_{2}\dfrac{1+7\lambda_{2}}{7(1-\lambda_{2})}&1+\dfrac{20}{49}\lambda_{2}+\lambda_{2}^{2}\dfrac{63-23\lambda_{2}}{49(1-\lambda_{2})}\end{matrix}\right). (14)

  3. For the sine-cosine copula with density (10), and Λ̂ = (λ̂₁, λ̂₂, μ̂₁, μ̂₂):

    \Sigma=\left(\begin{matrix}1+\dfrac{\lambda_{2}}{2}+\dfrac{\lambda_{1}^{2}\lambda_{2}}{1-\lambda_{2}}&\lambda_{1}\left(\dfrac{1}{2}-\lambda_{2}\right)&\dfrac{\mu_{2}}{2}-2\lambda_{1}\mu_{1}&\dfrac{\mu_{1}}{2}-\lambda_{1}\mu_{2}\\ \lambda_{1}\left(\dfrac{1}{2}-\lambda_{2}\right)&1&\dfrac{\mu_{1}}{2}-\lambda_{2}\mu_{1}&-2\lambda_{2}\mu_{2}\\ \dfrac{\mu_{2}}{2}-2\lambda_{1}\mu_{1}&\dfrac{\mu_{1}}{2}-\lambda_{2}\mu_{1}&1+\dfrac{\lambda_{2}}{2}+\dfrac{\mu_{1}^{2}\lambda_{2}}{1-\lambda_{2}}&\dfrac{\lambda_{1}}{2}-\mu_{1}\mu_{2}\\ \dfrac{\mu_{1}}{2}-\lambda_{1}\mu_{2}&-2\lambda_{2}\mu_{2}&\dfrac{\lambda_{1}}{2}-\mu_{1}\mu_{2}&1\end{matrix}\right). (15)

Proposition 4.1 shows that, for the sine copula, the estimators are asymptotically independent if λ₁ = 0 or λ₂ = 0.5. The same is true for the Legendre copula if λ₁ = 0. These matrices allow us to construct confidence intervals for single copula parameters. Any test of hypotheses on these parameters can use the obtained multivariate central limit theorem.

Remark 2.

Under the assumptions of Proposition 4.1, the individual confidence intervals for the parameters are given by

\left(\hat{\lambda}_{k}-z_{\frac{\alpha}{2}}\frac{\hat{\sigma}_{k}}{\sqrt{n}},\ \hat{\lambda}_{k}+z_{\frac{\alpha}{2}}\frac{\hat{\sigma}_{k}}{\sqrt{n}}\right)\ \text{ and }\ \left(\hat{\mu}_{k}-z_{\frac{\alpha}{2}}\frac{\hat{\tau}_{k}}{\sqrt{n}},\ \hat{\mu}_{k}+z_{\frac{\alpha}{2}}\frac{\hat{\tau}_{k}}{\sqrt{n}}\right). (16)

\text{Moreover,}\quad Q=n(\hat{\Lambda}-\Lambda)^{\prime}\Sigma^{-1}(\hat{\Lambda}-\Lambda)\underset{n\rightarrow\infty}{\longrightarrow}\chi^{2}_{s},\quad\text{and}

C=\left\{\Lambda\in\mathbb{R}^{s}:\hat{Q}<\chi^{2}_{\alpha}(s)\right\}=\left\{\Lambda\in\mathbb{R}^{s}:n(\hat{\Lambda}-\Lambda)^{\prime}\hat{\Sigma}^{-1}(\hat{\Lambda}-\Lambda)<\chi_{\alpha}^{2}(s)\right\} (17)

is the (1−α)100% confidence region for the vector Λ, where χ²_α(s) is defined by P(χ²(s) > χ²_α(s)) = α, and σ̂_k (respectively τ̂_k) is the estimate of σ_k (respectively τ_k) obtained by replacing λ_k and μ_k by their estimates in the matrix Σ. The last statement holds thanks to the consistency of the estimators.
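As an illustration, the R sketch below computes the individual interval (16) for λ₁ of the sine copula, using the plug-in variance from (13); ci_lambda1 and lambda_hat are our helper names, not part of the original code.

    # Sketch (R): 95% confidence interval (16) for lambda_1, sine copula,
    # with sigma_1 estimated by plug-in from the matrix (13).
    ci_lambda1 <- function(U, alpha = 0.05) {
      n  <- length(U) - 1
      l1 <- lambda_hat(U, 1); l2 <- lambda_hat(U, 2)
      s1 <- sqrt(1 + l2/2 + l1^2 * l2 / (1 - l2))   # hat-sigma_1 from (13)
      z  <- qnorm(1 - alpha/2)
      c(lower = l1 - z * s1 / sqrt(n), upper = l1 + z * s1 / sqrt(n))
    }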

4.4 The Maximum likelihood estimator (MLE)

For the Markov chain (U₀,⋯,Uₙ) generated by each of the considered copulas and the uniform marginal distribution, the log-likelihood function is given below (an R sketch of the constrained optimization for the sine case follows this list).

  1. For the sine copula:

    \ell(\Lambda)=\sum_{i=1}^{n}\ln(1+2\lambda_{1}\cos(\pi U_{i})\cos(\pi U_{i-1})+2\lambda_{2}\cos(2\pi U_{i})\cos(2\pi U_{i-1})),

    and the MLE of Λ = (λ₁, λ₂) is given by:

    \hat{\Lambda}^{ML}=\underset{\lvert\lambda_{1}\rvert+\lvert\lambda_{2}\rvert\leq 0.5}{\text{Argmax}}\ \ell(\Lambda). (18)

  2. For the sine-cosine copula:

    \ell(\Lambda)=\sum_{i=1}^{n}\ln(1+2\lambda_{1}\cos(2\pi U_{i})\cos(2\pi U_{i-1})+2\mu_{1}\sin(2\pi U_{i})\sin(2\pi U_{i-1})+2\lambda_{2}\cos(4\pi U_{i})\cos(4\pi U_{i-1})+2\mu_{2}\sin(4\pi U_{i})\sin(4\pi U_{i-1})),

    and the MLE of Λ = (λ₁, λ₂, μ₁, μ₂) is given by:

    \hat{\Lambda}^{ML}=\underset{\lvert\lambda_{1}\rvert+\lvert\lambda_{2}\rvert+\lvert\mu_{1}\rvert+\lvert\mu_{2}\rvert\leq 0.5}{\text{Argmax}}\ \ell(\Lambda). (19)

  3. For the Legendre copula:

    \ell(\Lambda)=\sum_{i=1}^{n}\ln(1+3\lambda_{1}(2U_{i}-1)(2U_{i-1}-1)+5\lambda_{2}(6U_{i}^{2}-6U_{i}+1)(6U_{i-1}^{2}-6U_{i-1}+1)),

    and the MLE of Λ is given by:

    \hat{\Lambda}^{ML}=\underset{3\lvert\lambda_{1}\rvert+5\lvert\lambda_{2}\rvert\leq 1}{\text{Argmax}}\ \ell(\Lambda). (20)
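A hedged sketch of (18) in R is given below: the constraint |λ₁| + |λ₂| ≤ 0.5 is written as four linear inequalities so that constrOptim (the routine used in our simulations, see Remark 6) applies; negll and fit are our names.

    # Sketch (R): MLE (18) for the sine copula via constrOptim.
    negll <- function(par, U) {
      n <- length(U) - 1
      d <- 1 + 2*par[1]*cos(pi*U[-1])*cos(pi*U[-(n+1)]) +
               2*par[2]*cos(2*pi*U[-1])*cos(2*pi*U[-(n+1)])
      -sum(log(d))                      # negative log-likelihood
    }
    # |l1| + |l2| <= 0.5  <=>  ui %*% par >= ci for the four sign patterns
    ui <- rbind(c(-1,-1), c(-1,1), c(1,-1), c(1,1))
    ci <- rep(-0.5, 4)
    fit <- function(U) constrOptim(c(0, 0), negll, grad = NULL,
                                   ui = ui, ci = ci, U = U)$par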

Billingsley (1961) established the asymptotic likelihood theory for the MLE of the parameters of a Markov process, namely \sqrt{n}(\hat{\Lambda}^{ML}-\Lambda)\to N({\bf 0},\Sigma^{-1}), under the following regularity conditions.

  1. The Markov chain is ergodic and not necessarily stationary.

  2. For u ∈ [0,1], the set of v for which c(u,v) > 0 does not depend on Λ.

  3. For 1 ≤ i,j,k ≤ s, where s is the length of the vector Λ,

    \frac{\partial}{\partial\lambda_{i}}\ln c(u,v),\quad\frac{\partial^{2}}{\partial\lambda_{i}\partial\lambda_{j}}\ln c(u,v)\quad\text{and}\quad\frac{\partial^{3}}{\partial\lambda_{i}\partial\lambda_{j}\partial\lambda_{k}}\ln c(u,v)

    exist and are continuous on the set of all considered Λ.

  4. For 1 ≤ i,j,k ≤ s and A′ an open subset of the support of the vector Λ:
    \mathbb{E}_{\Lambda}\left[\left(\frac{\partial}{\partial\lambda_{i}}\ln c(u,v)\right)^{2}\right]<\infty and \mathbb{E}_{\Lambda}\left[\underset{\Lambda\in A^{\prime}}{\sup}\left\lvert\frac{\partial^{3}}{\partial\lambda_{i}\partial\lambda_{j}\partial\lambda_{k}}\ln c(u,v)\right\rvert\right]<\infty.

  5. The quantities

    \sigma_{i,j}=\mathbb{E}_{\Lambda}\left[\frac{\partial}{\partial\lambda_{i}}\ln c(u,v)\,\frac{\partial}{\partial\lambda_{j}}\ln c(u,v)\right] (21)

    form a non-singular matrix Σ.

Theorem 4.2.

For any Markov chain generated by the considered copulas and the uniform distribution on [0,1], there exists a consistent MLE Λ^{ML} such that

\sqrt{n}(\Lambda^{ML}-\Lambda)\to N({\bf 0},\Sigma^{-1}),

and the (1−α)100% confidence interval for λ_k (or μ_k for the sine-cosine copula) is

\left(\lambda^{ML}_{k}-z_{\frac{\alpha}{2}}\sqrt{\left[I_{n}^{-1}(\Lambda^{ML})\right]_{k,k}}\ ,\ \lambda^{ML}_{k}+z_{\frac{\alpha}{2}}\sqrt{\left[I_{n}^{-1}(\Lambda^{ML})\right]_{k,k}}\right). (22)
Remark 3.

In each of the cases, we have the following.

1. For the sine copula:

I_{n}(\Lambda)=\left(\sum_{i=1}^{n}\frac{4\cos(k\pi U_{i})\cos(k\pi U_{i-1})\cos(j\pi U_{i})\cos(j\pi U_{i-1})}{c^{2}(U_{i},U_{i-1})}\right)_{1\leq k,j\leq 2},

where c(u,v) is the density of the sine copula.

2. For the sine-cosine copula: I_n(Λ) = (a_{j,k})_{1≤j,k≤4}, where, with C_k(x) = cos(2πkx) and S_k(x) = sin(2πkx),

a_{1,1}=\sum_{i=1}^{n}\dfrac{4C^{2}_{1}(U_{i})C^{2}_{1}(U_{i-1})}{c^{2}(U_{i},U_{i-1})};\quad a_{1,2}=\sum_{i=1}^{n}\dfrac{4C_{1}(U_{i})S_{1}(U_{i})C_{1}(U_{i-1})S_{1}(U_{i-1})}{c^{2}(U_{i},U_{i-1})};

a_{1,3}=\sum_{i=1}^{n}\dfrac{4C_{1}(U_{i})C_{2}(U_{i})C_{1}(U_{i-1})C_{2}(U_{i-1})}{c^{2}(U_{i},U_{i-1})};\quad a_{1,4}=\sum_{i=1}^{n}\dfrac{4C_{1}(U_{i})S_{2}(U_{i})C_{1}(U_{i-1})S_{2}(U_{i-1})}{c^{2}(U_{i},U_{i-1})};

a_{2,2}=\sum_{i=1}^{n}\dfrac{4S^{2}_{1}(U_{i})S^{2}_{1}(U_{i-1})}{c^{2}(U_{i},U_{i-1})};\quad a_{2,3}=\sum_{i=1}^{n}\dfrac{4C_{2}(U_{i})S_{1}(U_{i})C_{2}(U_{i-1})S_{1}(U_{i-1})}{c^{2}(U_{i},U_{i-1})};

a_{2,4}=\sum_{i=1}^{n}\dfrac{4S_{2}(U_{i})S_{1}(U_{i})S_{2}(U_{i-1})S_{1}(U_{i-1})}{c^{2}(U_{i},U_{i-1})};\quad a_{3,3}=\sum_{i=1}^{n}\dfrac{4C^{2}_{2}(U_{i})C^{2}_{2}(U_{i-1})}{c^{2}(U_{i},U_{i-1})};

a_{3,4}=\sum_{i=1}^{n}\dfrac{4C_{2}(U_{i})S_{2}(U_{i})C_{2}(U_{i-1})S_{2}(U_{i-1})}{c^{2}(U_{i},U_{i-1})};\quad a_{4,4}=\sum_{i=1}^{n}\dfrac{4S^{2}_{2}(U_{i})S^{2}_{2}(U_{i-1})}{c^{2}(U_{i},U_{i-1})}.

3. For the Legendre copula: I_n(Λ) = (a_{j,k})_{1≤j,k≤2}, where, with L(i) = 2U_i − 1 and P(i) = 6U_i² − 6U_i + 1,

a_{1,1}=\sum_{i=1}^{n}\dfrac{9L^{2}(i)L^{2}(i-1)}{c^{2}(U_{i},U_{i-1})};\quad a_{2,2}=\sum_{i=1}^{n}\dfrac{25P^{2}(i)P^{2}(i-1)}{c^{2}(U_{i},U_{i-1})}\quad\text{and}\quad a_{1,2}=\sum_{i=1}^{n}\dfrac{15L(i)L(i-1)P(i)P(i-1)}{c^{2}(U_{i},U_{i-1})}.
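For concreteness, the R sketch below evaluates I_n(Λ) of case 1 at a given parameter vector (for instance the MLE) and returns the interval (22) for λ₁; fisher_ci is our helper name.

    # Sketch (R): observed information (Remark 3, case 1) and the CI (22)
    # for lambda_1 of the sine copula, evaluated at par = (l1, l2).
    fisher_ci <- function(U, par, alpha = 0.05) {
      n  <- length(U) - 1
      x1 <- U[-1]; x0 <- U[-(n + 1)]
      dens <- 1 + 2*par[1]*cos(pi*x1)*cos(pi*x0) +
                  2*par[2]*cos(2*pi*x1)*cos(2*pi*x0)
      g <- cbind(2*cos(pi*x1)*cos(pi*x0),       # scores d(ln c)/d(lambda_k)
                 2*cos(2*pi*x1)*cos(2*pi*x0)) / dens
      In <- t(g) %*% g                          # I_n(Lambda) of Remark 3
      se <- sqrt(diag(solve(In)))
      z  <- qnorm(1 - alpha/2)
      c(lower = par[1] - z*se[1], upper = par[1] + z*se[1])
    }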

4.5 The robust estimator of Longla and Peligrad (2021)

The robust estimator defined in Longla and Peligrad (2021) uses an independent random sample to reduce the effect of dependence. It makes use of kernel estimators to derive a central limit theorem that does not require knowledge of the variance of partial sums. For each of the three considered Markov chains, we consider the sequence Y_k = (Y_{k1},⋯,Y_{kn}), where

Y_{ki}=\varphi_{k}(U_{i})\varphi_{k}(U_{i-1}),\quad\mu_{Y_{k}}=\mathbb{E}Y_{ki}=\lambda_{k}. (23)

This sequence is a function of the stationary Markov chain (ξ₀,⋯,ξ_{n−1}), with ξ_i = (U_i, U_{i+1}). Following Longla and Peligrad (2021) and considering the Gaussian kernel, the robust estimator of μ_{Y_k} is

\tilde{\lambda}_{k}=\dfrac{1}{nh_{n}}\sum_{i=1}^{n}Y_{ki}\exp\left(-0.5\left(\dfrac{X_{i}}{h_{n}}\right)^{2}\right), (24)

where X_i ∼ 𝒩(0,1) is the independently generated random sample, the bandwidth sequence is h_{n}=\left[\dfrac{\overline{Y_{k}^{2}}}{\overline{Y_{k}}^{2}\,n\sqrt{2}}\right]^{1/5}, and the means of Y_{k,i} and Y²_{k,i} are estimated by \overline{Y_{k}}=\dfrac{1}{n}\sum_{i=1}^{n}Y_{k,i} and \overline{Y_{k}^{2}}=\dfrac{1}{n}\sum_{i=1}^{n}Y_{k,i}^{2}, leading to the (1−α)100% confidence interval

\left(\tilde{\lambda}_{k}\sqrt{1+h_{n}^{2}}-z_{\alpha/2}\left(\dfrac{\overline{Y_{k}^{2}}}{nh_{n}\sqrt{2}}\right)^{1/2},\ \tilde{\lambda}_{k}\sqrt{1+h_{n}^{2}}+z_{\alpha/2}\left(\dfrac{\overline{Y_{k}^{2}}}{nh_{n}\sqrt{2}}\right)^{1/2}\right). (25)
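A compact R sketch of (24)-(25) for the sine basis is given below; robust_est is our name, and the basis choice is again only an example.

    # Sketch (R): robust estimator (24) and interval (25), Gaussian kernel.
    robust_est <- function(U, k, alpha = 0.05) {
      n   <- length(U) - 1
      phi <- sqrt(2) * cos(k * pi * U)     # sine-copula basis, as an example
      Y   <- phi[-1] * phi[-(n + 1)]
      h   <- (mean(Y^2) / (mean(Y)^2 * n * sqrt(2)))^(1/5)  # bandwidth h_n
      X   <- rnorm(n)                      # independent N(0,1) sample
      lam <- sum(Y * exp(-0.5 * (X / h)^2)) / (n * h)       # tilde-lambda_k
      half <- qnorm(1 - alpha/2) * sqrt(mean(Y^2) / (n * h * sqrt(2)))
      c(estimate = lam, lower = lam * sqrt(1 + h^2) - half,
        upper = lam * sqrt(1 + h^2) + half)
    }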
Remark 4.

Given that the means of Y_{k,i} and Y²_{k,i} can also be computed as

\hat{\mathbb{E}}[Y_{k,i}]=\hat{\lambda}_{k}\quad\text{and}\quad\hat{\mathbb{E}}[Y^{2}_{k,i}]=\hat{\mathbb{E}}(\varphi^{2}_{k}(U_{0})\varphi^{2}_{k}(U_{1}))=1+\sum_{z=1}^{s}\hat{\lambda}_{z}\left(\int_{0}^{1}\varphi_{z}(x)\varphi_{k}^{2}(x)\,dx\right)^{2},

the bandwidth sequence can be taken as \hat{h}_{n,k}=\left[\dfrac{\hat{\mathbb{E}}[Y^{2}_{k,i}]}{\hat{\mathbb{E}}^{2}[Y_{k,i}]\,n\sqrt{2}}\right]^{1/5}, and the estimator of λ_k becomes

\bar{\lambda}_{k}=\dfrac{1}{n\hat{h}_{n,k}}\sum_{i=1}^{n}Y_{k,i}\exp\left(-0.5\left(\dfrac{X_{i}}{\hat{h}_{n,k}}\right)^{2}\right), (26)

which leads to the following (1−α)100% confidence interval:

\left(\bar{\lambda}_{k}\sqrt{1+\hat{h}_{n,k}^{2}}-z_{\alpha/2}\left(\dfrac{\hat{\mathbb{E}}[Y^{2}_{k,i}]}{n\hat{h}_{n,k}\sqrt{2}}\right)^{1/2},\ \bar{\lambda}_{k}\sqrt{1+\hat{h}_{n,k}^{2}}+z_{\alpha/2}\left(\dfrac{\hat{\mathbb{E}}[Y^{2}_{k,i}]}{n\hat{h}_{n,k}\sqrt{2}}\right)^{1/2}\right), (27)

where the following hold (an R sketch of (26) is given after this list).

  1. For the sine copula:
    𝔼̂[Y²_{1,i}] = 1 + λ̂₂/2, 𝔼̂[Y²_{2,i}] = 1, ĥ_{n,1} = [(2+λ̂₂)/(2n√2 λ̂₁²)]^{1/5} and ĥ_{n,2} = [1/(n√2 λ̂₂²)]^{1/5}.

  2. For the sine-cosine copula:
    𝔼̂[Y²_{1,i}] = 1 + λ̂₂/2, 𝔼̂[Y²_{2,i}] = 1, 𝔼̂[Y²_{3,i}] = 1 + λ̂₂/2, 𝔼̂[Y²_{4,i}] = 1,
    ĥ_{n,1} = [(2+λ̂₂)/(2n√2 λ̂₁²)]^{1/5}, ĥ_{n,2} = [1/(n√2 λ̂₂²)]^{1/5}, ĥ_{n,3} = [(2+λ̂₂)/(2n√2 μ̂₁²)]^{1/5} and ĥ_{n,4} = [1/(n√2 μ̂₂²)]^{1/5}.

  3. For the Legendre copula:
    𝔼̂[Y²_{1,i}] = 1 + (4/5)λ̂₂, 𝔼̂[Y²_{2,i}] = 1 + (20/49)λ̂₂, ĥ_{n,1} = [(5+4λ̂₂)/(5n√2 λ̂₁²)]^{1/5} and ĥ_{n,2} = [(49+20λ̂₂)/(49n√2 λ̂₂²)]^{1/5}.
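The variant (26) with the plug-in bandwidth is sketched below for λ₁ of the sine copula (case 1 above); robust_plugin and lambda_hat are our helper names.

    # Sketch (R): estimator (26) with the plug-in bandwidth h_{n,1} of case 1.
    robust_plugin <- function(U) {
      n   <- length(U) - 1
      phi <- sqrt(2) * cos(pi * U)
      Y   <- phi[-1] * phi[-(n + 1)]
      l1  <- lambda_hat(U, 1); l2 <- lambda_hat(U, 2)
      h   <- ((2 + l2) / (2 * n * sqrt(2) * l1^2))^(1/5)  # hat-h_{n,1}
      X   <- rnorm(n)
      sum(Y * exp(-0.5 * (X / h)^2)) / (n * h)            # bar-lambda_1
    }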

4.6 Data simulation and analysis of the examples

We consider copulas with the following parameter values for the generation of Markov chains using the algorithm given above.

  1. For the sine copula, λ₁ = 0.28 and λ₂ = −0.15;

  2. For the sine-cosine copula, λ₁ = 0.14, λ₂ = 0.13, μ₁ = −0.11, μ₂ = −0.12;

  3. For the Legendre copula, λ₁ = 0.15 and λ₂ = 0.1.

The obtained data is then used to estimate the parameters and construct confidence intervals for the three cases studied above. We also provide comparisons between the MLE and the estimators Λ̂ and Λ̃. The confidence intervals are based on formulas (16), (22) and (25). For the MLE, we have implemented an algorithm in R using the significance level α = .05. The graphs are obtained using Matlab.

4.6.1 Graphs of the log-likelihood functions

Remark 5.

Comments on the log-likelihood graphs

  1. The log-likelihood function of the sine copula in Figure 5 is obtained using the Markov chain generated with parameter Λ = (0.14, −0.8);

  2. For the sine-cosine copula, we use the Markov chain generated with parameter Λ = (0.07, 0.065, −0.055, −0.06). In order to obtain the representation in Figure 6, we have fixed μ₁ and μ₂ in the likelihood function;

  3. The log-likelihood function of the Legendre copula shown in Figure 7 uses the Markov chain generated with parameter Λ = (0.075, 0.05).

In each of the cases, the contour plot of the log-likelihood function exhibits evident uniqueness of the MLE. This is shown by the spectral colors of the level curves stretching towards a single interior point. This observation provides reassurance regarding the reliability of the MLE and confirms the theoretical insights.

Figure 5: Log-likelihood function under the sine copula.
Figure 6: Log-likelihood function under the sine-cosine copula.
Figure 7: Log-likelihood function under the Legendre copula.

4.6.2 Analysis of simulation results

For every given sample size, a data set has been simulated, representing a Markov chain generated by the considered copula and the uniform marginal distribution. Computations have been carried out for each of the copulas in R. After running each of these simulations 100 times, we have counted the number of intervals containing the true value of the parameter to obtain the coverage probability, which should be about 95 out of the 100 constructed intervals per case. We also calculated the mean length of the confidence intervals. Conclusions of the simulation study are given in the remark below. In the tables, we use CP for coverage probability and CIML for the confidence interval's mean length.
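The experiment just described can be sketched in R as follows, combining the helpers sim_chain and ci_lambda1 from the previous sections (sine copula, λ₁ shown); the counts and mean lengths reported in the tables were obtained with this kind of loop.

    # Sketch (R): coverage probability (CP) and mean CI length (CIML)
    # over 100 replications, for lambda_1 of the sine copula.
    coverage <- function(n = 4999, l1 = 0.28, l2 = -0.15, reps = 100) {
      hit <- len <- numeric(reps)
      for (r in 1:reps) {
        U  <- sim_chain(n, l1, l2)
        ci <- ci_lambda1(U)
        hit[r] <- (ci["lower"] <= l1 && l1 <= ci["upper"])
        len[r] <- ci["upper"] - ci["lower"]
      }
      c(CP = sum(hit), CIML = mean(len))
    }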

Sine-cosine copula with λ₁=0.14, λ₂=0.13, μ₁=−0.11, μ₂=−0.12:

Estimator             n=4999          n=9999          n=19999
                      CP    CIML      CP    CIML      CP    CIML
λ̂₁ = 0.1504          97    0.0523    97    0.0369    95    0.0256
λ̂₁^ML = 0.1349       98    0.0543    98    0.0361    98    0.0255
λ̃₁ = 0.1278          98    0.0517    93    0.0525    95    0.0407
λ̄₁ = 0.1280          96    0.0549    95    0.0560    93    0.0415
λ̂₂ = 0.1339          98    0.0511    97    0.0380    98    0.0272
λ̂₂^ML = 0.1315       95    0.0497    98    0.0362    97    0.0253
λ̃₂ = 0.1281          94    0.0509    92    0.0515    94    0.0398
λ̄₂ = 0.1130          90    0.0529    99    0.0575    94    0.0413
μ̂₁ = −0.1039         96    0.0732    91    0.0347    93    0.0250
μ̂₁^ML = −0.1023      95    0.0694    95    0.0360    93    0.0249
μ̃₁ = 0.0945          92    0.0665    92    0.0506    92    0.0384
μ̄₁ = −0.1067         93    0.0702    96    0.0522    96    0.0396
μ̂₂ = −0.1223         99    0.0744    94    0.0368    90    0.0249
μ̂₂^ML = −0.1233      99    0.0683    95    0.0359    93    0.0248
μ̃₂ = −0.1042         95    0.0662    95    0.0530    93    0.0395
μ̄₂ = −0.1294         93    0.0678    93    0.0513    93    0.0392
Table 1: Estimates, coverage probabilities and mean lengths.
Remark 6 (On simulations).

The following summarizes the results of the conducted simulations.

  1. In Table 1, the estimates are provided for a sample size of 10000. For Table 2, we utilize a sample size of 5000, while for Table 3, a sample size of 20000 is considered for the estimates.

  2. The MLE is obtained through R's constrOptim function. This function is initialized at (0,0) for the sine and Legendre copulas and at (0,0,0,0) for the sine-cosine copula. Thanks to the function's flexibility, we successfully optimized the likelihood function while maintaining the linear inequality constraints.

  3. For each type of copula, the proposed estimator Λ̂ challenges the MLE by demonstrating its effectiveness in providing reliable parameter estimates in situations involving small sample sizes. Furthermore, it exhibits an increased level of precision as the sample size grows larger.

    Legendre copula with λ₁=0.15, λ₂=0.1:

    Estimator            n=4999          n=9999          n=19999
                         CP    CIML      CP    CIML      CP    CIML
    λ̂₁ = 0.1511         93    0.0522    94    0.0387    94    0.0274
    λ̂₁^ML = 0.1512      96    0.0535    97    0.0383    96    0.0268
    λ̃₁ = 0.1309         92    0.0737    97    0.0586    91    0.0418
    λ̄₁ = 0.1555         92    0.0733    95    0.0572    93    0.0426
    λ̂₂ = 0.1101         88    0.0502    89    0.0359    92    0.0234
    λ̂₂^ML = 0.1081      99    0.0539    99    0.0381    91    0.0248
    λ̃₂ = 0.1042         90    0.0665    91    0.0509    88    0.0373
    λ̄₂ = 0.1042         90    0.0600    91    0.0476    93    0.0375
    Table 2: Estimates, coverage probabilities and mean lengths.
  4. One significant advantage of Λ̂ over Λ^ML is that the latter requires more computational time, due to the absence of a closed form of the Fisher information matrix for the MLE, whereas the variances of the estimators of our proposed method have a simple form for the considered copulas. For each of the cases, Λ̂ has an explicit expression for both the estimate and the confidence interval, resulting in reduced computation time and fewer errors in the algorithm.

    Sine copula with λ₁=0.28, λ₂=−0.15:

    Estimator            n=4999          n=9999          n=19999
                         CP    CIML      CP    CIML      CP    CIML
    λ̂₁ = 0.2998         94    0.0417    98    0.0241    97    0.0127
    λ̂₁^ML = 0.2991      95    0.0298    96    0.0084    97    0.0013
    λ̃₁ = 0.2784         91    0.0662    97    0.0455    91    0.0290
    λ̄₁ = 0.3078         97    0.0699    98    0.0491    97    0.0334
    λ̂₂ = −0.1354        97    0.0235    95    0.0058    96    0.0003
    λ̂₂^ML = −0.1438     97    0.0044    94    0.0003    95    0.0002
    λ̃₂ = −0.1213        98    0.0356    94    0.0202    96    0.0071
    λ̄₂ = −0.1320        97    0.0383    93    0.0236    95    0.0053
    Table 3: Estimates, coverage probabilities and mean lengths.
  5. In Table 3, the coverage probabilities of the MLE are unexpectedly high for small sample sizes, given the specified significance level. This can be interpreted as an instance of over-coverage, indicating that the MLE provides intervals that are wider than necessary to achieve the desired level of confidence.

  6. The alternative estimator Λ̄, in comparison to Λ̃, offers improved point estimates and coverage probabilities for each copula. It demonstrates enhanced accuracy in estimating the true parameter values and achieves more reliable coverage probabilities within the considered framework.

5 Conclusions and remarks

This paper studies copula-based Markov chains with uniform marginals. These Markov chains are based, in general, on the copulas of Longla (2023). This is the first study of the properties of estimators of the parameters of these copulas. A multivariate central limit theorem is provided for the copula parameter estimators. These copulas have densities of the general form (1), which eases the investigation of asymptotic properties via mixing. The central limit theorem has been applied to the examples of sine copulas, sine-cosine copulas and Legendre copulas introduced in Longla (2023). The proposed estimators of the parameters have been compared to the robust estimators of Longla and Peligrad (2021). The theorem has been used to show that our estimators perform better than the MLE for sample sizes above 5000.

The simulation study, performed using R, shows that it takes more time to run the MLE procedures than their equivalents for our proposed estimators. Moreover, the possibility of several points attaining the maximum of the likelihood function, and the difficulty of obtaining the true variance of the MLE, are indicators that our estimators are better tools. Note that long iterations are needed to approximate the integrals that lead to the variance-covariance matrix of the asymptotic distribution of the MLE, while the variance-covariance matrix of our estimator is known in closed form. It is also worth noting that on large samples, the coverage probabilities also indicate that the MLE is not better.

One of the main takeaways of this work is the theoretical proof of the large-sample properties of our estimators and the MLE for Markov chains generated by copulas with densities of the form (1). This opens a path to several other questions that can be investigated for these copula families. Future research on the topic includes the use of these theorems to investigate tests of equality of two or more bivariate distributions based on realizations of the Markov chains that each of them generates. We also consider working on general estimation problems when the functions φ_k(x) have a parameter. All these issues can also be included in problems with a general marginal distribution. The marginal distribution can be considered in a parametric or non-parametric form. All these questions are interesting and not obvious, and are among our topics of current research.

Another extension that can be considered here is work on such questions for perturbations of a copula other than the independence copula. The theoretical work on this last topic is a little more complicated, because orthogonality becomes more complex.

Statements and declarations

All authors certify that they have no affiliations with or involvement in any organization or entity with any financial or non-financial interest in the subject matter or materials discussed in this manuscript. They have no competing interests and no funding to disclose.

References

  • (1) K. Aas, C. Czado, A. Frigessi, and H. Bakken (2009). Pair-copula constructions of multiple dependence. Insur. Math. Econ. 44: 182–198.
  • (2) V. Arakelian, D. Karlis (2014). Clustering Dependencies Via Mixtures of Copulas. Communications in Statistics - Simulation and Computation, 43:7, 1644–1661.
  • (3) B. K. Beare (2010). Copulas and temporal dependence. Econometrica 78 , no. 1, 395–410. MR2642867
  • (4) S.L. Belousov (1962). Tables of Normalized Associated Legendre Polynomials. Mathematical Tables. 18. Pergamon Press. MR0152679
  • (5) P. Billingsley (1961). Statistical inference for Markov processes. The University of Chicago Press, Chicago.
  • (6) R.C. Bradley (2007). Introduction to Strong Mixing Conditions. Vol. 1,2, Kendrick Press. MR2325294 MR2325295
  • (7) X. Chen, Y. Fan (2006). Estimation of copula-based semiparametric time series models. J. Econometrics 130 307–335.
  • (8) C. Chesneau (2021). On New Types of Multivariate Trigonometric Copulas. AppliedMath 1, 3–17.
  • (9) H. Cossette, M.P. Cote, E. Marceau, K. Moutanabbir (2013). Multivariate distribution defined with Farlie-Gumbel-Morgenstern copula and mixed Erlang marginals: Aggregation and capital allocation. Insurance: Mathematics and Economics, 52(3), 560–572. MR3054748
  • (10) W. F. Darsow, B. Nguyen, E. T. Olsen (1992). Copulas and Markov processes. Illinois journal of mathematics 36(4) 600–642. MR1215798
  • (11) F. Durante, C. Sempi (2016). Principles of Copula Theory. CRC Press. MR3443023
  • (12) R. Ebaid, W. Elbadawy, E. Ahmed, A. Abdelghaly (2022). A new extension of the FGM copula with an application in reliability. Communications in Statistics - Theory and Methods, 51:9, 2953–2961.
  • (13) D.J. Farlie (1960). The performance of some correlation coefficients for a general bivariate distribution. Biometrika, 47(3–4), 307–323. MR0119312
  • (14) E.J. Gumbel (1960). Bivariate exponential distributions. Journal of the American Statistical Association, 55(292), 698–707. MR0116403
  • (15) O. Haggstrom, J.S. Rosenthal (2007). On variance conditions for Markov chain CLTs. Elect. Comm. in Probab. 12, 454–464. MR2365647
  • (16) M. Hossain, S. Anam (2012). Identifying the dependency pattern of daily rainfall of Dhaka station in Bangladesh using Markov chain and logistic regression model. Agricultural Sciences, 3, 385–391. doi: 10.4236/as.2012.33045.
  • (17) N.L. Johnson, S. Kotz (1975). On some generalized Farlie–Gumbel–Morgenstern distributions. Communications in Statistics, 4(5), 415–427. MR0373155
  • (18) W. Hurlimann (2017). A comprehensive extension of the FGM copula. Stat Papers 58, 373–392. https://doi.org/10.1007/s00362-015-0703-1.
  • (19) E.P. Klement, A. Kolesárová, R. Mesiar, S. Saminger-Platz (2017). Copula constructions using ultramodularity. In: Úbeda Flores, M., de Amo Artero, E., Durante, F., Fernández Sánchez, J. (eds) Copulas and Dependence Models with Applications. Springer, Cham.
  • (20) J. Komornik, M. Komornikova, J. Kalicka (2017). Dependence measures for perturbations of copulas. Fuzzy Sets and Systems 324 100–116. MR3685505
  • (21) C. Kipnis, S. R. S. Varadhan (1986). Central limit theorem for additive functionals of reversible Markov processes and applications to simple exclusions. Comm. Math. Phys. 104(1): 1-19. MR0834478
  • (22) K. Kuljus, B. Ranneby (2021). Maximum spacing estimation for continuous time Markov chains and semi-Markov processes. Stat Inference Stoch Process 24, 421–443. https://doi.org/10.1007/s11203-021-09238-4.
  • (23) M. Longla (2023). New copula families and mixing properties. Under review, https://arxiv.org/abs/2308.11074.
  • (24) M. Longla, F. Djongreba Ndikwa, M. Muia Nthiani, P. Takam Soh (2022). Perturbations of copulas and mixing properties. Journal of the Korean Statistical Society, 51, no. 1, 149–171. MR4392469
  • (25) M. Longla, M. Muia Nthiani, F. Djongreba Ndikwa (2022b). Dependence and mixing for perturbations of copula-based Markov chains. Statistics and Probability Letters 180, Paper No. 109239, 8 pp. MR4316068
  • (26) M. Longla, H. Mous-Abou, I.S. Ngongo (2022c). On some mixing properties of copula-based Markov chains. Journal of Statistical Theory and Applications 21, 131–154.
  • (27) M. Longla (2014). On dependence structure of copula-based Markov chains. ESAIM: Probability and Statistics 18, 570–583. MR3334004
  • (28) M. Longla, M. Peligrad (2021). New robust confidence intervals for the mean under dependence. Journal of Statistical Planning and Inference, 211, 90–106. https://doi.org/10.1016/j.jspi.2020.06.001.
  • (29) M. Longla (2015). On mixtures of copulas and mixing coefficients. J. Multivariate Anal. 139, 259–265. MR3349491
  • (30) D. Morgenstern (1956). Einfache Beispiele zweidimensionaler Verteilungen. Mitteilungsblatt für Mathematische Statistik, 8, 234–235. MR0081575
  • (31) P.M. Morillas (2005). A method to obtain new copulas from a given one. Metrika, 61, 169–184.
  • (32) J. Navarro, F. Durante (2017). Copula-based representations for the reliability of the residual lifetimes of coherent systems with dependent components. Journal of Multivariate Analysis, 158, 87–102. MR3651375
  • (33) R.B. Nelsen (2006). An Introduction to Copulas, second edition, Springer Series in Statistics, Springer-Verlag, New York. MR2197664
  • (34) S. Ota, M. Kimura (2021). Effective estimation algorithm for parameters of multivariate Farlie–Gumbel–Morgenstern copula. Jpn J Stat Data Sci 4, 1049–1078. MR4340981
  • (35) A.J. Patton (2006). Estimation of multivariate models for time series of possibly different lengths. Journal of Applied Econometrics, 21(2), 147–173. MR2226593
  • (36) O. Rodrigues (1816). Mémoire sur l'attraction des sphéroïdes. Correspondance sur l'École Polytechnique, 3, 361–385.
  • (37) A. Sklar (1959). Fonctions de répartition à n dimensions et leurs marges. Publ. Inst. Statist. Univ. Paris, 8, 229–231. MR0125600
  • (38) Li-H. Sun, X-W. Huang, M. S. Alqawba, J-M. Kim, T. Emura (2020). Copula-Based Markov Models for Time Series, Parametric Inference and Process Control, Springer Nature Singapore Pte Ltd.
  • (39) G. Szegö (1975). Orthogonal polynomials. Amer. Math. Soc., Providence, Rhode Island.
  • (40) J. Zhang, J. Wang, Z. Tai, X. Dong (2023). A study of binomial AR(1)AR(1) process with an alternative generalized binomial thinning operator. J. Korean Stat. Soc. 52, 110–129. https://doi.org/10.1007/s42952-022-00193-1

6 Appendix of proofs of major results

6.1 Proof of Theorem 3.1

Assume U_0,U_1,\cdots,U_n is a copula-based Markov chain generated by a copula of the form (1). The stationary distribution of the Markov chain is uniform on (0,1). By construction, \mathbb{E}(\varphi_j(U_i))=0 and \mathbb{E}(\varphi_j(U_i)\varphi_l(U_i))=\delta_{jl} for any U_i, where \delta_{jl}=1 when j=l and \delta_{jl}=0 when j\neq l. It follows that \mathbb{E}(\hat{\lambda}_k)=\lambda_k. Moreover,

var(\hat{\lambda}_k)=\frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n}cov(\varphi_k(U_i)\varphi_k(U_{i-1}),\varphi_k(U_j)\varphi_k(U_{j-1}))
=\frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n}\mathbb{E}(\varphi_k(U_i)\varphi_k(U_{i-1})\varphi_k(U_j)\varphi_k(U_{j-1}))-\lambda_k^2.

Using stationarity and reversibility, we obtain

var(\hat{\lambda}_k)=\frac{1}{n}\mathbb{E}(\varphi^2_k(U_1)\varphi^2_k(U_0))+\frac{2(n-1)}{n^2}\mathbb{E}(\varphi_k(U_0)\varphi^2_k(U_1)\varphi_k(U_2))+
+\frac{1}{n^2}\sum_{s=2}^{n-1}2(n-s)\mathbb{E}(\varphi_k(U_0)\varphi_k(U_1)\varphi_k(U_s)\varphi_k(U_{s+1}))-\lambda_k^2.

To end these computations, we need the three joint distributions of (U_0,U_1,U_2), (U_0,U_1,U_s) and (U_0,U_1,U_s,U_{s+1}) for s\geq 2. The following lemma can be easily established using the definitions.

Lemma 1.

If (U_0,U_1,\cdots,U_n) is a copula-based Markov chain generated by the copula with density (1), then the following holds:

  1. The density of (U_0,U_1,U_2) is
     c(u_0,u_1,u_2)=c(u_0,u_1)c(u_1,u_2)=1+\sum\lambda_m\varphi_m(u_0)\varphi_m(u_1)+\sum\lambda_z\varphi_z(u_1)\varphi_z(u_2)+\sum\sum\lambda_m\lambda_z\varphi_m(u_0)\varphi_m(u_1)\varphi_z(u_1)\varphi_z(u_2).

  2. \mathbb{E}(\varphi_k(U_0)\varphi^2_k(U_1)\varphi_k(U_2))=\lambda_k^2\int_0^1\varphi^4_k(x)dx.

  3. \mathbb{E}(\varphi^2_k(U_1)\varphi^2_k(U_0))=1+\sum\lambda_z(\int_0^1\varphi_z(x)\varphi_k^2(x)dx)^2.

  4. The joint density of (U_0,U_1,U_s,U_{s+1}) is
     c(u_0,u_1,u_s,u_{s+1})=c(u_0,u_1)(1+\sum\lambda_z^{s-1}\varphi_z(u_1)\varphi_z(u_s))c(u_s,u_{s+1}).

  5. \mathbb{E}(\varphi_k(U_0)\varphi_k(U_1)\varphi_k(U_s)\varphi_k(U_{s+1})\,|\,U_0,U_1,U_s)=\lambda_k\varphi_k(U_0)\varphi_k(U_1)\varphi^2_k(U_s).

  6. The joint density of (U_0,U_1,U_s) is
     c(u_0,u_1,u_s)=c(u_0,u_1)(1+\sum\lambda_z^{s-1}\varphi_z(u_1)\varphi_z(u_s)).

  7. \mathbb{E}(\varphi_k(U_0)\varphi_k(U_1)\varphi_k(U_s)\varphi_k(U_{s+1}))=\lambda_k\mathbb{E}(\varphi_k(U_0)\varphi_k(U_1)\varphi^2_k(U_s))=\lambda_k^2\mathbb{E}(\varphi^2_k(U_1)\varphi^2_k(U_s)).

  8. \mathbb{E}(\varphi_k(U_0)\varphi_k(U_1)\varphi_k(U_s)\varphi_k(U_{s+1}))=\lambda_k^2(1+\sum\lambda^{s-1}_z(\int_0^1\varphi_z(x)\varphi_k^2(x)dx)^2).

Based on Lemma 1, we can conclude that

var(\hat{\lambda}_k)=\frac{1}{n}(1-\lambda_k^2+\sum\lambda_z(\int_0^1\varphi_z(x)\varphi_k^2(x)dx)^2)+\frac{2(n-1)\lambda_k^2}{n^2}(\int_0^1\varphi^4_k(x)dx-1)
+\sum_{s=2}^{n-1}\frac{2(n-s)\lambda_k^2}{n^2}(\sum\lambda^{s-1}_z(\int_0^1\varphi_z(x)\varphi_k^2(x)dx)^2). (28)

Therefore, knowing that \sum|\lambda_z|(\int_0^1\varphi_z(x)\varphi_k^2(x)dx)^2=K<\infty and \int_0^1\varphi_k^4(x)dx<\infty, we have

\lim_{n\to\infty}var(\hat{\lambda}_k)=\lim_{n\to\infty}\frac{2\lambda_k^2}{n^2}\sum_{s=3}^{n-1}(n-s)(\sum\lambda^{s-1}_z(\int_0^1\varphi_z(x)\varphi_k^2(x)dx)^2). (29)

Provided that \underline{\lambda}=\sup_z|\lambda_z|<1, we obtain

\lim_{n\to\infty}var(\hat{\lambda}_k)\leq\lim_{n\to\infty}\frac{2\lambda_k^2}{n^2}\sum_{s=3}^{n-1}(n-s)\underline{\lambda}^{s-2}(\sum|\lambda_z|(\int_0^1\varphi_z(x)\varphi_k^2(x)dx)^2),
\lim_{n\to\infty}var(\hat{\lambda}_k)\leq\lim_{n\to\infty}\frac{2K\lambda_k^2}{n^2}\sum_{s=3}^{n-1}(n-s)\underline{\lambda}^{s-2}.

Moreover, the last sum can be written as

\sum_{s=3}^{n-1}(n-s)\underline{\lambda}^{s-2}=n\sum_{s=3}^{n-1}\underline{\lambda}^{s-2}-\frac{1}{\underline{\lambda}}\frac{d}{d\underline{\lambda}}(\sum_{s=3}^{n-1}\underline{\lambda}^{s})=\frac{n\underline{\lambda}(1-\underline{\lambda}^{n-3})}{1-\underline{\lambda}}-\frac{1}{\underline{\lambda}}\frac{d}{d\underline{\lambda}}\frac{\underline{\lambda}^{3}(1-\underline{\lambda}^{n-3})}{1-\underline{\lambda}},

where the second term remains bounded as n\to\infty because \underline{\lambda}<1. Thus, if \sup_z|\lambda_z|<1, then for n>2 this sum behaves like n\underline{\lambda}/(1-\underline{\lambda}). Therefore \lim_{n\to\infty}var(\hat{\lambda}_k)=0. This implies that, under the conditions of the statement, \hat{\lambda}_k is a consistent estimator of \lambda_k. Moreover, from this proof, it follows that nvar(\hat{\lambda}_k)\to\sigma^2 with 0<\sigma^2<\infty. We can also establish easily that Y_i=(U_{i-1},U_i), i=1,\cdots,n, forms a reversible Markov chain. The variables Z_i=\varphi_k(U_i)\varphi_k(U_{i-1}):=f(Y_i) satisfy n^{-1}var(\sum Z_i)\to\sigma^2 because \sum Z_i=n\hat{\lambda}_k. Therefore, the Kipnis and Varadhan (1986) central limit theorem holds for \hat{\lambda}_k.
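A quick Monte Carlo check of this conclusion can be run in R. The sketch below (our own illustration, reusing phi1 and rnext from the earlier snippet, with assumed parameter values) compares n var(\hat{\lambda}_1) across replicated chains with the sine-copula limit \sigma_1^2 derived in the proof of Proposition 4.1:

set.seed(2); R <- 500; n <- 2000; l1 <- 0.3; l2 <- 0.2
est <- replicate(R, {
  U <- numeric(n + 1); U[1] <- runif(1)
  for (i in 2:(n + 1)) U[i] <- rnext(U[i - 1], l1, l2)
  mean(phi1(U[-(n + 1)]) * phi1(U[-1]))       # hat(lambda)_1 for one chain
})
n * var(est)                        # Monte Carlo estimate of n*var(hat(lambda)_1)
1 + l2 / 2 + l1^2 * l2 / (1 - l2)   # theoretical limit sigma_1^2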

6.2 Proof of Theorem 3.2

Assume (U_0,U_1,\cdots,U_n) is a copula-based Markov chain generated by a copula of the form (1). It is already clear that each \hat{\lambda}_k is a consistent estimator of \lambda_k. To prove this theorem, it remains to show, by the Cramér-Wold device, that for every {\bf t}=(t_1,\cdots,t_s)', \sqrt{n}({\bf t}'\hat{\Lambda}-{\bf t}'\Lambda)\to N(0,{\bf t}'\Sigma{\bf t}). By the Kipnis and Varadhan central limit theorem for reversible Markov chains, to complete this proof, we need to show that nvar({\bf t}'\hat{\Lambda})\to{\bf t}'\Sigma{\bf t} as n\to\infty. But it is easy to see that

nvar({\bf t}'\hat{\Lambda})=\sum_{j=1}^{s}\sum_{i=1}^{s}t_it_j\, ncov(\hat{\lambda}_{k_i},\hat{\lambda}_{k_j}).

Therefore, to prove the central limit theorem, it is enough to show that ncov(\hat{\lambda}_{k_i},\hat{\lambda}_{k_j})\to\Sigma_{ij} as n\to\infty. As we have already shown this convergence for i=j, it is enough to conduct the proof for i\neq j. But,

ncov(\hat{\lambda}_{k_i},\hat{\lambda}_{k_j})=\frac{1}{n}\sum_{p=1}^{n}\sum_{q=1}^{n}cov(\varphi_{k_i}(U_{p-1})\varphi_{k_i}(U_p),\varphi_{k_j}(U_{q-1})\varphi_{k_j}(U_q)).

To compute this covariance, we consider 4 separate cases (p=q, p=q-1, q=p-1, |p-q|=m>1). The cases p=q-1 and q=p-1 are equivalent due to symmetry.

  1. For p=q and i\neq j, using stationarity we have
     cov(\varphi_{k_i}(U_{p-1})\varphi_{k_i}(U_p),\varphi_{k_j}(U_{q-1})\varphi_{k_j}(U_q))=\mathbb{E}(\varphi_{k_i}(U_0)\varphi_{k_i}(U_1)\varphi_{k_j}(U_0)\varphi_{k_j}(U_1))-\lambda_{k_j}\lambda_{k_i}=
     =\sum_{z=1}^{\infty}\lambda_z(\int_0^1\varphi_z(x)\varphi_{k_i}(x)\varphi_{k_j}(x)dx)^2-\lambda_{k_j}\lambda_{k_i}.

  2. For q=p-1 and i\neq j, using stationarity we have
     cov(\varphi_{k_i}(U_{p-1})\varphi_{k_i}(U_p),\varphi_{k_j}(U_p)\varphi_{k_j}(U_{p+1}))=\mathbb{E}(\varphi_{k_i}(U_0)\varphi_{k_i}(U_1)\varphi_{k_j}(U_1)\varphi_{k_j}(U_2))-\lambda_{k_j}\lambda_{k_i}=
     =\lambda_{k_j}\lambda_{k_i}(\int_0^1\varphi^2_{k_i}(x)\varphi^2_{k_j}(x)dx-1).

  3. For |p-q|=m\geq 2 and i\neq j, using stationarity we have
     cov(\varphi_{k_i}(U_{p-1})\varphi_{k_i}(U_p),\varphi_{k_j}(U_{p+m-1})\varphi_{k_j}(U_{p+m}))=\mathbb{E}(\varphi_{k_i}(U_0)\varphi_{k_i}(U_1)\varphi_{k_j}(U_m)\varphi_{k_j}(U_{m+1}))-\lambda_{k_j}\lambda_{k_i}=
     =\lambda_{k_j}\lambda_{k_i}(\mathbb{E}(\varphi^2_{k_i}(U_1)\varphi^2_{k_j}(U_m))-1)=\lambda_{k_j}\lambda_{k_i}\sum_{z=1}^{\infty}\lambda_z^{m-1}(\int_0^1\varphi_z(x)\varphi_{k_i}^2(x)dx)(\int_0^1\varphi_z(x)\varphi_{k_j}^2(x)dx).

Similarly to Haggstrom and Rosenthal (2007), it follows that

\lim_{n\to\infty}ncov(\hat{\lambda}_{k_i},\hat{\lambda}_{k_j})=cov(\varphi_{k_i}(U_0)\varphi_{k_i}(U_1),\varphi_{k_j}(U_0)\varphi_{k_j}(U_1))+2\sum_{m=1}^{\infty}cov(\varphi_{k_i}(U_0)\varphi_{k_i}(U_1),\varphi_{k_j}(U_m)\varphi_{k_j}(U_{m+1})).

Knowing that c(u,v) is square integrable, we can establish easily that for m\geq 3 each of the terms of the series is in absolute value smaller than \underline{\lambda}^{m-3}K, where \underline{\lambda} is the supremum of |\lambda_z| over z and K is a constant that depends only on k_i,k_j. This implies that the series with sum starting at m=3 is convergent (using geometric series to end the argument, as 0<\underline{\lambda}<1). Moreover, the term with m=1 is finite under any conditions, and the term with m=2 is finite if and only if \sum_{z=1}^{\infty}\lambda_z(\int_0^1\varphi_z(x)\varphi_{k_i}^2(x)dx)(\int_0^1\varphi_z(x)\varphi_{k_j}^2(x)dx)<\infty. On the other hand, cov(\varphi_{k_i}(U_0)\varphi_{k_i}(U_1),\varphi_{k_j}(U_0)\varphi_{k_j}(U_1)) is finite if and only if

\sum_{z=1}^{\infty}\lambda_z(\int_0^1\varphi_z(x)\varphi_{k_i}(x)\varphi_{k_j}(x)dx)^2<\infty.

6.3 Proof of Proposition 3.3

Following the proof of Theorem 3.2 and Haggstrom and Rosenthal (2007), we have, for k\neq j,

\Sigma_{k,j}=\lim_{n\to\infty}ncov(\hat{\lambda}_k,\hat{\lambda}_j) (30)
=cov(\varphi_k(U_0)\varphi_k(U_1),\varphi_j(U_0)\varphi_j(U_1))+2cov(\varphi_k(U_0)\varphi_k(U_1),\varphi_j(U_1)\varphi_j(U_2))
+2\sum_{m=2}^{\infty}cov(\varphi_k(U_0)\varphi_k(U_1),\varphi_j(U_m)\varphi_j(U_{m+1})).

Using the obtained expected values, we derive the following.

cov(\varphi_k(U_0)\varphi_k(U_1),\varphi_j(U_0)\varphi_j(U_1))=\mathbb{E}(\varphi_k(U_0)\varphi_k(U_1)\varphi_j(U_0)\varphi_j(U_1))-\lambda_k\lambda_j=\sum_{z=1}^{s}\lambda_z(\int_0^1\varphi_z(x)\varphi_k(x)\varphi_j(x)dx)^2-\lambda_k\lambda_j.

cov(\varphi_k(U_0)\varphi_k(U_1),\varphi_j(U_1)\varphi_j(U_2))=\mathbb{E}(\varphi_k(U_0)\varphi_k(U_1)\varphi_j(U_1)\varphi_j(U_2))-\lambda_k\lambda_j=\lambda_k\lambda_j(\int_0^1\varphi^2_k(x)\varphi_j^2(x)dx-1).

cov(\varphi_k(U_0)\varphi_k(U_1),\varphi_j(U_m)\varphi_j(U_{m+1}))=\mathbb{E}(\varphi_k(U_0)\varphi_k(U_1)\varphi_j(U_m)\varphi_j(U_{m+1}))-\lambda_k\lambda_j=\lambda_k\lambda_j\sum_{z=1}^{\infty}\lambda_z^{m-1}(\int_0^1\varphi_z(x)\varphi_k^2(x)dx)(\int_0^1\varphi_z(x)\varphi_j^2(x)dx).

Therefore, formula (30) becomes, for k\neq j,

\Sigma_{k,j}=\sum_{z=1}^{s}\lambda_z(\int_0^1\varphi_z(x)\varphi_k(x)\varphi_j(x)dx)^2+2\lambda_k\lambda_j(\int_0^1\varphi^2_k(x)\varphi_j^2(x)dx-1)
+2\lambda_k\lambda_j\sum_{z=1}^{s}\sum_{m=2}^{\infty}\lambda_z^{m-1}(\int_0^1\varphi_z(x)\varphi_k^2(x)dx)(\int_0^1\varphi_z(x)\varphi_j^2(x)dx)-\lambda_k\lambda_j.
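For a copula with finitely many terms, this closed form can be evaluated numerically. The R sketch below (function and argument names are ours, for illustration only) encodes the display above, with the inner sum over m replaced by its geometric limit \lambda_z/(1-\lambda_z):

Sigma.kj <- function(k, j, phis, lam) {
  I <- function(f) integrate(f, 0, 1)$value   # numerical integral over (0,1)
  s <- length(lam)
  t1 <- sum(sapply(1:s, function(z)
    lam[z] * I(function(x) phis[[z]](x) * phis[[k]](x) * phis[[j]](x))^2))
  t2 <- 2 * lam[k] * lam[j] * (I(function(x) phis[[k]](x)^2 * phis[[j]](x)^2) - 1)
  t3 <- 2 * lam[k] * lam[j] * sum(sapply(1:s, function(z)
    lam[z] / (1 - lam[z]) *                   # sum over m >= 2 of lam[z]^(m-1)
      I(function(x) phis[[z]](x) * phis[[k]](x)^2) *
      I(function(x) phis[[z]](x) * phis[[j]](x)^2)))
  t1 + t2 + t3 - lam[k] * lam[j]
}

For the sine copula of Proposition 4.1 with \lambda_1=0.3 and \lambda_2=0.2, Sigma.kj(1, 2, list(phi1, phi2), c(0.3, 0.2)) returns approximately 0.09, matching \lambda_1(1/2-\lambda_2).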

6.4 Proof of Proposition 4.1

Using the provided theorems for each of the copula families, we get:

1. The sine copula, \varphi_1(x)=\sqrt{2}\cos\pi x and \varphi_2(x)=\sqrt{2}\cos 2\pi x. It follows that

\sigma_1^2=1-\lambda_1^2+\lambda_1(\int_0^1(\sqrt{2}\cos\pi x)^3dx)^2+\lambda_2(\int_0^1\sqrt{2}\cos 2\pi x(\sqrt{2}\cos\pi x)^2dx)^2
+2\lambda_1^2(\int_0^1(\sqrt{2}\cos\pi x)^4dx-1)+2\lambda_1^2\sum_{m=2}^{\infty}\lambda_1^{m-1}(\int_0^1(\sqrt{2}\cos\pi x)^3dx)^2
+2\lambda_1^2\sum_{m=2}^{\infty}\lambda_2^{m-1}(\int_0^1\sqrt{2}\cos 2\pi x(\sqrt{2}\cos\pi x)^2dx)^2. (31)

Given that \int_0^1(\sqrt{2}\cos\pi x)^3dx=0, \int_0^1\sqrt{2}\cos 2\pi x(\sqrt{2}\cos\pi x)^2dx=\frac{1}{\sqrt{2}} and \int_0^1 4\cos^4\pi xdx-1=\frac{1}{2}, we obtain

\sigma_1^2=1+\frac{\lambda_2}{2}+\lambda_1^2\frac{\lambda_2}{1-\lambda_2}.

Similarly, \int_0^1(\sqrt{2}\cos 2\pi x)^3dx=0, \int_0^1\sqrt{2}\cos\pi x(\sqrt{2}\cos 2\pi x)^2dx=0 and \int_0^1(\sqrt{2}\cos 2\pi x)^4dx-1=\frac{1}{2} imply \sigma^2_2=1 and \Sigma_{1,2}=\Sigma_{2,1}=\lambda_1(\frac{1}{2}-\lambda_2).
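These integrals are elementary, and can also be confirmed numerically in R (a sketch; the basis functions are as defined above):

integrate(function(x) (sqrt(2) * cos(pi * x))^3, 0, 1)$value                  # 0
integrate(function(x) sqrt(2) * cos(2*pi*x) * 2 * cos(pi*x)^2, 0, 1)$value    # 1/sqrt(2)
integrate(function(x) 4 * cos(pi * x)^4, 0, 1)$value - 1                      # 1/2
integrate(function(x) 4 * cos(2 * pi * x)^4, 0, 1)$value - 1                  # 1/2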

2. The sine-cosine copula. We have

\varphi_1(x)=\sqrt{2}\cos 2\pi x,\ \varphi_2(x)=\sqrt{2}\cos 4\pi x,\ \varphi_3(x)=\sqrt{2}\sin 2\pi x\ \text{and}\ \varphi_4(x)=\sqrt{2}\sin 4\pi x.

Via simple computations, we get

\int_0^1(\sqrt{2})^3\cos^3 2\pi xdx=0,\ \int_0^1(\sqrt{2})^3\cos^2 2\pi x\cos 4\pi xdx=\frac{1}{\sqrt{2}},
\int_0^1(\sqrt{2})^3\cos^2 2\pi x\sin 2\pi xdx=0,\ \int_0^1 2\sqrt{2}\cos^2 2\pi x\sin 4\pi xdx=0,
and \int_0^1 4\cos^4 2\pi xdx=\frac{3}{2}. Using these integrals leads to

\bullet\ \Sigma_{1,1}=\sigma^2_1=1-\lambda_1^2+\frac{\lambda_2}{2}+\lambda_1^2+\lambda_1^2\frac{\lambda_2}{1-\lambda_2}=1+\frac{\lambda_2}{2}+\lambda_1^2\frac{\lambda_2}{1-\lambda_2}.

\bullet\ \Sigma_{2,2}=\sigma_2^2=1-\lambda_2^2+\lambda_1(\int_0^1 2\sqrt{2}\cos 2\pi x\cos^2 4\pi xdx)^2+\lambda_2(\int_0^1 2\sqrt{2}\cos^3 4\pi xdx)^2
+\mu_1(\int_0^1 2\sqrt{2}\cos^2 4\pi x\sin 2\pi xdx)^2+\mu_2(\int_0^1 2\sqrt{2}\cos^2 4\pi x\sin 4\pi xdx)^2
+2\lambda_2^2(\int_0^1 4\cos^4 4\pi xdx-1)+2\lambda_2^2\sum_{m=2}^{\infty}\lambda_1^{m-1}(\int_0^1 2\sqrt{2}\cos^2 4\pi x\cos 2\pi xdx)^2
+2\lambda_2^2\sum_{m=2}^{\infty}\lambda_2^{m-1}(\int_0^1 2\sqrt{2}\cos^3 4\pi xdx)^2+2\lambda_2^2\sum_{m=2}^{\infty}\mu_1^{m-1}(\int_0^1 2\sqrt{2}\sin 2\pi x\cos^2 4\pi xdx)^2
+2\lambda_2^2\sum_{m=2}^{\infty}\mu_2^{m-1}(\int_0^1 2\sqrt{2}\sin 4\pi x\cos^2 4\pi xdx)^2.

Knowing that \int_0^1 2\sqrt{2}\cos^3 4\pi xdx=\int_0^1 2\sqrt{2}\cos 2\pi x\cos^2 4\pi xdx=0, \int_0^1 2\sqrt{2}\cos^2 4\pi x\sin 2\pi xdx=\int_0^1 2\sqrt{2}\cos^2 4\pi x\sin 4\pi xdx=0 and \int_0^1 4\cos^4 4\pi xdx-1=\frac{1}{2}, we derive \Sigma_{2,2}=\sigma_2^2=1-\lambda_2^2+\lambda_2^2=1.

\bullet\ \Sigma_{3,3}=\sigma_3^2=1-\mu_1^2+\lambda_1(\int_0^1\sqrt{2}\cos 2\pi x(\sqrt{2}\sin 2\pi x)^2dx)^2+\lambda_2(\int_0^1\sqrt{2}\cos 4\pi x(\sqrt{2}\sin 2\pi x)^2dx)^2
+\mu_1(\int_0^1(\sqrt{2}\sin 2\pi x)^3dx)^2+\mu_2(\int_0^1(\sqrt{2}\sin 2\pi x)^2\sqrt{2}\sin 4\pi xdx)^2
+2\mu_1^2(\int_0^1(\sqrt{2}\sin 2\pi x)^4dx-1)+2\mu_1^2\sum_{m=2}^{\infty}\lambda_1^{m-1}(\int_0^1(\sqrt{2}\sin 2\pi x)^2\sqrt{2}\cos 2\pi xdx)^2
+2\mu_1^2\sum_{m=2}^{\infty}\lambda_2^{m-1}(\int_0^1(\sqrt{2}\sin 2\pi x)^2\sqrt{2}\cos 4\pi xdx)^2+2\mu_1^2\sum_{m=2}^{\infty}\mu_1^{m-1}(\int_0^1(\sqrt{2}\sin 2\pi x)^3dx)^2
+2\mu_1^2\sum_{m=2}^{\infty}\mu_2^{m-1}(\int_0^1\sqrt{2}\sin 4\pi x(\sqrt{2}\sin 2\pi x)^2dx)^2,

and \int_0^1 2\sqrt{2}\sin^3 2\pi xdx=0, \int_0^1 2\sqrt{2}\cos 2\pi x\sin^2 2\pi xdx=0, \int_0^1 2\sqrt{2}\sin^2 2\pi x\cos 4\pi xdx=-\frac{1}{\sqrt{2}}, \int_0^1 2\sqrt{2}\sin^2 2\pi x\sin 4\pi xdx=0, \int_0^1 4\sin^4 2\pi xdx-1=\frac{1}{2}.

Thus, \Sigma_{3,3}=\sigma_3^2=1-\mu_1^2+\frac{\lambda_2}{2}+\mu_1^2+\mu_1^2\frac{\lambda_2}{1-\lambda_2}=1+\frac{\lambda_2}{2}+\mu_1^2\frac{\lambda_2}{1-\lambda_2}.
\bullet\ \Sigma_{4,4}=\sigma_4^2=1-\mu_2^2+\lambda_1(\int_0^1\sqrt{2}\cos 2\pi x(\sqrt{2}\sin 4\pi x)^2dx)^2+\lambda_2(\int_0^1\sqrt{2}\cos 4\pi x(\sqrt{2}\sin 4\pi x)^2dx)^2
+\mu_1(\int_0^1\sqrt{2}\sin 2\pi x(\sqrt{2}\sin 4\pi x)^2dx)^2+\mu_2(\int_0^1(\sqrt{2}\sin 4\pi x)^3dx)^2
+2\mu_2^2(\int_0^1(\sqrt{2}\sin 4\pi x)^4dx-1)+2\mu_2^2\sum_{m=2}^{\infty}\lambda_1^{m-1}(\int_0^1(\sqrt{2}\sin 4\pi x)^2\sqrt{2}\cos 2\pi xdx)^2
+2\mu_2^2\sum_{m=2}^{\infty}\lambda_2^{m-1}(\int_0^1(\sqrt{2}\sin 4\pi x)^2\sqrt{2}\cos 4\pi xdx)^2+2\mu_2^2\sum_{m=2}^{\infty}\mu_1^{m-1}(\int_0^1\sqrt{2}\sin 2\pi x(\sqrt{2}\sin 4\pi x)^2dx)^2
+2\mu_2^2\sum_{m=2}^{\infty}\mu_2^{m-1}(\int_0^1(\sqrt{2}\sin 4\pi x)^3dx)^2,

and \int_0^1(\sqrt{2}\sin 4\pi x)^3dx=0, \int_0^1(\sqrt{2}\cos 2\pi x)(\sqrt{2}\sin 4\pi x)^2dx=0, \int_0^1(\sqrt{2}\sin 4\pi x)^2(\sqrt{2}\cos 4\pi x)dx=0, \int_0^1(\sqrt{2}\sin 2\pi x)(\sqrt{2}\sin 4\pi x)^2dx=0, \int_0^1(\sqrt{2}\sin 4\pi x)^4dx-1=\frac{1}{2}.

Therefore, \Sigma_{4,4}=\sigma_4^2=1-\mu_2^2+\mu_2^2=1.
\bullet\ \Sigma_{1,2}=\sum_{z=1}^{4}\lambda_z(\int_0^1\varphi_z(x)\varphi_1(x)\varphi_2(x)dx)^2+\lambda_1\lambda_2(2(\int_0^1\varphi^2_1(x)\varphi_2^2(x)dx-1)
+2\sum_{z=1}^{4}\sum_{m=2}^{\infty}\lambda_z^{m-1}(\int_0^1\varphi_z(x)\varphi_1^2(x)dx)(\int_0^1\varphi_z(x)\varphi_2^2(x)dx)-1).

Moreover, just as for the other cases, we have

\int_0^1\varphi^2_1(x)\varphi_2(x)dx=\frac{1}{\sqrt{2}}, \int_0^1\varphi^2_2(x)\varphi_1(x)dx=\int_0^1\varphi_3(x)\varphi_1(x)\varphi_2(x)dx=0,
\int_0^1\varphi_4(x)\varphi_1(x)\varphi_2(x)dx=\int_0^1\varphi^2_1(x)\varphi^2_2(x)dx-1=\int_0^1\varphi^2_1(x)\varphi_3(x)dx=0,
\int_0^1\varphi^2_1(x)\varphi_4(x)dx=\int_0^1\varphi^2_2(x)\varphi_3(x)dx=0, and \int_0^1\varphi^2_2(x)\varphi_4(x)dx=0.

Therefore, \Sigma_{1,2}=\Sigma_{2,1}=\frac{\lambda_1}{2}-\lambda_1\lambda_2=\lambda_1(\frac{1}{2}-\lambda_2).
\bullet\ \Sigma_{2,3}=\sum_{z=1}^{4}\lambda_z(\int_0^1\varphi_z(x)\varphi_3(x)\varphi_2(x)dx)^2+\lambda_2\mu_1(2(\int_0^1\varphi^2_3(x)\varphi_2^2(x)dx-1)
+2\sum_{z=1}^{4}\sum_{m=2}^{\infty}\lambda_z^{m-1}(\int_0^1\varphi_z(x)\varphi_3^2(x)dx)(\int_0^1\varphi_z(x)\varphi_2^2(x)dx)-1).

Moreover,

\int_0^1\varphi^2_2(x)\varphi_3(x)dx=\int_0^1\varphi_4(x)\varphi_3(x)\varphi_2(x)dx=\int_0^1\varphi^2_3(x)\varphi^2_2(x)dx-1=0,
\int_0^1\varphi^2_3(x)\varphi_1(x)dx=\int_0^1\varphi^2_3(x)\varphi_4(x)dx=0, \int_0^1\varphi^2_3(x)\varphi_2(x)dx=-\frac{1}{\sqrt{2}}.

Therefore, \Sigma_{2,3}=\Sigma_{3,2}=\frac{\mu_1}{2}-\lambda_2\mu_1=\mu_1(\frac{1}{2}-\lambda_2).
\bullet\ \Sigma_{1,3}=\sum_{z=1}^{4}\lambda_z(\int_0^1\varphi_z(x)\varphi_3(x)\varphi_1(x)dx)^2+\lambda_1\mu_1(2(\int_0^1\varphi^2_3(x)\varphi_1^2(x)dx-1)
+2\sum_{z=1}^{4}\sum_{m=2}^{\infty}\lambda_z^{m-1}(\int_0^1\varphi_z(x)\varphi_3^2(x)dx)(\int_0^1\varphi_z(x)\varphi_1^2(x)dx)-1).

Since \int_0^1\varphi^2_3(x)\varphi^2_1(x)dx-1=-\frac{1}{2}, \int_0^1\varphi_4(x)\varphi_3(x)\varphi_1(x)dx=\frac{1}{\sqrt{2}}, and, from the integrals above, (\int_0^1\varphi_2(x)\varphi_3^2(x)dx)(\int_0^1\varphi_2(x)\varphi_1^2(x)dx)=-\frac{1}{2}, it follows that

\Sigma_{1,3}=\Sigma_{3,1}=\frac{\mu_2}{2}-2\lambda_1\mu_1-\lambda_1\mu_1\frac{\lambda_2}{1-\lambda_2}.
\bullet\ \Sigma_{1,4}=\sum_{z=1}^{4}\lambda_z(\int_0^1\varphi_z(x)\varphi_4(x)\varphi_1(x)dx)^2+\lambda_1\mu_2(2(\int_0^1\varphi^2_4(x)\varphi_1^2(x)dx-1)
+2\sum_{z=1}^{4}\sum_{m=2}^{\infty}\lambda_z^{m-1}(\int_0^1\varphi_z(x)\varphi_4^2(x)dx)(\int_0^1\varphi_z(x)\varphi_1^2(x)dx)-1).

Moreover, \int_0^1\varphi^2_4(x)\varphi^2_1(x)dx-1=\int_0^1\varphi^2_4(x)\varphi_1(x)dx=0 and \int_0^1\varphi_4(x)\varphi_1(x)\varphi_3(x)dx=\frac{1}{\sqrt{2}}. Thus, \Sigma_{1,4}=\Sigma_{4,1}=\frac{\mu_1}{2}-\lambda_1\mu_2.

\bullet\ \Sigma_{2,4}=\sum_{z=1}^{4}\lambda_z(\int_0^1\varphi_z(x)\varphi_4(x)\varphi_2(x)dx)^2+\lambda_2\mu_2(2(\int_0^1\varphi^2_4(x)\varphi_2^2(x)dx-1)
+2\sum_{z=1}^{4}\sum_{m=2}^{\infty}\lambda_z^{m-1}(\int_0^1\varphi_z(x)\varphi_4^2(x)dx)(\int_0^1\varphi_z(x)\varphi_2^2(x)dx)-1).

Here, \int_0^1\varphi^2_4(x)\varphi^2_2(x)dx-1=-\frac{1}{2} and \int_0^1\varphi^2_4(x)\varphi_2(x)dx=0, so \Sigma_{2,4}=\Sigma_{4,2}=-2\lambda_2\mu_2.
\bullet\ \Sigma_{3,4}=\sum_{z=1}^{4}\lambda_z(\int_0^1\varphi_z(x)\varphi_4(x)\varphi_3(x)dx)^2+\mu_1\mu_2(2(\int_0^1\varphi^2_4(x)\varphi_3^2(x)dx-1)
+2\sum_{z=1}^{4}\sum_{m=2}^{\infty}\lambda_z^{m-1}(\int_0^1\varphi_z(x)\varphi_4^2(x)dx)(\int_0^1\varphi_z(x)\varphi_3^2(x)dx)-1).

But \int_0^1\varphi^2_4(x)\varphi^2_3(x)dx-1=\int_0^1\varphi^2_4(x)\varphi_3(x)dx=0, \int_0^1\varphi^2_4(x)\varphi_2(x)dx=0 and \int_0^1\varphi_1(x)\varphi_4(x)\varphi_3(x)dx=\frac{1}{\sqrt{2}}.

Thus, \Sigma_{3,4}=\Sigma_{4,3}=\frac{\lambda_1}{2}-\mu_1\mu_2.

3. For the Legendre copula:

\int_0^1\varphi^3_1(x)dx=\int_0^1(\sqrt{3}(2x-1))^3dx=0,
\int_0^1\varphi_2(x)\varphi_1^2(x)dx=\int_0^1\sqrt{5}(6x^2-6x+1)(\sqrt{3}(2x-1))^2dx=\frac{2}{\sqrt{5}},
\int_0^1\varphi^4_1(x)dx-1=\int_0^1(\sqrt{3}(2x-1))^4dx-1=\frac{4}{5}.

Thus, \sigma_1^2=1-\lambda_1^2+\frac{4}{5}\lambda_2+\frac{8}{5}\lambda_1^2+\frac{8}{5}\lambda_1^2\sum_{m=2}^{\infty}\lambda_2^{m-1}=1+\frac{4}{5}\lambda_2+\lambda_1^2\frac{3+5\lambda_2}{5(1-\lambda_2)}.

Similarly, \int_0^1\varphi^3_2(x)dx=\int_0^1(\sqrt{5}(6x^2-6x+1))^3dx=\frac{2\sqrt{5}}{7},
\int_0^1\varphi_1(x)\varphi_2^2(x)dx=\int_0^1 5\sqrt{3}(6x^2-6x+1)^2(2x-1)dx=0,
\int_0^1\varphi^4_2(x)dx-1=\int_0^1(\sqrt{5}(6x^2-6x+1))^4dx-1=\frac{8}{7}, imply

\sigma_2^2=1-\lambda_2^2+\frac{20}{49}\lambda_2+\frac{16}{7}\lambda_2^2+\frac{40}{49}\lambda_2^2\sum_{m=2}^{\infty}\lambda_2^{m-1}=1+\frac{20}{49}\lambda_2+\lambda_2^2\frac{63-23\lambda_2}{49(1-\lambda_2)}.

The covariance in this case is obtained as

\Sigma_{1,2}=\sum_{z=1}^{2}\lambda_z(\int_0^1\varphi_z(x)\varphi_1(x)\varphi_2(x)dx)^2+\lambda_1\lambda_2(2(\int_0^1\varphi^2_1(x)\varphi_2^2(x)dx-1)
+2\sum_{z=1}^{2}\sum_{m=2}^{\infty}\lambda_z^{m-1}(\int_0^1\varphi_z(x)\varphi_1^2(x)dx)(\int_0^1\varphi_z(x)\varphi_2^2(x)dx)-1).

But \int_0^1\varphi_2(x)\varphi_1^2(x)dx=\frac{2}{\sqrt{5}} and \int_0^1\varphi_1^2(x)\varphi^2_2(x)dx-1=\frac{4}{7}. So,

\Sigma_{1,2}=\frac{4}{5}\lambda_1+\lambda_1\lambda_2(\frac{8}{7}-1+\frac{8}{7}\frac{\lambda_2}{1-\lambda_2})=\frac{4}{5}\lambda_1+\lambda_1\lambda_2\frac{1+7\lambda_2}{7(1-\lambda_2)}.
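As a sanity check, the Legendre-basis integrals used above can be confirmed numerically in R (a sketch with our own variable names):

p1 <- function(x) sqrt(3) * (2 * x - 1)
p2 <- function(x) sqrt(5) * (6 * x^2 - 6 * x + 1)
integrate(function(x) p2(x) * p1(x)^2, 0, 1)$value        # 2/sqrt(5)   ~ 0.8944
integrate(function(x) p2(x)^3, 0, 1)$value                # 2*sqrt(5)/7 ~ 0.6389
integrate(function(x) p1(x)^4, 0, 1)$value - 1            # 4/5
integrate(function(x) p2(x)^4, 0, 1)$value - 1            # 8/7
integrate(function(x) p1(x)^2 * p2(x)^2, 0, 1)$value - 1  # 4/7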

6.5 Proof of Theorem 4.2

Verification of conditions

  1. Considering the parameter space, the density of each of the copulas satisfies c(u,v)\leq M<2 inside the parameter space. Therefore, according to Longla et al. (2022c), the Markov chains generated by such copulas are \psi-mixing. Since \psi-mixing implies ergodicity, the first condition is satisfied.

  2. The second condition is satisfied as well, since c(u,v)>0 on [0,1]\times[0,1] for any set of parameters not on the boundary of the parameter set.

  3. Since c(u,v)>0 and is continuous on [0,1]^2, all partial derivatives mentioned in the third condition exist and are continuous on [0,1]^2 when \Lambda is not on the boundary of the set.

  4. Since there is a constant M>0 such that M<c(u,v)<2 on [0,1]^2 when \Lambda is an interior point, we have

     \mathbb{E}_{\Lambda}[(\frac{\partial}{\partial\lambda_i}\ln c(u,v))^2]<\infty.

     Via simple calculations, we establish that when the parameters are strictly inside the region, we have, for some K not depending on \Lambda,

     |\frac{\partial^3}{\partial\lambda_i\partial\lambda_j\partial\lambda_k}\ln c(u,v)|\leq K \quad\text{and}\quad \mathbb{E}_{\Lambda}[\sup_{\Lambda\in A'}|\frac{\partial^3}{\partial\lambda_i\partial\lambda_j\partial\lambda_k}\ln c(u,v)|]<\infty.

  5. Since the entries of the Fisher information matrix \Sigma(\Lambda) cannot be computed in closed form, an estimation procedure is used in Matlab to approximate it. It appears to have a strictly positive determinant in each of the cases, implying that it is invertible. Considering the extra regularity condition, n\Sigma is approximated as an average by I_n(\Lambda), because I_n/n converges in probability to \Sigma thanks to ergodicity (see Sun et al. (2020)). Here,

     I_n(\Lambda)=-(\frac{\partial^2}{\partial\lambda_k\partial\lambda_j}\ell(\Lambda))_{1\leq k,j\leq s}.
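For illustration, the approximation of \Sigma(\Lambda) by I_n(\Lambda)/n can be carried out with a finite-difference Hessian of the log-likelihood. The R sketch below is our own minimal illustration for the sine copula (not the Matlab procedure of the paper), assuming U is a simulated chain and phi1, phi2 are as in the earlier snippets:

# Log-likelihood l(Lambda) of the chain U under the sine copula density
loglik <- function(lam, U) {
  n <- length(U) - 1
  sum(log(1 + lam[1] * phi1(U[1:n]) * phi1(U[2:(n + 1)]) +
          lam[2] * phi2(U[1:n]) * phi2(U[2:(n + 1)])))
}

# I_n(Lambda)/n via a central-difference Hessian of the log-likelihood
obs.info <- function(lam, U, h = 1e-3) {
  H <- matrix(0, 2, 2)
  for (i in 1:2) for (j in 1:2) {
    ei <- ej <- numeric(2); ei[i] <- h; ej[j] <- h
    H[i, j] <- (loglik(lam + ei + ej, U) - loglik(lam + ei - ej, U) -
                loglik(lam - ei + ej, U) + loglik(lam - ei - ej, U)) / (4 * h^2)
  }
  -H / (length(U) - 1)   # consistent estimate of Sigma(Lambda) by ergodicity
}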

This completes the proof. \square