
Prediction in polynomial errors-in-variables models

Alexander Kukush and Ivan Senko
Taras Shevchenko National University of Kyiv, Kyiv, Ukraine
(2020; received 24 April 2020; revised 4 May 2020; accepted 6 May 2020)
Abstract

A multivariate errors-in-variables (EIV) model with an intercept term, and a polynomial EIV model are considered. The focus is on the structural homoskedastic case, where the vectors of covariates are i.i.d. and the measurement errors are i.i.d. as well. The covariates contaminated with errors are normally distributed, and the corresponding classical errors are assumed normal as well. In both models, it is shown that the (inconsistent) ordinary least squares estimators of the regression parameters yield an a.s. approximation to the best prediction of the response given the values of the observable covariates. Thus, not only in the linear EIV model but also in the polynomial EIV model, consistent estimators of the regression parameters are useless in the prediction problem, provided that the size and covariance structure of the observation errors for the predicted subject do not differ from those in the data used for fitting the model.

Keywords: prediction, multivariate errors-in-variables model, polynomial errors-in-variables model, ordinary least squares, consistent estimator of best prediction, confidence interval

MSC2010: 62J05, 62J02, 62H12

doi: 10.15559/20-VMSTA154

Volume 7, Issue 2; article type: research-article
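Before turning to the proof steps below, here is a minimal numerical sketch (not part of the original paper) of the claim stated in the abstract, in the simplest scalar linear EIV setting. All parameter values and variable names (beta0, beta1, mu, sigma_xi, sigma_delta, sigma_eps) are illustrative assumptions: the OLS fit of the response on the error-contaminated covariate tracks the best prediction E[y | x], whereas plugging the true regression coefficients into the observed covariate does not.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameter values (assumptions for this sketch, not from the paper)
beta0, beta1 = 1.0, 2.0            # regression coefficients
mu, sigma_xi = 0.5, 1.0            # mean and std of the latent covariate xi
sigma_delta, sigma_eps = 0.8, 0.3  # stds of the covariate error and response error
n = 200_000

xi = rng.normal(mu, sigma_xi, n)
x = xi + rng.normal(0.0, sigma_delta, n)             # observed, error-contaminated covariate
y = beta0 + beta1 * xi + rng.normal(0.0, sigma_eps, n)

# Ordinary least squares of y on the observed x (inconsistent for beta0, beta1)
X = np.column_stack([np.ones(n), x])
b0_ols, b1_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Best prediction E[y | x] from the true parameters:
# E[y | x] = beta0 + beta1 * E[xi | x], with E[xi | x] = mu + K*(x - mu)
# and reliability ratio K = var(xi) / var(x).
K = sigma_xi**2 / (sigma_xi**2 + sigma_delta**2)
x_new = np.linspace(mu - 2.0, mu + 2.0, 5)
best_pred = beta0 + beta1 * (mu + K * (x_new - mu))
ols_pred = b0_ols + b1_ols * x_new                   # prediction from the OLS fit
plug_in = beta0 + beta1 * x_new                      # true beta plugged into observed x

print("x_new          :", np.round(x_new, 3))
print("best prediction:", np.round(best_pred, 3))
print("OLS prediction :", np.round(ols_pred, 3))
print("plug-in (beta) :", np.round(plug_in, 3))

With a large sample, the OLS predictions essentially coincide with the best predictions, while the plug-in predictions based on the true coefficients deviate from them, in line with the result stated above.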

(a) Introduce the jointly Gaussian vectors

\[
x^{(1)}=\begin{pmatrix}\xi\\ \epsilon\end{pmatrix},\qquad x^{(2)}=x.
\]

We have
\begin{gather*}
\mu^{(1)} := \mathrm{E}\,x^{(1)} = \begin{pmatrix}\mu\\ 0\end{pmatrix},\qquad \mu^{(2)} := \mathrm{E}\,x^{(2)} = \mu;\\
\operatorname{Cov}\bigl(x^{(1)}\bigr) = \Sigma_{11},\qquad \operatorname{Cov}\bigl(x^{(2)}\bigr) = \Sigma_{22},
\end{gather*}
which is positive definite by assumption \eqref{nonsingRegr},

\[
\mathrm{E}\bigl[\bigl(x^{(1)}-\mu^{(1)}\bigr)\bigl(x^{(2)}-\mu^{(2)}\bigr)^{T}\bigr]=\Sigma_{12},
\]

where the matrices $\Sigma_{11}$, $\Sigma_{12}$, $\Sigma_{22}$ are given in \eqref{defSigma}. Now, according to Theorem 2.5.1 in [A58], the conditional distribution of $x^{(1)}$ given $x^{(2)}$ is
\begin{gather}
\bigl[\, x^{(1)} \mid x^{(2)} \,\bigr] \sim \mathcal{N}\bigl(\mu_{1|2},\, V_{1|2}\bigr),\\
\mu_{1|2} = \mu_{1|2}\bigl(x^{(2)}\bigr) = \mu^{(1)} + \Sigma_{12}\Sigma_{22}^{-1}\bigl(x^{(2)}-\mu^{(2)}\bigr)
= \begin{pmatrix} \Sigma_{\delta}\Sigma_{x}^{-1}\mu + \Sigma_{\xi}\Sigma_{x}^{-1}x \\ \Sigma_{\epsilon\delta}\Sigma_{x}^{-1}(x-\mu) \end{pmatrix},\\
V_{1|2} = \Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{12}^{T}.
\end{gather}
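For readability, here is a sketch of how the block form of $\mu_{1|2}$ can be checked. It assumes that the matrices referenced in \eqref{defSigma} have the structure $\Sigma_{12}=\bigl(\Sigma_{\xi}^{T},\Sigma_{\epsilon\delta}^{T}\bigr)^{T}$, $\Sigma_{22}=\Sigma_{x}$ and $\Sigma_{x}=\Sigma_{\xi}+\Sigma_{\delta}$, which is consistent with the display above. Then
\begin{gather*}
\mathrm{E}\bigl[\xi \mid x\bigr] = \mu + \Sigma_{\xi}\Sigma_{x}^{-1}(x-\mu)
= \bigl(\Sigma_{x}-\Sigma_{\xi}\bigr)\Sigma_{x}^{-1}\mu + \Sigma_{\xi}\Sigma_{x}^{-1}x
= \Sigma_{\delta}\Sigma_{x}^{-1}\mu + \Sigma_{\xi}\Sigma_{x}^{-1}x,\\
\mathrm{E}\bigl[\epsilon \mid x\bigr] = 0 + \Sigma_{\epsilon\delta}\Sigma_{x}^{-1}(x-\mu),
\end{gather*}
which are exactly the upper and lower blocks of $\mu_{1|2}$.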

Hence $(\xi^{T},\epsilon^{T})^{T}-\mu_{1|2}(x)=:(\gamma_{1}^{T},\gamma_{2}^{T})^{T}$ is uncorrelated with $x$ and has the Gaussian distribution $\mathcal{N}\bigl(0,V_{1|2}\bigr)$. Therefore,