Prediction in polynomial errors-in-variables models
Abstract
A multivariate errors-in-variables (EIV) model with an intercept term, and a polynomial EIV model are considered. The focus is on a structural homoskedastic case, where the vectors of covariates are i.i.d. and the measurement errors are i.i.d. as well. The covariates contaminated with errors are normally distributed, and the corresponding classical errors are also assumed normal. In both models, it is shown that (inconsistent) ordinary least squares estimators of regression parameters yield an a.s. approximation to the best prediction of the response given the values of observable covariates. Thus, not only in the linear EIV model but in the polynomial EIV models as well, consistent estimators of regression parameters are useless in the prediction problem, provided the size and covariance structure of observation errors for the predicted subject do not differ from those in the data used for the model fitting.
doi: 10.15559/20-VMSTA154
(a) Introduce the jointly Gaussian vectors $x^{(1)} := (\xi^\top, \varepsilon)^\top$ and $x^{(2)} := x$.
We have
\begin{gather*}
\mu^{(1)} := \mathrm{E}\, x^{(1)} =
\begin{pmatrix} \mu \\ 0 \end{pmatrix}, \qquad
\mu^{(2)} := \mathrm{E}\, x^{(2)} = \mu; \\
\Cov\bigl(x^{(1)}\bigr) = \Sigma_{11}, \qquad
\Cov\bigl(x^{(2)}\bigr) = \Sigma_{22},
\end{gather*}
which is positive definite by Assumption \ref{nonsingRegr}; the matrices $\Sigma_{11}$, $\Sigma_{12}$, $\Sigma_{22}$ are given in \eqref{defSigma}. Now, according to Theorem 2.5.1 in [A58], the conditional distribution of $x^{(1)}$ given $x^{(2)}$ is
\begin{gather}
\bigl[\, x^{(1)} \mid x^{(2)} \bigr] \sim N\bigl(\mu_{1|2},\, V_{1|2}\bigr), \\
\mu_{1|2} = \mu_{1|2}\bigl(x^{(2)}\bigr)
= \mu^{(1)} + \Sigma_{12}\,\Sigma_{22}^{-1}\bigl(x^{(2)} - \mu^{(2)}\bigr)
= \begin{pmatrix}
\Sigma_{\delta}\,\Sigma_{x}^{-1}\mu + \Sigma_{\xi}\,\Sigma_{x}^{-1} x \\
\Sigma_{\varepsilon\delta}\,\Sigma_{x}^{-1}(x - \mu)
\end{pmatrix}, \\
V_{1|2} = \Sigma_{11} - \Sigma_{12}\,\Sigma_{22}^{-1}\,\Sigma_{12}^{\top}.
\end{gather}
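Both displayed facts can be checked numerically. The sketch below is illustrative only: it assumes the structural classical-error setup of the text, $x = \xi + \delta$ with $\xi$ independent of $\delta$, so that $\Sigma_x = \Sigma_\xi + \Sigma_\delta$; all dimensions and covariance values are hypothetical. Part 1 verifies the algebraic rearrangement of the first block of $\mu_{1|2}$, and Part 2 verifies by Monte Carlo that the residual $x^{(1)} - \Sigma_{12}\Sigma_{22}^{-1}x^{(2)}$ (for centered vectors) is uncorrelated with $x^{(2)}$.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Part 1: rearranged conditional mean of xi given x ---------------------
# Identity: mu + Sigma_xi Sigma_x^{-1} (x - mu)
#         = Sigma_delta Sigma_x^{-1} mu + Sigma_xi Sigma_x^{-1} x,
# valid whenever Sigma_x = Sigma_xi + Sigma_delta.
d = 3  # covariate dimension (hypothetical)
A = rng.normal(size=(d, d)); Sigma_xi = A @ A.T + d * np.eye(d)
B = rng.normal(size=(d, d)); Sigma_delta = B @ B.T + d * np.eye(d)
Sigma_x = Sigma_xi + Sigma_delta
mu = rng.normal(size=d)   # mean of xi (and hence of x)
x = rng.normal(size=d)    # an arbitrary observed value

Sx_inv = np.linalg.inv(Sigma_x)
lhs = mu + Sigma_xi @ Sx_inv @ (x - mu)                   # standard form
rhs = Sigma_delta @ Sx_inv @ mu + Sigma_xi @ Sx_inv @ x   # rearranged form
assert np.allclose(lhs, rhs)

# --- Part 2: residual of the conditional-mean predictor is uncorrelated ----
p, q, n = 2, 3, 200_000  # dims of x1, x2 and Monte Carlo sample size
M = rng.normal(size=(p + q, p + q))
S = M @ M.T + (p + q) * np.eye(p + q)   # joint covariance, partitioned as
S12, S22 = S[:p, p:], S[p:, p:]         # S = [[S11, S12], [S12^T, S22]]

Z = rng.multivariate_normal(np.zeros(p + q), S, size=n)
x1, x2 = Z[:, :p], Z[:, p:]
resid = x1 - x2 @ np.linalg.solve(S22, S12.T)  # x1 - S12 S22^{-1} x2

# Empirical cross-covariance should vanish up to Monte Carlo error.
cross = (resid - resid.mean(0)).T @ (x2 - x2.mean(0)) / n
assert np.abs(cross).max() < 0.1
```

The decorrelation in Part 2 is exactly the property used next in the proof: for jointly Gaussian vectors, uncorrelatedness of the residual with the conditioning vector implies their independence.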
Hence $x^{(1)} - \mu_{1|2}\bigl(x^{(2)}\bigr)$ is uncorrelated with $x^{(2)}$ and has the Gaussian distribution $N(0, V_{1|2})$. Therefore,