On Selection Criteria for the Tuning Parameter in Robust Divergence
Shonosuke Sugasawa1,3 and Shouto Yonekura2,3
1Center for Spatial Information Science, The University of Tokyo
2Graduate School of Social Sciences, Chiba University
3Nospare Inc.
Abstract
While robust divergences such as the density power divergence and the $\gamma$-divergence are helpful for robust statistical inference in the presence of outliers, the tuning parameter that controls the degree of robustness is typically chosen in a rule-of-thumb manner, which may lead to inefficient inference. We here propose a selection criterion based on an asymptotic approximation of the Hyvarinen score applied to an unnormalized model defined by robust divergence. The proposed selection criterion only requires the first and second-order partial derivatives of an assumed density function with respect to observations, which can be easily computed regardless of the number of parameters. We demonstrate the usefulness of the proposed method via numerical studies using normal distributions and regularized linear regression.
Key words: efficiency; Hyvarinen score; outlier; unnormalized model
Introduction
Data with outliers naturally arise in diverse areas. In the analysis of data containing outliers, statistical models with robust divergence are known to be powerful and have been used regularly. In particular, the density power divergence (Basu et al., 1998) and the $\gamma$-divergence (Fujisawa and Eguchi, 2008) have been routinely used in this context due to their robustness properties, although several alternatives now exist. A robust divergence generally involves a tuning parameter that controls robustness under model misspecification or contamination. Basu et al. (1998) noted that there is a trade-off between estimation efficiency and strength of robustness; thereby, a suitable choice of the tuning parameter is crucial in practice. However, well-known selection strategies such as cross-validation are not straightforward under contamination, so one often needs to rely on trial and error to find a reasonable value of the tuning parameter.
To select the tuning parameter, we here propose a simple but novel selection criterion based on an asymptotic approximation of the Hyvarinen score (Shao et al., 2019; Dawid and Musio, 2015) applied to unnormalized models induced by robust divergence. Typical existing methods (Warwick and Jones, 2005; Basak et al., 2021) choose a tuning parameter based on an asymptotic approximation of the mean squared error, but have the drawback of requiring a pilot estimator and an analytical expression of the asymptotic variance. Besides, these works are essentially limited to the simple normal distribution and simple linear regression. Our proposed method has the following advantages over the existing studies.
1. Our method does not require an explicit representation of the asymptotic variance. Therefore, it can be applied to rather complex statistical models, such as multivariate models, which seem difficult to handle with the previous methods.

2. The existing studies need a pilot estimate to optimize the tuning parameter, so the resulting estimates may strongly depend on that pilot estimate. In contrast, our method does not require a pilot estimate and is stable and statistically efficient.

3. Although our proposed method is based on a simple asymptotic expansion, it is statistically more meaningful and its results are easier to interpret than those of existing methods, because it rests on the theory of parameter estimation for unnormalized statistical models.
Through numerical studies under simple settings, we show that the existing methods can be sensitive to the pilot estimate and tend to select an unnecessarily large value of the tuning parameter, leading to a loss of efficiency compared with the proposed method. Moreover, the proposed selection method remains applicable to estimation procedures in which the asymptotic variance is difficult to compute. As an illustrative example of such a case, we consider robust linear regression with the $\gamma$-divergence and $L_1$-regularization, where the existing approach is infeasible.
As related work, there are two information criteria using the Hyvarinen score. Matsuda et al. (2019) proposed an AIC-type information criterion for unnormalized models by deriving an asymptotically unbiased estimator of the Hyvarinen score, but it does not allow unnormalized models whose normalizing constants do not exist. Hence, the criterion cannot be applied to the current situation. On the other hand, Jewson and Rossell (2021) proposed an information criterion via a Laplace approximation of the marginal likelihood in which the potential function is constructed by the Hyvarinen score. Although Jewson and Rossell (2021) covers unnormalized models with possibly diverging normalizing constants, the estimator used in their criterion is entirely different from the one defined as the maximizer of robust divergence; thereby, the criterion does not apply to the tuning parameter selection of robust divergence either. Moreover, Yonekura and Sugasawa (2021) developed a robust sequential Monte Carlo sampler based on robust divergence in which the tuning parameter is adaptively selected. However, it does not provide selection of the tuning parameter in a frequentist framework.
The rest of the paper is organized as follows. Section 2 introduces a new selection criterion based on the Hyvarinen score. We then provide concrete expressions of the proposed criterion under the density power divergence and the $\gamma$-divergence in Section 3. We numerically illustrate the proposed method in two situations in Section 4. Concluding remarks are given in Section 5.
Tuning parameter selection of robust divergence
Suppose we observe $y_1,\ldots,y_n$ as realizations from a true distribution or data-generating process $G$, and we want to fit a statistical model $f(y;\theta)$ with $\theta\in\Theta\subset\mathbb{R}^p$. Further assume that the density of $G$ is expressed as $g(y)=(1-\omega)f(y;\theta_0)+\omega\delta(y)$, where $\delta$ is a contaminated distribution that produces outliers in observations and $\omega$ is the contamination ratio. Our goal is to make statistical inference on $\theta_0$ by successfully eliminating the information of outliers. To this end, robust divergences such as the density power divergence (Basu et al., 1998) and the $\gamma$-divergence (Fujisawa and Eguchi, 2008) are typically used for robust inference on $\theta_0$. Let $y=(y_1,\ldots,y_n)$ be the vector of observations and $D_\gamma(y;\theta)$ be a (negative) robust divergence with a tuning parameter $\gamma$. We assume that the robust divergence has an additive form, namely, $D_\gamma(y;\theta)=\sum_{i=1}^n D_\gamma(y_i;\theta)$, which is satisfied by well-known robust divergences as discussed in Section 3.
For selecting the tuning parameter $\gamma$, our main idea is to regard $\exp\{D_\gamma(y;\theta)\}$ as an unnormalized statistical model for $y$ whose normalizing constant may not exist. Recently, Jewson and Rossell (2021) pointed out that the role of such unnormalized models can be recognized in terms of relative probability. For such a model, we employ the Hyvarinen score (H-score) in terms of Bayesian model selection (Shao et al., 2019; Dawid and Musio, 2015), defined as
$$\mathcal{H}_n(\gamma) = \sum_{i=1}^n\left[2\,\frac{\partial^2}{\partial y_i^2}\log m_\gamma(y) + \left\{\frac{\partial}{\partial y_i}\log m_\gamma(y)\right\}^2\right], \qquad (1)$$
where $m_\gamma(y)$ is the marginal likelihood given by
$$m_\gamma(y) = \int_{\Theta}\exp\{D_\gamma(y;\theta)\}\,\pi(\theta)\,d\theta \qquad (2)$$
with some prior distribution $\pi(\theta)$. We consider an asymptotic approximation of the H-score (1) under large sample sizes. Under some regularity conditions (e.g. Geisser et al., 1990), the Laplace approximation of (2) is
$$m_\gamma(y) \approx (2\pi)^{p/2}\,\big|{-H_\gamma(\hat\theta_\gamma)}\big|^{-1/2}\exp\{D_\gamma(y;\hat\theta_\gamma)\}\,\pi(\hat\theta_\gamma), \qquad (3)$$
where $\hat\theta_\gamma$ is the M-estimator given by
$$\hat\theta_\gamma = \operatorname*{argmax}_{\theta\in\Theta} D_\gamma(y;\theta),$$
and $H_\gamma(\hat\theta_\gamma)=\partial^2 D_\gamma(y;\theta)/\partial\theta\,\partial\theta^{\top}\big|_{\theta=\hat\theta_\gamma}$ is the Hessian matrix at $\hat\theta_\gamma$. Then, we have the following approximation, whose proof is deferred to the Appendix.
Proposition 1.
Under some regularity conditions, it follows that
$$\frac{\partial}{\partial y_i}\log m_\gamma(y) = D'_{\gamma,i}(\hat\theta_\gamma) + o_p(1), \qquad \frac{\partial^2}{\partial y_i^2}\log m_\gamma(y) = D''_{\gamma,i}(\hat\theta_\gamma) + o_p(1),$$
where $D'_{\gamma,i}(\theta) = \partial D_\gamma(y_i;\theta)/\partial y_i$ and $D''_{\gamma,i}(\theta) = \partial^2 D_\gamma(y_i;\theta)/\partial y_i^2$.
The above results give the following approximation of the original H-score:
$$\mathcal{H}_n(\gamma) \approx H_n(\gamma) \equiv \sum_{i=1}^n\left[2\,D''_{\gamma,i}(\hat\theta_\gamma)+\big\{D'_{\gamma,i}(\hat\theta_\gamma)\big\}^2\right], \qquad (4)$$
which satisfies $\mathcal{H}_n(\gamma)=H_n(\gamma)+o_p(1)$ as $n\to\infty$. We then define the optimal $\gamma$ as
$$\hat\gamma = \operatorname*{argmin}_{\gamma} H_n(\gamma).$$
Existing selection strategies for $\gamma$ mostly use the asymptotic variance of $\hat\theta_\gamma$. For example, under the density power divergence, Warwick and Jones (2005) and Basak et al. (2021) suggested using an asymptotic approximation of the mean squared error of $\hat\theta_\gamma$. However, the computation of the asymptotic variance is not straightforward, especially when an additional penalty function is incorporated into the objective function or the dimension of $\theta$ is large. On the other hand, the proposed criterion (4) does not require the computation of the asymptotic variance but only needs the derivatives of the robust divergence with respect to $y_i$. Furthermore, it should be noted that the proposed criterion (4) can be applied to a variety of robust divergences.
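To make the procedure concrete, the following is a minimal sketch of the selection rule in Python. The names fit, d1 and d2 are hypothetical placeholders for a user-supplied M-estimation routine and the two partial derivatives of the chosen divergence; they are not part of the paper.

```python
import numpy as np

def select_gamma(y, fit, d1, d2, grid):
    """Select the tuning parameter by minimizing the approximate H-score (4).

    fit(y, g)       -> M-estimate theta maximizing D_g(y; theta)
    d1(y, theta, g) -> per-observation dD_g(y_i; theta)/dy_i (vectorized over y)
    d2(y, theta, g) -> per-observation d^2 D_g(y_i; theta)/dy_i^2 (vectorized over y)
    """
    scores = []
    for g in grid:
        theta = fit(y, g)  # refit the model for each candidate gamma
        scores.append(np.sum(2.0 * d2(y, theta, g) + d1(y, theta, g) ** 2))
    scores = np.asarray(scores)
    return grid[int(np.argmin(scores))], scores
```

Concrete derivative expressions for the density power divergence and the $\gamma$-divergence are given in the next section.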
Possible robust divergences to consider
We here provide detailed expressions for the proposed criterion (4) under some robust divergences. For simplicity, we focus on two robust divergences that can be empirically estimated from the data. Still, the proposed method could be applied to other divergences such as the Hellinger divergence (Devroye and Gyorfi, 1985) or the αβ-divergence (Cichocki et al., 2011). In what follows, we shall use the notations $f'_{\theta}(y)=\partial f(y;\theta)/\partial y$ and $f''_{\theta}(y)=\partial^2 f(y;\theta)/\partial y^2$.
Density power divergence

The density power divergence (Basu et al., 1998) for a statistical model $f(y;\theta)$ admits the additive form $D_\gamma(y;\theta)=\sum_{i=1}^n D_\gamma(y_i;\theta)$ with
$$D_\gamma(y_i;\theta) = \frac{1}{\gamma} f(y_i;\theta)^{\gamma} - \frac{1}{1+\gamma}\int f(t;\theta)^{1+\gamma}\,dt.$$
Since the integral term does not depend on $y_i$, direct calculation yields
$$D'_{\gamma,i}(\theta) = f(y_i;\theta)^{\gamma-1} f'_{\theta}(y_i), \qquad D''_{\gamma,i}(\theta) = (\gamma-1) f(y_i;\theta)^{\gamma-2}\{f'_{\theta}(y_i)\}^{2} + f(y_i;\theta)^{\gamma-1} f''_{\theta}(y_i),$$
which can be plugged into the criterion (4).
$\gamma$-divergence
The original form of the $\gamma$-divergence (Fujisawa and Eguchi, 2008) for a statistical model $f(y;\theta)$ is given by
$$\frac{1}{\gamma}\log\left\{\frac{1}{n}\sum_{i=1}^{n} f(y_i;\theta)^{\gamma}\right\} - \frac{1}{1+\gamma}\log\int f(t;\theta)^{1+\gamma}\,dt,$$
which is not of an additive form. However, the maximization of the above function with respect to $\theta$ is equivalent to the maximization of the transformed version of the $\gamma$-divergence, $\tilde{D}_\gamma(y;\theta)=\sum_{i=1}^n \tilde{D}_\gamma(y_i;\theta)$, where
$$\tilde{D}_\gamma(y_i;\theta) = \frac{1}{\gamma} f(y_i;\theta)^{\gamma}\, c_\gamma(\theta).$$
Then, we have
$$D'_{\gamma,i}(\theta) = c_\gamma(\theta)\, f(y_i;\theta)^{\gamma-1} f'_{\theta}(y_i), \qquad D''_{\gamma,i}(\theta) = c_\gamma(\theta)\left[(\gamma-1) f(y_i;\theta)^{\gamma-2}\{f'_{\theta}(y_i)\}^{2} + f(y_i;\theta)^{\gamma-1} f''_{\theta}(y_i)\right],$$
where $c_\gamma(\theta) = \left\{\int f(t;\theta)^{1+\gamma}\,dt\right\}^{-\gamma/(1+\gamma)}$.
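As a quick sanity check of these expressions, one can compare the analytic first derivative with a finite-difference approximation. A small sketch in Python for a normal model (the model choice, points, and tolerances are illustrative assumptions, not from the paper):

```python
import numpy as np
from scipy.stats import norm

def gamma_div_term(y, mu, sigma, g):
    """Transformed gamma-divergence term (1/g) f(y)^g c_g for N(mu, sigma^2)."""
    # for the normal model, int f^{1+g} = (2*pi*sigma^2)^{-g/2} (1+g)^{-1/2}
    c = ((2 * np.pi * sigma**2) ** (-g / 2) / np.sqrt(1 + g)) ** (-g / (1 + g))
    return norm.pdf(y, mu, sigma) ** g * c / g

def d1_analytic(y, mu, sigma, g):
    """Analytic derivative with respect to y: c * f^{g-1} * f'."""
    c = ((2 * np.pi * sigma**2) ** (-g / 2) / np.sqrt(1 + g)) ** (-g / (1 + g))
    return -c * (y - mu) / sigma**2 * norm.pdf(y, mu, sigma) ** g

# central finite-difference check at a few points
y, mu, sigma, g, eps = np.array([-1.0, 0.5, 2.0]), 0.0, 1.0, 0.3, 1e-6
fd = (gamma_div_term(y + eps, mu, sigma, g)
      - gamma_div_term(y - eps, mu, sigma, g)) / (2 * eps)
print(np.allclose(fd, d1_analytic(y, mu, sigma, g)))  # True
```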
Numerical examples
Normal distribution with density power divergence
We first consider a simple example of robust estimation of the normal population mean under unknown variance. Let $y_1,\ldots,y_n$ be sampled observations and suppose we fit the normal model $N(\mu,\sigma^2)$ to the data. The density power divergence of the model is given by
$$D_\gamma(y;\mu,\sigma^2) = \sum_{i=1}^n \frac{1}{\gamma}\,\phi(y_i;\mu,\sigma^2)^{\gamma} - \frac{n}{1+\gamma}\,(2\pi\sigma^2)^{-\gamma/2}(1+\gamma)^{-1/2},$$
where $\phi(y;\mu,\sigma^2)$ is the density function of $N(\mu,\sigma^2)$. In this case, the criterion (4) is expressed as
$$H_n(\gamma) = \sum_{i=1}^n\left[2\,\phi(y_i;\hat\mu_\gamma,\hat\sigma^2_\gamma)^{\gamma}\left\{\frac{\gamma(y_i-\hat\mu_\gamma)^2}{\hat\sigma^4_\gamma} - \frac{1}{\hat\sigma^2_\gamma}\right\} + \frac{(y_i-\hat\mu_\gamma)^2}{\hat\sigma^4_\gamma}\,\phi(y_i;\hat\mu_\gamma,\hat\sigma^2_\gamma)^{2\gamma}\right],$$
where $\hat\mu_\gamma$ and $\hat\sigma^2_\gamma$ are the estimators based on the density power divergence.
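To illustrate the whole procedure for this example, the following is a minimal Python sketch assuming NumPy and SciPy; the toy data, grid, and starting values are arbitrary illustrative choices, not those used in our experiments:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def dpd_objective(params, y, gamma):
    """Negative density power divergence for N(mu, sigma^2) (to be minimized)."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)  # log scale keeps the optimization unconstrained
    f = norm.pdf(y, mu, sigma)
    bias_term = (2 * np.pi * sigma**2) ** (-gamma / 2) / np.sqrt(1 + gamma)
    return -(np.sum(f**gamma) / gamma - len(y) * bias_term / (1 + gamma))

def h_score(y, gamma):
    """Approximate H-score H_n(gamma) for the normal model under DPD."""
    res = minimize(dpd_objective, x0=[np.median(y), np.log(y.std())],
                   args=(y, gamma), method="Nelder-Mead")
    mu, sigma = res.x[0], np.exp(res.x[1])
    f = norm.pdf(y, mu, sigma)
    d2 = f**gamma * (gamma * (y - mu) ** 2 / sigma**4 - 1 / sigma**2)  # D''
    d1_sq = (y - mu) ** 2 / sigma**4 * f ** (2 * gamma)                # (D')^2
    return np.sum(2 * d2 + d1_sq)

# toy contaminated sample: N(0,1) with 10% outliers around 6
rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0, 1, 90), rng.normal(6, 0.5, 10)])
grid = np.arange(0.05, 1.0, 0.05)
scores = [h_score(y, g) for g in grid]
print("selected gamma:", grid[int(np.argmin(scores))])
```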
We first demonstrate the proposed selection strategy through simulation studies. We simulated $y_1,\ldots,y_n$ from the normal distribution and then replaced a subset of the observations with outliers, where $\omega$ denotes the contamination ratio. We adopted four settings, $\omega\in\{0, 0.05, 0.1, 0.15\}$. Using each simulated dataset, the optimal $\gamma$ is selected over a grid of candidate values by minimizing the criterion $H_n(\gamma)$, and we obtain the adaptive estimator of $\mu$. For comparison, we also employed two selection methods, OWJ (Warwick and Jones, 2005) and IWJ (Basak et al., 2021), in which the optimal value of $\gamma$ is selected via an asymptotic approximation of the mean squared error of the estimator. A fixed value of $\gamma$ was used to compute the pilot estimator that must be specified in the two methods. Furthermore, we also computed the estimator under three fixed values of $\gamma$. Using an estimator of the asymptotic variance of $\hat\mu_\gamma$ (e.g. Basak et al., 2021), we also computed the Wald-type confidence interval of $\mu$. Based on 5000 simulated datasets, we obtained the square root of the mean squared error (RMSE) of the point estimator as well as the coverage probability (CP) and average length (AL) of the interval estimation. The results are reported in Table 1. It is observed that the use of a small fixed $\gamma$ may lead to unsatisfactory results when the contamination is heavy. It can also be seen that the use of a relatively large fixed $\gamma$ can make the estimation inefficient. On the other hand, the proposed method adaptively selects a suitable value of $\gamma$, as the average of the selected $\gamma$ increases with the contamination ratio $\omega$ (Table 2), and it provides reasonable performance in all the scenarios.
Table 1: RMSE, coverage probability (CP) and average length (AL) for the proposed method (HS), the two existing methods (OWJ and IWJ), and three fixed values of $\gamma$, based on 5000 simulated datasets.

| | $\omega$ | HS | OWJ | IWJ | fixed $\gamma$ (small) | fixed $\gamma$ (medium) | fixed $\gamma$ (large) |
|---|---|---|---|---|---|---|---|
| RMSE | 0 | 10.3 | 10.6 | 10.3 | 10.2 | 10.5 | 11.0 |
| | 0.05 | 10.7 | 10.9 | 10.7 | 14.4 | 10.8 | 11.3 |
| | 0.1 | 11.0 | 11.1 | 11.0 | 44.7 | 11.1 | 11.5 |
| | 0.15 | 11.4 | 11.4 | 11.4 | 82.6 | 11.5 | 11.8 |
| CP | 0 | 94.8 | 93.8 | 94.2 | 94.6 | 94.5 | 94.4 |
| | 0.05 | 94.7 | 93.9 | 94.1 | 93.2 | 94.2 | 94.1 |
| | 0.1 | 94.3 | 94.1 | 94.2 | 36.7 | 94.2 | 94.4 |
| | 0.15 | 94.1 | 93.7 | 93.8 | 0.1 | 93.6 | 94.1 |
| AL | 0 | 40.6 | 40.1 | 39.8 | 40.4 | 40.7 | 42.6 |
| | 0.05 | 41.7 | 41.0 | 40.9 | 50.4 | 41.2 | 43.3 |
| | 0.1 | 42.5 | 41.9 | 41.8 | 79.5 | 42.0 | 44.1 |
| | 0.15 | 43.4 | 42.9 | 42.9 | 100.4 | 43.1 | 45.1 |
Table 2: Average values of the selected $\gamma$ under the proposed method (HS) and the two existing methods (OWJ and IWJ).

| $\omega$ | HS | OWJ | IWJ |
|---|---|---|---|
| 0 | 0.088 | 0.212 | 0.158 |
| 0.05 | 0.169 | 0.260 | 0.230 |
| 0.1 | 0.217 | 0.284 | 0.267 |
| 0.15 | 0.252 | 0.302 | 0.294 |
We next apply the proposed method to Simon Newcomb's measurements of the speed of light, motivated by applications in Stigler (1977); Basu et al. (1998); Basak et al. (2021). We searched for the optimal $\gamma$ over a grid of candidate values, and the H-scores are shown in the left panel of Figure 1. The obtained optimal value is substantially smaller than the values selected by the existing methods as reported in Basak et al. (2021). Since the method proposed in Basak et al. (2021) requires a pilot estimate and its results depend significantly on it, we believe that our result is more reasonable. In fact, it is unlikely that a large value of $\gamma$ is necessary for a dataset that contains only two outliers. As shown in the right panel of Figure 1, the estimated density functions are almost identical under the two choices of $\gamma$. However, when the estimates are almost identical, it is preferable to adopt the smaller value of $\gamma$ in terms of statistical efficiency.
Figure 1: H-scores over the grid of $\gamma$ (left) and estimated density functions under the selected and existing choices of $\gamma$ (right) for Simon Newcomb's speed-of-light data.
Regularized linear regression with $\gamma$-divergence
Note that the proposed criterion can be used when regularization terms are introduced in the objective function, while the existing methods requiring the asymptotic variance of the estimator are not simply applicable. We demonstrate this advantage through regularized linear regression with the $\gamma$-divergence (Kawashima and Fujisawa, 2017). Let $y_i$ and $x_i$ be a response variable and a $p$-dimensional vector of covariates, respectively, for $i=1,\ldots,n$. The model is $y_i = x_i^{\top}\beta + \varepsilon_i$ with $\varepsilon_i\sim N(0,\sigma^2)$. Then, the transformed $\gamma$-divergence is $\tilde{D}_\gamma(y;\beta,\sigma^2)=\sum_{i=1}^n \tilde{D}_\gamma(y_i;\beta,\sigma^2)$ with $\tilde{D}_\gamma(y_i;\beta,\sigma^2)=\gamma^{-1}\phi(y_i;x_i^{\top}\beta,\sigma^2)^{\gamma}\, c_\gamma(\sigma^2)$, and the H-score is expressed as
$$H_n(\gamma) = \sum_{i=1}^n\left[2\,c_\gamma(\hat\sigma^2)\,\phi_i^{\gamma}\left\{\frac{\gamma(y_i-x_i^{\top}\hat\beta)^2}{\hat\sigma^4} - \frac{1}{\hat\sigma^2}\right\} + c_\gamma(\hat\sigma^2)^2\,\phi_i^{2\gamma}\,\frac{(y_i-x_i^{\top}\hat\beta)^2}{\hat\sigma^4}\right],$$
where $\phi_i = \phi(y_i;x_i^{\top}\hat\beta,\hat\sigma^2)$ and $c_\gamma(\sigma^2)=\{(2\pi\sigma^2)^{-\gamma/2}(1+\gamma)^{-1/2}\}^{-\gamma/(1+\gamma)}$.
Here $\hat\beta$ and $\hat\sigma^2$ are estimated as the minimizer of the following regularized (negative) $\gamma$-divergence:
$$-\tilde{D}_\gamma(y;\beta,\sigma^2) + \lambda\sum_{j=1}^{p}|\beta_j|,$$
where $\lambda$ is an additional tuning parameter that can be optimized via 10-fold cross-validation.
We use the R package gamreg (Kawashima and Fujisawa, 2017) to estimate $\beta$ and $\sigma^2$ under a given $\gamma$.
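Since gamreg performs the fitting in R, the criterion itself can be evaluated separately once $\hat\beta$ and $\hat\sigma^2$ are available from any robust fitting routine. A sketch of the H-score evaluation for this model in Python (the function name and interface are ours, not gamreg's):

```python
import numpy as np
from scipy.stats import norm

def h_score_gamma_reg(y, X, beta, sigma2, g):
    """Approximate H-score for linear regression under the transformed
    gamma-divergence, given fitted beta and sigma2 (from any robust fit)."""
    mu = X @ beta
    c = ((2 * np.pi * sigma2) ** (-g / 2) / np.sqrt(1 + g)) ** (-g / (1 + g))
    f = norm.pdf(y, mu, np.sqrt(sigma2))
    d2 = c * f**g * (g * (y - mu) ** 2 / sigma2**2 - 1 / sigma2)  # D''
    d1_sq = (c * f**g) ** 2 * (y - mu) ** 2 / sigma2**2           # (D')^2
    return np.sum(2 * d2 + d1_sq)
```

Evaluating this function over a grid of $\gamma$ values (refitting the regularized model for each candidate) and taking the minimizer reproduces the selection rule of Section 2.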
We apply the aforementioned method to the well-known Boston housing dataset (Harrison Jr and Rubinfeld, 1978). In this analysis, we included the original 13 covariates and 12 quadratic terms of the covariates, excluding the one binary covariate. We searched for the optimal $\gamma$ over a grid of candidate values, and the estimated H-scores are shown in the left panel of Figure 2. For comparison, we also estimated the regression coefficients under two fixed values of $\gamma$, one of which was $\gamma=0$; note that $\gamma=0$ reduces to the (non-robust) standard regularized linear regression. The scatter plots of the estimated standardized coefficients under the optimal $\gamma$ against those under the two fixed choices of $\gamma$ are shown in the right panel of Figure 2. The estimates under the optimal $\gamma$ and the moderate fixed $\gamma$ are comparable, while there are substantial differences between the estimates under the optimal $\gamma$ and $\gamma=0$, indicating that a certain amount of robustness is required for this dataset.
Figure 2: Estimated H-scores over the grid of $\gamma$ (left) and scatter plots of the estimated standardized coefficients under the optimal $\gamma$ against those under the two fixed choices of $\gamma$ (right) for the Boston housing data.
Concluding Remarks
We proposed a new criterion for selecting the optimal tuning parameter in robust divergence, using the Hyvarinen score for unnormalized models induced by robust divergence. The proposed criterion does not require the asymptotic variance formula of the estimator that is needed in the existing selection methods. Although we focused on the univariate and continuous case, the proposed criterion can also be applied to multivariate or discrete distributions, where finite differences should replace the derivatives under discrete distributions. Applications of the proposed score to such cases would also be valuable, and we leave them to future work.
Appendix
Appendix A Proof of Proposition 1
We first assume standard regularity conditions in the M-estimation theory (e.g. Van der Vaart, 2000) for the objective function $D_\gamma(y;\theta)$. We also assume that $D_\gamma(y_i;\theta)$ is twice continuously differentiable with respect to $y_i$, and that $\pi(\theta)$ is continuously differentiable with a bounded derivative.
We first note that $\hat\theta_\gamma$ is a solution of the following estimating equation:
$$\frac{\partial}{\partial\theta}D_\gamma(y;\theta)\bigg|_{\theta=\hat\theta_\gamma} = \sum_{i=1}^n \frac{\partial}{\partial\theta}D_\gamma(y_i;\theta)\bigg|_{\theta=\hat\theta_\gamma} = 0.$$
From the implicit function theorem, it follows that
$$\frac{\partial\hat\theta_\gamma}{\partial y_i} = -H_\gamma(\hat\theta_\gamma)^{-1}\,\frac{\partial^2 D_\gamma(y_i;\theta)}{\partial\theta\,\partial y_i}\bigg|_{\theta=\hat\theta_\gamma},$$
where we defined $H_\gamma(\theta) = \partial^2 D_\gamma(y;\theta)/\partial\theta\,\partial\theta^{\top}$, as in Section 2. Note that $\partial\hat\theta_\gamma/\partial y_i = O_p(n^{-1})$ under large $n$. From (3), the first-order partial derivative of the marginal log-likelihood can be approximated as
$$\frac{\partial}{\partial y_i}\log m_\gamma(y) \approx \frac{\partial}{\partial y_i} D_\gamma(y;\hat\theta_\gamma) + \left(\frac{\partial\hat\theta_\gamma}{\partial y_i}\right)^{\top}\frac{\partial \log\pi(\hat\theta_\gamma)}{\partial\theta} - \frac{1}{2}\,\frac{\partial}{\partial y_i}\log\det\{-H_\gamma(\hat\theta_\gamma)\}. \qquad (5)$$
Under the regularity conditions for $\pi(\theta)$, it follows that
$$\left(\frac{\partial\hat\theta_\gamma}{\partial y_i}\right)^{\top}\frac{\partial \log\pi(\hat\theta_\gamma)}{\partial\theta} = o_p(1)$$
under large $n$. From the same argument, we can also show that $\partial\log\det\{-H_\gamma(\hat\theta_\gamma)\}/\partial y_i = o_p(1)$. Regarding the first term in (5), we have
$$\frac{\partial}{\partial y_i} D_\gamma(y;\hat\theta_\gamma) = D'_{\gamma,i}(\hat\theta_\gamma) + \left(\frac{\partial\hat\theta_\gamma}{\partial y_i}\right)^{\top}\frac{\partial D_\gamma(y;\theta)}{\partial\theta}\bigg|_{\theta=\hat\theta_\gamma} = D'_{\gamma,i}(\hat\theta_\gamma) \qquad (6)$$
because $\partial D_\gamma(y;\theta)/\partial\theta$ is a score function and vanishes at $\theta=\hat\theta_\gamma$. Applying the same argument to the second-order derivative gives $\partial^2\log m_\gamma(y)/\partial y_i^2 = D''_{\gamma,i}(\hat\theta_\gamma) + o_p(1)$, which completes the proof.
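As an informal numerical check of Proposition 1, one can compare the finite-difference derivative of the log marginal likelihood with $D'_{\gamma,i}(\hat\theta_\gamma)$ for a simplified one-parameter normal model with known unit variance under the density power divergence. A sketch (the prior, sample size, and integration range are arbitrary illustrative choices):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def dpd(y, mu, g):
    """Density power divergence for N(mu, 1) with known unit variance."""
    f = norm.pdf(y, mu, 1.0)
    bias = (2 * np.pi) ** (-g / 2) / np.sqrt(1 + g)
    return np.sum(f**g) / g - len(y) * bias / (1 + g)

def log_marginal(y, g):
    """log m_gamma(y) with a N(0, 10^2) prior on mu, stabilized at the mode."""
    mu_hat = minimize_scalar(lambda m: -dpd(y, m, g),
                             bounds=(-10, 10), method="bounded").x
    d_max = dpd(y, mu_hat, g)  # subtract the mode value to avoid overflow
    val, _ = quad(lambda m: np.exp(dpd(y, m, g) - d_max) * norm.pdf(m, 0, 10),
                  -20, 20)
    return d_max + np.log(val)

rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, 100)
g, i, eps = 0.3, 0, 1e-4

# finite-difference derivative of log m_gamma(y) with respect to y_i
yp, ym = y.copy(), y.copy()
yp[i] += eps
ym[i] -= eps
fd = (log_marginal(yp, g) - log_marginal(ym, g)) / (2 * eps)

# analytic leading term D'_{gamma,i} evaluated at the M-estimate
mu_hat = minimize_scalar(lambda m: -dpd(y, m, g),
                         bounds=(-10, 10), method="bounded").x
d1 = -(y[i] - mu_hat) * norm.pdf(y[i], mu_hat, 1.0) ** g
print(fd, d1)  # the two values should be close for moderately large n
```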
References
- Basak et al. (2021) Basak, S., A. Basu, and M. Jones (2021). On the ‘optimal’ density power divergence tuning parameter. Journal of Applied Statistics 48(3), 536–556.
- Basu et al. (1998) Basu, A., I. R. Harris, N. L. Hjort, and M. Jones (1998). Robust and efficient estimation by minimising a density power divergence. Biometrika 85(3), 549–559.
- Cichocki et al. (2011) Cichocki, A., S. Cruces, and S.-i. Amari (2011). Generalized alpha-beta divergences and their application to robust nonnegative matrix factorization. Entropy 13(1), 134–170.
- Dawid and Musio (2015) Dawid, A. P. and M. Musio (2015). Bayesian model selection based on proper scoring rules. Bayesian Analysis 10(2), 479–499.
- Devroye and Gyorfi (1985) Devroye, L. and L. Gyorfi (1985). Nonparametric density estimation: The L1 View. John Wiley.
- Fujisawa and Eguchi (2008) Fujisawa, H. and S. Eguchi (2008). Robust parameter estimation with a small bias against heavy contamination. Journal of Multivariate Analysis 99(9), 2053–2081.
- Geisser et al. (1990) Geisser, S., J. Hodges, S. Press, and A. Zellner (1990). The validity of posterior expansions based on Laplace’s method. Bayesian and Likelihood Methods in Statistics and Econometrics 7, 473.
- Harrison Jr and Rubinfeld (1978) Harrison Jr, D. and D. L. Rubinfeld (1978). Hedonic housing prices and the demand for clean air. Journal of Environmental Economics and Management 5(1), 81–102.
- Jewson and Rossell (2021) Jewson, J. and D. Rossell (2021). General Bayesian loss function selection and the use of improper models. arXiv preprint arXiv:2106.01214.
- Kawashima and Fujisawa (2017) Kawashima, T. and H. Fujisawa (2017). Robust and sparse regression via γ-divergence. Entropy 19(11), 608.
- Matsuda et al. (2019) Matsuda, T., M. Uehara, and A. Hyvarinen (2019). Information criteria for non-normalized models. arXiv preprint arXiv:1905.05976.
- Shao et al. (2019) Shao, S., P. E. Jacob, J. Ding, and V. Tarokh (2019). Bayesian model comparison with the Hyvärinen score: Computation and consistency. Journal of the American Statistical Association.
- Stigler (1977) Stigler, S. M. (1977). Do robust estimators work with real data? The Annals of Statistics, 1055–1098.
- Van der Vaart (2000) Van der Vaart, A. W. (2000). Asymptotic Statistics, Volume 3. Cambridge University Press.
- Warwick and Jones (2005) Warwick, J. and M. Jones (2005). Choosing a robustness tuning parameter. Journal of Statistical Computation and Simulation 75(7), 581–588.
- Yonekura and Sugasawa (2021) Yonekura, S. and S. Sugasawa (2021). Adaptation of the tuning parameter in general Bayesian inference with robust divergence. arXiv preprint arXiv:2106.06902.