
On Selection Criteria for the Tuning Parameter in Robust Divergence

Shonosuke Sugasawa1,3 and Shouto Yonekura2,3

1Center for Spatial Information Science, The University of Tokyo
2Graduate School of Social Sciences, Chiba University
3Nospare Inc.

Abstract

While robust divergences such as the density power divergence and the \gamma-divergence are helpful for robust statistical inference in the presence of outliers, the tuning parameter that controls the degree of robustness is typically chosen in a rule-of-thumb manner, which may lead to inefficient inference. We here propose a selection criterion based on an asymptotic approximation of the Hyvarinen score applied to an unnormalized model defined by the robust divergence. The proposed criterion only requires the first and second-order partial derivatives of the assumed density function with respect to the observations, which can be easily computed regardless of the number of parameters. We demonstrate the usefulness of the proposed method via numerical studies using normal distributions and regularized linear regression.


Key words: efficiency; Hyvarinen score; outlier; unnormalized model

Introduction

Data with outliers naturally arise in diverse areas. In the analysis of such data, statistical models combined with robust divergences are known to be powerful and are used regularly. In particular, the density power divergence (Basu et al., 1998) and the \gamma-divergence (Fujisawa and Eguchi, 2008) have been routinely used in this context due to their robustness properties, although other divergences now exist. A robust divergence generally involves a tuning parameter that controls robustness under model misspecification or contamination. Basu et al. (1998) noted that there is a trade-off between estimation efficiency and strength of robustness; thereby, a suitable choice of the tuning parameter is crucial in practice. However, well-known selection strategies such as cross-validation are not straightforward under contamination, so one must rely on trial and error to find a reasonable value of the tuning parameter.

To select the tuning parameter, we here propose a simple but novel selection criterion based on an asymptotic approximation of the Hyvarinen score (Shao et al., 2019; Dawid and Musio, 2015) applied to unnormalized models defined by robust divergences. Typical existing methods (Warwick and Jones, 2005; Basak et al., 2021) choose a tuning parameter based on an asymptotic approximation of the mean squared error but have the drawback of requiring a pilot estimator and an analytical expression of the asymptotic variance. Besides, these works are essentially limited to the simple normal distribution and simple linear regression. Our proposed method has the following advantages over the existing studies.

  1.

    Our method does not require an explicit representation of the asymptotic variance. Therefore, it can be applied to rather complex statistical models, such as multivariate models, which seem difficult to handle with the previous methods.

  2.

    The existing studies need a pilot estimate to optimize the tuning parameter, so the resulting estimates may strongly depend on the pilot estimate. On the other hand, our method does not require a pilot estimate and is stable and statistically efficient.

  3.

    Although our proposed method is based on a simple asymptotic expansion, it is statistically more meaningful and its results are easier to interpret than those of existing methods, because it builds on the theory of parameter estimation for unnormalized statistical models.

Through numerical studies under simple settings, we show that the existing methods can be sensitive to the pilot estimate and tend to select an unnecessarily large value of the tuning parameter, leading to a loss of efficiency compared with the proposed method. Moreover, the proposed selection method can still be applied to estimation procedures in which the asymptotic variance is difficult to compute. As an illustrative example of such a case, we consider robust linear regression with the \gamma-divergence and \ell_{1}-regularization, where the existing approach is infeasible.

As related works, there are two information criteria using the Hyvarinen score. Matsuda et al. (2019) proposed AIC-type information criteria for unnormalized models by deriving an asymptotically unbiased estimator of the Hyvarinen score, but it does not allow unnormalized models whose normalizing constants do not exist. Hence, the criterion cannot be applied to the current situation. On the other hand, Jewson and Rossell (2021) proposed an information criterion via a Laplace approximation of the marginal likelihood in which the potential function is constructed by the Hyvarinen score. Although Jewson and Rossell (2021) covers unnormalized models with possibly diverging normalizing constants, the estimator used in the criterion is entirely different from the one defined as the maximizer of the robust divergence; thereby, the criterion does not apply to tuning parameter selection for robust divergences either. Moreover, Yonekura and Sugasawa (2021) developed a robust sequential Monte Carlo sampler based on robust divergences in which \gamma is adaptively selected; however, it does not provide selection of \gamma in a frequentist framework.

The rest of the paper is organized as follows. Section 2 introduces a new selection criterion based on the Hyvarinen score. We then provide concrete expressions of the proposed criterion under density power divergence and \gamma-divergence in Section 3. We numerically illustrate the proposed method in two situations in Section 4. Concluding remarks are given in Section 5.

Tuning parameter selection of robust divergence

Suppose we observe y_{1},\ldots,y_{n} as realizations from a true distribution or data-generating process G, and we want to fit a statistical model \{f_{\theta}:\theta\in\Theta\}, where \Theta\subseteq\mathbb{R}^{d} for some d\geq 1. Further assume that the density of G is expressed as (1-\omega)f_{\theta^{\ast}}+\omega\delta, where \delta is a contaminating distribution that produces outliers in the observations. Our goal is to make statistical inference on \theta^{\ast} by successfully eliminating the information of the outliers. To this end, robust divergences such as the density power divergence (Basu et al., 1998) and the \gamma-divergence (Fujisawa and Eguchi, 2008) are typically used for robust inference on \theta^{\ast}. Let y=(y_{1},\ldots,y_{n}) be the vector of observations and D_{\gamma}(y;\theta) be a (negative) robust divergence with a tuning parameter \gamma. We assume that the robust divergence has an additive form, namely D_{\gamma}(y;\theta)=\sum_{i=1}^{n}D_{\gamma}(y_{i};\theta), which is satisfied by well-known robust divergences as discussed in Section 3.

For selecting the tuning parameter \gamma, our main idea is to regard L_{\gamma}(y_{i};\theta)\equiv\exp\{D_{\gamma}(y_{i};\theta)\} as an unnormalized statistical model whose normalizing constant may not exist. Recently, Jewson and Rossell (2021) pointed out that the role of such unnormalized models can be understood in terms of relative probability. For such a model, we employ the Hyvarinen score (H-score) used in Bayesian model selection (Shao et al., 2019; Dawid and Musio, 2015), defined as

H_{n}^{\ast}(\gamma)\equiv\frac{1}{n}\sum_{i=1}^{n}\left\{2\frac{\partial^{2}}{\partial y_{i}^{2}}\log L_{\gamma}^{(m)}(y)+\left(\frac{\partial}{\partial y_{i}}\log L_{\gamma}^{(m)}(y)\right)^{2}\right\}, (1)

where L_{\gamma}^{(m)}(y) is the marginal likelihood given by

L_{\gamma}^{(m)}(y)=\int\pi(\theta)\prod_{i=1}^{n}L_{\gamma}(y_{i};\theta)d\theta, (2)

with some prior distribution \pi(\theta). We consider an asymptotic approximation of the H-score (1) for large sample sizes. Under some regularity conditions (e.g. Geisser et al., 1990), the Laplace approximation of (2) is

L_{\gamma}^{(m)}(y)\approx(2\pi)^{d/2}\pi({\widehat{{\theta}}}_{\gamma})|H({\widehat{{\theta}}}_{\gamma})|^{-1/2}\prod_{i=1}^{n}L_{\gamma}(y_{i};{\widehat{{\theta}}}_{\gamma}), (3)

where {\widehat{{\theta}}}_{\gamma} is the M-estimator given by

{\widehat{{\theta}}}_{\gamma}={\rm argmax}_{\theta}\sum_{i=1}^{n}\log L_{\gamma}(y_{i};\theta),

and H({\widehat{{\theta}}}_{\gamma}) is the Hessian matrix at \theta={\widehat{{\theta}}}_{\gamma}. Then, we have the following approximation, whose proof is deferred to the Appendix.

Proposition 1.

Under some regularity conditions, it follows that

\frac{\partial}{\partial y_{i}}\log L_{\gamma}^{(m)}(y)=D_{\gamma}^{\prime}(y_{i};{\widehat{{\theta}}}_{\gamma})+o_{p}(1),\ \ \ \ \frac{\partial^{2}}{\partial y_{i}^{2}}\log L_{\gamma}^{(m)}(y)=D_{\gamma}^{\prime\prime}(y_{i};{\widehat{{\theta}}}_{\gamma})+o_{p}(1),

where D_{\gamma}^{\prime}(y_{i};\theta)=\partial D_{\gamma}(y_{i};\theta)/\partial y_{i} and D_{\gamma}^{\prime\prime}(y_{i};\theta)=\partial^{2}D_{\gamma}(y_{i};\theta)/\partial y_{i}^{2}.

The above results give the following approximation of the original H-score:

H_{n}(\gamma)=\frac{1}{n}\sum_{i=1}^{n}\left\{2D_{\gamma}^{\prime\prime}(y_{i};{\widehat{{\theta}}}_{\gamma})+\left(D_{\gamma}^{\prime}(y_{i};{\widehat{{\theta}}}_{\gamma})\right)^{2}\right\}, (4)

which satisfies H_{n}(\gamma)=H_{n}^{\ast}(\gamma)+o_{p}(1) as n\to\infty. We then define the optimal \gamma as

\gamma_{opt}={\rm argmin}_{\gamma}H_{n}(\gamma).

Existing selection strategies for \gamma mostly use the asymptotic variance of {\widehat{{\theta}}}_{\gamma}. For example, under the density power divergence, Warwick and Jones (2005) and Basak et al. (2021) suggested using an asymptotic approximation of the mean squared error of {\widehat{{\theta}}}_{\gamma}. However, computing the asymptotic variance is not straightforward, especially when an additional penalty function is incorporated into the objective function or the dimension of \theta is large. On the other hand, the proposed criterion (4) does not require the asymptotic variance but only the derivatives of the robust divergence with respect to y_{i}. Furthermore, the proposed criterion (4) can be applied to a variety of robust divergences.
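For concreteness, the selection rule can be sketched as follows. This is a minimal illustration in Python, not an implementation from any existing package: the names h_score and select_gamma are ours, and fit, d1 and d2 are assumed user-supplied callables returning the M-estimator {\widehat{{\theta}}}_{\gamma} and the first and second partial derivatives of D_{\gamma}(y_{i};\theta) with respect to y_{i}.

```python
import numpy as np

def h_score(y, gamma, fit, d1, d2):
    """Approximate H-score (4) for a given gamma.

    fit(y, gamma) returns the M-estimator theta_hat maximizing
    sum_i D_gamma(y_i; theta); d1(y, theta, gamma) and d2(y, theta, gamma)
    return the first and second partial derivatives of D_gamma(y_i; theta)
    with respect to y_i, evaluated elementwise over the observations.
    """
    theta_hat = fit(y, gamma)
    return np.mean(2.0 * d2(y, theta_hat, gamma) + d1(y, theta_hat, gamma) ** 2)

def select_gamma(y, gammas, fit, d1, d2):
    """Return the gamma in the grid minimizing the approximate H-score."""
    scores = [h_score(y, g, fit, d1, d2) for g in gammas]
    return gammas[int(np.argmin(scores))]
```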

Possible robust divergences to consider

We here provide detailed expressions for the proposed criterion (4) under some robust divergences. For simplicity, we focus on two robust divergences that can be empirically estimated from the data. Still, the proposed method could be applied to other divergences such as the Hellinger divergence (Devroye and Gyorfi, 1985) or the \alpha\beta-divergence (Cichocki et al., 2011). In what follows, we use the notation f^{\prime}(y_{i};{\theta})=\partial f(y_{i};\theta)/\partial y_{i} and f^{\prime\prime}(y_{i};{\theta})=\partial^{2}f(y_{i};\theta)/\partial y_{i}^{2}.

Density power divergence

The density power divergence (Basu et al., 1998) for a statistical model f(y_{i};\theta) is

D_{\gamma}(y_{i};\theta)=\frac{1}{\gamma}f(y_{i};\theta)^{\gamma}-\frac{1}{1+\gamma}\int f(t;\theta)^{1+\gamma}dt.

It can be seen that D_{\gamma}(y_{i};\theta)+1-1/\gamma\to\log f(y_{i};\theta) as \gamma\to 0, so the above function can be regarded as an extension of the standard log-likelihood. A straightforward calculation then leads to the following expression of (4):

H_{n}(\gamma)=\sum_{i=1}^{n}\bigg{[}f^{\prime}(y_{i};{\widehat{{\theta}}}_{\gamma})^{2}f(y_{i};{\widehat{{\theta}}}_{\gamma})^{\gamma-2}\left\{2(\gamma-1)+f(y_{i};{\widehat{{\theta}}}_{\gamma})^{\gamma}\right\}+2f(y_{i};{\widehat{{\theta}}}_{\gamma})^{\gamma-1}f^{\prime\prime}(y_{i};{\widehat{{\theta}}}_{\gamma})\bigg{]}.
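For illustration, the expression above can be evaluated as in the following sketch, where the helper name h_score_dpd is ours and f, f1, f2 are assumed to be arrays containing f(y_{i};{\widehat{{\theta}}}_{\gamma}), f^{\prime}(y_{i};{\widehat{{\theta}}}_{\gamma}) and f^{\prime\prime}(y_{i};{\widehat{{\theta}}}_{\gamma}) computed by the user for the fitted model.

```python
import numpy as np

def h_score_dpd(f, f1, f2, gamma):
    """H_n(gamma) under the density power divergence, given arrays of the
    fitted density f(y_i; theta_hat) and its first/second y-derivatives."""
    return np.sum(f1**2 * f**(gamma - 2.0) * (2.0 * (gamma - 1.0) + f**gamma)
                  + 2.0 * f**(gamma - 1.0) * f2)
```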

\gamma-divergence

The original form of the \gamma-divergence (Fujisawa and Eguchi, 2008) for a statistical model f(y_{i};\theta) is given by

\frac{1}{\gamma}\log\left\{\sum_{i=1}^{n}f(y_{i};\theta)^{\gamma}\left(\int f(t;\theta)^{1+\gamma}dt\right)^{-\gamma/(1+\gamma)}\right\},

which is not of an additive form. However, the maximization of the above function with respect to \theta is equivalent to the maximization of the transformed version of the \gamma-divergence, D_{\gamma}(y;\theta)=\sum_{i=1}^{n}D_{\gamma}(y_{i};\theta), where

D_{\gamma}(y_{i};\theta)=\frac{1}{\gamma}f(y_{i};\theta)^{\gamma}\left\{\int f(t;\theta)^{1+\gamma}dt\right\}^{-\gamma/(1+\gamma)}.

Then, we have

H_{n}(\gamma)=\sum_{i=1}^{n}\bigg{[}f^{\prime}(y_{i};{\widehat{{\theta}}}_{\gamma})^{2}f(y_{i};{\widehat{{\theta}}}_{\gamma})^{\gamma-2}\left\{\frac{2(\gamma-1)}{C_{\gamma}({\widehat{{\theta}}}_{\gamma})}+\frac{f(y_{i};{\widehat{{\theta}}}_{\gamma})^{\gamma}}{C_{\gamma}({\widehat{{\theta}}}_{\gamma})^{2}}\right\}+\frac{2f(y_{i};{\widehat{{\theta}}}_{\gamma})^{\gamma-1}f^{\prime\prime}(y_{i};{\widehat{{\theta}}}_{\gamma})}{C_{\gamma}({\widehat{{\theta}}}_{\gamma})}\bigg{]},

where C_{\gamma}(\theta)=\left(\int f(t;\theta)^{1+\gamma}dt\right)^{\gamma/(1+\gamma)}.
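Analogously to the density power divergence case, the criterion under the \gamma-divergence can be evaluated as in the sketch below; the name h_score_gamma_div is ours, f, f1, f2 are arrays of the fitted density and its first and second y-derivatives, and C is the scalar C_{\gamma}({\widehat{{\theta}}}_{\gamma}), which the user must compute (it is available in closed form for the normal models considered later).

```python
import numpy as np

def h_score_gamma_div(f, f1, f2, C, gamma):
    """H_n(gamma) under the gamma-divergence, given arrays f, f1, f2 of the
    fitted density and its first/second y-derivatives, and the scalar
    C = C_gamma(theta_hat) = (int f(t; theta_hat)^(1+gamma) dt)^(gamma/(1+gamma))."""
    return np.sum(f1**2 * f**(gamma - 2.0)
                  * (2.0 * (gamma - 1.0) / C + f**gamma / C**2)
                  + 2.0 * f**(gamma - 1.0) * f2 / C)
```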

Numerical examples

Normal distribution with density power divergence

We first consider a simple example of robust estimation of a normal population mean under unknown variance. Let y_{1},\ldots,y_{n} be the sampled observations and suppose we fit N(\mu,\sigma^{2}) to the data. The density power divergence of the model is given by

D_{\gamma}(y_{i};\mu,\sigma^{2})=\frac{1}{\gamma}\phi(y_{i};\mu,\sigma^{2})^{\gamma}-(2\pi\sigma^{2})^{-\gamma/2}(1+\gamma)^{-3/2},

where \phi(y_{i};\mu,\sigma^{2}) is the density function of N(\mu,\sigma^{2}). In this case, the criterion (4) is expressed as

H_{n}(\gamma)=\sum_{i=1}^{n}\left[\frac{2\left\{\gamma(y_{i}-{\widehat{\mu}}_{\gamma})^{2}-{\widehat{\sigma}}_{\gamma}^{2}\right\}}{{\widehat{\sigma}}_{\gamma}^{4}}\phi(y_{i};{\widehat{\mu}}_{\gamma},{\widehat{\sigma}}_{\gamma}^{2})^{\gamma}+\frac{(y_{i}-{\widehat{\mu}}_{\gamma})^{2}}{{\widehat{\sigma}}_{\gamma}^{4}}\phi(y_{i};{\widehat{\mu}}_{\gamma},{\widehat{\sigma}}_{\gamma}^{2})^{2\gamma}\right],

where {\widehat{\mu}}_{\gamma} and {\widehat{\sigma}}_{\gamma} are the estimators based on the density power divergence.
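For this normal model, the whole procedure (estimation of (\mu,\sigma) by the density power divergence followed by evaluation of H_{n}(\gamma)) can be sketched as follows. The function names dpd_objective and h_score_normal_dpd are ours, and the optimizer and starting values are illustrative choices rather than the settings used in the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def dpd_objective(params, y, gamma):
    # negative density power divergence objective for N(mu, sigma^2)
    mu, log_sigma = params
    sigma = np.exp(log_sigma)           # log-parametrization keeps sigma > 0
    f = norm.pdf(y, mu, sigma)
    bias = (2.0 * np.pi * sigma**2) ** (-gamma / 2.0) * (1.0 + gamma) ** (-1.5)
    return -np.sum(f**gamma / gamma - bias)

def h_score_normal_dpd(y, gamma):
    # DPD estimates of (mu, sigma), then the criterion H_n(gamma) displayed above
    start = [np.median(y), np.log(np.std(y))]
    res = minimize(dpd_objective, start, args=(y, gamma), method="Nelder-Mead")
    mu, sigma = res.x[0], np.exp(res.x[1])
    f = norm.pdf(y, mu, sigma)
    return np.sum(2.0 * (gamma * (y - mu)**2 - sigma**2) / sigma**4 * f**gamma
                  + (y - mu)**2 / sigma**4 * f**(2.0 * gamma))
```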

We first demonstrate the proposed selection strategy through simulation studies. We simulated y_{1},\ldots,y_{n} from the normal distribution with true parameters \mu=2 and \sigma=1, and then replaced the first n\omega observations y_{i} by y_{i}+7. We adopted four settings for the contamination ratio, \omega\in\{0,0.05,0.1,0.15\}. Using the simulated dataset, the optimal \gamma is selected among \{0,0.01,\ldots,0.69,0.70\} through the criterion H_{n}(\gamma), and we obtain the adaptive estimator {\widehat{\mu}}_{\gamma_{opt}}. For comparison, we also employed two selection methods, OWJ (Warwick and Jones, 2005) and IWJ (Basak et al., 2021), in which the optimal value of \gamma is selected via an asymptotic approximation of the mean squared error of the estimator. We set \gamma=0.5 to compute the pilot estimator that must be specified in these two methods. Furthermore, we also computed {\widehat{\mu}}_{\gamma} with fixed \gamma=0.1, 0.3 and 0.5. Using an estimator of the asymptotic variance of {\widehat{\mu}}_{\gamma} (e.g. Basak et al., 2021), we also computed the Wald-type 95\% confidence interval of \mu. Based on 5000 simulated datasets, we obtained the square root of the mean squared error (RMSE) of the point estimator as well as the coverage probability (CP) and average length (AL) of the interval estimates. The results are reported in Table 1. The use of a small \gamma (such as \gamma=0.1) may lead to unsatisfactory results when the contamination is heavy, while the use of a relatively large \gamma can make the estimation inefficient. On the other hand, the proposed method adaptively selects a suitable value of \gamma, as the average value of \gamma_{opt} (reported in Table 2) increases with the contamination ratio \omega, and it provides reasonable performance in all the scenarios.
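As an illustration of this design, a single replication could be coded as below, reusing the hypothetical helper h_score_normal_dpd from the previous sketch; the sample size, seed and grid are illustrative (the grid starts at 0.01 to avoid the \gamma=0 limit), not the exact settings of the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n, omega = 200, 0.10                      # illustrative sample size and contamination ratio
y = rng.normal(2.0, 1.0, n)               # true parameters mu = 2, sigma = 1
y[: int(n * omega)] += 7.0                # shift the first n*omega observations by +7

gammas = np.arange(0.01, 0.71, 0.01)      # grid of candidate gamma values
scores = [h_score_normal_dpd(y, g) for g in gammas]
gamma_opt = gammas[int(np.argmin(scores))]
```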

Table 1: RMSE of the point estimation and CP and AL of interval estimation.
                                        fixed \gamma
         \omega     HS     OWJ    IWJ     0.1     0.3     0.5
RMSE     0         10.3   10.6   10.3    10.2    10.5    11.0
         0.05      10.7   10.9   10.7    14.4    10.8    11.3
         0.1       11.0   11.1   11.0    44.7    11.1    11.5
         0.15      11.4   11.4   11.4    82.6    11.5    11.8
CP       0         94.8   93.8   94.2    94.6    94.5    94.4
         0.05      94.7   93.9   94.1    93.2    94.2    94.1
         0.1       94.3   94.1   94.2    36.7    94.2    94.4
         0.15      94.1   93.7   93.8     0.1    93.6    94.1
AL       0         40.6   40.1   39.8    40.4    40.7    42.6
         0.05      41.7   41.0   40.9    50.4    41.2    43.3
         0.1       42.5   41.9   41.8    79.5    42.0    44.1
         0.15      43.4   42.9   42.9   100.4    43.1    45.1
Table 2: Average values of selected \gamma in three methods.
\omega      HS      OWJ     IWJ
0          0.088   0.212   0.158
0.05       0.169   0.260   0.230
0.1        0.217   0.284   0.267
0.15       0.252   0.302   0.294

We next apply the proposed method to Simon Newcomb’s measurements of the speed of light, a dataset analyzed in Stigler (1977); Basu et al. (1998); Basak et al. (2021). We searched for the optimal \gamma among \{0.01,0.02,\ldots,0.69,0.70\}; the H-scores are shown in the left panel of Figure 1. The obtained optimal value is \gamma_{\rm opt}=0.09, which is substantially smaller than \hat{\gamma}=0.23 selected by the existing methods as reported in Basak et al. (2021). Since the method proposed in Basak et al. (2021) requires a pilot estimate and its results depend significantly on it, we believe that our result is more reasonable. In fact, it is unlikely that a value as large as \gamma=0.23 is needed for a dataset that contains only two outliers. As shown in the right panel of Figure 1, the estimated density functions are almost the same for \gamma=0.09 and \gamma=0.23. However, when the estimates are almost identical, it is preferable in terms of statistical efficiency to adopt the smaller value \gamma=0.09.

Figure 1: H-scores for each \gamma (left) and the estimated normal density functions with the optimal \gamma selected via the H-score and IWJ methods (right).

Regularized linear regression with \gamma-divergence

Note that the proposed criterion can also be used when regularization terms are introduced in the objective function, whereas the existing methods requiring the asymptotic variance of the estimator are not simply applicable. We demonstrate this advantage of the proposed method through regularized linear regression with the \gamma-divergence (Kawashima and Fujisawa, 2017). Let y_{i} and x_{i} be a response variable and a p-dimensional vector of covariates, respectively, for i=1,\ldots,n. The model is y_{i}\sim N(x_{i}^{\top}\beta,\sigma^{2}). Then, the transformed \gamma-divergence is D_{\gamma}(y_{i};\theta)=\gamma^{-1}\phi(y_{i};x_{i}^{\top}\beta,\sigma^{2})^{\gamma}/C_{\gamma}(\sigma^{2}) with C_{\gamma}(\sigma^{2})=\{(1+\gamma)^{-1/2}(2\pi\sigma^{2})^{-\gamma/2}\}^{\gamma/(1+\gamma)}, and the H-score is expressed as

H_{n}(\gamma)=\sum_{i=1}^{n}\left[\frac{2\left\{\gamma(y_{i}-x_{i}^{\top}{\widehat{\beta}}_{\gamma})^{2}-{\widehat{\sigma}}_{\gamma}^{2}\right\}}{{\widehat{\sigma}}_{\gamma}^{4}C_{\gamma}({\widehat{\sigma}}_{\gamma}^{2})}\phi(y_{i};x_{i}^{\top}{\widehat{\beta}}_{\gamma},{\widehat{\sigma}}_{\gamma}^{2})^{\gamma}+\frac{(y_{i}-x_{i}^{\top}{\widehat{\beta}}_{\gamma})^{2}}{{\widehat{\sigma}}_{\gamma}^{4}C_{\gamma}({\widehat{\sigma}}_{\gamma}^{2})^{2}}\phi(y_{i};x_{i}^{\top}{\widehat{\beta}}_{\gamma},{\widehat{\sigma}}_{\gamma}^{2})^{2\gamma}\right].

Here {\widehat{\beta}}_{\gamma} and {\widehat{\sigma}}^{2}_{\gamma} are obtained as the minimizers of the following regularized \gamma-divergence:

-\frac{1}{\gamma}\log\left\{\sum_{i=1}^{n}\phi(y_{i};x_{i}^{\top}\beta,\sigma^{2})^{\gamma}\right\}-\frac{\gamma}{1+\gamma}\log\sigma^{2}+\lambda\sum_{k=1}^{p}|\beta_{k}|,

where \lambda is an additional tuning parameter that can be optimized via 10-fold cross-validation. We use the R package gamreg (Kawashima and Fujisawa, 2017) to estimate \beta and \sigma^{2} for a given \gamma.
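Given the estimates {\widehat{\beta}}_{\gamma} and {\widehat{\sigma}}^{2}_{\gamma} returned by the estimation routine for each candidate \gamma, the H-score above can be evaluated as in the following sketch; the function name h_score_gamma_reg is ours, and y, X, beta_hat are assumed to be numpy arrays while sigma2_hat is a scalar.

```python
import numpy as np
from scipy.stats import norm

def h_score_gamma_reg(y, X, beta_hat, sigma2_hat, gamma):
    """H_n(gamma) for linear regression under the gamma-divergence, given
    externally computed estimates beta_hat (p-vector) and sigma2_hat (scalar)."""
    C = ((1.0 + gamma) ** (-0.5)
         * (2.0 * np.pi * sigma2_hat) ** (-gamma / 2.0)) ** (gamma / (1.0 + gamma))
    resid = y - X @ beta_hat
    f = norm.pdf(y, X @ beta_hat, np.sqrt(sigma2_hat))
    return np.sum(2.0 * (gamma * resid**2 - sigma2_hat) / (sigma2_hat**2 * C) * f**gamma
                  + resid**2 / (sigma2_hat**2 * C**2) * f**(2.0 * gamma))
```

The optimal \gamma is then obtained by refitting the model over a grid of \gamma values and taking the value minimizing the returned scores.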

We apply the aforementioned method to the well-known Boston housing dataset (Harrison Jr and Rubinfeld, 1978). In this analysis, we included the original 13 covariates and 12 quadratic terms of the covariates, excluding the one binary covariate. We searched for the optimal \gamma among \{0.02,0.04,\ldots,0.68,0.70\}; the estimated H-scores are shown in the left panel of Figure 2, and the optimal value of \gamma was 0.16. For comparison, we estimated the regression coefficients with \gamma=0 and \gamma=0.5. Note that \gamma=0 reduces to the (non-robust) standard regularized linear regression. The scatter plots of the standardized coefficient estimates under \gamma=0.16 against those under the other two choices of \gamma are shown in the right panel of Figure 2. The estimates with \gamma=0.16 and \gamma=0.5 are comparable, while there are substantial differences between the estimates with \gamma=0.16 and \gamma=0, indicating that a certain amount of robustness is required for this dataset.

Figure 2: H-scores for each \gamma (left) and the estimated regression coefficients under three choices of \gamma (right).

Concluding Remarks

We proposed a new criterion for selecting the optimal tuning parameter of a robust divergence, using the Hyvarinen score for unnormalized models defined by the robust divergence. The proposed criterion does not require the asymptotic variance formula of the estimator that is needed in existing selection methods. Although we focused on univariate and continuous situations, the proposed criterion can also be applied to multivariate or discrete distributions, where the derivatives should be replaced by finite differences in the discrete case. Applications of the proposed score to such cases would also be valuable, and we leave them to future work.

Appendix

Appendix A Proof of Proposition 1

We first assume standard regularity conditions from M-estimation theory (e.g. Van der Vaart, 2000) for the objective function \sum_{i=1}^{n}\log L_{\gamma}(y_{i};\theta). We also assume that \log L_{\gamma}(y_{i};\theta) is twice continuously differentiable with respect to y_{i}, that \log\pi(\theta) is continuously differentiable, and that the derivative of \log\pi(\theta) is bounded.

We first note that {\widehat{{\theta}}}_{\gamma} is a solution of the following estimating equation:

\sum_{i=1}^{n}S_{\gamma}(y_{i};\theta)=0,\ \ \ \ \ \ S_{\gamma}(y_{i};\theta)\equiv\frac{\partial}{\partial\theta}\log L_{\gamma}(y_{i};\theta).

From the implicit function theorem, it follows that

\frac{\partial{\widehat{{\theta}}}_{\gamma}}{\partial y_{i}}=\left\{\sum_{j=1}^{n}\frac{\partial}{\partial\theta}S_{\gamma}(y_{j};\theta)\bigg{|}_{\theta={\widehat{{\theta}}}_{\gamma}}\right\}^{-1}\frac{\partial}{\partial y_{i}}\sum_{j=1}^{n}S_{\gamma}(y_{j};\theta)\bigg{|}_{\theta={\widehat{{\theta}}}_{\gamma}}=H({\widehat{{\theta}}}_{\gamma})^{-1}S_{\gamma}^{\prime}(y_{i};{\widehat{{\theta}}}_{\gamma}),

where we define S_{\gamma}^{\prime}(y_{i};\theta)=\partial S_{\gamma}(y_{i};\theta)/\partial y_{i}. Note that \partial{\widehat{{\theta}}}_{\gamma}/\partial y_{i}=O_{p}(n^{-1}) for large n. From (3), the first-order partial derivative of the marginal log-likelihood can be approximated as

\frac{\partial}{\partial y_{i}}\log L_{\gamma}^{(m)}(y)\approx\frac{\partial}{\partial y_{i}}\sum_{j=1}^{n}\log L_{\gamma}(y_{j};{\widehat{{\theta}}}_{\gamma})+\frac{\partial}{\partial y_{i}}\log\pi({\widehat{{\theta}}}_{\gamma})-\frac{1}{2}\frac{\partial}{\partial y_{i}}\log|H({\widehat{{\theta}}}_{\gamma})|. (5)

Under the regularity conditions for π(θ)\pi(\theta), it follows that

\frac{\partial}{\partial y_{i}}\log\pi({\widehat{{\theta}}}_{\gamma})=\frac{\partial{\widehat{{\theta}}}_{\gamma}}{\partial y_{i}}\times\frac{\partial}{\partial\theta}\log\pi(\theta)\bigg{|}_{\theta={\widehat{{\theta}}}_{\gamma}}=o_{p}(1)

for large n. By the same argument, we can also show that \partial\log|H({\widehat{{\theta}}}_{\gamma})|/\partial y_{i}=o_{p}(1). Regarding the first term in (5), we have

\frac{\partial}{\partial y_{i}}\sum_{j=1}^{n}\log L_{\gamma}(y_{j};{\widehat{{\theta}}}_{\gamma})
=\frac{\partial}{\partial y_{i}}\log L_{\gamma}(y_{i};\theta)\bigg{|}_{\theta={\widehat{{\theta}}}_{\gamma}}+\left(\frac{\partial{\widehat{{\theta}}}_{\gamma}}{\partial y_{i}}\right)^{\top}\sum_{j=1}^{n}\frac{\partial}{\partial\theta}\log L_{\gamma}(y_{j};\theta)\bigg{|}_{\theta={\widehat{{\theta}}}_{\gamma}}
=D^{\prime}_{\gamma}(y_{i};{\widehat{{\theta}}}_{\gamma})+\left\{\sum_{j=1}^{n}S_{\gamma}(y_{j};{\widehat{{\theta}}}_{\gamma})\right\}^{\top}H({\widehat{{\theta}}}_{\gamma})^{-1}S_{\gamma}^{\prime}(y_{i};{\widehat{{\theta}}}_{\gamma}) (6)
=D^{\prime}_{\gamma}(y_{i};{\widehat{{\theta}}}_{\gamma})+O_{p}(n^{-1/2}),

because S_{\gamma}(y_{j};\theta) is a score function and \sum_{j=1}^{n}S_{\gamma}(y_{j};{\widehat{{\theta}}}_{\gamma})=O_{p}(n^{1/2}).

Using the expression of the first order derivative (6), it holds that

\frac{\partial^{2}}{\partial y_{i}^{2}}\sum_{j=1}^{n}\log L_{\gamma}(y_{j};{\widehat{{\theta}}}_{\gamma})=\frac{\partial}{\partial y_{i}}D^{\prime}_{\gamma}(y_{i};{\widehat{{\theta}}}_{\gamma})+\left\{\frac{\partial}{\partial y_{i}}\sum_{j=1}^{n}S_{\gamma}(y_{j};{\widehat{{\theta}}}_{\gamma})\right\}^{\top}H({\widehat{{\theta}}}_{\gamma})^{-1}S_{\gamma}^{\prime}(y_{i};{\widehat{{\theta}}}_{\gamma})
+\left\{\sum_{j=1}^{n}S_{\gamma}(y_{j};{\widehat{{\theta}}}_{\gamma})\right\}^{\top}H({\widehat{{\theta}}}_{\gamma})^{-1}\left\{\frac{\partial}{\partial y_{i}}H({\widehat{{\theta}}}_{\gamma})\right\}H({\widehat{{\theta}}}_{\gamma})^{-1}S_{\gamma}^{\prime}(y_{i};{\widehat{{\theta}}}_{\gamma})
+\left\{\sum_{j=1}^{n}S_{\gamma}(y_{j};{\widehat{{\theta}}}_{\gamma})\right\}^{\top}H({\widehat{{\theta}}}_{\gamma})^{-1}\frac{\partial}{\partial y_{i}}S_{\gamma}^{\prime}(y_{i};{\widehat{{\theta}}}_{\gamma}). (7)

Note that

\frac{\partial}{\partial y_{i}}D^{\prime}_{\gamma}(y_{i};{\widehat{{\theta}}}_{\gamma})=D^{\prime\prime}_{\gamma}(y_{i};{\widehat{{\theta}}}_{\gamma})+\left(\frac{\partial}{\partial\theta}D^{\prime}_{\gamma}(y_{i};\theta)\bigg{|}_{\theta={\widehat{{\theta}}}_{\gamma}}\right)^{\top}\frac{\partial{\widehat{{\theta}}}_{\gamma}}{\partial y_{i}}=D^{\prime\prime}_{\gamma}(y_{i};{\widehat{{\theta}}}_{\gamma})+O_{p}(n^{-1}).

By applying the same formula to \partial S_{\gamma}^{\prime}(y_{i};{\widehat{{\theta}}}_{\gamma})/\partial y_{i}, we can confirm that the third and fourth terms in (7) are O_{p}(n^{-1/2}). Regarding the second term in (7), we have

\frac{\partial}{\partial y_{i}}\sum_{j=1}^{n}S_{\gamma}(y_{j};{\widehat{{\theta}}}_{\gamma})=S_{\gamma}^{\prime}(y_{i};{\widehat{{\theta}}}_{\gamma})+\left\{\sum_{j=1}^{n}\frac{\partial}{\partial\theta}S_{\gamma}(y_{j};\theta)\bigg{|}_{\theta={\widehat{{\theta}}}_{\gamma}}\right\}H({\widehat{{\theta}}}_{\gamma})^{-1}S_{\gamma}^{\prime}(y_{i};{\widehat{{\theta}}}_{\gamma})
=2S_{\gamma}^{\prime}(y_{i};{\widehat{{\theta}}}_{\gamma}),

which shows that the second term in (7) is O_{p}(n^{-1}), completing the proof.

References

  • Basak et al. (2021) Basak, S., A. Basu, and M. Jones (2021). On the ‘optimal’ density power divergence tuning parameter. Journal of Applied Statistics 48(3), 536–556.
  • Basu et al. (1998) Basu, A., I. R. Harris, N. L. Hjort, and M. Jones (1998). Robust and efficient estimation by minimising a density power divergence. Biometrika 85(3), 549–559.
  • Cichocki et al. (2011) Cichocki, A., S. Cruces, and S.-i. Amari (2011). Generalized alpha-beta divergences and their application to robust nonnegative matrix factorization. Entropy 13(1), 134–170.
  • Dawid and Musio (2015) Dawid, A. P. and M. Musio (2015). Bayesian model selection based on proper scoring rules. Bayesian Analysis 10(2), 479–499.
  • Devroye and Gyorfi (1985) Devroye, L. and L. Gyorfi (1985). Nonparametric Density Estimation: The L1 View. John Wiley.
  • Fujisawa and Eguchi (2008) Fujisawa, H. and S. Eguchi (2008). Robust parameter estimation with a small bias against heavy contamination. Journal of Multivariate Analysis 99(9), 2053–2081.
  • Geisser et al. (1990) Geisser, S., J. Hodges, S. Press, and A. Zellner (1990). The validity of posterior expansions based on Laplace’s method. Bayesian and Likelihood Methods in Statistics and Econometrics 7, 473.
  • Harrison Jr and Rubinfeld (1978) Harrison Jr, D. and D. L. Rubinfeld (1978). Hedonic housing prices and the demand for clean air. Journal of Environmental Economics and Management 5(1), 81–102.
  • Jewson and Rossell (2021) Jewson, J. and D. Rossell (2021). General Bayesian loss function selection and the use of improper models. arXiv preprint arXiv:2106.01214.
  • Kawashima and Fujisawa (2017) Kawashima, T. and H. Fujisawa (2017). Robust and sparse regression via \gamma-divergence. Entropy 19(11), 608.
  • Matsuda et al. (2019) Matsuda, T., M. Uehara, and A. Hyvarinen (2019). Information criteria for non-normalized models. arXiv preprint arXiv:1905.05976.
  • Shao et al. (2019) Shao, S., P. E. Jacob, J. Ding, and V. Tarokh (2019). Bayesian model comparison with the Hyvärinen score: Computation and consistency. Journal of the American Statistical Association.
  • Stigler (1977) Stigler, S. M. (1977). Do robust estimators work with real data? The Annals of Statistics, 1055–1098.
  • Van der Vaart (2000) Van der Vaart, A. W. (2000). Asymptotic Statistics, Volume 3. Cambridge University Press.
  • Warwick and Jones (2005) Warwick, J. and M. Jones (2005). Choosing a robustness tuning parameter. Journal of Statistical Computation and Simulation 75(7), 581–588.
  • Yonekura and Sugasawa (2021) Yonekura, S. and S. Sugasawa (2021). Adaptation of the tuning parameter in general Bayesian inference with robust divergence. arXiv preprint arXiv:2106.06902.