Jackknife Partially Linear Model Averaging for the Conditional Quantile Prediction
Abstract
Estimating the conditional quantile of a variable of interest as the covariates change is common in many economic applications because it offers a comprehensive insight into the relationship. In this paper, we propose a novel semiparametric model averaging approach to predict the conditional quantile even if all models under consideration are potentially misspecified. Specifically, we first build a series of non-nested partially linear sub-models, each with a different nonlinear component. A leave-one-out cross-validation criterion is then applied to choose the model weights. Under some regularity conditions, we prove that the resulting model averaging estimator is asymptotically optimal in terms of minimizing the out-of-sample average quantile prediction error. Our modelling strategy not only effectively avoids the problem of specifying which covariate should be nonlinear when one fits a partially linear model, but also yields more accurate predictions than traditional model-based procedures because the cross-validation criterion selects optimal weights. Simulation experiments and an illustrative application show that the proposed model averaging method is superior to other commonly used alternatives.
Keywords: Asymptotic optimality, B-splines, Conditional quantile prediction, Leave-one-out cross-validation, Model averaging, Partially linear models.
1 Introduction
In many situations of practical interest, especially in econometrics, the social sciences and medical fields, we are often more concerned with predicting the conditional quantiles of the variables of interest, because a full range of quantile analysis provides a broader insight than classical mean regression (Koenker, 2005). For example, in business and economics, petroleum is a primary source of nonrenewable energy and has an important influence on industrial production, electric power generation, and transportation. Most economists pay close attention to the high quantiles of oil prices, because oil price fluctuations have considerable effects on economic activity. In the past decades, traditional parametric and semiparametric modelling strategies for quantile regression have been well developed, including Kim (2007); Belloni and Chernozhukov (2011); Kai et al. (2011); Wang et al. (2012); Feng and Zhu (2016); Frumento and Bottai (2016); Ma and He (2016); Frumento et al. (2021), among others.
In practice, the underlying model is often unknown and all models under consideration are potentially misspecified. It is difficult to find an optimal model for a dataset of interest. Model averaging, as a well-known ensemble technique, combines a set of candidate models by assigning heavier weights to stronger models. The main advantage of model averaging is that it effectively incorporates useful information from all candidate models, thereby substantially reducing the risk of misspecification and generally yielding more accurate predictions than a single selected model. For example, to explain a specific economic phenomenon, many plausible candidate models may all be useful. In that case, using an averaged model instead of a particular model can markedly reduce the risk arising from misspecification. Over the past decade or so, model averaging for conditional mean regression has developed rapidly; see Wan et al. (2010); Hansen and Racine (2012); Ando and Li (2014); Zhang and Liu (2018); Zhang et al. (2020); Feng et al. (2021). However, limited work has been done on quantile model averaging. Recently, Lu and Su (2015) introduced a jackknife quantile model averaging procedure in which the optimal model weights are obtained by minimizing a leave-one-out cross-validation criterion. Wang and Zou (2019) introduced jackknife model averaging for composite quantile regression, which can be regarded as an extension of Lu and Su (2015). Instead of choosing optimal model weights, Lee and Shin (2021) proposed averaging over complete subsets for quantile regression, where the optimal size of the complete subset is selected by cross-validation.
The works mentioned above mainly focus on averaging a set of parametric models such as linear regression models. Such simple models are easy to interpret and widely accepted by scientific researchers. In practice, however, the response variable may depend on the predictors in a very complicated manner. If only parametric sub-models are adopted, it is hard to obtain satisfactory prediction results because they fail to capture the complicated relationship between the response and the predictors. Although all candidate models might be misspecified in reality, we hope that the approximation capability can be improved by using more flexible semiparametric sub-models. An alternative approach, the one considered here, is to construct a weighted average of a series of flexible semiparametric sub-models. We refer to Li et al. (2018a); Li et al. (2018b); Zhang and Liu (2018); Zhu et al. (2019); Zhang and Wang (2019) for reviews of recent developments in semiparametric model averaging. However, these research findings only concern conditional mean prediction, and discussions of quantile regression are rather limited.
The partially linear model (PLM) introduced by Engle et al. (1986), one of the most popular semiparametric models, has received extensive attention due to its flexible specification. The main merit of the PLM is that it does not require a parametric assumption for all covariates and allows one to capture potential nonlinear effects. Although we have witnessed a booming development of the PLM in recent years (e.g., Hardle et al. (2000); Liang et al. (2007); Liang and Li (2009); Xie and Huang (2009); Zhang et al. (2011)), these methodologies are all based on the assumption that a correctly specified model is given. So far, little work has been done on quantile model averaging for the PLM. In this work, we develop a semiparametric model averaging procedure that optimally combines a series of PLMs to achieve flexible conditional quantile prediction. This fills an important gap in semiparametric model averaging for conditional quantile prediction.
The contribution of this paper is threefold. First, it is usually challenging to decide which covariate should be nonlinear when one fits a PLM; in fact, any continuous covariate can be taken as the nonparametric component. Our proposed model averaging effectively avoids the criticism of artificially specifying the nonparametric component in PLMs because we average multiple partially linear sub-models (each with a different nonlinear component) by assigning heavier weights to stronger sub-models. Another advantage of the proposed approach is that it is more robust against model misspecification, and thus outperforms traditional model-based approaches (e.g., linear models, partially linear models and additive models) and parametric model averaging procedures. Second, we estimate the entire conditional quantile process of the response of interest, rather than a discrete set of quantiles, by modeling the quantile regression coefficients as parametric functions of the quantile level. Compared with the standard quantile estimation procedure, the strategy of modeling quantile functions parametrically simplifies computation and gains estimation efficiency by utilizing useful information across quantiles. Third, we prove that the proposed model averaging estimator is asymptotically optimal in the sense that its out-of-sample average quantile prediction error is asymptotically identical to that of the best but infeasible model averaging estimator. It is instructive to mention that our theoretical results are intrinsically distinct from those of Lu and Su (2015), Wang and Zou (2019) and Lee and Shin (2021), who focus on parametric model averaging.
2 Methodology
2.1 Model and Estimation
Let be independent and identically distributed samples of with individuals, where is a scalar response variable and is the vector of covariates. For a given quantile level , let be the th conditional quantile function of the response given the covariates . Without loss of generality, the covariates are allowed to be discrete or continuous. Suppose that , where and are vectors of -dimensional continuous and -dimensional discrete variables, respectively, and ⊤ denotes the transpose of a vector or matrix. Our goal is to estimate , which is of particular use for prediction. This is also the typical goal in the optimal model averaging literature (Lu and Su, 2015; Zhang and Wang, 2019; Lee and Shin, 2021).
As far as we know, it is infeasible to estimate without any structural assumption on because of the curse of dimensionality. Partially linear models (Hardle et al., 2000; Liang et al., 2007; Liang and Li, 2009), as one of the most commonly used classes of semiparametric models, have been developed to resolve this problem owing to their flexible specification. However, all models under investigation might be incorrect in practice. Using a single model might ignore useful information from the other models and thus result in poor predictive performance. As an attractive alternative, aggregating candidate models with a weighted average effectively provides a better approximation to the true quantile function and offers great potential for future prediction.
Specifically, we assume that each candidate model has a partially linear structure. In practice, one encounters the uncertainty of whether a covariate should enter the linear or the nonlinear part, given that it is in the model. In fact, any continuous element of might be taken as the nonparametric component. To avoid the criticism of artificially deciding which covariates are nonlinear in PLMs, we build a sequence of partially linear sub-models , and the model weights automatically adjust the relative importance of these sub-models. Taking the th element of as the nonparametric component, the th sub-model is given by
(1)
where is the covariate vector obtained by removing the th predictor , and are the unknown parameter vector and smooth function at the th quantile. Note that is the conditional quantile function under the th sub-model . Although the “intercept” term does not appear in model (1), it is actually absorbed into the functional component. It is easy to see that the differences between any two candidate models lie not only in the linear components but also in which covariate is taken as the nonparametric element. To obtain an optimal weighting scheme, we first need to estimate the unknown parameter vector and function of each candidate model.
To estimate the functional component , we approximate by B-spline basis functions because of their efficiency in function approximation and their stable numerical computation (see de Boor, 2001). Under proper conditions on (e.g., Condition (C2) below), according to Corollary 4.10 of Schumaker (1981), we can approximate as
(2)
where is a vector of normalized B-spline basis functions of order (), , is the number of interior knots and . Equally spaced knots are used here for technical simplicity; however, other regular knot sequences can also be used with similar asymptotic results. Then, substituting (2) into model (1), we obtain
(3)
where and .
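To make the construction in (2)–(3) concrete, the following R sketch builds a linear B-spline basis for one continuous covariate and stacks it with the remaining covariates to form the design matrix of a single candidate sub-model. The toy data, the covariate names, and the knot rule floor(n^(1/5)) are illustrative assumptions rather than the exact settings of the paper.

```r
library(splines)

set.seed(1)
n   <- 200
dat <- data.frame(y  = rnorm(n),
                  x1 = runif(n),               # continuous: nonparametric part of sub-model 1
                  x2 = runif(n),               # continuous: linear part
                  x3 = rbinom(n, 1, 0.5))      # discrete: always linear

## Equally spaced interior knots, as in the text; the rule floor(n^(1/5)) is an assumption.
Kn    <- floor(n^(1/5))
knots <- seq(min(dat$x1), max(dat$x1), length.out = Kn + 2)[-c(1, Kn + 2)]

## Linear (degree 1) B-spline basis for x1; the "intercept" is absorbed here.
B1 <- bs(dat$x1, knots = knots, degree = 1, intercept = TRUE)

## Design matrix of sub-model 1: linear covariates plus the spline basis of x1,
## so that the sub-model is linear in the stacked coefficient vector, as in (3).
Pi1 <- cbind(x2 = dat$x2, x3 = dat$x3, B1)
```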
Obviously, can be regarded as a set of quantile regression coefficient functions describing how each regression coefficient depends on the quantile level . We could obtain an estimator of at a single quantile of interest by using standard quantile regression (e.g., Koenker, 2005). However, to improve the efficiency of the coefficient estimates, we adopt the strategy of Frumento and Bottai (2016) and model by a series of parametric functions. Specifically, we take as a function of the quantile level that depends on a finite-dimensional parameter
(4)
where is a set of known basis functions of , and is a matrix with th entry for and . Under model (4), we have . In practice, to obtain an estimate of , we need to specify in advance. As mentioned in Frumento and Bottai (2016); Yang et al. (2017); Frumento et al. (2021), valid choices of include, for example, functions of the form , , , , the quantile function of any distribution with finite moments, splines, or a combination of the above. In general, the selected basis set should satisfy two conditions: first, defines a valid quantile function (i.e., is an increasing function of ) for some at the observed values of ; second, it is differentiable in the interior of its support. The simulation results in Table 1 show that the proposed method is not sensitive to the selection of .
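The role of the basis in (4) can be illustrated with a small R snippet; the particular functions used below (a low-order polynomial in the quantile level together with log terms) are only examples of admissible choices, not necessarily those adopted in the paper, and the parameter vector is hypothetical.

```r
## Hypothetical basis b(tau): polynomial and logarithmic terms of the quantile level.
make_b <- function(tau) cbind(1, tau, tau^2, log(tau), log(1 - tau))

tau_grid <- seq(0.05, 0.95, by = 0.05)
B_tau    <- make_b(tau_grid)                 # length(tau_grid) x 5 basis matrix

## Under (4), each quantile-coefficient function is a linear combination of these
## basis functions; theta_k below is a hypothetical parameter vector.
theta_k <- c(0.5, 1.0, -0.3, 0.1, -0.1)
beta_k  <- drop(B_tau %*% theta_k)           # the coefficient function evaluated on the grid
plot(tau_grid, beta_k, type = "l", xlab = "quantile level", ylab = "coefficient")
```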
To facilitate the presentation, we need to introduce some notations. Let be the vectoring operation, which creates a column vector by stacking the column vectors of below one another, that is, , where is the -th column of the parameter matrix in the -th candidate model. Define with representing the Kronecker product of two matrices. Then, we have and .
Motivated by Frumento and Bottai, (2016), we can further integrate information from different quantile levels to improve efficiency and obtain the estimator of by minimizing the integrated loss function
(5)
where and is the quantile check function. The objective function can be regarded as an average loss, obtained by marginalizing over the entire interval . In addition, the minimizer of (5) can be computed with the iqr function in the qrcm R package. Define and for .
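For a single candidate sub-model, the minimization of (5) can be carried out with qrcm::iqr. The snippet below is a hedged sketch that reuses the toy design Pi1 from the earlier sketch and assumes the documented interface of iqr, with shifted Legendre polynomials slp(p, 3) standing in for the basis of the quantile-coefficient functions.

```r
library(qrcm)

## Fit sub-model 1 by integrated quantile loss; formula.p specifies the basis of
## the quantile-coefficient functions (slp(p, 3) is an illustrative choice).
fit1 <- iqr(dat$y ~ Pi1, formula.p = ~ slp(p, 3))

## Predicted conditional quantiles of sub-model 1 over a grid of quantile levels.
tau_grid <- seq(0.05, 0.95, by = 0.05)
Q1_hat   <- predict(fit1, type = "QF", p = tau_grid)   # one column per quantile level
```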
With the estimators of each candidate model readily available, the model averaging estimator of is thus expressed as
(6)
where is the model weight vector belonging to the set
Remark 1.
The main merits of parametric modeling of include the following two aspects. On the one hand, model (4) extracts the common features of the quantile regression coefficients over via the -dimensional known basis function vector . Moreover, it permits estimating the entire quantile process rather than only a discrete set of quantiles. Thus, this modelling strategy offers several advantages, including simpler computation, increased statistical efficiency, and easy interpretation of the results. On the other hand, our modelling strategy can effectively estimate bivariate functions by combining the B-spline approximation with the parametric modeling of .
Remark 2.
In practice, researchers do not know the true model. All candidate models under consideration might be wrong, but each may characterize only some of the properties of the true data generating process. Although our partially linear sub-models are more sophisticated and flexible than traditional linear models, they still may not be the true model. To reduce the risk of model misspecification, we construct a model averaging strategy that achieves accurate prediction of the conditional quantile function by assigning higher weights to better sub-models. In contrast with existing strategies of constructing nested sub-models (e.g., Wan et al. (2010); Hansen and Racine (2012); Lu and Su (2015); Zhang and Liu (2018); Zhang and Wang (2019); Zhang et al. (2020)), each candidate model we consider includes all covariates, so our sub-models are non-nested, which may be another attractive scheme for building semiparametric sub-models.
2.2 Jackknife Weighting
The weight vector in is usually unknown and should be properly estimated, as the choice of weights plays a central role in any model averaging strategy. Following the idea of Hansen and Racine (2012) and Lu and Su (2015), we adopt jackknife (also known as leave-one-out cross-validation) selection of . More specifically, we measure the average prediction error by the integrated quantile loss (Frumento and Bottai, 2016) and define the leave-one-out cross-validation criterion as
(7)
where is the jackknife estimator of for , obtained from (5) without using the th sample. Minimizing with respect to leads to
(8)
Substituting for in (6) results in the proposed model averaging estimator
(9)
Averaging using the weight choice is called jackknife quantile partially linear model averaging (JQPLMA).
Notice that computing the proposed estimator is challenging because of the complicated integrated loss function and the constraints on the weight vector. Following the earlier work of Kong and Xia (2014) and Yang et al. (2017), for computational convenience we approximate the objective function (7) by
where . The constrained optimization problem can then be solved by a well-known nonlinear optimization routine such as the augmented Lagrange method, which is easy to implement with many software packages (e.g., the Rsolnp package in R).
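The following self-contained R sketch illustrates how the weights can be obtained by minimizing a Riemann-sum approximation of the cross-validation criterion (7) over the simplex, using the augmented Lagrange solver in Rsolnp. The leave-one-out predictions Q_cv are simulated placeholders; in practice they would be the jackknife quantile predictions of the M candidate sub-models.

```r
library(Rsolnp)

set.seed(1)
n <- 100; M <- 5
tau_grid <- seq(0.05, 0.95, by = 0.05)
y    <- rnorm(n)
Q_cv <- array(rnorm(n * M * length(tau_grid)),      # placeholder jackknife predictions:
              dim = c(n, M, length(tau_grid)))      #   Q_cv[i, m, k] = prediction for obs i,
                                                    #   sub-model m, quantile tau_grid[k]

check_loss <- function(u, tau) u * (tau - (u < 0))  # quantile check function

cv_criterion <- function(w, y, Q_cv, tau_grid) {
  loss <- 0
  for (k in seq_along(tau_grid)) {
    pred <- Q_cv[, , k] %*% w                       # weighted jackknife prediction at tau_k
    loss <- loss + mean(check_loss(y - pred, tau_grid[k]))
  }
  loss / length(tau_grid)                           # approximates the integral over tau
}

w_hat <- solnp(pars  = rep(1 / M, M),               # start from equal weights
               fun   = cv_criterion,
               eqfun = function(w, ...) sum(w),     # weights sum to one
               eqB   = 1,
               LB    = rep(0, M), UB = rep(1, M),
               y = y, Q_cv = Q_cv, tau_grid = tau_grid)$pars
round(w_hat, 3)
```

Given w_hat, the averaged prediction (9) for a new observation is simply the corresponding weighted combination of the full-sample sub-model quantile predictions.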
Let be an independent copy of . Write , , , and for . Define the out-of-sample average quantile prediction error (denoted as ) as follows
(10)
Next we will show that the weight vector selected by (8) is asymptotically optimal in the sense of achieving the lowest possible under some regularity conditions.
Remark 3.
We point out that good in-sample performance does not necessarily indicate good out-of-sample performance, because the models underlying future observations are partially or completely unknown to practitioners. Thus, to obtain good out-of-sample prediction performance, we estimate the optimal weight vector by minimizing rather than by directly using the in-sample integrated loss function. In addition, the choice of loss function is closely related to the characteristic of the response distribution that one wants to predict. For example, the traditional quadratic (or quantile) loss function corresponds to the conditional mean (or quantile) of the response distribution. Here the object of our interest is the average quantile prediction over the interval , so it is natural to define the risk function (10), which can be regarded as a beneficial extension of criterion (2.13) in Lu and Su (2015).
3 Simulation Studies
In this section, we conduct Monte Carlo experiments to examine the performance of the proposed model averaging prediction procedure. To make a full comparison, we compare our proposal with the following popular model-based and model averaging prediction methods.
QLRM: The traditional quantile linear regression model (Koenker, 2005), implemented by the R function rq in the package quantreg.
QRCM: The quantile regression coefficients modeling (Frumento and Bottai, 2016), implemented by the R function iqr in the package qrcm.
QPAM: The quantile partially linear additive model (Sherwood and Wang, 2016), where discrete (continuous) covariates are taken as the linear (nonparametric) parts.
JQLMA: The jackknife quantile linear model averaging (Lu and Su, 2015).
JCQLMA: The jackknife composite quantile linear model averaging (Wang and Zou, 2019).
It is well known that the performance of model averaging depends on the weight selection criterion, and thus we consider three versions for our proposed procedure.
EW: Equal weights are utilized to make predictions.
QPL: We randomly set one component of the model weight vector to one and the remaining components to zero, so that a single traditional partially linear model is used for prediction.
JQPLMA: The proposed optimal model averaging strategy given in Sub-section 2.2.
In all simulation examples, we generate a training data set of sample size to estimate the unknown parameters, nonparametric functions and model weights, and generate extra observations (a testing set) to evaluate prediction performance. We use the sample version of the out-of-sample average quantile prediction error defined in (10) to measure the accuracy of the out-of-sample predictions, defined by
where , is an estimator of the th conditional quantile function and stands for the testing set of size . Following Lee and Shin (2021), we construct the following three comparison measures
where is an indicator function for the event , is the value of in the th replication for , and each subscript denotes a generic prediction approach. Note that the loss-to-JQPLMA ratio gives a more direct pairwise comparison of each approach with JQPLMA. Obviously, the smaller and the larger , the better the method. The total number of replications is taken as .
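As a concrete illustration of these measures, the R snippet below computes the average prediction error, the winning ratio, and the loss-to-JQPLMA ratio from a hypothetical matrix of replication-level prediction errors; the method names follow the list above and the numbers are placeholders.

```r
set.seed(1)
R_rep   <- 200
methods <- c("QLRM", "QRCM", "QPAM", "QPL", "JQLMA", "JCQLMA", "EW", "JQPLMA")
fpe     <- matrix(abs(rnorm(R_rep * length(methods))), nrow = R_rep,
                  dimnames = list(NULL, methods))       # placeholder prediction errors

avg_fpe        <- colMeans(fpe)                         # average prediction error per method
win_ratio      <- colMeans(fpe == apply(fpe, 1, min))   # fraction of replications a method is best
loss_to_jqplma <- colMeans(fpe > fpe[, "JQPLMA"])       # fraction of replications a method loses to JQPLMA
```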
Example 1. In this example, we generate the random samples from the following partially linear additive model
(11)
where , , . The covariates are simulated according to for , and , where and are generated independently from . A simple calculation gives for , and we set , and 3, representing uncorrelated (), moderate () and high () correlations between covariates. The random error is distributed as . As in Lu and Su (2015) and Zhang and Wang (2019), we change the value of so that the population , where represents the sample variance. In this example, only the first six covariates are continuous and can serve as the nonparametric component, resulting in partially linear sub-models for our model averaging procedure. It is worth noting that our goal is to accurately predict the joint conditional quantile function rather than to estimate the parameters and the nonparametric function in (11), where is the th quantile function of .
Example 2. To reflect the flexibility of our procedure, we consider the following multivariate nonparametric regression model with heteroscedasticity
where are generated from a multivariate normal distribution with mean zero and for . We also consider and , corresponding to different sparsity levels. To assess robustness and flexibility, we consider six distributions for the random error : the standard normal distribution (case 1); the -distribution with three degrees of freedom (case 2); a mixture of two normal distributions (case 3), namely a mixture of and with weights 95% and 5%; the -distribution with one degree of freedom (case 4); a Gamma distribution (case 5); and a log-normal distribution (case 6) with mean and standard deviation on the log scale both equal to 0.5. In this example, all covariates are continuous and thus any covariate can be taken as the nonparametric component, resulting in partially linear sub-models for our model averaging procedure.
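For completeness, a hedged R generator for the six error distributions is given below. The mixture components, the Gamma parameters, and the exact family in case 4 are not fully reproduced in the text, so the values marked as placeholders are illustrative assumptions only.

```r
gen_error <- function(n, case) {
  switch(as.character(case),
         "1" = rnorm(n),                                  # standard normal
         "2" = rt(n, df = 3),                             # t-distribution, 3 degrees of freedom
         "3" = ifelse(runif(n) < 0.95,                    # 95%/5% normal mixture
                      rnorm(n), rnorm(n, sd = 5)),        #   (second component is a placeholder)
         "4" = rt(n, df = 1),                             # heavy-tailed case, assumed t with 1 df
         "5" = rgamma(n, shape = 2, rate = 1),            # Gamma (placeholder parameters)
         "6" = rlnorm(n, meanlog = 0.5, sdlog = 0.5))     # log-normal, meanlog = sdlog = 0.5
}
eps <- gen_error(500, 3)
```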
correlation | basis | |||||||
---|---|---|---|---|---|---|---|---|
0.769(0.058) | 0(0) | 0.366(0.148) | 0.265(0.166) | 0.369(0.157) | 0(0) | 0(0) | ||
0.775(0.058) | 0(0) | 0.365(0.143) | 0.266(0.160) | 0.368(0.149) | 0(0) | 0(0.001) | ||
0.775(0.058) | 0(0) | 0.365(0.143) | 0.267(0.159) | 0.368(0.148) | 0(0) | 0(0.001) | ||
0.707(0.053) | 0(0) | 0.249(0.164) | 0.174(0.162) | 0.574(0.126) | 0(0) | 0.002(0.024) | ||
0.712(0.053) | 0(0.001) | 0.257(0.163) | 0.178(0.159) | 0.561(0.126) | 0(0) | 0.003(0.030) | ||
0.711(0.053) | 0(0) | 0.259(0.157) | 0.181(0.157) | 0.557(0.121) | 0(0) | 0.003(0.028) | ||
0.710(0.057) | 0.025(0.058) | 0.331(0.138) | 0.205(0.138) | 0.396(0.124) | 0.017(0.045) | 0.026(0.063) | ||
0.715(0.057) | 0.028(0.062) | 0.330(0.134) | 0.206(0.134) | 0.385(0.124) | 0.021(0.046) | 0.030(0.066) | ||
0.714(0.057) | 0.032(0.063) | 0.328(0.132) | 0.208(0.134) | 0.382(0.117) | 0.020(0.047) | 0.030(0.065) |
Note: The standard errors of and are given in parentheses.
correlation | ||||||||||
---|---|---|---|---|---|---|---|---|---|---|
Average | ||||||||||
0.2 | 2.544 | 2.522 | 2.921 | 2.522 | 2.465 | 2.482 | 2.515 | 2.477 | ||
(0.187) | (0.187) | (0.216) | (0.179) | (0.251) | (0.254) | (0.257) | (0.252) | |||
0.4 | 1.644 | 1.630 | 1.938 | 1.652 | 1.606 | 1.594 | 1.629 | 1.576 | ||
(0.123) | (0.123) | (0.140) | (0.124) | (0.122) | (0.123) | (0.130) | (0.122) | |||
0.6 | 1.211 | 1.201 | 1.488 | 1.212 | 1.181 | 1.153 | 1.190 | 1.117 | ||
(0.089) | (0.089) | (0.110) | (0.087) | (0.089) | (0.085) | (0.089) | (0.083) | |||
0.8 | 0.901 | 0.894 | 1.205 | 0.904 | 0.881 | 0.828 | 0.871 | 0.769 | ||
(0.073) | (0.073) | (0.095) | (0.072) | (0.073) | (0.067) | (0.082) | (0.061) | |||
Winning Ratio | ||||||||||
0.2 | 0.0% | 0.0% | 0.0% | 1.0% | 49.5% | 13.0% | 5.0% | 28.5% | ||
0.4 | 0.0% | 0.0% | 0.0% | 1.0% | 17.5% | 13.5% | 6.5% | 61.5% | ||
0.6 | 0.0% | 0.0% | 0.0% | 0.0% | 2.5% | 6.0% | 4.5% | 87.0% | ||
0.8 | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.5% | 3.0% | 96.5% | ||
Loss to | ||||||||||
0.2 | 88.5% | 77.0% | 100.0% | 69.5% | 41.0% | 54.0% | 77.0% | NA | ||
0.4 | 96.5% | 91.5% | 100.0% | 91.0% | 74.5% | 74.5% | 86.0% | NA | ||
0.6 | 99.5% | 98.5% | 100.0% | 99.5% | 94.5% | 91.5% | 94.5% | NA | ||
0.8 | 100.0% | 100.0% | 100.0% | 100.0% | 100.0% | 99.5% | 97.0% | NA | ||
Average | ||||||||||
0.2 | 2.528 | 2.507 | 2.874 | 2.513 | 2.462 | 2.486 | 2.520 | 2.487 | ||
(0.200) | (0.202) | (0.219) | (0.184) | (0.185) | (0.203) | (0.207) | (0.205) | |||
0.4 | 1.604 | 1.591 | 1.878 | 1.606 | 1.563 | 1.564 | 1.591 | 1.551 | ||
(0.117) | (0.118) | (0.143) | (0.115) | (0.116) | (0.116) | (0.118) | (0.118) | |||
0.6 | 1.156 | 1.146 | 1.394 | 1.161 | 1.127 | 1.110 | 1.141 | 1.085 | ||
(0.099) | (0.099) | (0.097) | (0.100) | (0.099) | (0.097) | (0.102) | (0.094) | |||
0.8 | 0.798 | 0.792 | 1.074 | 0.802 | 0.780 | 0.742 | 0.776 | 0.707 | ||
(0.063) | (0.063) | (0.083) | (0.063) | (0.062) | (0.058) | (0.062) | (0.056) | |||
Winning Ratio | ||||||||||
0.2 | 0.0% | 0.0% | 0.0% | 2.0% | 58.5% | 11.5% | 6.5% | 18.5% | ||
0.4 | 0.0% | 0.0% | 0.0% | 1.0% | 36.5% | 13.0% | 6.0% | 43.0% | ||
0.6 | 0.0% | 0.0% | 0.0% | 0.0% | 12.1% | 12.1% | 4.0% | 71.7% | ||
0.8 | 0.0% | 0.0% | 0.0% | 0.0% | 0.5 % | 1.0% | 3.5% | 95.0% | ||
Loss to | ||||||||||
0.2 | 82.0% | 69.5% | 100.0% | 62.5% | 34.0% | 47.0% | 76.0% | NA | ||
0.4 | 90.5% | 85.5% | 100.0% | 86.0% | 57.0% | 65.0% | 82.0% | NA | ||
0.6 | 100.0% | 98.0% | 100.0% | 98.0% | 82.8% | 82.8% | 93.9% | NA | ||
0.8 | 100.0% | 100.0% | 100.0% | 99.5% | 98.5% | 97.0% | 96.5% | NA | ||
Average | ||||||||||
0.2 | 2.570 | 2.548 | 2.895 | 2.546 | 2.498 | 2.485 | 2.529 | 2.491 | ||
(0.202) | (0.201) | (0.220) | (0.194) | (0.194) | (0.196) | (0.200) | (0.194) | |||
0.4 | 1.646 | 1.632 | 1.920 | 1.646 | 1.603 | 1.539 | 1.588 | 1.542 | ||
(0.128) | (0.129) | (0.155) | (0.129) | (0.126) | (0.124) | (0.132) | (0.128) | |||
0.6 | 1.227 | 1.217 | 1.445 | 1.234 | 1.199 | 1.086 | 1.137 | 1.080 | ||
(0.091) | (0.092) | (0.117) | (0.090) | (0.090) | (0.085) | (0.094) | (0.084) | |||
0.8 | 0.917 | 0.910 | 1.139 | 0.922 | 0.899 | 0.726 | 0.799 | 0.710 | ||
(0.064) | (0.064) | (0.102) | (0.065) | (0.065) | (0.055) | (0.065) | (0.057) | |||
Winning Ratio | ||||||||||
0.2 | 0.0% | 0.0% | 0.0 % | 0.5 % | 35.0 % | 30.0 % | 8.5 % | 23.5% | ||
0.4 | 0.0% | 0.0% | 0.0% | 0.0% | 10.5% | 39.0% | 11.5% | 39.0% | ||
0.6 | 0.0% | 0.0% | 0.0% | 0.0% | 0.5% | 39.0% | 4.0% | 56.5% | ||
0.8 | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 25.5% | 1.5% | 73.0% | ||
Loss to | ||||||||||
0.2 | 89.5% | 81.5% | 100.0% | 77.0% | 50.5 % | 37.5% | 76.0% | NA | ||
0.4 | 93.0% | 91.5% | 100.0% | 90.5% | 83.5% | 44.0% | 81.5% | NA | ||
0.6 | 100.0% | 99.0% | 100.0% | 99.5% | 98.5% | 59.5 % | 92.0% | NA | ||
0.8 | 100.0% | 100.0% | 100.0% | 100.0% | 100.0% | 74.0% | 98.5% | NA |
Note: The standard error of is given in parentheses.
error | |||||||||
---|---|---|---|---|---|---|---|---|---|
Average | |||||||||
case 1 | 1.170 | 1.162 | 1.134 | 1.132 | 1.119 | 1.095 | 1.157 | 0.964 | |
(0.160) | (0.161) | (0.152) | (0.158) | (0.158) | (0.149) | (0.168) | (0.119) | ||
case 2 | 1.298 | 1.287 | 1.271 | 1.255 | 1.238 | 1.221 | 1.279 | 1.100 | |
(0.164) | (0.164) | (0.155) | (0.159) | (0.158) | (0.156) | (0.173) | (0.138) | ||
case 3 | 1.241 | 1.234 | 1.186 | 1.198 | 1.183 | 1.167 | 1.230 | 1.049 | |
(0.144) | (0.146) | (0.148) | (0.145) | (0.145) | (0.138) | (0.152) | (0.120) | ||
case 4 | 1.222 | 1.212 | 1.255 | 1.177 | 1.161 | 1.145 | 1.195 | 1.015 | |
(0.136) | (0.138) | (0.153) | (0.135) | (0.136) | (0.130) | (0.152) | (0.114) | ||
case 5 | 1.159 | 1.151 | 1.221 | 1.121 | 1.107 | 1.083 | 1.151 | 0.944 | |
(0.144) | (0.145) | (0.159) | (0.142) | (0.141) | (0.134) | (0.152) | (0.113) | ||
case 6 | 1.177 | 1.169 | 1.386 | 1.136 | 1.121 | 1.101 | 1.159 | 0.971 | |
(0.146) | (0.145) | (0.178) | (0.141) | (0.140) | (0.137) | (0.157) | (0.113) | ||
Winning Ratio | |||||||||
case 1 | 0.0% | 0.0% | 0.0% | 0.0% | 3.5% | 1.0% | 0.5% | 95.0% | |
case 2 | 0.0% | 0.0% | 0.0% | 0.0% | 4.0% | 2.5% | 0.5% | 93.0% | |
case 3 | 0.0% | 0.0% | 0.0% | 0.0% | 5.0% | 1.5% | 1.0% | 92.5% | |
case 4 | 0.0% | 0.0% | 0.0% | 0.0% | 2.0% | 1.0% | 0.5% | 96.5% | |
case 5 | 0.0% | 0.0% | 0.0% | 0.0% | 2.0% | 0.5% | 0.5% | 97.0% | |
case 6 | 0.0% | 0.0% | 0.0% | 0.0% | 3.0% | 1.5% | 1.5% | 94.0% | |
Loss to | |||||||||
case 1 | 99.5% | 99.5% | 100.0% | 97.0% | 96.0% | 97.0% | 98.0% | NA | |
case 2 | 99.5% | 99.5% | 100.0% | 97.0% | 94.0% | 95.0% | 99.0% | NA | |
case 3 | 98.5% | 97.5% | 100.0% | 95.5 % | 94.0% | 95.0% | 98.5% | NA | |
case 4 | 99.0% | 98.5% | 100.0% | 98.5% | 97.5% | 97.0% | 99.0% | NA | |
case 5 | 98.0% | 98.0% | 100.0% | 98.0% | 97.5% | 98.0% | 98.5% | NA | |
case 6 | 99.0% | 98.5% | 100.0% | 97.0% | 96.0% | 96.0% | 97.0% | NA | |
Average | |||||||||
case 1 | 1.213 | 1.209 | 1.237 | 1.143 | 1.125 | 1.153 | 1.203 | 1.000 | |
(0.147) | (0.151) | (0.162) | (0.148) | (0.147) | (0.145) | (0.152) | (0.126) | ||
case 2 | 1.343 | 1.339 | 1.411 | 1.267 | 1.243 | 1.284 | 1.353 | 1.140 | |
(0.156) | (0.163) | (0.172) | (0.154) | (0.155) | (0.156) | (0.167) | (0.143) | ||
case 3 | 1.276 | 1.271 | 1.315 | 1.205 | 1.184 | 1.217 | 1.281 | 1.062 | |
(0.151) | (0.156) | (0.173) | (0.148) | (0.150) | (0.149) | (0.162) | (0.129) | ||
case 4 | 1.278 | 1.276 | 1.388 | 1.208 | 1.186 | 1.221 | 1.288 | 1.069 | |
(0.151) | (0.155) | (0.173) | (0.144) | (0.142) | (0.147) | (0.155) | (0.124) | ||
case 5 | 1.182 | 1.182 | 1.359 | 1.116 | 1.095 | 1.127 | 1.190 | 0.973 | |
(0.144) | (0.147) | (0.177) | (0.140) | (0.141) | (0.139) | (0.153) | (0.113) | ||
case 6 | 1.215 | 1.211 | 1.465 | 1.149 | 1.129 | 1.156 | 1.222 | 1.001 | |
(0.137) | (0.138) | (0.197) | (0.133) | (0.134) | (0.131) | (0.150) | (0.110) | ||
Winning Ratio | |||||||||
case 1 | 0.0% | 0.0% | 0.0% | 0.5% | 4.0 % | 0.5% | 0.0% | 95.0% | |
case 2 | 0.0% | 0.0% | 0.0% | 0.0% | 9.0% | 0.0% | 0.0% | 91.0% | |
case 3 | 0.0% | 0.0% | 0.0% | 0.0% | 6.5% | 0.0% | 0.0% | 93.5% | |
case 4 | 0.0% | 0.0% | 0.0 % | 0.0% | 7.0% | 0.0% | 0.5% | 92.5% | |
case 5 | 0.0% | 0.0% | 0.0% | 0.0% | 7.5% | 0.0% | 0.5% | 92.0% | |
case 6 | 0.0% | 0.0% | 0.0% | 0.0% | 8.0% | 0.0% | 0.0% | 92.0% | |
Loss to | |||||||||
case 1 | 99.5% | 99.0% | 100.0% | 97.0% | 95.5% | 98.5% | 99.5% | NA | |
case 2 | 100.0% | 100.0% | 100.0 % | 97.5% | 91.0% | 100.0 % | 100.0% | NA | |
case 3 | 100.0% | 100.0% | 100.0 % | 98.0% | 93.5% | 99.5% | 100.0% | NA | |
case 4 | 99.5% | 99.5% | 100.0% | 95.5% | 93.0 % | 98.0% | 99.5% | NA | |
case 5 | 99.5% | 98.5% | 100.0% | 94.0% | 92.5% | 97.0 % | 99.0% | NA | |
case 6 | 98.5% | 99.0% | 100.0% | 94.0% | 92.0% | 96.5 % | 99.5% | NA |
Note: The standard error of is given in parentheses.
To implement our procedure, we need to determine the degree of the B-splines and the number of knots, which play important roles in the numerical studies. Recent research findings (Huang et al. (2004) and Kim (2007)) have shown that lower-order splines, such as linear splines (), might be a better choice. It is well known that higher-order splines can induce complicated interactions and collinearity among the variables in the model, as the effect of the splines on the model is multiplicative. Therefore, we suggest using linear splines in our simulations because of their desirable properties, such as the optimality results of Koenker et al. (1994). Moreover, we set the number of interior knots as with being the largest integer not greater than . In addition, it is natural to ask whether our proposed method is sensitive to the choice of the basis set , so we conduct a sensitivity analysis for the choice of . Similar to Frumento and Bottai (2016) and Yang et al. (2017), we consider the following three types of basis functions
where denotes the distribution function of the standard normal distribution. Table 1 lists the average of and the estimated model weight vector for JQPLMA with different basis functions for and in Example 1. Table 1 shows that the second, third and fourth sub-models carry almost all of the weight, and the combination of these three models is indeed the true model, which indicates that the proposed cross-validation-based method works very well for selecting the weights in the model averaging prediction. Furthermore, there is little difference in and among the different basis functions, indicating that our proposal is not sensitive to the selection of . The results of and for the other settings are also insensitive to the selection of ; to save space, we do not report them. Therefore, we fix in the simulation studies.
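A hedged sketch of such a sensitivity check is given below: three candidate basis sets for the quantile-coefficient functions are defined and can be swapped into the fitting step. The exact basis sets used in the paper are not reproduced here; the third set uses the standard normal quantile function, consistent with the reference to the standard normal distribution function above.

```r
## Three hypothetical candidate basis sets for b(tau).
b1 <- function(tau) cbind(1, tau, tau^2, tau^3)              # polynomial basis
b2 <- function(tau) cbind(1, tau, log(tau), log(1 - tau))    # polynomial plus log terms
b3 <- function(tau) cbind(1, tau, qnorm(tau))                # normal-quantile based basis

tau_grid   <- seq(0.05, 0.95, by = 0.05)
basis_list <- list(b1 = b1, b2 = b2, b3 = b3)
sapply(basis_list, function(b) ncol(b(tau_grid)))            # dimension of each candidate basis
```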
The simulation results over all designs are reported in Tables 2–3. Several conclusions can be drawn. First, we examine the performance over different signal strengths () and different levels of dependence among the covariates (). From Table 2, we confirm that our proposed approach JQPLMA yields the smallest and the highest winning ratio when varies from 0.4 to 0.8. When , JCQLMA and EW are slightly better than JQPLMA. One possible explanation is that our procedure requires the estimation of more parameters and nonparametric functions, which might result in poorer estimators when the signal is relatively small. It is also interesting that the winning ratio of our method and the loss to JQPLMA increase quickly as increases, indicating that the superiority of JQPLMA becomes increasingly apparent for large . Second, we study the performance over different sparsity levels and error distributions. Table 3 reveals that JQPLMA outperforms all competing methods uniformly. It is not surprising that the traditional model-based approaches (QLRM, QRCM, QPAM and QPL) have poor prediction performance, because they adopt a single misspecified model structure to make predictions. Furthermore, although all model averaging approaches (JQLMA, JCQLMA, EW and JQPLMA) employ misspecified candidate models, the proposed JQPLMA optimally combines useful information from more flexible semiparametric sub-models and thus produces more accurate predictions.
In summary, the simulation studies show that our proposed procedure has satisfactory finite-sample properties across various settings.
References
- Ando and Li, (2014) Ando, T. and Li, K.-C. (2014). A model-averaging approach for high-dimensional regression. Journal of the American Statistical Association, 109:254–265.
- Belloni and Chernozhukov, (2011) Belloni, A. and Chernozhukov, V. (2011). L1 penalized quantile regression in high-dimensional sparse models. The Annals of Statistics, 39:82–130.
- de Boor, (2001) de Boor, C. (2001). A practical guide to splines. Springer, New York.
- Engle et al., (1986) Engle, R., Granger, C., Rice, J., and Weiss, A. (1986). Semiparametric estimates of the relation between weather and electricity sales. Journal of the American Statistical Association, 81:310–320.
- Feng and Zhu, (2016) Feng, X. and Zhu, L. (2016). Estimation and testing of varying coefficients in quantile regression. Journal of the American Statistical Association, 111:266–274.
- Feng et al., (2021) Feng, Y., Liu, Q., Yao, Q., and Zhao, G. (2021). Model averaging for nonlinear regression models. Journal of Business and Economic Statistics, https://doi.org/10.1080/07350015.2020.1870477.
- Frumento and Bottai, (2016) Frumento, P. and Bottai, M. (2016). Parametric modeling of quantile regression coefficient functions. Biometrics, 72:74–84.
- Frumento et al., (2021) Frumento, P., Bottai, M., and Fernandez-Val, I. (2021). Parametric modeling of quantile regression coefficient functions with longitudinal data. Journal of the American Statistical Association, 116:783–797.
- Hansen and Racine, (2012) Hansen, B. and Racine, J. (2012). Jackknife model averaging. Journal of Econometrics, 167:38–46.
- Hardle et al., (2000) Hardle, W., Liang, H., and Gao, J. (2000). Partially Linear Models. Springer Physica, Heidelberg.
- Huang et al., (2004) Huang, J., Wu, C., and Zhou, L. (2004). Polynomial spline estimation and inference for varying coefficient models with longitudinal data. Statistica Sinica, 14:763–788.
- Kai et al., (2011) Kai, B., Li, R., and Zou, H. (2011). New efficient estimation and variable selection methods for semiparametric varying-coefficient partially linear models. The Annals of Statistics, 39:305–332.
- Kim, (2007) Kim, M. (2007). Quantile regression with varying coefficients. The Annals of Statistics, 35:92–108.
- Koenker, (2005) Koenker, R. (2005). Quantile regression. Cambridge University Press, Cambridge.
- Koenker et al., (1994) Koenker, R., Ng, P., and Portnoy, S. (1994). Quantile smoothing splines. Biometrika, 81:673–680.
- Kong and Xia, (2014) Kong, E. and Xia, Y. (2014). An adaptive composite quantile approach to dimension reduction. The Annals of Statistics, 42:1657–1688.
- Lee and Shin, (2021) Lee, J. and Shin, Y. (2021). Complete subset averaging for quantile regressions. Econometric Theory, in press.
- (18) Li, C., Li, Q., Racine, J., and Zhang, D. (2018a). Optimal model averaging of varying coefficient models. Statistica Sinica, 28:2795–2809.
- (19) Li, J., Xia, X., Wong, W., and Nott, D. (2018b). Varying-coefficient semiparametric model averaging prediction. Biometrics, 74:1417–1426.
- Liang and Li, (2009) Liang, H. and Li, R. (2009). Variable selection and inference in partially linear error-in-variable models. Journal of the American Statistical Association, 104:234–248.
- Liang et al., (2007) Liang, H., Wang, S., and Carroll, R. (2007). Partially linear models with missing response variables and error-prone covariates. Biometrika, 94:185–198.
- Lu and Su, (2015) Lu, X. and Su, L. (2015). Jackknife model averaging for quantile regressions. Journal of Econometrics, 188:40–58.
- Ma and He, (2016) Ma, S. and He, X. (2016). Inference for single-index quantile regression models with profile optimization. The Annals of Statistics, 44:1234–1268.
- Schumaker, (1981) Schumaker, L. (1981). Spline Functions: Basic Theory. Wiley, New York.
- Sherwood and Wang, (2016) Sherwood, B. and Wang, L. (2016). Partially linear additive quantile regression in ultra-high dimension. The Annals of Statistics, 44:288–317.
- Sun et al., (2017) Sun, Z., Sun, L., Lu, X., Zhu, J., and Li, Y. (2017). Frequentist model averaging estimation for the censored partial linear quantile regression model. Journal of Statistical Planning and Inference, 189:1–15.
- Wan et al., (2010) Wan, A., Zhang, X., and Zou, G. (2010). Least squares model averaging by Mallows criterion. Journal of Econometrics, 156:277–283.
- Wang et al., (2012) Wang, L., Wu, Y., and Li, R. (2012). Quantile regression for analyzing heterogeneity in ultra-high dimension. Journal of the American Statistical Association, 107:214–222.
- Wang and Zou, (2019) Wang, M. and Zou, G. (2019). Jackknife model averaging for composite quantile regression. https://arxiv.org/abs/1910.12209v1.
- Xie and Huang, (2009) Xie, H. and Huang, J. (2009). SCAD-penalized regression in high-dimensional partially linear models. The Annals of Statistics, 37:673–696.
- Yang et al., (2017) Yang, C., Chen, Y., and Chang, H. (2017). Composite marginal quantile regression analysis for longitudinal adolescent body mass index data. Statistics in Medicine, 36:3380–3397.
- Zhang et al., (2011) Zhang, H., Cheng, G., and Liu, Y. (2011). Linear or nonlinear? automatic structure discovery for partially linear models. Journal of the American Statistical Association, 106:1099–1112.
- Zhang and Liu, (2018) Zhang, X. and Liu, C.-A. (2018). Inference after model averaging in linear regression models. Econometric Theory, 35:816–841.
- Zhang and Wang, (2019) Zhang, X. and Wang, W. (2019). Optimal model averaging estimation for partially linear models. Statistica Sinica, 29:693–718.
- Zhang et al., (2020) Zhang, X., Zou, G., Liang, H., and Carroll, R. (2020). Parsimonious model averaging with a diverging number of parameters. Journal of the American Statistical Association, 115:972–984.
- Zhu et al., (2019) Zhu, R., Wan, A., Zhang, X., and Zou, G. (2019). A Mallows-type model averaging estimator for the varying-coefficient partially linear model. Journal of the American Statistical Association, 114:882–892.