Volatility prediction comparison via robust volatility proxies: An empirical deviation perspective
Abstract
Volatility forecasting is crucial to risk management and portfolio construction. One particular challenge of assessing volatility forecasts is how to construct a robust proxy for the unknown true volatility. In this work, we show that the empirical loss comparison between two volatility predictors hinges on the deviation of the volatility proxy from the true volatility. We then establish non-asymptotic deviation bounds for three robust volatility proxies, two of which are based on clipped data, and the third of which is based on exponentially weighted Huber loss minimization. In particular, in order for the Huber approach to adapt to non-stationary financial returns, we propose to solve a tuning-free weighted Huber loss minimization problem to jointly estimate the volatility and the optimal robustification parameter at each time point. We then inflate this robustification parameter and use it to update the volatility proxy to achieve optimal balance between the bias and variance of the global empirical loss. We also extend this Huber method to construct volatility predictors. Finally, we exploit the proposed robust volatility proxy to compare different volatility predictors on the Bitcoin market data. It turns out that when the sample size is limited, applying the robust volatility proxy gives more consistent and stable evaluation of volatility forecasts.
Keywords: Volatility forecasting, Robust loss function, Huber minimization, Risk management, Crypto market.
1 Introduction
Volatility forecasting is a central task for financial practitioners, who need to understand the risk levels of their financial instruments or portfolios. There has been a vast amount of research on improving volatility modeling for financial time series, including the famous ARCH/GARCH models for better capturing volatility clustering, their many variants and more general stochastic volatility models (Engle, 1982; Bollerslev, 1986; Baillie et al., 1996; Taylor, 1994), and on proposing better volatility predictors under different model settings and objectives (Poon and Granger, 2003; Brailsford and Faff, 1996; Andersen et al., 2005; Brooks and Persand, 2003; Christoffersen and Diebold, 2000). This list of volatility forecasting literature is only illustrative and far from complete, given the large body of research on this topic.
The prediction ideas range from the simplest Exponentially Weighted Moving Average (EWMA) (Taylor, 2004), which is adopted by J. P. Morgan's RiskMetrics, to more complicated time series and volatility models including GARCH (Brandt and Jones, 2006; Park, 2002), to option-based or macro-based volatility forecasting (Lamoureux and Lastrapes, 1993; Vasilellis and Meade, 1996; Christiansen et al., 2012), and to more advanced machine learning techniques such as nearest neighbor truncation (Andersen et al., 2012) and Recurrent Neural Networks (RNNs) (Guo et al., 2016). Correspondingly, the underlying model assumption ranges from mere smoothness of nearby volatilities, to different versions of GARCH, to the Black-Scholes model (Black and Scholes, 2019) and its complicated extensions. The data distribution assumption can also vary: data may be normally distributed, heavy-tailed with a known distribution (e.g., the t-distribution), or generally non-normal. When data are generally non-normal, researchers have proposed to use quasi maximum likelihood estimation (QMLE) (Bollerslev and Wooldridge, 1992; Charles and Darné, 2019; Carnero et al., 2012) and its robust standard errors for inference, but the theoretical results are typically asymptotic. Despite good theoretical guarantees, industry practitioners seldom apply QMLE and tend to employ the naive approach of truncating the returns at an ad-hoc level and then applying EWMA.
In this work, we consider a model assumption requiring only smoothness of volatilities. For simplicity, we also assume the volatility time series is given a priori, and that, after conditioning on the volatilities, return innovations are independent. We choose this simple setting for the following reasons. Firstly, our main focus is on building effective robust proxies rather than testing volatility models and constructing fancy volatility predictors. Secondly, although we ignore the weak dependency between return innovations (think of ARMA models (Brockwell and Davis, 2009) for weak dependency), the EWMA predictors and proxies can still have strong temporal dependency due to the data overlap of a rolling window, so our analysis is still nontrivial. Also note that we allow the return time series to be non-stationary. Thirdly, our motivating example is volatility forecasting for the crypto market. Charles and Darné (2019) applied several versions of GARCH models characterized by short memory, asymmetric effects, or long-run and short-run movements, and concluded that none of them seems appropriate for modeling Bitcoin returns. Therefore, starting from conditionally independent data without imposing an overly detailed model such as GARCH may be a good general starting point for the study of robust proxies.
Besides the naive EWMA predictor as our comparison benchmark, we consider a type of robust volatility predictor for the case where the instrument returns exhibit heavy tails in their distributions. Specifically, we only require the returns to have finite fourth moments. We consider the weighted Huber loss minimization, which turns out to be a nontrivial extension of the equal-weighted Huber loss minimization. To achieve the desired rate of convergence, the optimal Huber truncation level for each sample should also depend on the sample weight. In addition, we apply a tuning-free approach following Wang et al. (2020) to tune the Huber truncation level adaptively and automatically. Unlike QMLE, our results focus on non-asymptotic empirical deviation bounds. Therefore, although the main contribution of the paper is on robust proxy construction, we also claim a separate contribution on applying Huber minimization in the EWMA fashion.
Now, given two volatility predictors, the evaluation of their performance is often quite challenging due to two issues: (1) the selection of loss functions, and (2) the selection of proxies, since obviously we cannot observe the true volatilities. The selection of loss functions has been studied by Patton (2011). In Patton's insightful paper, he defined a class of robust losses with the ideal property that, for any unbiased proxy, the ranking of two predictors under one of the robust losses will always be consistent in terms of the long-run expectation. This property is desirable because it tells risk managers to select a robust loss and then not worry much about designing proxies: as long as the proxy is unbiased, everything should just work out. Commonly used robust losses include the mean-squared error (MSE) and the quasi-likelihood loss (QL). However, there is one weakness of Patton's approach that has not been emphasized much in previous literature: the evaluation has to be in long-run expectation. The deviation of the empirical loss, which is what people actually use in practice, from the expected loss may still be large due to a bad choice of volatility proxy. Put another way, the theory does not tell risk managers how much an empirical loss can differ from its expected counterpart.
In this work, we hope to convey the main message that, besides the selection of a robust loss, the selection of a good proxy also matters for effective comparison of predictors, especially when the sample size is not large. For a single time point, we show that the probability of making a false comparison can be very high. The natural question is whether, by averaging the performance comparison over time points, we can obtain a faithful comparison of two predictors with high probability, so that the empirical loss ranking does reflect the population loss ranking. The answer is that we need robust proxies in order to have this kind of guarantee.
We propose three robust proxies and compare them. The first choice uses the clipped squared return at the single current time point as the proxy. This may be the simplest practical choice of a robust proxy; however, it cannot achieve the desired convergence in terms of empirical risk comparison, due to the large variance of using only a single time point. The second option mimics the EWMA proxy: we clip and average over multiple time points close to the evaluation time. To find the proper clipping level, we first run an EWMA tuning-free Huber loss minimization on local data at each time point, which gives a truncation level adaptive to the unknown volatility. The clipping bound is then rescaled to reflect the total sample size. According to the literature on Huber minimization (Catoni, 2012; Fan et al., 2016; Sun et al., 2020), the truncation level needs to scale with the square root of the sample size to balance the bias and variance optimally. Therefore, it is natural to rescale the clipping bound by the square root of the ratio of the total sample size to the local effective sample size. The third proxy exactly solves the EWMA Huber minimization, again with the rescaled truncation. Compared to the first and second proxies, this gives a further improvement in the deviation bound of the proxy, which depends on the central kurtosis rather than the absolute kurtosis. We will illustrate these claims in more detail in later sections.
The Huber loss minimization approach was proposed by Huber (1964) under Huber's $\varepsilon$-contamination model, and its asymptotic properties were studied in Huber (1973). At that time, the truncation level was set as fixed according to the asymptotic efficiency rule, and "robustness" meant achieving minimax optimality under the $\varepsilon$-contamination model (Chen et al., 2018). More recently, Huber's M-estimator has been revisited in the regression setting under the assumption of general heavy-tailed distributions (Catoni, 2012; Fan et al., 2016). Here "robustness" slightly changes its meaning to achieving a sub-Gaussian non-asymptotic deviation bound under the heavy-tailed data assumption. In this setting, the truncation level grows with the sample size, and the resulting M-estimator is still asymptotically unbiased even when the data distribution is asymmetric. Huber's estimator fits the goal of robust volatility prediction and robust proxy construction very well, as squared returns indeed have asymmetric distributions. Since Catoni (2012), a new literature revealing a deeper understanding of Huber's M-estimator has sprung up. For example, Sun et al. (2020) proved the necessity of a finite fourth moment for volatility estimation if we hope to achieve a sub-Gaussian type of deviation bound; Wang et al. (2020) proposed the tuning-free Huber procedure; Chen et al. (2018) and Minsker (2018) extended the Huber methodology to robust covariance matrix estimation.
Robustness is indeed an important concern for real-data volatility forecasting. It has been widely observed that financial returns have fat tails. When it comes to crypto markets, e.g., Bitcoin (BTC), the issue gets more serious, as crypto traders frequently experience huge jumps in the BTC price. For example, BTC plummeted more than 20% in a single day in March 2020. The lack of government regulation probably leaves the market far from efficient. This poses a stronger need for robust methodology to estimate and forecast volatility for crypto markets. Some recent works include Catania et al. (2018); Trucíos (2019); Charles and Darné (2019).
With the BTC returns, we will compare the non-robust EWMA predictor with the robust Huber predictor, with different decays, and evaluate their performance using the non-robust forward EWMA proxy and the robust forward Huber proxy. Both the predictors and the proxies are rolled forward and compared at the end of each day. We apply two robust losses, MSE and QL, to evaluate performance. Interestingly, we will see that when the sample size is large, our proposed robust proxy is very close to the forward EWMA proxy, and both lead to sensible and similar comparisons. However, when the sample size is small, a non-robust proxy can lead to a higher probability of drawing wrong conclusions, whereas the robust proxy, which automatically adapts to the total sample size and the time-varying volatilities, still works as expected. This matches our theoretical findings and provides new insights into applying robust proxies for practical risk evaluation.
The rest of the paper is organized as follows. In Section 2, we first review the definition of robust loss by Patton (2011) and explain our analyzing strategy for high probability bound of the empirical loss. We bridge the empirical loss and the unconditional expected loss, by the conditional expected loss conditioning on proxies. In Section 3, we propose three robust proxies and prove that they can all achieve the correct ranking with high probability, if measured by the conditional expected loss. However, the proxy based on Huber loss minimization will have the smallest probability of making false comparison, if measured by the empirical loss. In Section 4, we will discuss robust predictors and see why the above claim is true and why comparing robust predictors with non-robust predictors can be a valid thing to do. Simulation studies as well as an interesting case study on BTC volatility forecasting are presented in Section 5. We finally conclude the paper with some discussions in Section 6. All the proofs are relegated to the appendix.
2 Evaluation of volatility forecast
In this section, we first review the key conclusions of Patton (2011) on robust loss functions for volatility forecast comparison. We then use examples to see why we also care about the randomness from proxy deviation beyond picking a robust loss.
2.1 Robust loss functions
Suppose we have a time series of returns of a financial instrument. Let $\mathcal{F}_t$ denote the $\sigma$-algebra generated from the returns up to time $t$. Consider a volatility predictor $h_t$, computed at time $t$ based on $\mathcal{F}_t$, that targets the true conditional variance $\sigma_t^2$. We use a loss function $L(\cdot,\cdot)$ to gauge the prediction error of $h_t$. In practice, we never observe $\sigma_t^2$; therefore, in order to evaluate the loss function $L(\sigma_t^2, h_t)$, we have to substitute $\sigma_t^2$ therein with a proxy $\hat\sigma_t^2$, which is computed based on the $\sigma$-algebra generated from the future returns.
Following Patton (2011), to achieve reliable evaluation of volatility forecasts, we wish to have the loss function satisfy the following three desirable properties:
- (a) Mean-pursuit: the optimal predictor under the loss is exactly the conditional expectation of the proxy, i.e., $\arg\min_{h}\mathbb{E}[L(\hat\sigma^2, h)\mid\mathcal{F}_t] = \mathbb{E}[\hat\sigma^2\mid\mathcal{F}_t]$.
- (b) Proxy-robust: given any two predictors $h^a$ and $h^b$ and any unbiased proxy $\hat\sigma^2$, i.e., $\mathbb{E}[\hat\sigma^2\mid\mathcal{F}_t]=\sigma^2$, the ranking $\mathbb{E}\,L(\hat\sigma^2, h^a)\le\mathbb{E}\,L(\hat\sigma^2, h^b)$ holds if and only if $\mathbb{E}\,L(\sigma^2, h^a)\le\mathbb{E}\,L(\sigma^2, h^b)$. This means that the forecast ranking is robust to the choice of the proxy.
- (c) Homogeneous: $L$ is a homogeneous loss function of order $k$, i.e., $L(a\hat\sigma^2, ah)=a^k L(\hat\sigma^2, h)$ for any $a>0$. This ensures that the ranking of two predictors is invariant to the re-scaling of data.
Define the mean squared error (MSE) loss and quasi-likelihood (QL) loss as
$$\mathrm{MSE}(\hat\sigma^2, h) = (\hat\sigma^2 - h)^2, \qquad \mathrm{QL}(\hat\sigma^2, h) = \frac{\hat\sigma^2}{h} - \log\frac{\hat\sigma^2}{h} - 1, \tag{2.1}$$
respectively. Here the QL loss can be viewed, up to an affine transformation, as the negative Gaussian log-likelihood of the variance $h$ when the proxy is the squared return of a normal observation with variance $h$. Besides, QL is always nonnegative, and a Taylor expansion gives that $\mathrm{QL}(\hat\sigma^2, h) \approx \frac{1}{2}(\hat\sigma^2/h - 1)^2$ when $\hat\sigma^2/h$ is around $1$. Patton (2011) shows that among many commonly used loss functions, MSE and QL are the only two that satisfy all three properties above. Specifically, Proposition 1 of Patton (2011) says that, given that a loss $L$ satisfies property (a) and some regularity conditions, it further satisfies property (b) if and only if it takes the form:
$$L(\hat\sigma^2, h) = \tilde{C}(h) + B(\hat\sigma^2) + C(h)\,(\hat\sigma^2 - h), \tag{2.2}$$
where $C$ is the derivative function of $\tilde{C}$ and is monotonically decreasing. Proposition 2 in Patton (2011) establishes that MSE is the only proxy-robust loss that depends solely on the forecast error $\hat\sigma^2 - h$, and that QL is the only proxy-robust loss that depends solely on the ratio $\hat\sigma^2/h$. Finally, Proposition 4 in Patton (2011) gives the entire family of proxy-robust and homogeneous loss functions, which includes QL and MSE (MSE and QL are homogeneous of order 2 and 0, respectively). Given such nice properties of MSE and QL, we mainly use MSE and QL to evaluate and compare volatility forecasts throughout this work.
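For concreteness, a minimal Python sketch of the two losses on the variance scale, following the definitions in (2.1) (the function names are ours):

```python
import numpy as np

def mse_loss(proxy, h):
    """Mean squared error between a variance proxy and a variance forecast."""
    return (np.asarray(proxy) - np.asarray(h)) ** 2

def ql_loss(proxy, h):
    """Quasi-likelihood loss: proxy/h - log(proxy/h) - 1.

    Nonnegative, zero only when proxy == h, and it penalizes under-prediction
    (h too small) more heavily than over-prediction of the same ratio.
    """
    ratio = np.asarray(proxy) / np.asarray(h)
    return ratio - np.log(ratio) - 1.0

# Example: halving the forecast is penalized more by QL than doubling it,
# while MSE reacts only to the absolute error.
print(ql_loss(1.0, 0.5), ql_loss(1.0, 2.0))   # approx 0.307 vs 0.193
```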
2.2 The empirical deviation perspective
Besides selecting a robust loss as Patton (2011) suggested, one also has to nail down the proxy selection for computing the prediction loss. Patton (2011)'s framework does not separate the randomness from the predictors and the proxies, and the proxy-robust property (b) compares two predictors in long-term unconditional expectation, which averages over both sources of randomness. However, it is not clear from Patton (2011), for a given selected proxy, what the probability is that we end up with a wrong comparison of two predictors. How does the random deviation of a proxy affect the comparison? Can some proxies outperform others in terms of a smaller probability of making mistakes in finite samples?
In practice, one has to use the empirical risk to approximate the expected risk when evaluating volatility forecasts. This points to one important issue that property (b) neglects: property (b) concerns only the expected risk and ignores the deviation of the empirical risk from its expectation. Such empirical deviation is further exacerbated by replacing the true volatility with its proxies, jeopardizing accurate evaluation of volatility forecasts. Our strategy of analysis is as follows: we first link the empirical risk to the conditional risk (conditioning on the selected proxy), claiming that they are close with high probability (see the formal arguments in Section 4), and then study the relationship between comparisons based on the unconditional risk and the conditional risk.
Specifically, we are interested in comparing the accuracy of two series of volatility forecasts $\{h_t^a\}$ and $\{h_t^b\}$. For notational convenience, we drop the time subscript when we refer to a time series unless specified otherwise. Define the corresponding average loss difference between the two forecast series. Without loss of generality, suppose that $h^a$ outperforms $h^b$ in terms of expected loss, i.e.,
$$\frac{1}{n}\sum_{t=1}^{n}\mathbb{E}\,L(\sigma_t^2, h_t^a) \;\le\; \frac{1}{n}\sum_{t=1}^{n}\mathbb{E}\,L(\sigma_t^2, h_t^b). \tag{2.3}$$
The empirical loss comparison can be decomposed into the conditional loss comparison and the difference between empirical loss and conditional loss.
Therefore, for any given tolerance level, we study the following two probabilities:
(2.4) |
(2.5) |
We aim to select stable proxies to make I small, so that the probability of obtaining a false ranking of the empirical risk is small. Meanwhile, with a selected proxy, we hope that II can be well controlled for the predictors we care to compare. Note that only the randomness from the proxy matters in I, so we can focus on proxy design by studying this quantity. Then we would like to make sure the difference between the empirical risk and the conditional risk is indeed small by studying II. The probability in II is with respect to both the proxy and the predictor. By following this strategy of analysis, we separate the randomness of the predictor from that of the proxy and eventually give results on empirical deviation rather than in expectation.
2.3 False comparison due to proxy randomness
To illustrate this issue, we first focus on a single time point $t$. We compare two volatility forecasts $h_t^a$ and $h_t^b$, where $h_t^a$ has the smaller expected loss. We are interested in the probability of having a reversed rank of forecast precision between $h_t^a$ and $h_t^b$, conditioning on the selected proxy. Note that this probability is with respect to the randomness of the proxy; in the sequel, we show that it may not be small for a general proxy. But if we can select a good proxy to control this probability well, we can ensure a correct comparison with high probability.
Now consider MSE and QL as the loss functions, so that we can derive explicitly the condition for the empirical risk comparison to be consistent with the expected risk comparison. For simplicity, assume that the proxy and the two predictors are independent. We wish to calculate the probability of having the forecast rank in conditional expectation be opposite to the rank in unconditional expectation. When the loss is chosen to be MSE, we have
(2.6) |
and
Therefore,
For illustration purposes, consider a deterministic scenario where is the oracle predictor, and where (so that (2.6) holds). Then
Similarly, if , we have
We can see from the two equations above that a large deviation of the proxy from the true volatility gives rise to inconsistency between forecast comparisons based on empirical risk and expected risk. When we choose the loss to be QL, we have that
and that
Similarly, we consider a deterministic setup where , and where with a misspecified scale. To ensure that , we have when . In this case, we deduce that
Similarly, we can see that the volatility forecast rank will be flipped once the deviation of the proxy from the true volatility is large.
Note again that in the derivation above, the probability of reversing the expected forecast rank is evaluated at a single time point , which is far from enough to yield reliable comparison between volatility predictors. The common practice is to compute the empirical average loss of the predictors over time for their performance evaluation. Two natural questions arise: Does the empirical average completely resolve the instability of forecast evaluation due to the deviation of volatility proxies? If not, how should we robustify our volatility proxies to mitigate their empirical deviation?
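Before constructing robust proxies, a small Monte Carlo sketch makes the issue concrete: with a single-period squared-return proxy (an illustrative setup of our own, with heavy-tailed innovations and a deliberately misspecified competitor), the probability of reversing the expected ranking is far from negligible.

```python
import numpy as np

rng = np.random.default_rng(0)

def ql_loss(proxy, h):
    r = proxy / h
    return r - np.log(r) - 1.0

sigma2 = 1.0                                  # true variance at the single time point
h_good, h_bad = sigma2, 1.5 * sigma2          # oracle forecast vs. misspecified forecast
nu = 5                                        # heavy-tailed innovations (assumed for illustration)
z = rng.standard_t(nu, size=100_000) * np.sqrt((nu - 2) / nu)   # unit-variance t noise
proxy = sigma2 * z ** 2                       # single squared-return proxy; unbiased for sigma2

# Frequency with which the noisy proxy makes the worse forecast look better.
mse_flip = np.mean((proxy - h_bad) ** 2 < (proxy - h_good) ** 2)
ql_flip = np.mean(ql_loss(proxy, h_bad) < ql_loss(proxy, h_good))
print(f"P(rank flipped) at a single point: MSE {mse_flip:.2f}, QL {ql_flip:.2f}")
```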
3 Robust volatility proxies
3.1 Problem setup
Our goal in this section is to construct robust volatility proxies to ensure that $h^a$ maintains its empirical superiority with high probability, or more precisely, that the probability I in (2.4) is small, given (2.3). We first present our assumption on the data generation process.
Assumption 1.
Given the true volatility series $\{\sigma_t^2\}$, the instrument returns $\{r_t\}$ are independent with $\mathbb{E}[r_t]=0$ and $\mathrm{Var}(r_t)=\sigma_t^2$. The central fourth moment $\mathbb{E}[(r_t^2-\sigma_t^2)^2]$ and the absolute fourth moment $\mathbb{E}[r_t^4]$ of the returns are both finite.
Now we introduce some quantities that frequently appear in the sequel. At time , define the smoothness parameters
(3.1) | ||||
where is the forward exponential-decay weight at time from time with rate , and where is the backward exponential-decay weight with rate . These smoothness parameters characterize how fast the distribution of volatility varies as time evolves, and our theory explicitly derives their impact. As we shall see, our robust volatility proxies yield desirable statistical performance as long as these smoothness parameters are small, meaning that the variation of the volatility distribution is slow. Besides, define the forward and backward effective sample sizes as
(3.2) |
respectively, and define the forward and backward exponential-weighted moving average (EWMA) of the central fourth moment as
(3.3) |
respectively. Similarly, we have and as the forward and backward EWMA of the absolute fourth moment.
Consider a mean-pursuit and proxy-robust loss function that takes the form (2.2):
where we write for any constant . When and (), is MSE. When and (), becomes QL. Under Assumption 1, and are independent for . Therefore, . Given (2.3), we wish to show that outperforms in conditional risk with high probability, i.e. I is small. Recall that
where is a deviation parameter that may exceed , and the last equation is due to the fact that .
3.2 Exponentially weighted Huber estimator
We first review the tuning-free adaptive Huber estimator proposed in Wang et al. (2020). Define the Huber loss function with robustification parameter $\tau > 0$ as
$$\ell_\tau(x) = \begin{cases} x^2/2, & |x| \le \tau, \\ \tau|x| - \tau^2/2, & |x| > \tau. \end{cases}$$
Suppose we have $n$ independent observations $Y_1,\dots,Y_n$ with $\mathbb{E}[Y_i]=\mu$ and $\mathrm{Var}(Y_i)=\sigma^2<\infty$. The Huber mean estimator is obtained by solving the following optimization problem:
$$\hat\mu_\tau = \arg\min_{\mu'} \sum_{i=1}^{n} \ell_\tau(Y_i - \mu').$$
Fan et al. (2017) show that when $\tau$ is of order $\sigma\sqrt{n/z}$, the estimator $\hat\mu_\tau$ achieves the optimal statistical rate with a sub-Gaussian deviation bound:
$$\mathbb{P}\left(|\hat\mu_\tau - \mu| \ge C\,\sigma\sqrt{\frac{z}{n}}\right) \le 2e^{-z} \tag{3.4}$$
for some universal constant $C$.
In practice, $\sigma$ is unknown, and one therefore has to rely on cross validation (CV) to tune $\tau$, which incurs a loss of sample efficiency. Wang et al. (2020) propose a data-driven principle to estimate $\mu$ and the optimal $\tau$ jointly by iteratively solving the following two equations:
(3.5) |
where $z$ is the same deviation parameter as in (3.4). Specifically, we start with an initial mean estimate and solve the second equation for $\tau$. We then plug this $\tau$ into the first equation to get an updated mean estimate. We repeat these two steps until the algorithm converges and use the final value as the estimator for $\mu$. Wang et al. (2020) proved that (i) with an appropriate choice of the right-hand side of the second equation above, its solution gives a $\tau$ of the desired order with probability approaching $1$; (ii) with this $\tau$ plugged into the first equation, the solution satisfies (3.4), even when the data are asymmetrically distributed with heavy tails. Note that Wang et al. (2020) call the above procedure tuning-free, in the sense that knowledge of $\sigma$ is not needed, but we still have the deviation parameter $z$ used to control the exception probability. The paper suggests a simple default value of $z$ for practical use.
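For concreteness, a minimal Python sketch of this iterative procedure follows; the exact form of the censored second-moment equation, the default value of the deviation parameter (log n below), and the initialization are our illustrative assumptions rather than the exact specification of Wang et al. (2020).

```python
import numpy as np

def solve_location(y, tau, mu0, n_iter=200, tol=1e-12):
    """Solve mean_i psi_tau(y_i - mu) = 0 for mu (psi_tau clips the residual at +-tau)."""
    mu = mu0
    for _ in range(n_iter):
        step = np.clip(y - mu, -tau, tau).mean()
        mu += step
        if abs(step) < tol:
            break
    return mu

def solve_tau(y, mu, z):
    """Solve mean_i min((y_i - mu)^2, tau^2) / tau^2 = z / n for tau by bisection."""
    r2 = (y - mu) ** 2
    n = len(y)
    target = z / n
    lo = 1e-12
    hi = np.sqrt(r2.mean() * n / z) + np.sqrt(r2.max()) + 1.0   # large enough to bracket the root
    for _ in range(200):
        tau = 0.5 * (lo + hi)
        val = np.minimum(r2, tau ** 2).mean() / tau ** 2         # decreasing in tau
        lo, hi = (tau, hi) if val > target else (lo, tau)
    return 0.5 * (lo + hi)

def tuning_free_huber_mean(y, z=None):
    """Jointly estimate the mean and the robustification level by alternating the two equations."""
    y = np.asarray(y, dtype=float)
    if z is None:
        z = np.log(len(y))                    # assumed default deviation parameter
    mu, tau = np.median(y), np.std(y) + 1e-12
    for _ in range(50):
        tau = solve_tau(y, mu, z)
        mu_new = solve_location(y, tau, mu)
        if abs(mu_new - mu) < 1e-10:
            return mu_new, tau
        mu = mu_new
    return mu, tau
```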
In the context of volatility forecasting, the volatility always varies across time. The well-known phenomenon of volatility clustering in financial markets implies that the volatility typically changes slowly, so that we can borrow data around time $t$ to help estimate $\sigma_t^2$ with little bias. A common practice in quantitative finance is to exploit an exponentially weighted average of squared returns to estimate $\sigma_t^2$, thereby discounting the importance of data that are distant from time $t$. To accommodate such exponential-decay weights, we now propose a sample-weighted variant of the Huber estimator for volatility estimation as follows:
(3.6) |
where the weights are the (exponential-decay) sample weights. Note that the robustification parameters for the observations can be different: intuitively, the higher the sample weight, the lower the corresponding robustification parameter should be, so that we can better guard against heavy-tailed deviations of the more important data points. More technical justification for this choice of robustification parameters is given after Theorem 1. Correspondingly, to adaptively tune the robustification level, we iteratively solve the following two equations until convergence:
(3.7) |
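A corresponding sketch of the sample-weighted variant is given below. The weight-dependent robustification levels and the weighted tuning equation are stated here in an assumed form (per-sample level inversely proportional to the weight, effective sample size in place of n), chosen to be consistent with the discussion around Theorem 1 but not copied from it.

```python
import numpy as np

def exp_weights(n, halflife, backward=True):
    """Normalized exponential-decay weights; half-life measured in periods."""
    lam = 0.5 ** (1.0 / halflife)
    ages = np.arange(n)[::-1] if backward else np.arange(n)   # age 0 = most recent / nearest point
    w = lam ** ages
    return w / w.sum()

def weighted_huber_mean(y, w, z, n_outer=50):
    """Sample-weighted Huber estimator in the spirit of (3.6)-(3.7) (a sketch).

    Assumed specifics (not verbatim from the paper): the per-sample robustification
    level is tau_i = tau / (n * w_i) for normalized weights, so that every weighted
    score term w_i * psi_{tau_i}(y_i - theta) is bounded by tau / n, and tau is tuned
    by a weighted censored second-moment equation with the effective sample size
    n_eff = 1 / sum(w_i^2) in place of n.
    """
    y = np.asarray(y, dtype=float)
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    n = len(y)
    n_eff = 1.0 / np.sum(w ** 2)
    theta, tau = float(np.sum(w * y)), float(np.std(y)) + 1e-12
    for _ in range(n_outer):
        tau_i = tau / (n * w)                       # heavier-weighted points get tighter truncation
        # Step 1: solve sum_i w_i * clip(y_i - theta, -tau_i, tau_i) = 0 for theta.
        for _ in range(200):
            step = np.sum(w * np.clip(y - theta, -tau_i, tau_i))
            theta += step
            if abs(step) < 1e-12:
                break
        # Step 2: re-tune tau from the weighted censored second-moment equation (assumed form).
        r2 = (y - theta) ** 2
        target = z / n_eff
        lo, hi = 1e-12, np.sqrt(np.sum(w * r2) / target) + np.sqrt(r2.max()) + 1.0
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            val = np.sum(w * np.minimum(r2, mid ** 2)) / mid ** 2
            lo, hi = (mid, hi) if val > target else (lo, mid)
        tau_new = 0.5 * (lo + hi)
        if abs(tau_new - tau) < 1e-10:
            return theta, tau_new
        tau = tau_new
    return theta, tau
```

Backward exponential-decay weights (for predictors) and forward ones (for proxies) differ only in which end of the window receives the heaviest weight.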
Our first theorem shows that the solution to the first equation of (3.7) yields a sub-Gaussian estimator of the local volatility, provided that the robustification level is well tuned and that the distribution of the volatility series evolves sufficiently slowly.
Theorem 1.
Remark 1.
Given that are forward exponential-decay weights, we have that
which converges to as , and which converges to as . Therefore, requires both that and that . As , when and , the exception probability is of order , which converges to as . Therefore, if we choose , then as .
Remark 2.
One crucial step of our proof is using Bernstein's inequality to control the derivative of the weighted Huber loss with respect to the location parameter, i.e.,
Through setting a weight-dependent robustification parameter for the data at each time point, we can ensure that every summand in this derivative is uniformly bounded in absolute value, which allows us to apply Bernstein's inequality. This justifies, from a technical perspective, our choice of robustification parameters for different sample weights.
Remark 3.
Our next theorem provides theoretical justification for the second equation of (3.7). It basically says that the solution to that equation gives an appropriately scaled robustification parameter.
Theorem 2.
Remark 4.
Define the half-life parameter . If we fix for a universal constant , which is common practice in volatility forecast, then we can ensure that are all of order .
3.3 Average deviation of volatility proxies
We are now in a position to evaluate I, which concerns the average deviation of the volatility proxies over all the time points. To illustrate the advantage of the sample-weighted Huber volatility proxy proposed in (3.6), we first introduce and investigate two benchmark volatility proxies that are widely used in practice. We then present our average deviation analysis of the sample-weighted Huber proxy.
The first benchmark volatility proxy, which we denote by , is simply a clipped squared return:
(3.9) |
where the clipping threshold plays the role of the robustification parameter and is governed by a deviation parameter analogous to that in (3.7). The threshold is tuned similarly as in (3.7), except that the second equation of (3.9) now does not depend on the volatility estimate and thus can be solved independently. Following Theorem 2, the resulting threshold scales with the square root of the total sample size. The main purpose of choosing such a rate is to balance the bias and variance of the average of the proxies over all time points. The following theorem develops a non-asymptotic bound for the average relative deviation of this clipped proxy.
Theorem 3.
The second benchmark volatility proxy, which we denote by , is defined as
(3.10) |
The second equation of (3.10) is the same as that of (3.9). The only difference between the two benchmark proxies is that the second one exploits not just a single time point, but multiple data points in the near future, to construct the volatility proxy. Accordingly, the clipping threshold is rescaled by the square root of the ratio of the total sample size to the local (forward) effective sample size. The following theorem characterizes the average relative deviation of this clipped EWMA proxy.
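As a rough illustration, the two benchmark proxies could be assembled as follows once a locally tuned clipping level is available (for instance from the tuning-free routine sketched in Section 3.2); the rescaling by the square root of the sample-size ratio follows the discussion above, while the exact constants are assumptions.

```python
import numpy as np

def clipped_proxies(r2_future, w_forward, tau_local, n_total, n_eff_forward):
    """Sketch of the two benchmark proxies at one evaluation time.

    r2_future     : squared returns in the forward window (nearest point first)
    w_forward     : forward exponential-decay weights (same length, nearest point first)
    tau_local     : locally tuned clipping level at this time point (assumed given)
    The clipping level is inflated by sqrt(n_total / n_eff_forward) so that bias and
    variance are balanced for the average over n_total evaluation points, as discussed
    before Theorems 3 and 4; the exact constant is an assumption.
    """
    r2 = np.asarray(r2_future, dtype=float)
    w = np.asarray(w_forward, dtype=float)
    w = w / w.sum()
    c = tau_local * np.sqrt(n_total / n_eff_forward)        # rescaled clipping threshold
    proxy_clip_single = min(r2[0], c)                        # proxy 1: single clipped squared return
    proxy_clip_ewma = float(np.sum(w * np.minimum(r2, c)))   # proxy 2: forward EWMA of clipped returns
    return proxy_clip_single, proxy_clip_ewma
```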
Theorem 4.
Under Assumption 1, for any satisfying that
1. ;
2. ;
3. , ;
and any bounded series such that , let , we have
where depends on .
Remark 5.
Let . To achieve the optimal rate of , Theorem 4 requires that is of order . We also require to be at least the order of .
Remark 6.
One technical challenge in proving Theorem 4 lies in the overlap of the squared returns that are used to construct neighboring proxies, which leads to temporal dependence across the proxy series. To resolve this issue, we apply a more sophisticated variant of Bernstein's inequality for time series data (Zhang, 2021). See Lemma 1 in the appendix.
We now move on to the Huber volatility proxy. At each time point, denote the solution to (3.7) by the locally tuned volatility estimate and robustification parameter. Note that this robustification parameter is tuned based on just the local effective sample size. Given that our focus is now on the average deviation of the volatility proxies over all evaluation points, we need to raise the robustification parameters to reduce the bias of the Huber proxies and rebalance the bias and variance of the average deviation. After all, averaging over a large number of time points mitigates the impact of possible tail events, so that we can relax the thresholding effect of the Huber loss. Specifically, we inflate the locally tuned robustification parameter by the square root of the ratio of the total sample size to the local effective sample size, which is of the correct order according to Theorem 2. We then substitute the inflated parameter into the first equation of (3.7) and solve for the volatility therein to obtain the adjusted proxy; that is to say, the final Huber proxy satisfies the following:
(3.11) |
The inflation factor implies that the larger the total sample size, the closer the corresponding Huber loss is to the squared loss. This justifies the use of the vanilla EWMA proxy, the most common practice in the financial industry, when the total evaluation period is long. However, when the total sample size is not sufficiently large, the Huber proxy yields a more robust estimate of the true volatility. The following theorem characterizes the average relative deviation of the Huber proxies.
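A matching sketch of the adjusted Huber proxy (3.11): inflate the locally tuned robustification level by the square root of the sample-size ratio and re-solve only the weighted Huber score equation with that level held fixed (again, the constants are our reading of the text rather than the paper's exact choice).

```python
import numpy as np

def huber_proxy(r2_future, w_forward, tau_local, n_total, n_eff_forward):
    """Adjusted Huber proxy in the spirit of (3.11) (a sketch with assumed constants)."""
    r2 = np.asarray(r2_future, dtype=float)
    w = np.asarray(w_forward, dtype=float)
    w = w / w.sum()
    tau_infl = tau_local * np.sqrt(n_total / n_eff_forward)   # inflated robustification level
    tau_i = tau_infl / (len(w) * w)                           # per-sample levels, as in the weighted estimator
    theta = float(np.sum(w * r2))                             # forward EWMA as the starting value
    for _ in range(500):                                      # re-solve the weighted Huber score equation only
        step = np.sum(w * np.clip(r2 - theta, -tau_i, tau_i))
        theta += step
        if abs(step) < 1e-12:
            break
    return theta
```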
Theorem 5.
Under Assumption 1, for any satisfying that
1. ;
2. ;
3. , ;
4. For any time point, the corresponding weighted Huber loss is strongly convex in the relevant region;
and any bounded series such that , let , we have
where depends on and .
Remark 7.
Compared with the previous two benchmark proxies, the main advantage of the Huber volatility proxy is that its average deviation error depends on the central fourth moment of the returns instead of the absolute one.
Remark 8.
Strong convexity is a standard assumption that can be verified for the Huber loss. For example, Proposition B.1 of Chen and Zhou (2020) shows that the equally weighted Huber loss enjoys strong convexity in a region around the true mean with high probability when the sample size is large enough. Such strong convexity paves the way to apply Lemma 1 to the Huber proxies, so that we can establish their Bernstein-type concentration. Please refer to Lemma 2 in the appendix for details.
Remark 9.
To achieve the optimal rate of convergence, we need to additionally assume that the smoothness parameters are of a smaller order, which in practice requires a certain degree of volatility smoothness.
4 Robust predictors
In this section, we further take into account the randomness from predictors and study bounding II, the difference between empirical risk and conditional risk.
4.1 Robust volatility predictor
We essentially follow (3.6), the sample-weighted Huber mean estimator, to construct robust volatility predictors. The only difference is that now we cannot touch any data beyond time $t$; we can only look backward from time $t$. Consider the following volatility predictor based on the past data points with backward exponential-decay weights:
(4.1) |
Similarly to (3.7), we iteratively solve the following two equations for the predictor and its robustification parameter simultaneously:
(4.2) |
where we recall the backward weights and effective sample size defined in Section 3.1. Theorem 1 showed concentration of the Huber estimate around the true volatility, which corresponds to the MSE loss. More generally, we hope to give results for a general robust loss. According to property (a) in Section 2.1, the conditional expectation of the proxy is the optimal predictor under such a loss. Therefore, we hope to bound
This should be easy to control if we assume smoothness of the loss function. We give the following theorem.
Theorem 6.
Assume there exist such that . If , under Assumption 1, for , we have
where is the solution of the first equation.
Remark 10.
Here, for notational simplicity, we used the same estimation horizon and the same exponential decay for constructing both predictors and proxies. In practice, of course, they do not need to be the same. In our real data example, we will use a slower decay for constructing predictors and a faster decay for proxies, which seems to be common practice for real financial data, where we typically use more data for constructing predictors and less data for constructing proxies. We will stick to a window equal to twice the half-life, so that, equivalently, we use a longer window for predictors and a shorter window for proxies.
Remark 11.
In addition, here we also simplify the theoretical results by assuming the same deviation parameter for constructing proxies and predictors. In practice, we do not need to use the same deviation parameter controlling the tail probability for predictors and proxies. For predictors, as we focus on local performance, it is more natural to use the default choice following Wang et al. (2020). For proxies, as we focus on the overall evaluation, the deviation parameter can be taken to grow with the total sample size. Sometimes we want to monitor the risk evaluation as the sample size grows, in which case a changing deviation parameter may not be a good choice; we do not want to re-solve the local weighted tuning-free Huber problem every time the sample size changes. Therefore, we recommend using a slightly larger fixed value, as in our real data analysis.
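Putting these recommendations together, a hypothetical usage of the Section 3.2 sketches (exp_weights and weighted_huber_mean, both illustrative code rather than the paper's implementation) to form the backward-looking Huber variance predictor might look as follows.

```python
import numpy as np

# Hypothetical usage, reusing exp_weights and weighted_huber_mean from the Section 3.2 sketch.
rng = np.random.default_rng(1)
returns = 0.02 * rng.standard_t(4, size=500)         # synthetic heavy-tailed daily returns
m, halflife = 28, 14                                  # lookback window = twice the half-life
w_back = exp_weights(m, halflife, backward=True)      # heaviest weight on the most recent day
r2_window = returns[-m:] ** 2                         # squared returns, assuming zero mean
h_next, tau_hat = weighted_huber_mean(r2_window, w_back, z=np.log(m))
print(f"one-day-ahead variance forecast: {h_next:.6e}")
```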
4.2 Concentration of robust and non-robust predictors
Recall that a robust loss takes the form (2.2). We can therefore further bound II as follows:
II | |||
We wish to show that both terms can achieve the desired rate for a broad class of predictors and loss functions. When the predictor is the proposed robust predictor in Section 4.1, we can obtain sharp rates for both terms as expected. Moreover, for the vanilla (non-robust) EWMA predictor, we are also able to obtain the same sharp rates for Lipschitz robust losses. The third option is to truncate the predictor at some large constant multiple of the volatility level, in which case we can control the two terms for general robust losses. The bottom line is that for non-robust predictors, we need to make sure the loss does not become too wild, either by shrinking the predictor's effect on the loss (bounded Lipschitz losses) or by clipping the predictor directly.
The interesting observation is that bounding II, or the difference between the empirical risk and the conditional risk, requires minimal assumptions, so that most reasonable predictors, say of the M-estimator form, have no problem satisfying the concentration bound with a proper choice of robust proxy, although we do require the loss not to become too wild (see Theorem 7 for details). Technically, the concentration here only concerns controlling the variance and does not concern the bias of the predictor or of the proxy relative to the true volatility. There is no need to carefully choose the truncation threshold to balance variance and bias optimally.
Theorem 7.
Remark 12.
The proof can be easily extended to more general robust or non-robust predictors of the M-estimator form. Theorem 7 tells us that comparing robust predictors with optimal truncation for a single time point (rather than adjusting the truncation as in constructing proxies) and non-robust predictors (either with rough overall truncation when using a general loss or without any truncation when using a truncated loss) is indeed a valid thing to do, when we employ proper robust proxies.
Remark 13.
Although the first proxy achieves the optimal rate of convergence for comparing the average conditional loss in Theorem 3, we did not manage to show it is valid for comparing the average empirical loss in Theorem 7. The reason is that single-time clipping has no concentration guarantee like Theorem 1 at a single time point, and therefore cannot ensure the proxy to be bounded with high probability at every time point, which is important to make sure the sub-exponential tail in Bernstein's inequality does not dominate the sub-Gaussian tail. Taking into consideration the central fourth moment versus the absolute fourth moment (see Remark 7), we recommend the third proxy as the best practical choice among our three proposals.
5 Numerical study
In this section, we first verify through simulations the advantage of the Huber mean estimator over the truncated mean for estimating the variance. As illustrated by Theorem 5, the statistical error of the Huber mean estimator depends on the central moment, while that of the truncated mean depends on the absolute moment. We then apply the proposed robust proxies to volatility forecasting comparison, using data from the cryptocurrency market. Specifically, we focus on the returns of Bitcoin (BTC) quoted in Tether (USDT), a stablecoin pegged to the US Dollar, in the years 2019 and 2020, which witnessed dramatic volatility of Bitcoin.
5.1 Simulations
We first examine numerically the finite sample performance of the adaptive Huber estimator (Wang et al., 2020) for variance estimation; that is, we solve (3.5) iteratively given the data until convergence. We first draw an independent sample from a heavy-tailed distribution. We investigate the following two distributions:
1. Log-normal distribution, that is, the logarithm of the observation is normally distributed.
2. Student's t distribution with a fixed degree of freedom.
Given that $\mathrm{Var}(Y)=\mathbb{E}[Y^2]-(\mathbb{E}[Y])^2$, we estimate $\mathbb{E}[Y]$ and $\mathbb{E}[Y^2]$ separately and plug these mean estimates into the variance formula to estimate the variance. Besides the Huber mean estimator, we investigate two alternative mean estimators as benchmarks: (a) the naïve sample mean; (b) the sample mean of the data truncated at their upper and lower percentiles. We use MSE and QL (see (2.1) for the definitions) to assess the accuracy of variance estimation. In our simulation, we fix the sample size and repeat evaluating these three methods in 2000 independent Monte Carlo experiments.
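A compact version of this experiment might look as follows; the distribution parameters, the sample size, and the fixed robustification level (a multiple of the MAD) are illustrative assumptions, whereas the paper's experiments tune the Huber level via the tuning-free routine of Section 3.2.

```python
import numpy as np

rng = np.random.default_rng(0)

def huber_mean(y, c=3.0, n_iter=200):
    """Huber mean with a fixed robustification level c * MAD(y) (kept fixed for brevity)."""
    tau = c * np.median(np.abs(y - np.median(y))) + 1e-12
    mu = np.median(y)
    for _ in range(n_iter):
        mu += np.clip(y - mu, -tau, tau).mean()
    return mu

def trimmed_mean(y, alpha=0.02):
    """Mean of the sample truncated at its lower/upper alpha-percentiles."""
    lo, hi = np.percentile(y, [100 * alpha, 100 * (1 - alpha)])
    return y[(y >= lo) & (y <= hi)].mean()

def variance_via(mean_fn, y):
    """Plug mean estimates of Y and Y^2 into Var(Y) = E[Y^2] - (E[Y])^2."""
    return mean_fn(y ** 2) - mean_fn(y) ** 2

n, n_mc = 500, 2000                           # illustrative sample size; 2000 Monte Carlo runs
true_var = (np.exp(1) - 1) * np.exp(1)        # variance of LogNormal(0, 1), our assumed setup
losses = {"sample": [], "trimmed": [], "huber": []}
for _ in range(n_mc):
    y = rng.lognormal(mean=0.0, sigma=1.0, size=n)
    for name, fn in [("sample", np.mean), ("trimmed", trimmed_mean), ("huber", huber_mean)]:
        losses[name].append((variance_via(fn, y) - true_var) ** 2)   # MSE; QL is analogous
for name, vals in losses.items():
    print(name, np.mean(vals))
```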
Figure 1 compares the MSE of the truncated and Huber variance estimators under the log-normal distribution (left) and the t-distribution (right). The red curve represents the MSE of the truncated method for different truncation percentiles on the top x-axis, and the blue curve represents the MSE of the tuning-free Huber method for different values of the deviation parameter on the bottom x-axis. The error bars in both panels represent the standard errors of the MSE. For convenience of comparison, we focus on the ranges of the tuning parameters that exhibit the smile shapes of the MSE of the two methods. Note that the MSE of the naïve sample variance is too large under both distributions to be presented in the plot. Figure 1 shows that the Huber variance estimator outperforms the optimally tuned truncated method over a range of tuning values under both the log-normal and the Student's t distribution. The performance gap is particularly large under the t distribution, where the optimal Huber method achieves substantially smaller MSE than the optimal truncated method.
Figure 2 assesses the QL loss of the truncated and Huber estimators and displays a similar pattern to Figure 1. Again, the naïve sample variance is much worse in terms of QL loss under both the log-normal and the t distribution; we therefore do not present it in the plots. The Huber approach continues to defeat the optimally tuned truncation method under both distributions of our focus. Together with the ranges where the Huber approach is superior in terms of MSE, our results suggest that a single default value of the deviation parameter can be a good practical choice, at least to start with. Such a universal practical choice demonstrates the adaptivity of the tuning-free Huber method.
5.2 BTC/USDT volatility forecasting
We use the BTC/USDT daily returns to demonstrate the benefit of using robust proxies in volatility forecasting comparison.
Fig 3 presents the time series, histogram and normal QQ-plot of the daily BTC returns from 2019-01-01 to 2021-01-01. It is clear that the distribution of the returns is heavy-tailed, that the volatility is clustered and that there are extreme daily returns beyond 10% or even 20%. The empirical mean of the returns over this 2-year period is 38 basis points, which is quite close to zero compared with its volatility. We thus assume the population mean of the return is zero, so that the variance of the return boils down to the mean of the squared return. In the sequel, we focus on robust estimation of the mean of the squared returns.
5.2.1 Construction of volatility predictors and proxies
Let $r_t$ denote the daily return of BTC from the end of day $t-1$ to the end of day $t$. We emphasize that a volatility predictor must be ex-ante: we construct the predictor based on the returns in a backward window and evaluate it at the end of the day. Our proxy for the unobserved variance is instead based on the returns in a forward window.
We consider two volatility prediction approaches: (i) the vanilla EWMA of the backward squared returns; (ii) the exponentially weighted Huber predictor proposed in Section 4.1. Each approach is evaluated with half-lives equal to 7 days (1 week) and 14 days (2 weeks), giving rise to four predictors, which are referred to as EWMA_HL7, EWMA_HL14, Huber_HL7 and Huber_HL14. We always choose the backward window to be twice the corresponding half-life, and we use the default deviation parameter for the two Huber predictors. As for volatility proxies, we similarly consider two methods: (i) the vanilla forward EWMA proxy; (ii) the robust Huber proxy proposed in Section 3.3. We set the half-life of the forward exponential-decay weights to a fixed number of days. We evaluate the Huber approach on two time series of different lengths, which imply two different inflation factors in (3.11); we refer to the two corresponding Huber proxies as Huber_720 and Huber_180. Given the theoretical advantages of the Huber proxy as demonstrated in Remarks 7 and 13, we do not investigate the first two proxies proposed in Section 3.3.
The cryptocurrency market trades 24 hours a day nonstop, which gives us a daily return for every calendar day from 2019-01-01 to 2021-01-01. After removing the initial days used for predictor priming and the final days used for proxy priming, we have 691 data points left. For each day, we compute the four predictors and three proxies described above. We plot the series of squared volatility (variance) proxies in Fig 4. As we can see, the vanilla EWMA proxy (blue line) is obviously the most volatile one, reaching its peak in March 2020, when the outbreak of COVID-19 in the US sparked a flash crash of the crypto market. In contrast, the Huber proxies react in a much milder manner, and the smaller the total sample size we consider, the stronger the truncation effect on the Huber proxies.
5.2.2 Volatility forecasting comparison with a large sample size
With the predictors and proxies computed, we are ready to conduct volatility prediction evaluation and comparison. We would now like to emphasize one issue that is crucial to the evaluation procedure: the global scale of the predictors. Different loss functions may prefer different global scales of volatility forecasts. For example, QL penalizes underestimation much more than overestimation, as the predictor is in the denominator in the formula of QL. In other words, QL typically favors relatively high forecast values. To remove the impact of the scales and focus more on the capability of capturing the relative variation of volatility, we also compute optimally scaled versions of our predictors and evaluate their empirical loss. Specifically, we first seek the optimal scale $c^*$ by solving
$$c^* = \arg\min_{c>0} \frac{1}{n}\sum_{t=1}^{n} L(\hat\sigma_t^2, c\,h_t),$$
and then use $c^* h_t$ for prediction. By comparing the empirical risk of the optimally scaled predictors, we can completely eliminate the discrimination of the loss against different global scales. Some algebra yields that for MSE the optimal scale is $c^* = \sum_t \hat\sigma_t^2 h_t / \sum_t h_t^2$, and for QL it is $c^* = \frac{1}{n}\sum_t \hat\sigma_t^2 / h_t$ (a short computational sketch is given after the observations below). Table 1 reports the loss of the four predictors and their optimally scaled versions, based on all 691 time points, with the non-robust EWMA proxy and the robust proxy Huber_720. Several interesting observations are in order.
Table 1: Loss of the four predictors and their optimally scaled versions over all 691 evaluation days, under the EWMA proxy and the Huber_720 proxy.

MSE
|            | EWMA Proxy  |               |       | Huber_720 Proxy |               |       |
| Predictor  | Orig (1e-6) | Scaled (1e-6) | Scale | Orig (1e-6)     | Scaled (1e-6) | Scale |
| EWMA_HL14  | 4.115       | 3.365         | 0.55  | 3.285           | 2.386         | 0.50  |
| Huber_HL14 | 3.162       | 3.161         | 1.03  | 2.233           | 2.228         | 0.94  |
| EWMA_HL7   | 4.824       | 3.395         | 0.46  | 3.930           | 2.364         | 0.44  |
| Huber_HL7  | 3.112       | 3.110         | 1.05  | 2.134           | 2.133         | 0.98  |

QL
|            | EWMA Proxy |        |       | Huber_720 Proxy |        |       |
| Predictor  | Orig       | Scaled | Scale | Orig            | Scaled | Scale |
| EWMA_HL14  | 0.804      | 0.647  | 1.67  | 0.584           | 0.548  | 1.29  |
| Huber_HL14 | 1.352      | 0.567  | 2.82  | 0.831           | 0.450  | 2.14  |
| EWMA_HL7   | 1.239      | 0.792  | 2.26  | 0.720           | 0.595  | 1.59  |
| Huber_HL7  | 2.382      | 0.702  | 4.09  | 1.396           | 0.532  | 2.94  |
- Using the longer half-life of 14 days gives a smaller QL loss, regardless of whether the predictor is robust or non-robust, original or optimally scaled, and regardless of whether the proxy is robust or non-robust. In terms of MSE, the half-life comparison is mixed: Huber_HL7 is slightly better than Huber_HL14, but EWMA_HL14 is better than EWMA_HL7. We focus only on the longer half-life from now on.
- If we look at the original predictors without optimal scaling, it is clear that MSE favors the robust predictor and QL favors the non-robust predictor, regardless of using robust or non-robust proxies. This confirms that different loss functions can lead to very different comparison results.
- However, the above inconsistency between MSE and QL is mostly due to scaling, which is clearly demonstrated by the column of the optimal scaling. For MSE, the optimal scaling of the EWMA predictor is around 0.5, while that of the Huber predictor is around 1. In contrast, for QL, the optimal scaling needs to be much larger than 1, and the Huber predictor needs an even larger scaling. If we look at the loss function values with optimally scaled predictors, it is interesting to see that the Huber predictor outperforms the EWMA predictor in terms of both MSE (slightly) and QL (substantially). This means that the Huber predictor is more capable of capturing the relative change of time-varying volatility than the non-robust predictor.
- Last but not least, when the total sample size is large compared with the local effective sample size (here 691 evaluation days), the difference between the EWMA and Huber proxies is small, which explains why they give consistent comparison results. When the sample size is not large enough, as in the next subsection, we can see that the robust proxy gives more sensible conclusions.
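For completeness, a short sketch of the optimal rescaling step referred to above, using the closed-form solutions for MSE and QL (array names are ours):

```python
import numpy as np

def optimal_scale(proxy, h, loss="mse"):
    """Closed-form optimal global scale c* minimizing the average loss of c * h."""
    proxy, h = np.asarray(proxy, float), np.asarray(h, float)
    if loss == "mse":
        return np.sum(proxy * h) / np.sum(h ** 2)      # argmin_c mean((proxy - c*h)^2)
    if loss == "ql":
        return np.mean(proxy / h)                      # argmin_c mean(proxy/(c*h) - log(proxy/(c*h)) - 1)
    raise ValueError("loss must be 'mse' or 'ql'")

# Example: evaluate an optimally rescaled predictor under QL.
# c = optimal_scale(proxy, h, loss="ql")
# ql_scaled = np.mean(proxy / (c * h) - np.log(proxy / (c * h)) - 1.0)
```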
5.2.3 Volatility forecasting comparison with a small sample size
Now suppose we only have 180 data points to evaluate and compare volatility forecasts. In Fig 5, we present the curves of the 180-day rolling loss difference between Huber_HL14 and EWMA_HL14, where the proxy is either the EWMA proxy or Huber_180. A positive loss difference at a given date indicates that the EWMA predictor outperforms the Huber predictor over the past 180 days. We see that most of the time, Huber_HL14 defeats EWMA_HL14 (negative loss difference) in terms of MSE, while EWMA_HL14 defeats Huber_HL14 (positive loss difference) in terms of QL. In terms of MSE, the robust proxy tends to yield a more consistent comparison between the two predictors throughout the entire period. We can see from the upper panel of Figure 5 that the time period over which EWMA_HL14 outperforms Huber_HL14 is much shorter with the robust proxy (orange curve) than with the EWMA proxy (blue curve). In terms of QL, if we use the EWMA proxy, we can see from the lower panel of Figure 5 that the robust predictor looks much worse than the non-robust predictor, especially towards the end of 2020. However, the small MSE difference at the end of 2020 suggests that the EWMA proxy overestimates the true volatility and exaggerates the performance gap in terms of QL. With the Huber proxy, in contrast, the loss gap between the two predictors is much narrower, suggesting that the Huber proxy is more robust against huge volatility.
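The rolling comparison itself is straightforward to reproduce; a sketch follows, with the 180-day window and the sign convention (positive values meaning the EWMA predictor had the smaller loss) taken from the description above, and the series names assumed.

```python
import numpy as np

def rolling_loss_diff(proxy, h_huber, h_ewma, loss_fn, window=180):
    """Rolling mean of loss(h_huber) - loss(h_ewma); positive => EWMA predictor wins."""
    diff = loss_fn(proxy, h_huber) - loss_fn(proxy, h_ewma)
    kernel = np.ones(window) / window
    return np.convolve(diff, kernel, mode="valid")    # one value per 180-day window

# Example with the QL loss defined earlier (all three inputs are hypothetical arrays):
# ql = lambda p, h: p / h - np.log(p / h) - 1.0
# curve = rolling_loss_diff(huber_proxy_series, huber_pred, ewma_pred, ql)
```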
Fig 6 presents the curves of the 180-day rolling loss difference between the optimally scaled Huber_HL14 and EWMA_HL14, based on the robust and EWMA proxies respectively. For MSE, from the previous subsection, we know that when the optimal scaling is applied, the two predictors do not differ much in terms of the overall loss, and the Huber predictor only slightly outperforms the EWMA predictor. In the upper panel of Fig 6, we see that the robust-proxy-based curve is closer to zero than the EWMA-proxy-based curve, displaying more consistency with our result based on the large sample. For QL, the loss differences using the robust or the non-robust proxy look quite similar. We also plot the optimal scalings versus time based on the robust and non-robust proxies in Fig 7. For both MSE and QL losses, using the robust proxy leads to more stable optimal scaling values, which are always preferred by practitioners.
In a nutshell, we have seen how our proposed robust Huber proxy can lead to better interpretability and more sensible comparison of volatility predictors. When the total sample size is small compared to the local effective sample size, using a robust proxy is necessary and leads to a smaller probability of misleading forecast evaluation and comparison. When the total sample size is large enough, the proposed robust proxy automatically truncates less and resembles the EWMA proxy. This also provides justification for using a non-robust EWMA proxy when the sample size is large. But we still recommend the proposed robust proxy, which can adapt to the sample size and the time-varying volatilities. Sometimes, even if the robust proxy only truncates data during a small, highly volatile period, it can still make a significant difference in terms of risk evaluation.
6 Discussions
Compared with the literature on modeling and predicting volatility, evaluating volatility predictions has not been given enough attention. Part of the reason is the lack of a good framework for its study, which makes practical volatility forecast comparison quite subjective and less systematic in terms of loss selection and proxy selection. Patton (2011) is a pioneering work that provides one framework based on long-term expectation and gives guidance for loss selection, while our work gives a new framework based on an empirical deviation perspective and further provides guidance on proxy selection. In our framework, we focus on predictors that can achieve the desired probability bound for II, so that the empirical loss is close to the conditional expected loss. The correct comparison of the conditional expected loss of two predictors with large probability then relies on good control of I, which imposes requirements on the proxies.
Within this framework, we proposed three robust proxies and compared them, each of which guarantees a good bound for I when the data only bear finite fourth moments. Although all three proxies attain the optimal rate of convergence for bounding I, we recommend the exponentially weighted tuning-free Huber proxy. It is better than the clipped squared return in that it leverages the smoothness of neighboring volatilities, and better than the proxy based on direct truncation in that it has an improved constant in the deviation bound, depending only on the central fourth moment. To construct this proxy, we need to solve an exponentially weighted Huber loss minimization whose truncation level for each sample, perhaps surprisingly, also needs to change with the sample weight.
We then applied this proxy to a real BTC volatility forecasting comparison and reached some interesting observations. Firstly, robust predictors with better control of the variance may use a faster decay to reduce the approximation bias. Secondly, different losses can lead to drastically different comparisons, so even restricting to robust losses, loss selection is still a meaningful topic in practice. Thirdly, rescaling the predictor according to the loss function is necessary and can further extract the value of robust predictors. Finally, the proposed robust Huber proxy adapts to both the time-varying volatility and the total sample size. When the overall sample size is much larger than the local effective sample size, the robust Huber proxy barely truncates, which provides justification for using even the EWMA proxy for prediction evaluation. However, the robust Huber proxy in theory still guarantees a high probability of reaching the correct comparison.
There are still limitations of the current work and open questions to be addressed. Assumption 1 excludes the situation where the volatility depends on previous returns, as in GARCH models. We require a sufficiently large local effective sample size for the local performance guarantee, leading to a potentially slow decay; in practice, however, it is hard to know how fast the volatility changes. Also, we ignored the auto-correlation of returns and assumed temporal independence of the innovations for simplicity. Extensions of the current framework to time series models and more relaxed assumptions are of practical value to investment managers and financial analysts. Our framework may also have nontrivial implications for how to conduct cross-validation with heavy-tailed data, where validation data are used to construct proxies for the unknown quantity to be estimated robustly. Obviously, subjectively choosing a truncation level for proxy construction could favor the particular truncation level used by a robust predictor. Motivated by our study, rescaling the optimal truncation level for one data split according to the total (effective) number of sample splits sounds like an interesting idea worth further investigation in the future.
References
- Andersen et al. (2005) Andersen, T. G., Bollerslev, T., Christoffersen, P. and Diebold, F. X. (2005). Volatility forecasting.
- Andersen et al. (2012) Andersen, T. G., Dobrev, D. and Schaumburg, E. (2012). Jump-robust volatility estimation using nearest neighbor truncation. Journal of Econometrics 169 75–93.
- Baillie et al. (1996) Baillie, R. T., Bollerslev, T. and Mikkelsen, H. O. (1996). Fractionally integrated generalized autoregressive conditional heteroskedasticity. Journal of econometrics 74 3–30.
- Black and Scholes (2019) Black, F. and Scholes, M. (2019). The pricing of options and corporate liabilities. In World Scientific Reference on Contingent Claims Analysis in Corporate Finance: Volume 1: Foundations of CCA and Equity Valuation. World Scientific, 3–21.
- Bollerslev (1986) Bollerslev, T. (1986). Generalized autoregressive conditional heteroskedasticity. Journal of econometrics 31 307–327.
- Bollerslev and Wooldridge (1992) Bollerslev, T. and Wooldridge, J. M. (1992). Quasi-maximum likelihood estimation and inference in dynamic models with time-varying covariances. Econometric reviews 11 143–172.
- Brailsford and Faff (1996) Brailsford, T. J. and Faff, R. W. (1996). An evaluation of volatility forecasting techniques. Journal of Banking & Finance 20 419–438.
- Brandt and Jones (2006) Brandt, M. W. and Jones, C. S. (2006). Volatility forecasting with range-based egarch models. Journal of Business & Economic Statistics 24 470–486.
- Brockwell and Davis (2009) Brockwell, P. J. and Davis, R. A. (2009). Time series: theory and methods. Springer Science & Business Media.
- Brooks and Persand (2003) Brooks, C. and Persand, G. (2003). Volatility forecasting for risk management. Journal of forecasting 22 1–22.
- Bubeck (2014) Bubeck, S. (2014). Convex optimization: Algorithms and complexity. arXiv preprint arXiv:1405.4980 .
- Carnero et al. (2012) Carnero, M. A., Peña, D. and Ruiz, E. (2012). Estimating garch volatility in the presence of outliers. Economics Letters 114 86–90.
- Catania et al. (2018) Catania, L., Grassi, S. and Ravazzolo, F. (2018). Predicting the volatility of cryptocurrency time-series. In Mathematical and statistical methods for actuarial sciences and finance. Springer, 203–207.
- Catoni (2012) Catoni, O. (2012). Challenging the empirical mean and empirical variance: a deviation study. Annales de l’Institut Henri Poincaré 48 1148–1185.
- Charles and Darné (2019) Charles, A. and Darné, O. (2019). Volatility estimation for bitcoin: Replication and robustness. International Economics 157 23–32.
- Chen et al. (2018) Chen, M., Gao, C. and Ren, Z. (2018). Robust covariance and scatter matrix estimation under huber’s contamination model. The Annals of Statistics 46 1932–1960.
- Chen and Zhou (2020) Chen, X. and Zhou, W.-X. (2020). Robust inference via multiplier bootstrap. The Annals of Statistics 48 1665–1691.
- Christiansen et al. (2012) Christiansen, C., Schmeling, M. and Schrimpf, A. (2012). A comprehensive look at financial volatility prediction by economic variables. Journal of Applied Econometrics 27 956–977.
- Christoffersen and Diebold (2000) Christoffersen, P. F. and Diebold, F. X. (2000). How relevant is volatility forecasting for financial risk management? Review of Economics and Statistics 82 12–22.
- Engle (1982) Engle, R. F. (1982). Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation. Econometrica: Journal of the Econometric Society 987–1007.
- Fan et al. (2016) Fan, J., Li, Q. and Wang, Y. (2016). Robust estimation of high-dimensional mean regression. Journal of the Royal Statistical Society, Series B.
- Fan et al. (2017) Fan, J., Li, Q. and Wang, Y. (2017). Estimation of high dimensional mean regression in the absence of symmetry and light tail assumptions. Journal of the Royal Statistical Society, Series B (Statistical Methodology) 79 247.
- Guo et al. (2016) Guo, T., Xu, Z., Yao, X., Chen, H., Aberer, K. and Funaya, K. (2016). Robust online time series prediction with recurrent neural networks. In 2016 IEEE International Conference on Data Science and Advanced Analytics (DSAA). IEEE.
- Huber (1964) Huber, P. J. (1964). Robust estimation of a location parameter. The Annals of Mathematical Statistics 35 73–101.
- Huber (1973) Huber, P. J. (1973). Robust regression: asymptotics, conjectures and Monte Carlo. The Annals of Statistics 799–821.
- Lamoureux and Lastrapes (1993) Lamoureux, C. G. and Lastrapes, W. D. (1993). Forecasting stock-return variance: Toward an understanding of stochastic implied volatilities. The Review of Financial Studies 6 293–326.
- Minsker (2018) Minsker, S. (2018). Sub-gaussian estimators of the mean of a random matrix with heavy-tailed entries. The Annals of Statistics 46 2871–2903.
- Park (2002) Park, B.-J. (2002). An outlier robust garch model and forecasting volatility of exchange rate returns. Journal of Forecasting 21 381–393.
- Patton (2011) Patton, A. J. (2011). Volatility forecast comparison using imperfect volatility proxies. Journal of Econometrics 160 246–256.
- Poon and Granger (2003) Poon, S.-H. and Granger, C. W. (2003). Forecasting volatility in financial markets: A review. Journal of Economic Literature 41 478–539.
- Sun et al. (2020) Sun, Q., Zhou, W.-X. and Fan, J. (2020). Adaptive huber regression. Journal of the American Statistical Association 115 254–265.
- Taylor (2004) Taylor, J. W. (2004). Volatility forecasting with smooth transition exponential smoothing. International Journal of Forecasting 20 273–286.
- Taylor (1994) Taylor, S. J. (1994). Modeling stochastic volatility: A review and comparative study. Mathematical Finance 4 183–204.
- Trucíos (2019) Trucíos, C. (2019). Forecasting bitcoin risk measures: A robust approach. International Journal of Forecasting 35 836–847.
- Vasilellis and Meade (1996) Vasilellis, G. A. and Meade, N. (1996). Forecasting volatility for portfolio selection. Journal of Business Finance & Accounting 23 125–143.
- Wang et al. (2020) Wang, L., Zheng, C., Zhou, W. and Zhou, W.-X. (2020). A new principle for tuning-free Huber regression. Statistica Sinica.
- Zhang (2021) Zhang, D. (2021). Robust estimation of the mean and covariance matrix for high dimensional time series. Statistica Sinica 31 797–820.
Appendix A Proofs
This section provides proof details for all the theorems in the main text.
Proof of Theorem 1.
The proof follows Theorem 5 of Fan et al. (2016). Denoting , we have
Define , so is the solution to .
Define . The RHS can be further bounded by
Similarly, we can prove that . Define
By Chebyshev inequality,
Similarly, . Following the same argument as in Fan et al. (2016), we can show that for large enough such that , the root of satisfies that
and the root of satisfies that
With the choice of given in Theorem 1, we have . The requirement on the effective sample size is that .
∎
Proof of Theorem 2.
We extend Theorem 2.1 of Wang et al. (2020) to the weighted case. Note again that we are solving the following equation for :
We also define as the solution of the corresponding population equation:
We will first show that (a) and then (b) with probability approaching , for a small fixed . To prove (a), it is straightforward to see that
Furthermore,
Therefore, . Consider the solution to the equation in the variable . Note that the solution is unique. Since all are of the same order, we know that , . Letting , the corresponding solution is . Define . So we have
Let , so we have shown that . Therefore, . From , we know that for any , , and so is . Therefore, we have . Write for some .
So we have shown (a) holds, that is, .
Next, we need to show (b), so that the solution from the second equation gives us the desired optimal truncation rate. To this end, we still follow the proof of Theorem 1 of Wang et al. (2020) closely. Specifically, define ; using their notation, we define
and their population versions
One important fact here is that and , which is key to proving Theorem 1 of Wang et al. (2020). The only difference in our setting is that we do not assume the ’s are identically distributed. So when applying Bernstein’s inequality as in (S1.8) of Wang et al. (2020), we need to use the version for non-identically distributed variables and also bound the sum of the individual variances. Specifically, defining , we have , and hence . So we can indeed apply Bernstein’s inequality to . For more details, we refer the interested reader to Wang et al. (2020). ∎
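For completeness, the version of Bernstein’s inequality needed in the proof above (and again in the proof of Theorem 3 below), valid for independent but not necessarily identically distributed bounded variables, is the standard one: if \(X_1,\dots,X_n\) are independent with \(\mathbb{E}X_i = 0\) and \(|X_i| \le M\) almost surely, then for every \(t > 0\),
\[
\mathbb{P}\Bigl(\Bigl|\sum_{i=1}^{n} X_i\Bigr| \ge t\Bigr) \le 2\exp\Bigl(-\frac{t^2/2}{\sum_{i=1}^{n}\mathbb{E}X_i^2 + Mt/3}\Bigr),
\]
so that it suffices to bound the sum of the individual variances \(\sum_{i}\mathbb{E}X_i^2\) rather than to assume a common distribution.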
Proof of Theorem 3.
Let .
To bound the first term, we can apply Bernstein’s inequality to where . Note that and , so we can choose to make the first term bounded by .
To bound the second term, note that
Here we can also choose for a large enough to make the second probability equal to . ∎
Lemma 1.
Let be a process such that . Define
for any where is an iid copy of and are independent random innovations satisfying Assumption 1. Assume for all and that there exists a constant such that
Also assume . We have for ,
where , with .
The proof of Lemma 1 closely follows Theorem 2.1 of Zhang (2021). The only extension here is that we do not require , or even , to be identically distributed, so there is no assumption on the stationarity of the process . However, we require a stronger assumption on the maximal perturbation of each . The entire proof of Zhang (2021) goes through with this new definition of and . We omit the details.
Proof of Theorem 4.
Let and define . It is not hard to see that for ,
And for , . Therefore,
for any fixed .
In addition, we claim with high probability. To prove this, we need the following result: when and ,
This can be shown by a proof similar to that of Theorem 1, so we omit the details. Note that here is not chosen to optimize the error bound, as we have another average over to absorb the extra variance in . So here we only need to choose to ensure the error bound is of the order of . Choosing and picking can indeed do the job, since the exception probability is under the assumption that . We require to hold at all time points, so the exception probability over all events is bounded by . When , this is further bounded by . Finally, and lead to the requirements and . Now, conditioning on , we are ready to apply Lemma 1 to . We can choose in Lemma 1 to make the exception probability smaller than . So in total the exception probability is .
Next, we turn to the bias term . From the proof of Theorem 3, we know that . Therefore
Thus, the bias term will not affect the total error bound as shown in the theorem. ∎
Lemma 2.
Assume the weighted Huber loss is -strongly convex for some in a local neighborhood around . Let be the perturbed version of . If , , and , we have
Proof of Lemma 2.
Besides strong convexity, we know that is -smooth with . That is
is obvious given that the second derivative of is bounded by 1. From Lemma 3.11 of Bubeck (2014), for ,
Choose , so . Then
which concludes the proof. ∎
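To make the smoothness and local strong convexity used in Lemma 2 concrete, recall the standard Huber loss (Huber, 1964) with robustification parameter \(\tau\): \(\ell_\tau(x) = x^2/2\) for \(|x| \le \tau\) and \(\ell_\tau(x) = \tau|x| - \tau^2/2\) otherwise, so that \(\ell_\tau'(x) = \psi_\tau(x) = \max\{\min\{x,\tau\},-\tau\}\) and \(\ell_\tau''(x) \in \{0,1\}\); the unit bound on the second derivative is exactly what gives the smoothness constant used above. The following is a minimal, purely illustrative sketch of how a weighted Huber location estimate can be computed by Newton-type steps exploiting this structure; the function name, initialization, and stopping rule are our own illustrative choices, not the implementation used in the paper, and \(\tau\) is treated as given (in the tuning-free procedure of Theorem 2 it would instead be solved for jointly from the companion equation).

```python
import numpy as np

def weighted_huber_location(x, w, tau, max_iter=100, tol=1e-10):
    """Illustrative sketch: minimize sum_i w_i * huber_tau(x_i - mu) over mu.

    x   : 1-d array of observations (e.g. squared returns)
    w   : 1-d array of nonnegative weights (e.g. exponential weights)
    tau : robustification parameter, treated as given here
    """
    mu = np.average(x, weights=w)              # start from the weighted mean
    for _ in range(max_iter):
        r = x - mu
        psi = np.clip(r, -tau, tau)            # Huber score: identity inside [-tau, tau], clipped outside
        grad = -np.sum(w * psi)                # first derivative of the weighted loss at mu
        hess = np.sum(w * (np.abs(r) <= tau))  # second derivative: each term is w_i times 0 or 1
        if hess <= 0:                          # every residual is outside the clipping region
            break
        mu_new = mu - grad / hess              # Newton step
        if abs(mu_new - mu) < tol:
            return mu_new
        mu = mu_new
    return mu
```

The per-observation curvature being either 0 or \(w_i\) is the source of both the smoothness bound in Lemma 2 and the need for the local strong convexity assumption: if too many residuals are clipped, the Hessian above can degenerate.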
Proof of Theorem 5.
Let and define . In order to apply Lemma 1, we need to employ Lemma 2 to bound the perturbation of the Huber loss minimizer by bounding the perturbation of the Huber loss derivative. Similar to the proof of Theorem 4, we can show that for all with probability larger than . We can explicitly write out the bound for by following the proof of Theorem 4: , which means that, by our assumptions in Theorem 5, the Huber loss minimizer does fall into the region where we have strong convexity. The bound on also implies that with exception probability of .
In addition, we would like to check . For , and for ,
Therefore, for any fixed . We can indeed apply Lemma 1 to bound the sum of , which is of order , with exception probability of another .
Finally, we bound the bias term . Note that
Similar to the proof of Theorem 3, we know that the -th component of the third term can be bounded by . Therefore
Furthermore, using Lemma 2, it is not hard to show that is bounded. Therefore, the bias is indeed of the order with the additional approximation error rate . The proof is now complete. ∎
Proof of Theorem 6.
Following the same proof as Theorem 1, we have that
Conditioning on this event, is Lipschitz in the second argument, so the error bound is enlarged by a factor of . ∎
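For clarity, the step above is the generic Lipschitz transfer: writing \(\hat v\) for the proxy, \(\sigma\) for the true volatility, and \(\hat\sigma\) for the predictor (this notation is only illustrative), if the loss \(L(\hat\sigma,\cdot)\) is \(K\)-Lipschitz in its second argument on the relevant region, then
\[
\bigl|L(\hat\sigma,\hat v) - L(\hat\sigma,\sigma)\bigr| \le K\,\bigl|\hat v - \sigma\bigr|,
\]
so any deviation bound for the proxy carries over to the empirical loss at the cost of the factor \(K\).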
Proof of Theorem 7.
Recall that
Let us prove (i) first. We begin by bounding . We again apply Lemma 1 to bound the concentration. Let . From the proof of Theorem 6, we know that
So with exception probability of . Applying the union bound, we get for all with probability . On this event, we have because . Similar to the previous proofs, except that now we look at the data backward in time, for ,
The last inequality can be shown similarly to the proof of Theorem 5, using the assumption that the Huber loss is locally -strongly convex. Therefore, for any fixed . We apply Lemma 1 to and again pick to make the exception probability . So we get .
Now let us apply Lemma 1 to . Let . Note that since is non-increasing, is bounded. If we use the second or third proxy, then in the proofs of Theorems 4 and 5 we have shown that for all with exception probability at most . Therefore, we conclude for all with exception probability at most . Now, for bounding , note that is actually a function of , with the first data used to construct predictors and the remaining data used to construct proxies. Hence, for , it is not hard to show for some , and for , . So we have . Applying Lemma 1 again gives us .
Combining the results for and , and choosing for large enough , we conclude (i) for bounding II.
Next, we prove (ii) and (iii). The proof follows exactly the same arguments as in (i), except for a few bounding details.
Firstly, in bounding , we need to be bounded. In (iii), is not necessarily bounded, but we directly work with a bounded loss . In (ii), and we claim with probability , thus . To see why the claim holds, define . The robust predictor proposed in Section 4.1 achieves central fourth moment, while can achieve the same rate of convergence with the absolute fourth moment. This is similar to the difference between the second and third proxy options. Similarly to the proof of Theorem 6, we can show
So we know that with probability . Interestingly, . So we always have good control of the left tail due to the positivity of the squared data.
Secondly, in bounding , we need . In (iii), since the loss is Lipschitz on the whole region, we can easily see that . In (ii), the loss is Lipschitz in the local region of , but we know that with high probability the clipped predictor indeed falls into that region, so we have . Therefore, we have no problem bounding .
Thirdly, in bounding , we require to be bounded. Note that since with high probability, we have even when we use non-robust predictors in (ii) and (iii). So we can bound as desired too.
Finally, putting everything together and choosing for large enough , we conclude (ii) and (iii) for bounding II.
∎