Spectral bootstrap confidence bands for Lévy-driven moving average processes
The support of the German Science Foundation (DFG Sachbeihilfe) research grant 406700014 is gratefully acknowledged.
Abstract.
In this paper we study the problem of constructing bootstrap confidence intervals for the Lévy density of the driving Lévy process based on high-frequency observations of a Lévy-driven moving average process. Using a spectral estimator of the Lévy density, we propose novel implementations of the multiplier and empirical bootstraps to construct confidence bands on a compact set away from the origin. We also provide conditions under which the confidence bands are asymptotically valid.
1. Introduction
The continuous-time Lévy-driven moving average processes are defined as
(1.1) $Z_t = \int_{-\infty}^{+\infty} \mathcal{K}(t-s)\, dL_s, \qquad t \in \mathbb{R},$
where is a deterministic kernel and is a two-sided Lévy process with a Lévy triplet . The conditions which guarantee that this integral is well-defined are given in the pioneering work by Rajput and Rosiński [13]. For instance, if , it is sufficient to assume that .
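To make the definition concrete, the following minimal simulation sketch approximates the stochastic integral in (1.1) by a Riemann sum on a truncated grid. The exponential kernel, the Brownian-plus-compound-Poisson driver and all parameter values are illustrative assumptions, not the specification analysed later in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def kernel(x):
    # Illustrative kernel choice: K(x) = exp(-|x|).
    return np.exp(-np.abs(x))

def driver_increments(ds, size, sigma=0.5, lam=1.0):
    # Increments over intervals of length ds of a simple illustrative driver:
    # Brownian part plus a compound Poisson part with standard normal jumps.
    gauss = sigma * np.sqrt(ds) * rng.standard_normal(size)
    n_jumps = rng.poisson(lam * ds, size)
    jumps = np.array([rng.standard_normal(k).sum() for k in n_jumps])
    return gauss + jumps

# Riemann-sum approximation of Z_t = int K(t - s) dL_s on a truncated window.
ds = 0.01
s_grid = np.arange(-50.0, 50.0, ds)          # integration grid (truncated at +-50)
dL = driver_increments(ds, s_grid.size)      # increments dL_s
t_grid = np.arange(0.0, 10.0, 0.1)           # observation times
Z = np.array([np.sum(kernel(t - s_grid) * dL) for t in t_grid])
```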
Continuous-time Lévy-driven moving average processes (and slightly modified versions of them) are widely used for the construction of many popular models such as Lévy-driven Ornstein-Uhlenbeck processes, fractional Lévy processes, CARMA processes, Lévy semistationary processes and ambit fields, cf. Barndorff-Nielsen, Benth and Veraart [1], Podolskij [12]. Most of these models can be applied to financial and physical problems. For instance, the choice with and (known as the Gamma-kernel) is used for modelling volatility and turbulence, see e.g. Barndorff-Nielsen and Schmiegel [2]. Alternatively, the choice (known as the well-balanced Ornstein-Uhlenbeck process) can be used for the analysis of SAP high-frequency data, see Schnurr and Woerner [15].
This paper is devoted to statistical inference for continuous-time Lévy-driven moving average processes. Assuming that high-frequency equidistant observations of the process are given, we aim to estimate the characteristic triplet of the process . Recently, Belomestny, Panov and Woerner [4] considered the statistical estimation of the Lévy measure from low-frequency observations of the process . The approach presented in [4] is rather general; in particular, it works well for various choices of . Nevertheless, this approach is based on the superposition of the Mellin and Fourier transforms of the Lévy measure, and therefore its practical implementation can run into computational difficulties. In [3], another method was presented, which essentially uses the following theoretical observation. For any kernel , the characteristic function of the process and the characteristic exponent of the process are connected via the formula:
It was noted in [3] that under the choice this formula can be inverted without the use of additional integral transformations, that is, the function can be represented via and its derivatives. Therefore, the characteristic exponent can be estimated from the observations of the process , and further application of Fourier techniques leads to a consistent estimator of the Lévy triplet.
The current paper is devoted to the estimation of the Lévy measure in the same model as in [3], but based on high-frequency observations of the process . Moreover, we are interested in uniform bootstrap confidence bands for . We propose a novel implementation of the multiplier and empirical bootstrap procedures to construct confidence bands on a compact set away from the origin. We also provide conditions under which the confidence bands are asymptotically valid. Our approach can be viewed as an extension of the recent work [10], where bootstrap confidence bands are constructed for the case of high-frequency observations of the Lévy process itself.
The paper is organised as follows. In Section 2 we formulate our main statistical problem and propose an estimator for the underlying Lévy density . We also discuss how to construct confidence bands for . Section 3 contains a detailed description of the bootstrap procedure and results on the validity of the bootstrap confidence bands. Some numerical results on simulated data are shown in Section 4. Finally, in Section 5 all proofs are collected.
2. Set-up
We shall consider continuous-time Lévy-driven moving average processes of the form:
(2.1) |
where is a symmetric kernel given by
(2.2) |
for some , and is a two-sided Lévy process with the Lévy triplet . Note that as a limiting case for we get the exponential kernel . It follows from [13] that the process is well-defined and infinitely divisible with the characteristic function:
where
and
Furthermore, under our choice of the kernel function , we can represent the characteristic exponent of the Lévy process via the characteristic function of the increments . We explicitly derive for (see Lemma 3),
(2.3) |
where the operator is defined as
for any locally bounded function and
(2.4) |
Moreover, if
(2.5) |
for some natural then the function satisfies (see Lemma 4)
(2.6) |
and as a result we have convergence
(2.7) |
for . Furthermore, by inverting the operator , we get from (2.3)
with
(2.8) |
On the other hand, under the condition (2.5) with , we obtain from (2.4),
where . Therefore, we can apply the inverse Fourier transform to get
(2.9) |
where
In view of (2.6) and (2.8), the term is of smaller order in than the first term in (2.9) and we can consider the limiting case (2.7) in (2.9).
In this work we assume that we observe a discretised (high-frequency) trajectory of the limiting Lévy process with characteristic function . This assumption is mainly made to simplify the analysis and to avoid difficulties related to the time dependence structure of the process . Still, the main features of the underlying inverse problem (e.g. the structure of the inverse operator ) remain reflected in our statistical analysis. An extension to the case where one directly observes the process will also be discussed.
Let us now describe our estimation procedure. Let be an integrable kernel function such that
and suppose that the Fourier transform of is supported in . Motivated by (2.9), we propose to estimate via the estimator:
(2.10) |
where with
is a sequence of positive numbers (bandwidths) such that as , and is an estimator of . Our aim is to construct confidence bands for the transformed Lévy density on a compact set in and to prove the validity of the proposed confidence bands. To this end, we shall use the Gaussian multiplier (or wild) bootstrap.
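The estimator (2.10) is not reproduced verbatim below (its exact weighting involves the operator from (2.8)), but the following Python sketch illustrates the generic spectral recipe behind it: form the empirical characteristic function of the increments, pass to an estimate of the characteristic exponent, and apply a kernel-smoothed inverse Fourier transform with spectral cut-off determined by the bandwidth. The flat-top kernel, the second-derivative step (a common device which, up to the diffusion contribution, targets a density transformed by the squared state variable) and the neglect of branch issues in the complex logarithm are all simplifying assumptions made for illustration only.

```python
import numpy as np

def flat_top_ft(u):
    # Fourier transform (supported on [-1, 1]) of a flat-top smoothing kernel:
    # equal to 1 for |u| <= 0.5 and decaying linearly to 0 at |u| = 1 (illustrative choice).
    return np.clip((1.0 - np.abs(u)) / 0.5, 0.0, 1.0)

def spectral_estimate(increments, delta, x_grid, h, n_u=2000):
    """Schematic spectral estimator of a transformed Levy density.

    Not the exact estimator (2.10): the empirical characteristic exponent of the
    increments is differentiated twice numerically and then inverted with a kernel
    whose Fourier transform is supported in [-1/h, 1/h]."""
    u = np.linspace(-1.0 / h, 1.0 / h, n_u)
    du = u[1] - u[0]
    ecf = np.array([np.mean(np.exp(1j * uu * increments)) for uu in u])
    psi = np.log(ecf) / delta                     # estimated characteristic exponent
    psi2 = np.gradient(np.gradient(psi, du), du)  # numerical second derivative
    out = np.empty(x_grid.size)
    for k, x in enumerate(x_grid):
        integrand = np.exp(-1j * u * x) * flat_top_ft(h * u) * (-psi2)
        out[k] = (np.sum(integrand) * du / (2.0 * np.pi)).real
    return out
```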
3. Main results
3.1. Construction of confidence bands
Using the equations (2.9) and (2.10), the difference can be represented as
(3.1) |
where
(3.2) |
Later we show that under suitable assumptions (Assumption 1), the terms and are asymptotically (as and ) smaller than , and hence can be neglected when constructing the confidence interval for the transformed Lévy density . Further note that
where and
(3.3) |
(3.4) |
(3.5) |
(3.6) |
With the above notations becomes
(3.7) |
or alternatively
(3.8) |
where the kernel functions are defined as
The representation (3.8) is crucial for our analysis. Consider now the process
(3.9) |
where is given by
(3.10) |
Under some conditions, we shall show that there exists a tight -sequence of Gaussian random variables with zero mean and the same covariance function as that of , such that the distribution of asymptotically approximates the distribution of in the sense that
Accordingly, the construction of confidence bands reduces to estimating the quantiles of the r.v. . To this end we shall use the bootstrap. Define
for then the -confidence band for is of the form:
Since for all means that
we can show that
as . Hence is a valid confidence band for on with an approximate level . However, we still need to estimate the quantile . In what follows, we consider the Gaussian multiplier (or wild) bootstrap to estimate the quantile .
Gaussian multiplier bootstrap.
The main idea of the Gaussian multiplier bootstrap consists in reweighting the estimated influence functions by mean-zero, unit-variance pseudo-random variables, see, e.g., [8] for more details. On the one hand, the advantage of this method compared to the conventional bootstrap is that we can avoid recomputing the estimator in each bootstrap repetition, and as a result we reduce the computation time. On the other hand, one of the disadvantages of the Gaussian multiplier bootstrap is that it is necessary to obtain an analytical expression for the corresponding influence function. In our case, this method is used as follows. First, we simulate independent centred Gaussian random variables , independent of the data, and construct the multiplier process of the form:
(3.11) |
where
(3.12) |
and is based on a bootstrapped version of the empirical characteristic function :
Furthermore, we estimate by the quantile of the distribution of conditional on the data . The latter quantity can be computed via simulations. As a result, the confidence band takes the form
(3.13) |
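As an illustration of the bootstrap loop behind (3.11)-(3.13), the following sketch reweights estimated influence functions with independent standard normal multipliers, records the supremum of the resulting process over a grid for each bootstrap draw, and turns the empirical quantile of these suprema into a uniform band. The array `influence` is a hypothetical stand-in for the influence function (3.12) evaluated at the observations and grid points, and the studentisation by an estimated standard deviation is one common variant; neither is meant to reproduce the exact band of the paper.

```python
import numpy as np

def multiplier_bootstrap_band(influence, estimate, sigma_hat, alpha=0.1, B=1000, seed=0):
    """Generic Gaussian multiplier bootstrap for a uniform confidence band.

    influence : (n, m) array; influence[i, k] = estimated influence function of
                observation i at grid point x_k (placeholder for (3.12)).
    estimate  : (m,) array; point estimate on the grid.
    sigma_hat : (m,) array; estimated standard deviation on the grid.
    """
    rng = np.random.default_rng(seed)
    n, _ = influence.shape
    sups = np.empty(B)
    for b in range(B):
        xi = rng.standard_normal(n)                    # i.i.d. N(0,1) multipliers
        boot = influence.T @ xi / np.sqrt(n)           # multiplier process on the grid
        sups[b] = np.max(np.abs(boot) / sigma_hat)     # studentised supremum
    q = np.quantile(sups, 1.0 - alpha)                 # conditional (1 - alpha) quantile
    half_width = q * sigma_hat / np.sqrt(n)
    return estimate - half_width, estimate + half_width
```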
3.2. Validity of bootstrap confidence bands.
In this section, we will present the main result, which proves the validity of the confidence band .
Assumption 1.
We assume that the following conditions are fulfilled.
-
(i)
for some
-
(ii)
Let and let be an integer such that . The function is -times differentiable, and is -Hölder continuous (a function is called -Hölder continuous for if ).
-
(iii)
It holds and .
-
(iv)
The estimator satisfies
Discussion
Condition (i) is a moment condition and is equivalent to the finiteness of the (6+)-th moment of the increment process (see Lemma 8 for more details). Finally, Condition (iv) guarantees that the term is of smaller order as compared to the leading term in .
Now we formulate the main theorem of this section, which shows the convergence of the proposed Gaussian approximation.
Theorem 1.
(Gaussian approximation)
Under our assumptions, for sufficiently large n, there exists a tight Gaussian random variable in with zero mean and covariance function of the form where
the integral operator is defined as , has the form (5.22), and
Moreover it holds
as and
(3.14) |
Building on Theorem 1, the following result formally establishes the asymptotic validity of the multiplier bootstrap confidence band .
Theorem 2.
(Validity of bootstrap confidence bands). Under Assumption 1 we have that
as . Moreover the supremum width of the confidence band of is of order .
Discussion on the choice of , and
By Lemma 12, , which leads to the first part of Assumption 1 (iii), namely . According to the representation (3.1), we have
If the convergence rate of the first term dominates, then Assumption 1 (iv), namely , and Assumption 1 (iii), namely , follow. If both terms are to be of comparable size, the last condition takes the form , which leads to a relationship between and . Let , where ; then
From the proof of Theorem 1 it follows that
and
from which it further follows that
We also determine the relationship between and for which the error of the Gaussian approximation is comparable to the approximation error . Let , where ; then
Furthermore, it should be noted that the bootstrap approximation of the Gaussian process in Theorem 2 has the order
Therefore, the rate of convergence of this approximation is faster than the rate from Theorem 1 mentioned above, namely
It is also important to note that, according to Theorem 2, the supremum width of the confidence band should also converge to 0:
Since , we have
provided the assumption is satisfied. Since the expression has a slower order of convergence than the expression , the expression should be specified as
where . The supremum width of the confidence band is minimal if , since then holds. Then the following relationship between and applies:
4. Numerical results
Consider the integral (2.1) with the kernel from the class (2.2) for some , and the Lévy process defined by
(4.1) |
where is a drift, is a Brownian motion, and are two Poisson processes with intensity , , , … and , , … are i.i.d. r.v.'s with an absolutely continuous distribution, and all the s, , and are jointly independent. For the simulation study, we take and , and aim to estimate the corresponding Lévy density of under different choices of the parameter , namely 0.8 and 0.9.
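A minimal sketch, under assumed parameter values, of how increments of a jump-diffusion Lévy process of the form (4.1) can be simulated on an equidistant grid: a drift part, a Brownian part and two independent compound Poisson parts with exponential jump sizes. The intensities and distributional parameters are placeholders, and the transformation to the limiting Lévy process described below is not reproduced here.

```python
import numpy as np

def simulate_increments(n, delta, gamma=0.0, sigma=0.3,
                        lam_plus=1.0, lam_minus=1.0, seed=0):
    """Increments of a jump-diffusion Levy process of the type (4.1) on a grid
    with step delta: drift + Brownian part + two independent compound Poisson
    parts with (illustrative) standard exponential jump sizes."""
    rng = np.random.default_rng(seed)
    drift = gamma * delta * np.ones(n)
    gauss = sigma * np.sqrt(delta) * rng.standard_normal(n)
    pos_counts = rng.poisson(lam_plus * delta, n)
    neg_counts = rng.poisson(lam_minus * delta, n)
    pos_jumps = np.array([rng.standard_exponential(k).sum() for k in pos_counts])
    neg_jumps = np.array([rng.standard_exponential(k).sum() for k in neg_counts])
    return drift + gauss + pos_jumps - neg_jumps

increments = simulate_increments(n=50_000, delta=0.01)
path = np.cumsum(increments)   # discretised trajectory on the grid
```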
Simulation. Recall that the Lévy-driven moving average process satisfying (2.1) is observed at discrete instants with regular sampling interval , and our estimation procedure is based on the random variables , which are independent, identically distributed, with common characteristic function . We assume that, as tends to infinity, tends to 0 and tends to infinity.
For , denote the jump times of by , , …, with corresponding jump sizes , , …, where the are independent r.v.'s with standard exponential distribution with parameter . Note that
(4.2) |
where
Finally, the limiting Lévy process is defined by
A typical trajectory of the limiting Lévy process is presented in Figure 4.1.
Estimation. Following the ideas from Section 2, we estimate the transformed Lévy measure via Equation (2.10) under different choices of . To show the convergence properties of the considered estimates, we provide simulations with different values of . Figure 4.2 shows an estimate of the real part of the characteristic exponent of the Lévy process based on discrete observations of the limiting Lévy process .
It is important to note that a good estimate of the characteristic exponent is obtained when . Figure 4.3 shows the estimator of the transformed Lévy density based on discrete observations of the limiting Lévy process .
The estimates of the Lévy density based on 25 simulation runs are presented in Figure 4.4.
On the one hand, an a priori choice of the parameter can be found using the interval for on which the characteristic function of the process is well approximated by the empirical characteristic function. On the other hand, an a priori choice of the parameter has to respect Assumption 1 (iii). Note that the parameter is chosen by numerical optimization. Namely, for each choice of , we first estimate the Lévy densities for each from an equidistant grid (from to with step ), and then analyze the quality of estimation in terms of the minimal mean square error. Because the best results are obtained for between 0.1 and 0.2, we repeat the estimation procedure for from a finer grid (from to with step ). After several iterations, we stop the procedure. It is important to note that in real-life examples, the aforementioned strategy for choosing has to be changed, because a comparison with respect to the mean square error is not possible; one should rather use adaptive methods. The simulation results illustrated in Figure 4.4 show that the convergence rates depend significantly on the parameter . More precisely, it turns out that the quality of estimation increases with growing , and the best rates correspond to the case when is close to 1. This can be explained by the fact that the observations become less dependent as increases. Let us remark that in Figure 4.4 we show the real parts of the estimate ; the imaginary part of the considered estimate is quite small (of order ) and is shown in Figure 4.5.
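The grid search over the bandwidth described in the paragraph above can be organised as in the following sketch; the estimator callback, the known true density and the grid values are placeholders, and the procedure only makes sense in a simulation study where the target density is known.

```python
import numpy as np

def select_bandwidth(estimator, true_density, x_grid, h_grid):
    """Pick the bandwidth h with the smallest mean square error on x_grid.
    `estimator(x_grid, h)` and `true_density(x_grid)` are user-supplied callables
    (feasible only in simulations, where the target density is known)."""
    errors = [np.mean((estimator(x_grid, h) - true_density(x_grid)) ** 2) for h in h_grid]
    return h_grid[int(np.argmin(errors))]

# Coarse pass followed by a refinement around the best value, as in Section 4:
# h0 = select_bandwidth(est_fn, true_fn, x_grid, np.arange(0.05, 0.55, 0.05))
# h1 = select_bandwidth(est_fn, true_fn, x_grid, np.arange(max(h0 - 0.04, 0.01), h0 + 0.05, 0.01))
```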
Finally, following the ideas from Section 3, we construct the confidence band for the transformed Lévy density via the Gaussian multiplier bootstrap method with parameters , and confidence level . Figure 4.6 shows the resulting confidence band (dashed lines) together with the estimator of the transformed Lévy density (red line).
5. Proofs
Lemma 3.
We have
where the operator is defined as
for any locally bounded function and
Furthermore, has the form
(5.1) |
Proof.
In the previously described scenario, the characteristic function of the increment process has the form
where
The last expression can be obtained using Lemma 5.5 in Sato [14] and taking into account the fact that
We also calculate the first two derivatives of ,
Then the characteristic function of the increment process has the form:
(5.2) |
where
(5.3) |
∎
Note that the distribution of is infinitely divisible. Next, we prove (2.6).
Lemma 4.
Proof.
Next, we formulate and prove some auxiliary lemmas that we need for the proofs of the main results. In the sequel we assume that as , and stands for inequality up to a constant not depending on and .
Lemma 5.
We have for any with
Proof.
Recall that
By using the Taylor expansion we obtain for any
so that
Then, under the condition , the infimum of can be estimated by
This completes the proof. ∎
Lemma 6.
Define , then , where denotes the convolution.
Proof.
Using the change of variables, we may rewrite the term as
By Taylor's expansion we obtain for any
for some . Since is -Hölder continuous, we can assert that
Furthermore, since we conclude that for any
This completes the proof. ∎
Lemma 7.
Suppose that for some and let
(5.8) |
then we have (the supremum is taken over the interval , so that is supported in ):
-
(i)
-
(ii)
Proof.
Under the assumption we have
and the assertion follows. ∎
Lemma 8.
For we have
where is the increment process of the limiting Lévy process .
Proof.
Since we have
Note that
where for . This observation completes the proof. ∎
Lemma 9.
For we have
where
Proof.
Lemma 10.
Let be of the form
where and
Then
Proof.
Lemma 11.
Let denote the distribution of the r.v. . The measures for have Lebesgue densities . For any compact set in and any , let , where .
We have
-
(i)
-
(ii)
for some sufficiently small such that .
Proof.
Note that
where . It also follows that
From infinite divisibility of the process it follows that . Then
Furthermore
Hence
(5.9) |
For any it holds due to the Markov inequality
Then it follows and since we have
Then the first claim follows from (5.9),
Lemma 12.
Proof.
We have
In order to determine the infimum of the variance, we compute the supremum of the expected value and the infimum of for . Note that
Further we get
Furthermore
(5.10) | ||||
for . Analogously, we get
(5.11) |
Then due to (5.10) and (5.11) we get
(5.12) |
Analogously
(5.13) |
(5.14) |
(5.15) |
To estimate the infimum for , we consider
Since , we have, according to Plancherel's theorem:
where
Furthermore, it holds that
Since for all it follows for
(5.16) |
The same argument applies to
Furthermore
Similarly and we have
(5.17) |
(5.18) |
(5.19) |
Taking into account Lemma 11, we get
(5.20) |
Finally, by combining (5.20) with (5.12)-(5.15), we prove the claim. ∎
5.1. Proof of Theorem 1
Using the equations (3.1) and (3.2), the difference can be represented as
where
Under Assumption 1 (iv) and Lemma 6, the terms and are asymptotically (as and ) smaller than , and hence can be neglected when constructing the confidence interval for the transformed Lévy density . With the notation (3.8) for , namely
where the kernel functions are defined as
consider the process
(5.21) |
where is given by
(5.22) |
Further, we show that there exists a tight -sequence of Gaussian random variables with zero mean and the same covariance function as that of , such that the distribution of asymptotically approximates the distribution of in the sense that
In what follows, we always assume Assumption 1. The proofs rely on modern empirical process theory. For a probability measure on a measurable space and a class of measurable functions on such that , let denote the -covering number for with respect to the -seminorm . See Section 2.1 in [16] for details. Let denote the equality in distribution. Consider the function class
According to Lemma 12
and we have
(5.23) |
uniformly in . Under condition (iii) of Assumption 1, the expression converges to 0. Further, we approximate by the supremum of a tight Gaussian random variable in with zero mean and the same covariance function as the random variable . Using Theorem 2.1 in [6], which proves the existence of such a random variable , we consider the empirical process:
Note that, according to Lemma 11, the increment process has the distribution , so that with and for . Hence
Since , we have, according to Plancherel's theorem,
Using (5.16), (5.17), (5.18) and (5.19), we get that
(5.24) |
holds and it also follows that
Furthermore
(5.25) |
Hence and we have
(5.26) |
Due to Theorem 2.1 in [6], with and sufficiently large, we derive that there exists a random variable with the same distribution as such that
(5.27) |
Taking into account Assumption 1 (iii), the expression (5.23) converges to zero more slowly than (5.27). Then the statement (3.14) follows. In addition, for
we define , and we observe that there exists a tight Gaussian random variable in with zero mean and the same covariance function as . The following anti-concentration inequality holds (see Theorem 2.1 in [9]) for any :
(5.28) |
According to Corollary 2.1 in [9], Theorem 3 in [7] and the representation (5.27), for we can claim that there exists a sequence such that
Since and we have that
for all . In this way we have
for all .
5.2. Proof of Theorem 2
The proof scheme of the validity of bootstrap confidence bands was introduced by Kato and Kurisu [10] and can be represented as follows.
Step 1: The conditional distribution of the supremum of the multiplier process consistently estimates the distribution of the Gaussian supremum in the sense that
Step 2: In addition together with Theorem 1 we have that
Step 3: Combining Steps 1 and 2 leads to the conclusion of Theorem 2. For a precise proof of Theorem 2, we need the following technical lemma.
Lemma 13.
We have
This lemma can be proved using the technique in [10] (see Lemma 8.10) together with Corollaries 5.1 and A.1 in [5].
Proof.
First using Lemma 9 note that
Furthermore
and
Analogously
Since we have according to (5.25),
Next we have for
(5.29) |
and analogously
(5.30) |
By the previous statement, we conclude that
uniformly in , where
Moreover, since we obtain that
Finally, let us prove that . It follows from (5.12), (5.13), (5.14) and (5.15),
Note that
and
Together with Corollary 5.1 in [5] and Theorem 2.14.1 in [16] we get
Finally, we have . This completes the proof. ∎
Hence
Since using Lemma 13 we obtain
(5.31) |
Applying Theorem 2.2 in [6] with and sufficiently large we conclude that there exists a random variable whose conditional distribution given is identical to the distribution of , that is, for all almost surely, and such that
(5.32) |
This in turn implies that there exists a sequence of constants such that
Condition (iii) of Assumption 1 guarantees that the expression (5.32) converges to and with speed faster than that of the expression (3.14). Since , we get, together with the bound and the anti-concentration inequality (5.28),
For the same reason, we conclude that
This argument shows together with (5.31) that
(5.33) |
To conclude the proof, it remains to show that
(5.34) |
Let us recall that it follows from Theorem 1 together with the bound that
and we have . Let us remark that
Now if we recall the conclusion of Theorem 1 and the anti-concentration inequality (5.28), we get
(5.35) |
Note that, due to (5.33) together with an argument similar to Step 3 in the proof of Theorem 2 in [11], we can find a sequence of constants such that
(5.36) |
with probability approaching one. This implies that
For the same reason, we have an upper bound for the probability, which has the form . Due to the Borell-Sudakov-Tsirelson inequality (see Lemma A.2.2 in [16] for more details) we have
If we combine this with (5.36), we get with the supremum width of the confidence band bounded as
This observation completes the proof of Theorem 2 for the multiplier bootstrap case.
References
- [1] Barndorff-Nielsen, Ole E., and Benth, F. E. and Veraart, A. Cross-commodity modelling by multivariate ambit fields. In Commodities, energy and environmental finance, volume 74 of Fields Inst. Commun., pages 109–148. Fields Inst. Res. Math. Sci., Toronto, ON, 2015.
- [2] Barndorff-Nielsen, Ole E. and Schmiegel, J. Brownian semistationary processes and volatility/intermittency. In Advanced financial modelling, volume 8 of Radon Ser. Comput. Appl. Math., pages 1–25. Walter de Gruyter, Berlin, 2009.
- [3] Denis Belomestny, Tatiana Orlova, and Vladimir Panov. Statistical inference for moving-average Lévy-driven processes: Fourier-based approach. Statistica Neerlandica, 73(1):100–117, 2019.
- [4] Belomestny, D., Panov, V., and Woerner, J. Low frequency estimation of continuous-time moving average Lévy processes. arXiv: 1607.00896, 2016.
- [5] V. Chernozhukov, D. Chetverikov, and K. Kato. Gaussian approximation of suprema of empirical processes. The Annals of Statistics, pages 1564–1597, 2014a.
- [6] V. Chernozhukov, D. Chetverikov, and K. Kato. Empirical and multiplier bootstraps for suprema of empirical processes of increasing complexity, and related Gaussian couplings. Stochastic Processes and their Applications, pages 3632–3651, 2016.
- [7] Victor Chernozhukov, Denis Chetverikov, and Kengo Kato. Comparison and anti-concentration bounds for maxima of gaussian random vectors. Probability Theory and Related Fields, 162(1-2):47–70, 2015.
- [8] Victor Chernozhukov, Denis Chetverikov, and Kengo Kato. Gaussian approximations and multiplier bootstrap for maxima of sums of high-dimensional random vectors. The Annals of Statistics, 41(6):2786–2819, 2013.
- [9] Victor Chernozhukov, Denis Chetverikov, and Kengo Kato. Anti-concentration and honest, adaptive confidence bands. The Annals of Statistics, 42(5):1787–1818, 2014.
- [10] Kengo Kato and Daisuke Kurisu. Bootstrap confidence bands for spectral estimation of Lévy densities under high-frequency observations. arXiv preprint arXiv:1705.00586, 2017.
- [11] Kengo Kato and Yuya Sasaki. Uniform confidence bands in deconvolution with unknown error distribution. Journal of Econometrics, 207(1):129–161, 2018.
- [12] Podolskij, Mark. Ambit fields: survey and new challenges. In XI Symposium on Probability and Stochastic Processes, pages 241–279. Springer, 2015.
- [13] Rajput, B. and Rosiński, J. Spectral representations of infinitely divisible processes. Probability Theory and Related Fields, 82(3):451–487, 1989.
- [14] Ken-iti Sato. Lévy processes and infinitely divisible distributions. Cambridge University Press, 1999.
- [15] Schnurr, A. and Woerner, J. H. C. Well-balanced Lévy driven Ornstein-Uhlenbeck processes. Stat. Risk Model., 28(4):343–357, 2011.
- [16] A. W. van der Vaart and J. A. Wellner. Weak Convergence and Empirical Processes: With Applications to Statistics. Springer Series in Statistics. Springer, 1996.