Simultaneous Inference for Time Series Functional Linear Regression
Abstract
We consider the problem of joint simultaneous confidence band (JSCB) construction for regression coefficient functions of time series scalar-on-function linear regression when the regression model is estimated by a roughness penalization approach with flexible choices of orthonormal basis functions. A simple and unified multiplier bootstrap methodology is proposed for the JSCB construction and is shown to achieve the correct coverage probability asymptotically. Furthermore, the JSCB is asymptotically robust to inconsistently estimated standard deviations of the model. The proposed methodology is applied to a time series dataset from an electricity market to visually investigate and formally test the overall regression relationship as well as to perform model validation.
Keywords: Convex Gaussian approximation; Functional time series; Joint simultaneous confidence band; Multiplier bootstrap; Roughness penalization.
1 Introduction
It is increasingly common to encounter time series that are densely observed over multiple oscillation periods or natural consecutive time intervals. To address statistical questions arising from such data structures, functional (or curve) time series analysis has undergone unprecedented development over the last two decades. See [27] and [6] for excellent book-length treatments of the topic. We also refer the readers to [26], [3, 2], [43], [42] and [16], among many others, for articles that address various modelling, estimation, forecasting and inference aspects of functional time series analysis from both time and spectral domain perspectives.
The main purpose of this article is to perform simultaneous statistical inference for time series scalar-on-function linear regression. Specifically, consider the following time series functional linear model (FLM):
(1)
where is a -variate stationary time series of known functional predictors observed on , is a univariate stationary time series of responses, and is a centered stationary time series of regression errors satisfying for all and . Observe that and could be dependent and can be viewed as the best linear forecast of based on . We are interested in constructing asymptotically correct joint simultaneous confidence bands (JSCB) for the regression coefficients ; that is, we aim to find random functions and , , such that
for a pre-specified coverage probability . The need for JSCB arises in many situations when one wants to, for instance, rigorously investigate the overall magnitude and pattern of the regression coefficient functions, test various assumptions on the regression relationship and perform diagnostic checking and model validation of (1) without multiple hypothesis testing problems.
To date, results for the time series scalar-on-function regression (1) are scarce and mainly concern the consistency of functional principal component (FPC) based estimators ([26], [25]). To our knowledge, there is no literature on asymptotically correct JSCB construction for under time series dependence. On the other hand, there is a wealth of statistics literature dealing with estimation, convergence rate investigation, prediction, and application of the FLM (1) when the data are independent and identically distributed (i.i.d.). See, for instance, [8], [15], [34], [7] and [31] for a far from exhaustive list of references. We also refer the readers to [9], [40], [56] and [46] for excellent recent reviews of the topic and more references. Meanwhile, the last two decades have also witnessed an increase in statistics literature on the inference of the FLM (1) for independent data. Since a JSCB is primarily an inferential tool, we shall review this literature in more detail. The main body of the aforementioned literature consists of results related to -type tests on whether or a fixed known function; see for instance [24], [33], [32] and [52], among others. Other contributions include confidence interval construction for the conditional mean and hypothesis testing for functional contrasts ([50]) and goodness-of-fit tests for (1) versus possibly nonlinear alternatives ([28], [21], [38]). On the other hand, to our knowledge there are few results discussing confidence band construction for in the case of independent observations. [30] proposed a simple methodology to construct a conservative confidence band for the slope function of scalar-on-function linear regression for independent data which covers “most” of the points; recently, [17] constructed an asymptotically correct simultaneous confidence band for function-on-function linear regression of independent data, and the authors also discussed the scalar-on-function case briefly.
The implementation of the latter paper requires estimating the convergence rate of the regression which could be a relatively difficult task in moderate samples.
There are two major challenges involved in JSCB construction of for the time series FLM (1). First, estimation of model (1) is related to an ill-posed inverse problem ([9], Section 2.2) and estimators of are often not tight on . As a result, it has been a difficult problem to investigate the large sample distributional behavior of estimators of uniformly across . Second, the rates of convergence for various estimators of depend sensitively on the smoothness of and , and on the penalization parameter used in the regression. Consequently, in practice it is difficult to determine the latter rates of convergence and hence the appropriate normalizing constants for the uniform inference of . In this article, we address the aforementioned challenges by proposing a simple and unified multiplier bootstrap methodology for the JSCB construction. For the roughness penalized estimators of , the multiplier bootstrap will be shown to approximate well their weighted maximum deviations on , uniformly across all quantiles and a wide class of smooth weight functions, under quite general conditions in large samples. As a result, the bootstrap allows one to construct asymptotically correct JSCB for in a simple and unified way without having to derive the uniform distributional behavior or rates of convergence. Furthermore, the JSCB constructed by the multiplier bootstrap remains asymptotically correct when the weight function is estimated inconsistently under some mild conditions, adding another layer of robustness to the methodology. Theoretically, validation of the bootstrap depends critically on uniform Gaussian approximation and comparison results over all Euclidean convex sets for sums of stationary and weakly dependent time series of moderately high dimensions, which we establish in this paper. These results extend the corresponding findings for independent or -dependent data investigated in [5], [20] and [19], among others.
The multiplier/weighted bootstrap technique has attracted much attention recently. Among others, [10] derived asymptotic consistency of the generalized bootstrap technique for estimating equations. [37] established the validity of the weighted bootstrap technique based on a weighted -estimation framework. Non-asymptotic results on the multiplier bootstrap validity with applications to high-dimensional inference of independent data were established in [12]. Later, [51] considered a multiplier bootstrap procedure in the construction of likelihood-based confidence sets under possible model misspecifications.
The paper is organized as follows. In Section 2, we propose the methodology of the JSCB construction based on roughness penalization approach. The theoretical result on the multiplier bootstrap for estimators of by roughness penalization estimation is discussed in Section 3. In particular, we introduce in Section 3.1 a general class of stationary functional time series models based on orthonormal basis expansions and nonlinear system theory. Section 4 investigates finite sample accuracy of the bootstrap methodology for various basis functions and weighting schemes using Monte Carlo experiments. We analyze a time series dataset on electricity demand curves and daily total demand in Spain in Section 5 and conclude the article in Section 6 with discussions on several issues including the effects of pre-smoothing, practical choices of the basis functions and extensions to regression models with functional response. Additional simulations, examples, theoretical results and the proofs of all theoretical results are deferred to the supplemental material.
2 Methodology
Hereafter, for simplicity we shall assume that and are centered and hence . Let be the Hilbert space of all square integrable functions on with inner product . We also denote by the collection of functions that are -times continuously differentiable with absolutely continuous -th derivative on .
2.1 Roughness Penalization Estimation
In order to facilitate the formulation of roughness penalization estimation, we first introduce some notation. Throughout this paper, we assume that for is continuous on a.s. and hence admits the following expansion where is a set of pre-selected orthonormal basis functions of . From Theorem 1 of [49], has the standard Karhunen-Loève type expansion as follows
(2)
where with denoting standard deviation and if . Set if . Notice that captures the decay speed of as increases and the random coefficient remains at the same magnitude with variance as increases if . Similarly, write The following assumption restricts the decay speed of the basis expansion coefficients of and .
Assumption 2.1.
For some non-negative integers and with , assume that and a.s. Suppose that , and is some finite constant, the random coefficient for .
Let be an integer. It is well known that for a general function the fastest decay rate of its -th basis expansion coefficient is for a wide class of basis functions ([11]). For instance, the Fourier basis (for periodic functions), the weighted Chebyshev polynomials ([53]) and the orthogonal wavelets with degree ([39]) admit the latter decay rate under some extra mild assumptions on the behavior of the function’s -th derivative. Hence Assumption 2.1 essentially requires that the basis expansion coefficients of and decay at the fastest rate. On the other hand, we remark that the basis expansion coefficients may decay more slowly for some orthonormal bases. An example is the Legendre polynomials, whose coefficients decay at an speed ([55]). For basis functions whose coefficients decay at slower rates, following the proofs of this paper it is straightforward to see that the multiplier bootstrap method for the JSCB construction is still asymptotically valid under the corresponding restrictions on the tuning parameters. However, in this case the estimates of will converge more slowly and the bootstrap approximation may be less accurate. For the sake of brevity we shall adopt the fastest-decay Assumption 2.1 for our theoretical investigations throughout this paper.
The roughness penalization approach to the FLM (1) solves the following penalized least squares problem
(3)
see [50] and references therein. On the other hand, in functional data analysis researchers usually work on the optimization problem on a finite-dimensional subspace ([34]), which makes the procedure easily implementable. Following the method proposed by [45], we truncate the coefficient functions to finite (but diverging) dimensional spans of an a priori chosen set of basis functions while incorporating a smoothness penalty. Specifically, assume the truncation number of equals (). Then the right hand side of (3) can be approximated by
where . Simplifying the above expression, the estimation can be achieved by minimizing the following penalized least squares criterion function
(4)
where () is a common smoothing parameter that measures the rate of exchange between fit to the data, as measured by the residual sum of squares in the first term, and smoothness of the estimator, as measured by the variability of the functions in the second term. Here we impose a roughness penalty associated with the same smoothing parameter for every , which is common practice; see [18].
In (4), and is a -dimensional block vector where is the -th block with for . Furthermore, is the block design matrix, is a block diagonal matrix with as its diagonals and is a matrix with elements . Then the penalized least squares estimator turns out to be
Consequently, it is easy to find the matrix representation of the roughness penalized estimator where is a matrix and is a -dimensional block vector with as its -th block and other elements being 0.
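The penalized estimator has a familiar ridge-type closed form: the basis-score design matrix and the roughness penalty matrix enter through regularized normal equations. Below is a minimal numerical sketch; the design matrix `B`, the diagonal penalty `R`, and the toy coefficients are hypothetical stand-ins, and the smoothing parameter `lam` is taken to absorb any sample-size factor in the paper's criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

def penalized_ls(B, Y, R, lam):
    # Solve the regularized normal equations (B'B + lam * R) beta = B'Y,
    # the closed form of the penalized least squares criterion.
    return np.linalg.solve(B.T @ B + lam * R, B.T @ Y)

# toy data: n = 200 observations, c = 6 basis coefficients
n, c = 200, 6
B = rng.standard_normal((n, c))               # design matrix of basis scores
beta_true = np.array([1.0, 0.5, 0.25, 0.1, 0.0, 0.0])
Y = B @ beta_true + 0.1 * rng.standard_normal(n)
R = np.diag(np.arange(c) ** 4).astype(float)  # rough proxy for a derivative penalty
beta_hat = penalized_ls(B, Y, R, lam=1.0)
```

Setting `lam=0` recovers ordinary least squares, while larger values shrink the high-order (rougher) coefficients more strongly through `R`.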
2.2 JSCB Construction
JSCB construction of boils down to evaluating the distributional behavior of the weighted maximum deviation
where for any function and weight function . The weight function is assumed to belong to a class with
For a given , denote by the -th quantile of . Then a JSCB with coverage probability can be constructed as , , .
Observe that the width of the JSCB is proportional to the weight function . In practice one could simply choose some fixed weight function such as , which yields equal JSCB width at each and . Alternatively, when the sample size is sufficiently large and temporal dependence is weak or moderately strong, we recommend choosing There are two advantages to this data-driven choice of weights. First, the resulting width of the JSCB reflects the standard deviation of , which gives direct visual information on the estimation uncertainty at every and . Second, this choice of weight function yields a much smaller average width of the JSCB compared to some fixed choices such as ; see our simulations in Section 4 for a finite-sample illustration. On the other hand, has to be estimated in practice. Later in this article, we shall discuss its estimation as well as the asymptotic robustness of our multiplier bootstrap methodology when the weight function is inconsistently estimated.
To motivate the multiplier bootstrap, denote . Note that under mild conditions. For simplicity, we assume that each has the same degree of smoothness; accordingly, we let each have the same rate of divergence. By elementary calculations and basis expansions, we have
(5)
where . Hence if is sufficiently large and is relatively small such that is under-smoothed, i.e., the standard deviation of the estimation (captured by the first term on the right hand side of (5)) dominates the bias asymptotically, Eq. (5) reveals that the maximum deviation of on is determined by the uniform probabilistic behavior of , where and with being the -th column of .
There are two major difficulties in the investigation of uniformly in . First, is typically a moderately high-dimensional time series whose dimensionality diverges slowly with , and is not a tight sequence of stochastic processes on . Consequently, deriving the explicit limiting distribution of the maximum deviation of is a difficult task. Second, the convergence rate of depends on many nuisance parameters, such as the smoothness of and and the diverging rate of the truncation parameters , which are difficult to estimate in practice. To circumvent the aforementioned difficulties, one possibility is to utilize certain bootstrap methods to avoid deriving and estimating the limiting distributions and nuisance parameters explicitly. In this article, we resort to the multiplier/wild/weighted bootstrap ([57]) to mimic the probabilistic behavior of the process uniformly over . In the literature, the multiplier bootstrap has been used for high dimensional inference on hyper-rectangles and certain classes of simple convex sets that can be well approximated by (possibly higher dimensional) hyper-rectangles after linear transformations; see for instance [12] and [14] for independent data and [64] for functional time series. The inference of uniformly over all quantiles and weight functions in can be transformed into investigating the probabilistic behavior of over a large class of moderately high-dimensional convex sets. However, these convex sets have complex geometric structures for which results based on approximations over hyper-rectangles and their linear transformations are not directly applicable. As a result, in this article we shall extend the uniform Gaussian approximation and comparison results over all high-dimensional convex sets for sums of independent and -dependent data established in, for instance, [5], [20] and [19] to sums of stationary and short memory time series in order to validate the multiplier bootstrap.
These results may be of wider applicability in other moderately high-dimensional time series problems.
To be more specific, we will consider the bootstrapped sum given a block size : where is a sequence of i.i.d. standard normal random variables which is independent of . Define , then we will show that, conditional on the data, approximates in distribution in large samples with high probability. Naturally, we can use the conditional distribution of to approximate the law of uniformly over all quantiles and weight functions in .
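The blocked multiplier construction can be sketched in a few lines. The variant below uses non-overlapping blocks with one standard normal multiplier per block; the exact block scheme and the 1/√n normalization are illustrative assumptions, since the paper's definition involves quantities whose symbols were introduced above.

```python
import numpy as np

def multiplier_bootstrap(Z, m, B, rng=None):
    """Block multiplier bootstrap for the scaled sum of the rows of Z.

    Z:  n x d array (rows z_i, e.g. basis scores times residuals);
    m:  block size; B: number of bootstrap draws.
    Returns a B x d array of draws (1/sqrt(n)) * sum_j V_j * S_j, where
    S_j is the sum of the rows in block j and the V_j are i.i.d. standard
    normal multipliers, independent of the data."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = Z.shape
    nblocks = n // m
    S = Z[: nblocks * m].reshape(nblocks, m, d).sum(axis=1)  # block partial sums
    V = rng.standard_normal((B, nblocks))                    # normal multipliers
    return (V @ S) / np.sqrt(n)
```

Conditional on the data, each draw is Gaussian with a covariance built from the block sums, which is what lets the bootstrap capture short-range temporal dependence within blocks.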
2.3 Tuning Parameter Selection and The Implementation Algorithm
In this subsection, we first discuss tuning parameter selection for the roughness penalization regression. There are three parameters to choose: the auxiliary truncation parameters , the smoothing parameter and the window size . We recommend choosing where can be selected via the cumulative percentage of total variance (CPV) criterion. Specifically, we choose such that the quantity exceeds a pre-determined high percentage value (e.g., 85% used in the simulations), where are the eigenvalues of . The rationale is that, with the aid of the roughness penalization, can be chosen at a relatively large value to reduce the sieve approximation bias without blowing up the variance of the estimation.
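In code, the CPV rule amounts to picking the smallest number of leading eigenvalues whose cumulative share of the total variance reaches the chosen percentage. The eigenvalues in the example are hypothetical.

```python
import numpy as np

def cpv_truncation(eigvals, threshold=0.85):
    """Smallest k such that the leading k eigenvalues explain at least
    `threshold` of the total variance (cumulative percentage of variance)."""
    eigvals = np.asarray(eigvals, dtype=float)
    ratios = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(ratios, threshold) + 1)

# hypothetical eigenvalues of the sample covariance of the basis scores
k = cpv_truncation([5.0, 3.0, 1.0, 1.0], threshold=0.85)  # -> 3
```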
In addition, the generalized cross validation (GCV) method can be used to choose ; see, for example, [8, 47]. To be more specific, the GCV criterion for the smoothing parameter is defined as where and is the -th element of the vector . Thus one can select over a range of candidate values by minimizing the above function.
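A sketch of the GCV score in its textbook form, n·RSS/(n − tr(H))², with H the hat matrix of the penalized fit; the matrices below are hypothetical placeholders for the design and penalty matrices of the paper.

```python
import numpy as np

def gcv_score(B, Y, R, lam):
    """GCV score n * RSS / (n - trace(H))^2 for the penalized fit with
    hat matrix H = B (B'B + lam R)^{-1} B'."""
    n = B.shape[0]
    H = B @ np.linalg.solve(B.T @ B + lam * R, B.T)
    rss = np.sum((Y - H @ Y) ** 2)
    return n * rss / (n - np.trace(H)) ** 2

# choose the smoothing parameter by minimizing GCV over a grid
rng = np.random.default_rng(0)
B = rng.standard_normal((100, 5))
R = np.eye(5)
Y = B[:, 0] + 0.1 * rng.standard_normal(100)
grid = np.logspace(-4, 2, 25)
lam_hat = min(grid, key=lambda lam: gcv_score(B, Y, R, lam))
```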
For the window size , we suggest using the minimum volatility (MV) method proposed by [44]. Denote the estimated conditional covariance matrix The rationale behind the MV method is that the estimator becomes stable as a function of when is in an appropriate range. Let the grid of candidate window sizes be . The MV criterion selects the window size that minimizes the function , where denotes the standard error
with .
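The MV rule can be sketched as: compute the candidate estimator on a grid of window sizes, then pick the grid point around which neighboring estimates fluctuate least. The neighborhood half-width `h` and the toy inputs are illustrative assumptions.

```python
import numpy as np

def minimum_volatility(estimates, h=2):
    """estimates: candidate-window estimates (arrays of equal shape), one per
    grid point.  For each interior grid point j, compute the Frobenius norm of
    the elementwise standard deviation over the neighborhood j-h..j+h and
    return the index of the grid point minimizing it."""
    A = np.stack([np.asarray(e, dtype=float) for e in estimates])
    ses = []
    for j in range(h, len(A) - h):
        block = A[j - h: j + h + 1]
        ses.append(np.linalg.norm(block.std(axis=0)))
    return int(np.argmin(ses)) + h

# toy 1-d "estimates": stable around the third grid point
est = [np.array([x]) for x in [5.0, 1.0, 1.1, 1.0, 0.9, 6.0, 7.0]]
j_star = minimum_volatility(est, h=1)  # -> 2
```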
Next, we will describe the detailed steps of the multiplier bootstrap procedure for JSCB construction when the weight function , . Note that in probability under some mild conditions.
(a) Select the window size , such that .

(b) Choose the number of basis expansions for each and choose the smoothing parameter .

(c) Estimate the FLM (1) and obtain the residuals .

(d) Generate (say 1000) sets of i.i.d. standard normal random variables , . For each , calculate , where with , have similar definitions to and with replaced by its estimate .

(e) Estimate the bootstrap sample standard deviation of , , where denotes the -th component of , . Calculate an estimator of as and obtain , with .

(f) For a given level , let the -th sample quantile of the sequence be . Then the JSCB of can be constructed as for , .
In the rare case where is close to 0 at some , one can lift up to a certain threshold (say, ) while keeping the weight function continuous such that . As we will show in Section 4, the above manipulations do not influence the asymptotic validity of the bootstrap.
If one is interested in constructing a JSCB for a group of parameter functions, say , then one just needs to focus on the th, th, , th elements of the bootstrap process , , to conduct simultaneous inference of those parameter functions. The implementation procedure is very similar to the above and we omit the details.
3 Theoretical Results
In this section, we first model the functional time series from a basis expansion and nonlinear system ([58]) point of view and then investigate the multiplier bootstrap theory.
3.1 Functional Time Series Models
Based on the basis expansion (2), we aim at utilizing a general time series model for from a nonlinear system point of view as follows, which will serve as a preliminary for our theoretical investigations.
Definition 1.
Assume that satisfy , , where for a random variable . We say that admits a physical representation if for each fixed and , the stationary time series can be written as where is a measurable function and with being i.i.d. random elements. For , define the -th physical dependence measure for the functional time series with respect to the basis and moment as where with being an i.i.d. copy of .
Note that in the above definition does not depend on . The above formulation of time series can be viewed as a physical system where functions are the underlying data generating mechanisms and are the shocks or innovations that drive the system. Meanwhile, measures the temporal dependence of by quantifying the corresponding changes in the system’s output uniformly across all basis expansion coefficients when the shock of the system steps ahead is changed to an i.i.d. copy. We refer to [58] for more discussions of the physical dependence measures with examples on how to calculate them for a wide range of linear and nonlinear time series models.
Definition 1 is related to the class of functional time series formulated in [64]. The difference is that in (2) we separate the standard deviation from and the functional time series model in [64] is formulated without this extra step. Standardization of the basis expansion coefficients is needed in the fitting of the FLM (1) to avoid near singularity of the design matrix. Furthermore, Definition 1 is also related to the concept of -approximable functional time series introduced in [26] as both formulations utilize the concepts of Bernoulli shifts and coupling. The difference lies in our adaptation of the basis expansion similar to that in [64] which separates the functional index and time index and hence makes it easier technically to investigate the behavior of various estimators of uniformly over . Now, we impose an assumption on the speed of decay for the dependence measures .
Assumption 3.1.
There exists some constant such that for some finite constant , the physical dependence measure satisfies
Assumption 3.1 is a mild short-range dependence assumption which asserts that the temporal dependence of the functional time series decays at a sufficiently fast polynomial rate. For independent functional data, the condition is automatically satisfied as there is no temporal dependence (). In Section C of the online supplemental material, we will provide two examples on how to calculate for a class of functional MA and functional AR(1) processes, respectively.
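As a concrete scalar illustration (separate from the functional examples in the supplement), for an AR(1) process X_i = a X_{i-1} + ε_i the coupled copy differs from the original only through the single replaced shock, so the lag-k physical dependence measure in the L2 sense equals |a|^k · sd(ε) · √2 and decays geometrically, comfortably satisfying a polynomial-decay condition of the kind in Assumption 3.1. A Monte Carlo check of this closed form (all parameter values are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
a, sigma, k, burn, N = 0.5, 1.0, 3, 50, 100_000

# simulate X as a function of its shocks, and a coupled copy in which the
# shock k steps back (column `burn`) is replaced by an i.i.d. copy
eps = sigma * rng.standard_normal((N, burn + k + 1))
eps_prime = eps.copy()
eps_prime[:, burn] = sigma * rng.standard_normal(N)

def ar1_last(e):
    # value of the AR(1) recursion after feeding in all shocks in order
    x = np.zeros(e.shape[0])
    for j in range(e.shape[1]):
        x = a * x + e[:, j]
    return x

delta_hat = np.sqrt(np.mean((ar1_last(eps) - ar1_last(eps_prime)) ** 2))
delta_theory = abs(a) ** k * sigma * np.sqrt(2)  # |a|^k * sd(eps) * sqrt(2)
```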
3.2 Validating The Multiplier Bootstrap
Adapting the nonlinear system point of view ([58]), we model the stationary time series of errors as for some measurable function . Observe that both and are generated by the same set of shocks and hence they could be statistically dependent. To establish the main results, we need the following conditions:
Assumption 3.2.
Suppose that the smallest eigenvalue of is greater than or equal to some constant .
Assumption 3.3.
The stationary process satisfies and there exist constants and such that its physical dependence measure achieves .
Assumption 3.4.
and some constant .
The above assumptions are mild and are needed for establishing a Gaussian approximation and comparison theory for roughness-penalized estimators. Assumption 3.2 ensures positive definiteness of the design matrix in order to avoid multicollinearity. Assumption 3.3 is a short-range dependence condition on in accordance with Assumption 3.1. Finally, Assumption 3.4 places some moment restrictions on the random variable .
Let be a matrix with its element . The following additional assumptions are needed for the validation of the multiplier bootstrap.
Assumption 3.5.
For , we assume , where is a positive constant depending on the basis function, is the spectral norm (largest singular value) of a matrix .
Assumption 3.6.
For each , there exists some constant such that for some positive constant , where denotes the uniform norm of a bounded function, i.e., if then . In addition, for any and , there exists a nonnegative constant and some finite constant such that
Assumption 3.7.
For sufficiently large and , , where is some finite constant.
Assumption 3.5 is mild and can be easily checked for many frequently-used basis functions, such as the Fourier basis and the Legendre polynomial basis . Meanwhile, Assumption 3.6 is satisfied by most frequently-used sieve bases. For instance, the pair for the trigonometric polynomial series, for the polynomial spline basis functions and for the normalized Legendre polynomial basis. We refer to Section D.2 of the supplementary material for a detailed discussion of the above claims on Assumptions 3.5 and 3.6. Assumption 3.7 is frequently adopted in the FLM literature, for example [22], which imposes a lower bound on the decay rate of . Similar to our discussion of Assumption 2.1, for functions the fastest decay speed of their basis expansion coefficients is of the order for a wide class of basis functions. Hence Assumption 3.7 is mild. We remark that Assumptions 3.6–3.7 are required for controlling the bias of and are not needed for the Gaussian approximation and comparison results.
Define . Recall the definition of in Step (e) of Section 2.3. Denote the following Kolmogorov distance
where . Then we have the following theorem on the consistency of the proposed multiplier bootstrap method.
Theorem 1.
Suppose Assumptions 2.1–3.7 hold, the smallest eigenvalue of is bounded below by some constant and . Define where represents the element in the probability space, denotes the Frobenius norm, diverges to infinity at an arbitrarily slow rate and is a finite constant; then . Under the event , we have
(6)
where is some finite constant. Further assume
(i) and ,

(ii) almost surely,

(iii) as ,
then the JSCB achieves
(7)
Theorem 1 states that the JSCB achieves the correct coverage probability asymptotically under the corresponding regularity conditions. In particular, (6) establishes the rate of the bootstrap approximation to the weighted maximum deviation of . More specifically, the first term on the right hand side of (6) captures the magnitude of the Gaussian approximation error, the second term represents the multiplier bootstrap approximation error and the last term is related to the estimation error. Condition (i) in Theorem 1 imposes a lower bound on and an upper bound on in order to obtain an under-smoothed estimator (so that the estimation bias is asymptotically negligible). The constraints on and in Condition (i) are mild. For example, if , based on the normalized Legendre polynomials and , the parameter should be chosen as and such that the approximation error goes to 0. Condition (ii) is a mild assumption on the weight function. The rate is the optimal one that balances the bias and variance of the bootstrapped covariance matrices. Since Theorem 1 in Section B of the supplemental material and the above Theorem 1 are established uniformly over all weight functions in , the JSCB achieves asymptotically correct coverage probability without assuming that is a uniformly consistent estimator of , as long as almost surely. Hence the multiplier bootstrap is asymptotically robust to inconsistently estimated weight functions. The price one pays for inconsistently estimated weight functions is that the average width of the JSCB may be inflated.
3.3 Data-Driven Basis Functions Based on Functional Principal Components
A popular data-driven orthonormal basis in functional data analysis is the FPC. Observe that FPCs have to be estimated from the data which inevitably causes estimation errors. When one employs data-driven basis functions such as the FPCs to fit model (1), the additional estimation error must be taken into account. Note that are the eigenvalues of the corresponding covariance operator. Throughout this subsection, for any given , we assume that . This assumption implies that the first eigenvalues are separated, which is commonly used in the theoretical investigation for FPC-based methods. The next proposition establishes the asymptotic validity of the bootstrap for the FPC basis functions under some extra conditions.
Proposition 1.
This proposition imposes an extra constraint (8) on the smoothness of and to make the additional bias term resulting from the FPC estimation negligible compared to the standard deviation.
4 Simulation Studies
Throughout this section, we focus on the case where . Three basis functions will be considered, i.e., the Fourier bases (Fou.), the Legendre polynomial bases (Leg.) and the functional principal components (FPC). Due to page constraints, we refer the readers to Section A of the supplementary material for more numerical studies for and statistical powers for .
Recall model (1) and restate the basis expansions as
, when . Next, denote , and is an infinite-dimensional tridiagonal coefficient matrix with on the diagonal and on the off-diagonal. We will investigate the following models:
FMA(1) model. , we choose the MA coefficient or . The entries of are independent random variables.
FAR(1) model. , we choose the AR coefficient or to represent weak to moderately strong dependencies. The entries of are chosen as independent random variables.
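A sketch of simulating the FAR(1) scores in coefficient space after truncating to d basis functions. The diagonal and off-diagonal values of the tridiagonal coefficient matrix and the truncation level below are hypothetical stand-ins for the paper's settings; they are chosen so that the spectral radius stays below one and the process is stationary.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_far1(n, d, a=0.4, b=0.1, burn=100):
    """Simulate n observations of a truncated FAR(1) in coefficient space:
    x_i = A x_{i-1} + z_i, with A tridiagonal (a on the diagonal, b off it)
    and independent N(0,1) innovation coordinates.  Returns an n x d array of
    basis scores; the curves are X_i(t) = sum_j x_{ij} phi_j(t) for a chosen
    orthonormal basis (phi_j)."""
    A = a * np.eye(d) + b * (np.eye(d, k=1) + np.eye(d, k=-1))
    x = np.zeros(d)
    out = np.empty((n, d))
    for i in range(n + burn):       # burn-in to reach stationarity
        x = A @ x + rng.standard_normal(d)
        if i >= burn:
            out[i - burn] = x
    return out
```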
Basis | |||||
---|---|---|---|---|---|
1 | Fou. | 0.943(1.46) | 0.936(1.46) | 0.884(1.28) | 0.886(1.29) |
Leg. | 0.952(2.95) | 0.945(2.86) | 0.907(2.55) | 0.895(2.48) | |
FPC | 0.960(1.65) | 0.932(1.69) | 0.903(1.45) | 0.886(1.49) | |
Fou. | 0.936(1.18) | 0.936(1.22) | 0.875(1.07) | 0.879(1.10) | |
Leg. | 0.947(1.34) | 0.940(1.33) | 0.884(1.21) | 0.885(1.21) | |
FPC | 0.938(1.37) | 0.924(1.40) | 0.870(1.25) | 0.858(1.27) | |
Basis | |||||
1 | Fou. | 0.957(1.16) | 0.958(1.12) | 0.903(1.01) | 0.914(0.99) |
Leg. | 0.953(2.20) | 0.951(2.08) | 0.900(1.90) | 0.890(1.81) | |
FPC | 0.945(1.17) | 0.948(1.19) | 0.893(1.04) | 0.894(1.06) | |
Fou. | 0.949(0.92) | 0.940(0.94) | 0.883(0.84) | 0.903(0.86) | |
Leg. | 0.943(0.98) | 0.941(0.97) | 0.887(0.89) | 0.888(0.88) | |
FPC | 0.943(1.00) | 0.934(1.01) | 0.874(0.90) | 0.871(0.92) |
The following basis expansion coefficients of and the error process in model (1) are considered:
. and for .
. are dependent on . Let follow an AR(1) process where is i.i.d. -distributed and set where is the first FPC score of .
. and for .
. are independent of . Let follow an AR(1) process where is i.i.d. standard normally distributed.
For the case based on FPC expansions, we also use the above settings except that we will modify the component-wise dependence structure of in the sense that is diagonal with all diagonal elements being . This guarantees the true FPCs of are .
In the simulation studies, the bootstrap procedures discussed in Section 2.3 are employed with to find the critical values at levels and . The simulation results are based on 1000 Monte Carlo experiments. Tables 1–2 report simulated coverage probabilities and average JSCB widths with the aforementioned three types of basis functions and two types of weight functions, i.e., and .
| | Basis | | | | | | |
|---|---|---|---|---|---|---|---|
| 1 | Fou. | 0.945(1.77) | 0.944(1.79) | 0.935(1.72) | 0.898(1.55) | 0.883(1.56) | 0.884(1.51) |
| | Leg. | 0.942(3.10) | 0.947(3.16) | 0.940(3.05) | 0.901(2.67) | 0.888(2.72) | 0.877(2.64) |
| | FPC | 0.939(1.91) | 0.941(1.91) | 0.947(1.90) | 0.893(1.67) | 0.889(1.66) | 0.892(1.66) |
| | Fou. | 0.938(1.47) | 0.932(1.47) | 0.923(1.43) | 0.884(1.32) | 0.876(1.32) | 0.864(1.29) |
| | Leg. | 0.934(1.50) | 0.923(1.53) | 0.922(1.56) | 0.883(1.35) | 0.870(1.37) | 0.860(1.40) |
| | FPC | 0.938(1.58) | 0.927(1.57) | 0.923(1.56) | 0.871(1.41) | 0.869(1.41) | 0.851(1.40) |
| | Basis | | | | | | |
| 1 | Fou. | 0.960(1.25) | 0.944(1.27) | 0.940(1.23) | 0.903(1.10) | 0.896(1.11) | 0.891(1.08) |
| | Leg. | 0.959(2.12) | 0.946(2.12) | 0.940(2.14) | 0.908(1.83) | 0.894(1.83) | 0.888(1.85) |
| | FPC | 0.952(1.28) | 0.952(1.30) | 0.955(1.26) | 0.909(1.12) | 0.893(1.14) | 0.914(1.10) |
| | Fou. | 0.947(1.04) | 0.940(1.06) | 0.930(1.02) | 0.890(0.93) | 0.878(0.95) | 0.871(0.91) |
| | Leg. | 0.940(1.02) | 0.944(1.04) | 0.938(1.04) | 0.895(0.91) | 0.886(0.93) | 0.886(0.93) |
| | FPC | 0.942(1.06) | 0.939(1.08) | 0.934(1.04) | 0.889(0.95) | 0.882(0.97) | 0.887(0.93) |
From Tables 1–2, we find that most of the results are close to the nominal levels. When the data-adaptive weights are used, the coverage probabilities are reasonably close to the nominal levels in most cases under weaker dependence. Under stronger dependence, however, the performance of all three bases deteriorates slightly for smaller sample sizes. The decrease in estimation accuracy and coverage probability in finite samples under stronger temporal dependence is well known in time series analysis, and it appears universal across inferential tools (such as subsampling, the block bootstrap, the multiplier bootstrap, and self-normalization), though some methods may be less sensitive to strong dependence than others. One explanation is that the variances of the estimators tend to be larger under stronger dependence, which yields less accurate estimators and, in turn, deteriorated coverage probabilities in small to moderately large samples.
Furthermore, we find that the performance of the JSCB for dependent predictors and errors is similar to that for the independent case, which supports our theoretical result that the multiplier bootstrap is robust to dependence between the predictors and the errors. We also observe that the JSCB is narrower when the weights are selected proportional to the standard deviations of the estimators. We therefore recommend such weights, provided that there is no sacrifice in coverage accuracy.
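As a rough schematic of how the JSCB critical value and band limits in these experiments are assembled on a grid, the sketch below draws bootstrap replicates from a plain Gaussian surrogate; in the actual procedure of Section 2.3 these would be the (block) multiplier bootstrap replicates of the estimator, and all names, dimensions, and constants here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def jscb_critical_value(boot_curves, weights, alpha):
    """Empirical (1 - alpha)-quantile of the weighted sup-deviation
    taken over the grid and over all coefficient curves jointly."""
    # boot_curves: (B, p, m) centered bootstrap deviations
    sup_stats = np.max(np.abs(boot_curves) / weights, axis=(1, 2))
    return np.quantile(sup_stats, 1 - alpha)

# toy setup: p = 2 coefficient curves on an m-point grid
p, m, B = 2, 50, 2000
grid = np.linspace(0.0, 1.0, m)
true_beta = np.vstack([np.sin(2 * np.pi * grid), grid ** 2])

# stand-in for centered (block) multiplier bootstrap replicates
sigma = 0.1
boot = sigma * rng.standard_normal((B, p, m))
# data-adaptive weights: proportional to pointwise bootstrap std devs
weights = np.maximum(boot.std(axis=0), 1e-8)

q = jscb_critical_value(boot, weights, alpha=0.05)
beta_hat = true_beta + sigma * rng.standard_normal((p, m))
lower = beta_hat - q * weights   # joint band covers all curves at once
upper = beta_hat + q * weights
```

Empirical coverage, as reported in the tables, is then the fraction of Monte Carlo runs in which `lower <= true_beta <= upper` holds everywhere.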
5 Empirical Illustrations
We consider the daily curves of electricity real demand (MWh) in Spain from January 1st, 2015 to December 31st, 2017. The data can be obtained from the Red Eléctrica de España system operator (https://www.esios.ree.es/en). Since daily electricity demand behaves differently on weekdays and on weekends, we focus on the weekday curves (Monday to Friday) in this paper. The hourly records of electricity demand in 2011–2012 were investigated in [1].

The original data are recorded at 10-minute intervals from 00:00 to 23:50 on each day, giving 144 observations per day. We obtain the daily log-transformed real demand curves by smoothing the records and rescaling the time domain to the unit interval. The smoothed functional time series is plotted in Fig. 1. The stationarity test of [29] does not reject the stationarity hypothesis over the considered period. Next, we aim to investigate the relationship between the daily electricity real demand curves and future daily total demand. To this end we explore the FLM:
with the response being the daily total real demand on the corresponding weekday.
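The pre-smoothing step described above, projecting the 144 raw records of a day onto a smooth curve over the unit interval, can be sketched as follows. The Fourier basis, the number of basis functions, and the synthetic demand record are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def fourier_design(t, K):
    """Fourier basis (constant plus K sine/cosine pairs) on [0, 1]."""
    cols = [np.ones_like(t)]
    for k in range(1, K + 1):
        cols.append(np.sqrt(2) * np.sin(2 * np.pi * k * t))
        cols.append(np.sqrt(2) * np.cos(2 * np.pi * k * t))
    return np.column_stack(cols)

# 144 ten-minute records per day, rescaled to the unit interval
t_obs = np.arange(144) / 144.0
rng = np.random.default_rng(1)
# synthetic stand-in for one day's log-transformed demand record
raw = np.log(3e4 + 5e3 * np.sin(2 * np.pi * t_obs))
raw = raw + 0.01 * rng.standard_normal(144)

# least-squares projection of the raw records onto the basis
B = fourier_design(t_obs, K=5)
coef, *_ = np.linalg.lstsq(B, raw, rcond=None)

# the smoothed curve can now be evaluated on any grid in [0, 1]
t_new = np.linspace(0.0, 1.0, 200)
smooth = fourier_design(t_new, K=5) @ coef
```

Each day's coefficient vector then represents one observation of the functional predictor in the regression.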
The Legendre polynomial and FPC bases are used in this example. First, the number of basis functions is selected by the CVP criterion so that at least 95% of the variability of the data is explained. The block size for the bootstrap is chosen by the MV method proposed in Section 2.3 for both bases, and the smoothing parameter is selected by the GCV criterion for the Legendre polynomial basis and for the FPC basis separately. The JSCBs are constructed based on 10000 bootstrap replications.
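The CVP rule used above, taking the smallest number of FPCs whose cumulative eigenvalue share reaches 95%, can be sketched under a toy two-component data-generating process; the helper name and the toy data are assumptions for illustration only.

```python
import numpy as np

def n_components_cvp(curves, threshold=0.95):
    """Smallest number of FPCs whose cumulative share of the
    sample-covariance eigenvalues reaches the threshold (CVP)."""
    centered = curves - curves.mean(axis=0)
    # eigenvalues of the sample covariance via SVD of the data matrix
    svals = np.linalg.svd(centered, compute_uv=False)
    evals = svals ** 2
    cvp = np.cumsum(evals) / evals.sum()
    return int(np.searchsorted(cvp, threshold) + 1)

rng = np.random.default_rng(2)
grid = np.linspace(0.0, 1.0, 100)
# toy curves dominated by two smooth components plus small noise
scores = rng.standard_normal((300, 2)) * np.array([3.0, 1.5])
basis = np.vstack([np.sin(np.pi * grid), np.cos(np.pi * grid)])
curves = scores @ basis + 0.05 * rng.standard_normal((300, 100))

d = n_components_cvp(curves, threshold=0.95)
```

Here two components carry almost all the variability, so the criterion selects two FPCs.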






We exhibit the JSCBs and point-wise confidence bands for the estimated coefficient functions in Fig. 2. One finds that the coefficient functions of the first two lagged demand curves are significantly non-zero, while the third is insignificant under the JSCB construction for both bases. In particular, for both bases the first coefficient function is significantly positive at the morning and evening peak times and significantly negative at off-peak times, whereas the second is significantly positive only in the afternoon and significantly negative in the late evening, contributing a slightly weaker impact on the response than the first. On the other hand, the point-wise confidence bands for the third coefficient, based on both bases, do not fully cover the horizontal axis, which would lead to spurious significance. These findings illustrate that the electricity demand behavior over the last two weekdays (especially the peak times of the last weekday) is highly correlated with the total demand of the next weekday. Moreover, judging from the fluctuations of the JSCBs for the first two coefficient functions, the constancy and linearity hypotheses for these coefficients are clearly rejected.
For further investigation, we fit the model and find that the R² of the regression is 0.8032 (0.8044) for the Legendre polynomial (FPC) bases. Therefore, we conclude that the majority of the variability of the total weekday electricity demand in Spain can be explained and predicted by the demand curves of the past two weekdays.
6 Discussions
We conclude this article by discussing some issues related to the practical implementation of the regression as well as possible extensions. Firstly, in practice the predictors are typically only discretely observed with noise, and hence pre-smoothing is required to transform the discrete observations into continuous curves, which inevitably produces smoothing errors. In this article, for brevity and to keep the discussion focused, we assume that the smooth curves are observed. It can be seen from the proofs that the results of the paper continue to hold in the densely observed functional data scenario as long as the smoothing error is uniformly of a sufficiently small order. Smoothing effects for densely observed functional data have been intensively investigated in the literature; see for instance [23], [61] and [59], among many others. Though the aforementioned references are not aimed specifically at functional time series, their results can be extended to the functional time series setting, which we will pursue in separate future work. On the other hand, we do not expect our theory and methodology to carry over directly to the sparsely observed functional data setting ([60]); the corresponding investigations are beyond the scope of the current paper.
Secondly, we note that the choice of basis functions is a non-trivial task in practice. In the literature, there have been several discussions of the choice of basis functions or estimation methods for regression (1) in general; see for instance Section 6.1 in [46] and the references therein. Here we add some additional notes. For functional time series whose observed curves are clearly periodic, such as yearly temperature curves, the Fourier basis is a natural choice; similar choices can be made in other scenarios based on prior knowledge of the shapes of the observed curves. From our limited simulation studies and the data analysis in Sections 4 and 5, many popular classes of basis functions produce similar estimates of the regression curves and similar inference results, which demonstrates a certain robustness to the choice of basis.
Finally, in some real data applications the response time series may be function-valued as well; one prominent example is functional autoregression [6]. We hope that our multiplier bootstrap methodology, together with the underlying Gaussian approximation and comparison results, will shed light on the simultaneous inference problem for FLMs with functional responses. Indeed, using the basis decomposition technique, an FLM with functional responses can be viewed as multiple FLMs with scalar responses at the level of the basis expansion coefficients, where the regression errors may be cross-correlated and the sets of regressors are identical. We will investigate this direction in future research.
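The basis-decomposition view described above can be illustrated schematically: projecting functional responses onto a finite orthonormal basis yields one scalar-response regression per expansion coefficient, all sharing the same regressors. The basis, dimensions, and coefficient-level design below are toy assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, K = 200, 60, 3
grid = np.linspace(0.0, 1.0, m)

# orthonormal basis for the response curves (QR of a Legendre design)
P = np.polynomial.legendre.legvander(2 * grid - 1, K - 1)
P, _ = np.linalg.qr(P)                # columns are orthonormal on the grid

X = rng.standard_normal((n, 5))       # stand-in coefficient-level design
true_B = rng.standard_normal((5, K))
Y_scores = X @ true_B + 0.1 * rng.standard_normal((n, K))
Y_curves = Y_scores @ P.T             # functional responses on the grid

# project responses onto the basis: one scalar FLM per coefficient
scores = Y_curves @ P                 # (n, K) expansion coefficients
B_hat = np.column_stack([
    np.linalg.lstsq(X, scores[:, k], rcond=None)[0] for k in range(K)
])
fitted_curves = (X @ B_hat) @ P.T     # map the K fits back to curves
```

The K scalar regressions share the regressor matrix `X`, and their errors may be cross-correlated, which is exactly the structure the proposed joint inference would have to accommodate.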
References
- Aneiros et al. [2016] Aneiros, G., J. Vilar, and P. Raña (2016). Short-term forecast of daily curves of electricity demand and price. Electrical Power & Energy Systems 80, 96–108.
- Aue et al. [2017] Aue, A., L. Horváth, and D. F. Pellatt (2017). Functional generalized autoregressive conditional heteroskedasticity. Journal of Time Series Analysis 38(1), 3–21.
- Aue et al. [2015] Aue, A., D. D. Norinho, and S. Hörmann (2015). On the prediction of stationary functional time series. Journal of the American Statistical Association 110, 378–392.
- Belloni et al. [2015] Belloni, A., V. Chernozhukov, D. Chetverikov, and K. Kato (2015). Some new asymptotic theory for least squares series: Pointwise and uniform results. Journal of Econometrics 186(2), 345–366.
- Bentkus [2003] Bentkus, V. (2003). On the dependence of the Berry–Esseen bound on dimension. Journal of Statistical Planning and Inference 113(2), 385–402.
- Bosq [2012] Bosq, D. (2012). Linear Processes in Function Spaces: Theory and Applications, Volume 149. Springer Science & Business Media.
- Cai and Hall [2006] Cai, T. T. and P. Hall (2006). Prediction in functional linear regression. The Annals of Statistics 34(5), 2159–2179.
- Cardot et al. [2003] Cardot, H., F. Ferraty, and P. Sarda (2003). Spline estimators for the functional linear model. Statistica Sinica 13(3), 571–591.
- Cardot and Sarda [2011] Cardot, H. and P. Sarda (2011). Functional linear regression. In The Oxford Handbook of Functional Data Analysis. Oxford University Press.
- Chatterjee and Bose [2005] Chatterjee, S. and A. Bose (2005). Generalized bootstrap for estimating equations. The Annals of Statistics 33(1), 414–436.
- Chen [2007] Chen, X. (2007). Large sample sieve estimation of semi-nonparametric models. Handbook of Econometrics 6(B), 5549–5632.
- Chernozhukov et al. [2013] Chernozhukov, V., D. Chetverikov, and K. Kato (2013). Gaussian approximations and multiplier bootstrap for maxima of sums of high–dimensional random vectors. The Annals of Statistics 41(6), 2786–2819.
- Chernozhukov et al. [2015] Chernozhukov, V., D. Chetverikov, and K. Kato (2015). Comparison and anti-concentration bounds for maxima of Gaussian random vectors. Probability Theory and Related Fields 162(1), 47–70.
- Chernozhukov et al. [2017] Chernozhukov, V., D. Chetverikov, and K. Kato (2017). Central limit theorems and bootstrap in high dimensions. The Annals of Probability 45(4), 2309–2352.
- Crambes et al. [2009] Crambes, C., A. Kneip, and P. Sarda (2009). Smoothing splines estimators for functional linear regression. The Annals of Statistics 37(1), 35–72.
- Dette et al. [2020] Dette, H., K. Kokot, and S. Volgushev (2020). Testing relevant hypotheses in functional time series via self-normalization. Journal of the Royal Statistical Society: Series B: Statistical Methodology 82, 629–660.
- Dette and Tang [2021] Dette, H. and J. Tang (2021). Statistical inference for function-on-function linear regression. arXiv preprint arXiv:2109.13603.
- Fan and Li [2001] Fan, J. and R. Li (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96(456), 1348–1360.
- Fang [2016] Fang, X. (2016). A multivariate CLT for bounded decomposable random vectors with the best known rate. Journal of Theoretical Probability 29(4), 1510–1523.
- Fang and Röllin [2015] Fang, X. and A. Röllin (2015). Rates of convergence for multivariate normal approximation with applications to dense graphs and doubly indexed permutation statistics. Bernoulli 21(4), 2157–2189.
- García-Portugués et al. [2014] García-Portugués, E., W. González-Manteiga, and M. Febrero-Bande (2014). A goodness-of-fit test for the functional linear model with scalar response. Journal of Computational and Graphical Statistics 23(3), 761–778.
- Hall and Horowitz [2007] Hall, P. and J. L. Horowitz (2007). Methodology and convergence rates for functional linear regression. The Annals of Statistics 35(1), 70–91.
- Hall et al. [2006] Hall, P., H.-G. Müller, and J.-L. Wang (2006). Properties of principal component methods for functional and longitudinal data analysis. The Annals of Statistics 34(3), 1493–1517.
- Hilgert et al. [2013] Hilgert, N., A. Mas, and N. Verzelen (2013). Minimax adaptive tests for the functional linear model. The Annals of Statistics 41(2), 838–869.
- Hörmann and Kidziński [2015] Hörmann, S. and Ł. Kidziński (2015). A note on estimation in Hilbertian linear models. Scandinavian Journal of Statistics 42(1), 43–62.
- Hörmann and Kokoszka [2010] Hörmann, S. and P. Kokoszka (2010). Weakly dependent functional data. The Annals of Statistics 38(3), 1845–1884.
- Horváth and Kokoszka [2012a] Horváth, L. and P. Kokoszka (2012a). Inference for Functional Data with Applications. Springer Science & Business Media.
- Horváth and Kokoszka [2012b] Horváth, L. and P. Kokoszka (2012b). A test of significance in functional quadratic regression. In Inference for Functional Data with Applications, pp. 225–232. Springer.
- Horváth et al. [2014] Horváth, L., P. Kokoszka, and G. Rice (2014). Testing stationarity of functional time series. Journal of Econometrics 179(1), 66–82.
- Imaizumi and Kato [2019] Imaizumi, M. and K. Kato (2019). A simple method to construct confidence bands in functional linear regression. Statistica Sinica 29(4), 2055–2081.
- James et al. [2009] James, G. M., J. Wang, and J. Zhu (2009). Functional linear regression that’s interpretable. The Annals of Statistics 37(5A), 2083–2108.
- Kong et al. [2016] Kong, D., A.-M. Staicu, and A. Maity (2016). Classical testing in functional linear models. Journal of Nonparametric Statistics 28(4), 813–838.
- Lei [2014] Lei, J. (2014). Adaptive global testing for functional linear models. Journal of the American Statistical Association 109(506), 624–634.
- Li and Hsing [2007] Li, Y. and T. Hsing (2007). On rates of convergence in functional linear regression. Journal of Multivariate Analysis 98(9), 1782–1804.
- Li and Hsing [2010] Li, Y. and T. Hsing (2010). Uniform convergence rates for nonparametric regression and principal component analysis in functional/longitudinal data. The Annals of Statistics 38(6), 3321–3351.
- Liu and Lin [2009] Liu, W. and Z. Lin (2009). Strong approximation for a class of stationary processes. Stochastic Processes and their Applications 119(1), 249–280.
- Ma and Kosorok [2005] Ma, S. and M. R. Kosorok (2005). Robust semiparametric M-estimation and the weighted bootstrap. Journal of Multivariate Analysis 96, 190–217.
- McLean et al. [2015] McLean, M. W., G. Hooker, and D. Ruppert (2015). Restricted likelihood ratio tests for linearity in scalar-on-function regression. Statistics and Computing 25(5), 997–1008.
- Meyer [1992] Meyer, Y. (1992). Ondelettes et operateurs I: Ondelettes. Hermann, Paris.
- Morris [2015] Morris, J. S. (2015). Functional regression. Annual Review of Statistics and Its Application 2, 321–359.
- Newey [1997] Newey, W. K. (1997). Convergence rates and asymptotic normality for series estimators. Journal of Econometrics 79(1), 147–168.
- Panaretos and Tavakoli [2013] Panaretos, V. M. and S. Tavakoli (2013). Fourier analysis of stationary time series in function space. The Annals of Statistics 41(2), 568–603.
- Paparoditis [2018] Paparoditis, E. (2018). Sieve bootstrap for functional time series. The Annals of Statistics 46(6B), 3510–3538.
- Politis et al. [1999] Politis, D. N., J. P. Romano, and M. Wolf (1999). Subsampling. Springer Science & Business Media.
- Ramsay and Silverman [2005] Ramsay, J. and B. Silverman (2005). Functional Data Analysis (2nd ed.). New York: Springer.
- Reiss et al. [2017] Reiss, P. T., J. Goldsmith, H. L. Shang, and R. T. Ogden (2017). Methods for scalar-on-function regression. International Statistical Review 85(2), 228–249.
- Reiss and Ogden [2007] Reiss, P. T. and R. T. Ogden (2007). Functional principal component regression and functional partial least squares. Journal of the American Statistical Association 102(479), 984–996.
- Schmitt [1992] Schmitt, B. A. (1992). Perturbation bounds for matrix square roots and pythagorean sums. Linear Algebra and its Applications 174, 215–227.
- Shang [2014] Shang, H. L. (2014). A survey of functional principal component analysis. AStA Advances in Statistical Analysis 98(2), 121–142.
- Shang and Cheng [2015] Shang, Z. and G. Cheng (2015). Nonparametric inference in generalized functional linear models. The Annals of Statistics 43(3), 1742–1773.
- Spokoiny and Zhilova [2015] Spokoiny, V. and M. Zhilova (2015). Bootstrap confidence sets under model misspecification. The Annals of Statistics 43(6), 2653–2675.
- Su et al. [2017] Su, Y.-R., C.-Z. Di, and L. Hsu (2017). Hypothesis testing in functional linear models. Biometrics 73(2), 551–561.
- Trefethen [2008] Trefethen, L. N. (2008). Is Gauss quadrature better than Clenshaw–Curtis? SIAM Review 50(1), 67–87.
- Tropp [2012] Tropp, J. A. (2012). User-friendly tail bounds for sums of random matrices. Foundations of Computational Mathematics 12(4), 389–434.
- Wang and Xiang [2012] Wang, H. and S. Xiang (2012). On the convergence rates of Legendre approximation. Mathematics of Computation 81(278), 861–877.
- Wang et al. [2016] Wang, J.-L., J.-M. Chiou, and H.-G. Müller (2016). Functional data analysis. Annual Review of Statistics and Its Application 3, 257–295.
- Wu [1986] Wu, C.-F. J. (1986). Jackknife, bootstrap and other resampling methods in regression analysis. The Annals of Statistics 14(4), 1261–1295.
- Wu [2005] Wu, W. B. (2005). Nonlinear system theory: Another look at dependence. Proceedings of the National Academy of Sciences 102(40), 14150–14154.
- Wu and Zhao [2007] Wu, W. B. and Z. Zhao (2007). Inference of trends in time series. Journal of the Royal Statistical Society: Series B: Statistical Methodology 69(3), 391–410.
- Yao et al. [2005] Yao, F., H.-G. Müller, and J.-L. Wang (2005). Functional data analysis for sparse longitudinal data. Journal of the American Statistical Association 100(470), 577–590.
- Zhang and Chen [2007] Zhang, J.-T. and J. Chen (2007). Statistical inferences for functional data. The Annals of Statistics 35(3), 1052–1079.
- Zhang and Cheng [2018] Zhang, X. and G. Cheng (2018). Gaussian approximation for high dimensional vector under physical dependence. Bernoulli 24(4A), 2640–2675.
- Zhou [2013] Zhou, Z. (2013). Heteroscedasticity and autocorrelation robust structural change detection. Journal of the American Statistical Association 108(502), 726–740.
- Zhou and Dette [2020] Zhou, Z. and H. Dette (2020). Statistical inference for high dimensional panel functional time series. arXiv preprint arXiv:2003.05968.