Statistical Inference in High-dimensional Generalized Linear Models with Streaming Data
Abstract
In this paper, we develop an online statistical inference approach for real-time estimation and inference in high-dimensional generalized linear models (GLMs) with streaming data. We propose an online debiased lasso (ODL) method to accommodate the special structure of streaming data. ODL differs from the offline debiased lasso in two important aspects. First, in computing the estimate at the current stage, it uses only summary statistics of the historical data. Second, in addition to debiasing an online lasso estimator, ODL corrects an approximation error term arising from nonlinear online updating with streaming data. We show that the proposed online debiased estimators for GLMs are consistent and asymptotically normal. This result provides a theoretical basis for carrying out real-time interim statistical inference with streaming data. Extensive numerical experiments are conducted to evaluate the performance of the proposed ODL method; they demonstrate the effectiveness of our algorithm and support the theoretical results. A streaming dataset from the National Automotive Sampling System-Crashworthiness Data System is analyzed to illustrate the application of the proposed method.
Keywords: Adaptive tuning, confidence interval, high-dimensional data, online debiased lasso, online error correction.
1 Introduction
Due to the explosive growth of data from non-traditional sources such as sensor networks, security logs, and web applications, streaming data is becoming a core component of big data analyses. By streaming data, we refer to data that are generated continuously over time, typically in high volumes and at high velocity. They include a wide variety of data types, such as log files generated by mobile or web applications, e-commerce purchases, information from social networks, and financial trading floors. To reduce the demand on computing memory and achieve real-time processing, the nature of streaming data calls for the development of algorithms that require only one pass over the data. Furthermore, many data streams are high-dimensional in nature, such as genomic data for explaining the variability in the phenotypes of an organism given its genetic profile (Csala et al., 2017; Wang et al., 2017), and neuroimaging data for predicting neural activity patterns under various stimuli (Wen et al., 2018; Zhang et al., 2020). At an initial stage of data collection, it is routine to encounter a case where the number of predictors exceeds the number of observations. Moreover, even if the sample size grows along the data streams, the candidate set of predictors may contain a large proportion of redundant features. Therefore, to improve the interpretability of the results, we need to differentiate predictors in terms of their signal strength, and one of the classical approaches is the penalized regression method. In this paper, we focus on a high-dimensional regression setting in generalized linear models (GLMs) with non-Gaussian outcomes. Our goal is to develop a real-time estimation and inference procedure that is highly scalable with respect to fast-growing data volumes, with no loss of efficiency in statistical inference in the presence of a large number of features.
Streaming data processing essentially falls into the field of online learning. This line of research dates back to Robbins & Monro (1951), who proposed a stochastic approximation algorithm that laid the foundation for the popular stochastic gradient descent (SGD) algorithm (Sakrison, 1965). Since then, the SGD algorithm and its variants have been extensively studied for online estimation and prediction (Toulis & Airoldi, 2017), but the development of online statistical inference remains largely unexplored. A recent paper by Fang (2019) proposed a perturbation-based resampling method to construct confidence intervals for SGD, but it does not achieve desirable statistical efficiency and may produce misleading inference in high-dimensional settings. In addition to SGD-type recursive algorithms, several online updating methods have been proposed specifically for sequential updating of regression coefficient estimators, including the online least squares estimator (OLSE) for the linear model, the cumulative estimating equation (CEE) estimator, the cumulatively updated estimating equation (CUEE) estimator by Schifano et al. (2016), and the renewable estimator by Luo & Song (2020) for nonlinear models.
Most of the aforementioned online algorithms are developed under low-dimensional settings where the number of features is far less than the total sample size. However, as pointed out above, a prominent concern in high-dimensional streaming data analysis is that only a subset of the variables has nonzero coefficients. Besides the small sample size issue at the early stage of data collection, processing such data streams without properly accounting for sparsity in the feature set may introduce substantial bias and invalidate statistical inference. It is worth noting that even if the cumulative sample size exceeds the number of features as time goes by, traditional estimation methods designed for low-dimensional settings, such as maximum likelihood estimation (MLE), may still incur large bias, especially in GLMs (Sur & Candès, 2019). Therefore, current state-of-the-art online learning algorithms in low-dimensional settings may be insufficient for processing high-dimensional data streams.
In traditional offline settings, many methods have been developed for analyzing high-dimensional static data. Most of the work on variable selection in high-dimensional regression follows the line of the lasso (Tibshirani, 1996), the smoothly clipped absolute deviation (SCAD) penalty (Fan & Li, 2001), the adaptive lasso (Zou, 2006), and the minimax concave penalty (MCP) (Zhang, 2010). However, variable selection methods focus on point estimation without quantifying the uncertainty in the estimates. Statistical inference problems in high-dimensional settings, including interval estimation and hypothesis testing, have since attracted much attention, starting with the pioneering works of Zhang & Zhang (2014), van de Geer et al. (2014), Javanmard & Montanari (2014), and Belloni et al. (2015), among others. Other important methods on inference for high-dimensional linear models include Cai & Guo (2017, 2020) and Belloni et al. (2019). van de Geer et al. (2014) extended the de-sparsified lasso to high-dimensional GLMs. Ning & Liu (2017) proposed to construct confidence intervals for high-dimensional M-estimators based on a decorrelated score statistic. Recently, a novel splitting and smoothing inference approach for high-dimensional GLMs was proposed by Fei & Li (2021).
While significant progress has been made on statistical inference for high-dimensional regression under the traditional offline setting, variable selection and statistical inference for high-dimensional models with streaming data are still in their infancy. Sun et al. (2020) introduced a systematic framework for online variable selection based on popular offline methods such as MCP and the elastic net (Zou & Hastie, 2005), but their focus is not on statistical inference. In contrast, several existing methods do consider the inference problem. For example, Deshpande et al. (2019) proposed a class of online estimators for high-dimensional auto-regressive models. One of the most relevant works is a novel inference procedure in GLMs based on recursive online-score estimation (Shi et al., 2020). In both works, however, the entire dataset is assumed to be available at an initial stage for computing an initial estimator, e.g., the lasso estimator; thereafter, a recursive forward bias-correction procedure is carried out along sequentially arriving data points. The availability of the entire dataset at an initial stage is not a natural setup in online learning. To address this issue, Han et al. (2021) proposed an online debiased lasso method for statistical inference in high-dimensional linear models with streaming data.
Unlike high-dimensional linear models, where the loss function depends on the data only through sufficient statistics (Han et al., 2021), parameters and data are not linearly separable in GLMs. Motivated by the renewable estimation method in low-dimensional GLMs (Luo & Song, 2020), we start by taking a first-order Taylor expansion of the loss function to bypass the need for historical individual-level data. The key idea centers around using “approximate summary statistics” resulting from Taylor expansions. However, this is not a trivial extension of the methods developed under low-dimensional settings. In high-dimensional settings where predictors are spuriously correlated, a data-splitting strategy is typically used for decorrelation, where variable selection and estimation are conducted on two different sub-datasets (Ning & Liu, 2017; Shi et al., 2020; Fei & Li, 2021). A prominent concern with using approximate summary statistics that involve previous estimates is that they may induce dependency in the corresponding estimating equation. Theoretically, the dependency among recursively updated estimators poses extra technical challenges in establishing non-asymptotic error bounds. In our proposed online method for real-time confidence interval construction, we aim to address the following questions: (i) what types of approximate summary statistics should be stored to carry out an online debiasing procedure? (ii) will the error accumulate along the updating steps if we use the approximate summary statistics? (iii) will the online debiasing procedure maintain oracle properties similar to its offline counterpart? and (iv) how can the tuning parameter be chosen adaptively in an online setting where cross-validation, which relies on splitting the entire dataset, is not feasible?
The focus of this paper is to develop an online debiased lasso (ODL) estimator in high-dimensional generalized linear models with streaming datasets for real-time estimation and inference. Our new contributions include: (i) we propose a two-stage online estimation and debiasing framework that aligns with the streaming data collection scheme; (ii) ODL accounts for sparsity in the candidate set of predictors and provides valid statistical inference results; and (iii) the ODL estimators for GLMs are shown to be consistent and asymptotically normal, which provides a theoretical basis for carrying out real-time interim statistical inference with streaming data. ODL is inspired by the offline debiased lasso method (Zhang & Zhang, 2014; van de Geer et al., 2014; Javanmard & Montanari, 2014); however, it differs from the offline debiased lasso in two important aspects. First, in computing the estimate at the current stage, it uses only summary statistics of the historical data. Second, in addition to debiasing an online lasso estimator, ODL corrects an approximation error term arising from online updating with streaming data. This correction is crucial for guaranteeing the asymptotic normality of the ODL estimator.
This paper is organized as follows. Section 2 introduces the model formulation, followed by our proposed online two-stage debiasing method for processing high-dimensional streaming data. Section 3 establishes the large-sample properties that provide theoretical guarantees for the proposed method. Simulation experiments are presented in Section 4 to evaluate the performance of our method in comparison with both the MLE and the offline debiased estimator. We apply the proposed ODL method to a real data example in Section 5. Finally, we make some concluding remarks in Section 6. All technical proofs are deferred to the Appendix.
1.1 Notation
For a matrix $A$, we let $A_{i,\cdot}$, $A_{\cdot,j}$ and $A_{i,j}$ denote the $i$-th row, the $j$-th column and the $(i,j)$-th element of $A$, respectively; $A_{j,-j}$ is the sub-vector of the $j$-th row with the $j$-th element deleted, and $A_{-j,-j}$ is the sub-matrix of $A$ with the $j$-th row and the $j$-th column deleted, while other elements remain unchanged. For a vector $v$, we define its $\ell_q$-norm as $\|v\|_q=(\sum_j |v_j|^q)^{1/q}$. For a sequence of random variables $\{X_n\}$ and a corresponding sequence of constants $\{a_n\}$, we say that $X_n=O_p(a_n)$ if, for any $\varepsilon>0$, there exist two finite numbers $M$ and $N$ such that $P(|X_n/a_n|>M)<\varepsilon$ for any $n>N$; generally speaking, $O_p(a_n)$ denotes stochastic boundedness. $X_n=o_p(a_n)$ means that $X_n/a_n$ converges to zero in probability. In view of the streaming data setting, we use $X_b$ and $y_b$ to stand for the explanatory variables and the response variables arriving in the $b$-th batch, respectively. In addition, $X_b^*$ and $y_b^*$ (with the star index) are the cumulative versions of $X_b$ and $y_b$; for example, $X_b^*=(X_1^\top,\ldots,X_b^\top)^\top$.
2 Methodology
In this section, we describe the proposed estimation method with streaming data, including the online lasso estimation and online debiased lasso estimation. We also provide an adaptive tuning method to select the regularization parameter. A rundown of our algorithm is summarized at the end of this section.
Consider that, up to a time point , a total of samples have arrived in a sequence of data batches, denoted by , each containing samples, . Assume each observation is drawn independently from an exponential dispersion model with density function
where is the unit deviance function involving the mean parameter , and is a normalizing factor depending only on the dispersion parameter . The systematic component of a GLM takes the form , where is a known link function. The underlying regression coefficient is of primary interest and is assumed to be sparse with nonzero elements. Specifically, we let denote the active set of variables, with cardinality . Our main goal is to conduct pointwise statistical inference for the components of the parameter vector upon the arrival of every new data batch . The log-likelihood function for the cumulative dataset is
Based on , the standard offline lasso estimator is defined as
(1)
where is the cumulative sample size and is the regularization parameter. However, as discussed in Luo & Song (2020) and Han et al. (2021), the optimization in (1) requires access to the historical raw data, which is not retained in the streaming setting, so the standard lasso estimator cannot be computed directly. Therefore, new online methods for computing the lasso estimator and the debiased lasso, tailored for streaming data, are needed; the latter serves the purpose of statistical inference. For clarity, we refer to the lasso estimator in (1) as the offline lasso estimator. Sections 2.1 and 2.2 are devoted to the construction of the online lasso estimator and the online debiasing method, respectively.
2.1 Online lasso estimator
We first consider the online lasso estimator through the gradient descent method. Define the score function . The negative gradient of the first term in (1) is
(2)
Let be the score function pertaining to the data batch . Then in (2) can be rewritten as a linear aggregation form:
To pursue an online estimator upon the arrival of , a key step is to update to without re-accessing the whole historical raw data .
To illustrate the idea, we consider a simple case with two data batches, i.e., arrives subsequent to . The lasso estimator based on the first batch data is denoted by , which is the offline estimator according to (1). To avoid using individual-level raw data in , we approximate through a first-order Taylor expansion at . Note that satisfies , which implies
where denotes the information matrix. Next, we propose to approximate by
(3)
Clearly, calculating through (3) only requires access to the summary statistics rather than the full dataset .
Now we are ready to generalize the above approximation to an arbitrary data batch . Let denote the aggregated information matrix. The approximation procedure in (3) becomes
The aggregated gradient depends only on . Hence, can be computed through the following algorithm.
• Step 1: update through
where is the learning rate in the gradient descent;
• Step 2: for , apply the soft-thresholding operator to the -th component of obtained in Step 1:
Note that is the regularization parameter chosen adaptively for step ; more details on the adaptive tuning procedure are presented in Section 2.3.
The above two steps are carried out iteratively until convergence, yielding the online lasso estimator . In our implementation, the stopping criterion is . In summary, our proposed online estimator can be defined as
(4)
In contrast to the standard offline lasso estimator in (1), our proposed online estimator in (4) depends on the data only through the summary statistics instead of .
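To make the two-step updating concrete, the following sketch implements one online lasso update for a logistic model using only the summary statistics discussed above: the previous lasso estimate, an aggregated score vector, and an aggregated information matrix. The function names, the learning rate, the stopping rule, and the particular way the historical score is carried forward are illustrative assumptions and not necessarily the exact form of Algorithm 1.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def soft_threshold(v, t):
    # Component-wise soft-thresholding operator used in Step 2.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def online_lasso_update(beta_prev, u_hist, J_hist, N_hist, X_new, y_new,
                        lam, lr=0.005, max_iter=500, tol=1e-6):
    """One online lasso update for a logistic model using only summary statistics.

    beta_prev : lasso estimate from the previous stage
    u_hist    : aggregated score of the historical batches (summary statistic)
    J_hist    : aggregated information matrix of the historical batches
    N_hist    : cumulative historical sample size
    Raw historical data are never accessed.
    """
    n_new = X_new.shape[0]
    N = N_hist + n_new
    beta = beta_prev.copy()
    for _ in range(max_iter):
        mu = sigmoid(X_new @ beta)
        u_new = X_new.T @ (y_new - mu)                        # exact score of the new batch
        u_hist_approx = u_hist - J_hist @ (beta - beta_prev)  # first-order Taylor approximation
        grad = -(u_hist_approx + u_new) / N                   # gradient of the averaged negative log-likelihood
        beta_next = soft_threshold(beta - lr * grad, lr * lam)  # Step 1 (gradient) + Step 2 (soft-threshold)
        if np.max(np.abs(beta_next - beta)) < tol:
            beta = beta_next
            break
        beta = beta_next
    # Refresh the stored summary statistics with the new batch, evaluated at the new estimate.
    mu = sigmoid(X_new @ beta)
    w = mu * (1.0 - mu)
    u_hist_new = u_hist - J_hist @ (beta - beta_prev) + X_new.T @ (y_new - mu)
    J_hist_new = J_hist + (X_new * w[:, None]).T @ X_new
    return beta, u_hist_new, J_hist_new, N
```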
2.2 Online debiased lasso
We now proceed to study the online statistical inference and construct confidence intervals for the -th component of the regression parameter vector, . We first define a low-dimensional projection
(5)
where and are the same as in (4). Let be the diagonal matrix with diagonal elements and let be the weighted design matrix . Then (5) can be recast as
(6)
With this view, can be computed in a similar fashion to the online lasso estimator. Specifically, (6) has the same form as (1) if we choose the deviance function to be the squared error. It is therefore straightforward to obtain by following the online procedure in Section 2.1. In this case, the required summary statistic is , which has already been stored in . As a result, we obtain a solution of (5). After obtaining in (5), we compute
Denote , whose -th element is . Then, upon the arrival of the batch data , we define our proposed online debiased lasso estimator as
(7)
It is worth noting that the debiased lasso estimator involves the initial lasso estimator defined in (4), as well as two additional terms: a debiasing term and an online error correction term. van de Geer et al. (2014) studied the offline version of the debiased lasso for GLMs, and our debiasing term can be viewed as an online generalization of their offline counterpart. However, the two are rather different in the sense that the debiasing term alone does not suffice to ensure asymptotic normality here. Because is used to approximate the information matrix , the approximation error accumulates even though each is consistent for . The correction term in (2.2) is used to eliminate the approximation error arising from the online update.
Meanwhile, the proposed debiased lasso estimator with the online error correction term aligns with the online learning framework, as (2.2) only requires the following summary statistics rather than the entire dataset :
(8)
which keep the same size as new data arrive and can be easily updated.
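The sketch below illustrates how the low-dimensional projection in (5)-(6) and the debiasing term can be computed from stored summary statistics alone, by coordinate descent on the aggregated information matrix. For brevity it omits the online error-correction term in (7), and the plug-in standard error is a standard offline-style choice, so it should be read as an approximate sketch rather than the exact ODL formula.

```python
import numpy as np
from scipy.stats import norm

def nodewise_from_info(J, j, lam, N, max_iter=1000, tol=1e-8):
    # Low-dimensional projection (5)-(6): lasso regression of the j-th weighted column
    # on the others, solved by coordinate descent using only the aggregated
    # information matrix J (a stored summary statistic).
    idx = np.delete(np.arange(J.shape[0]), j)
    A = J[np.ix_(idx, idx)] / N
    b = J[idx, j] / N
    gamma = np.zeros(len(idx))
    for _ in range(max_iter):
        gamma_old = gamma.copy()
        for k in range(len(idx)):
            r_k = b[k] - A[k] @ gamma + A[k, k] * gamma[k]
            gamma[k] = np.sign(r_k) * max(abs(r_k) - lam, 0.0) / A[k, k]
        if np.max(np.abs(gamma - gamma_old)) < tol:
            break
    tau2 = J[j, j] / N - (J[idx, j] / N) @ gamma + lam * np.sum(np.abs(gamma))
    return gamma, tau2

def debiased_coordinate(beta_lasso, u_agg, J_agg, N, j, lam_j, alpha=0.05):
    # Debias the j-th lasso coordinate from stored summaries: u_agg is the aggregated
    # score and J_agg the aggregated information matrix.  The additional online
    # error-correction term in (7) is omitted in this simplified sketch.
    p = len(beta_lasso)
    gamma, tau2 = nodewise_from_info(J_agg, j, lam_j, N)
    theta_j = np.zeros(p)
    theta_j[j] = 1.0
    theta_j[np.delete(np.arange(p), j)] = -gamma
    theta_j /= tau2
    b_j = beta_lasso[j] + theta_j @ u_agg / N                 # debiasing term
    se = float(np.sqrt(theta_j @ (J_agg / N) @ theta_j / N))  # plug-in standard error
    z = norm.ppf(1.0 - alpha / 2.0)
    return b_j, se, (b_j - z * se, b_j + z * se)
```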
2.3 Adaptive tuning
In an offline setting, the regularization parameter is typically determined by cross-validation, where the entire dataset is split into training and test sets multiple times. Since the full dataset is not accessible in an online setting, such a procedure is not feasible. To align with the nature of streaming datasets, we use the “rolling-original-recalibration” procedure with the mean squared prediction error (MSE) as the cross-validation criterion (Han et al., 2021). Specifically, at time point , the cumulative dataset up to the previous time point serves as the training set, while the new data batch is the test set. It is worth noting that, instead of re-accessing the raw data , we plug in to evaluate the prediction error for a sequence of values in a candidate set :
(10)
and choose such that . The initial is selected by the classical offline cross-validation.
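As a concrete illustration of this rolling-original-recalibration step, the sketch below scores each candidate tuning parameter on the newly arrived batch using the lasso estimate previously fitted at that value; the dictionary-based bookkeeping and the squared-error criterion on the mean scale are assumptions made for illustration.

```python
import numpy as np

def select_lambda(beta_by_lambda, X_new, y_new):
    # Rolling-original-recalibration: each candidate lambda keeps its own lasso
    # estimate fitted on the cumulative data up to the previous batch; the newly
    # arrived batch serves as the test set.
    best_lam, best_err = None, np.inf
    for lam, beta_prev in beta_by_lambda.items():
        mu_pred = 1.0 / (1.0 + np.exp(-X_new @ beta_prev))   # predicted mean under the logistic link
        err = np.mean((y_new - mu_pred) ** 2)                # mean squared prediction error
        if err < best_err:
            best_lam, best_err = lam, err
    return best_lam, best_err
```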
2.4 Algorithm
We summarize the results of Sections 2.1-2.3 in Figure 1 and Algorithm 1. The procedure consists of two main blocks: online lasso estimation and online low-dimensional projection. Outputs from both blocks are used to compute the online debiased lasso estimator and to construct confidence intervals in real time. In particular, when a new data batch arrives, it is first sent to the online lasso estimation block, where the summary statistics are used to compute . We then use gradient descent to update the lasso estimator to at a sequence of tuning parameter values without retrieving the whole dataset. At the same time, treating the cumulative dataset that produced the old lasso estimate as the training set and the newly arrived batch as the test set, we choose the tuning parameter that gives the smallest prediction error. The selected and the relevant sub-matrices of are then passed to the low-dimensional projection block for the calculation of . The resulting projection and from the low-dimensional projection block, together with the lasso estimator , are used to compute the debiased lasso estimator and its estimated standard error .
Figure 1: Flowchart of the proposed procedure, with an online lasso estimation block and an online low-dimensional projection block whose outputs are combined to form the online debiased lasso estimator and its confidence intervals.
Remark 1.
Algorithm 1 computes the estimators in chronological order, namely . Since involves , there is dependency among the estimators; for instance, both and depend on the previous lasso estimators .
Remark 2.
When is large, it may be challenging to implement Algorithm 1, since the space complexity of storing the aggregated information matrix is . To reduce memory usage, we can compute the eigenvalue decomposition (EVD) , where is the column-orthogonal matrix of eigenvectors and is the diagonal matrix whose diagonal elements are the eigenvalues of . We only need to store and . Since rank , we can use an incremental EVD approach (Cardot & Degras, 2018) to update and , and the space complexity then reduces to . The space complexity can be further reduced by setting a threshold, for example, retaining only the principal components that explain most of the variation in the predictors. However, incremental EVD may increase the computational cost since it requires additional computation at each update. Indeed, there is a trade-off between space complexity and computational complexity. How to balance this trade-off is an important computational issue and deserves careful analysis, but it is beyond the scope of this study.
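A minimal sketch of such an incremental EVD update is given below. It maintains a square-root factor of the aggregated information matrix and refreshes the decomposition with a thin SVD after each batch; this is one simple variant, not necessarily the specific algorithm of Cardot & Degras (2018).

```python
import numpy as np

def update_evd(V, d, X_new, w_new, r_max=None):
    # Incremental eigen-decomposition of the aggregated information matrix
    # J = V diag(d) V'.  The new batch contributes B'B with B = diag(sqrt(w)) X_new.
    B = X_new * np.sqrt(w_new)[:, None]
    F = np.hstack([V * np.sqrt(d), B.T])        # square-root factor: F F' = J_new
    U, s, _ = np.linalg.svd(F, full_matrices=False)
    d_new = s ** 2
    if r_max is not None:                       # optional truncation to cap memory
        U, d_new = U[:, :r_max], d_new[:r_max]
    return U, d_new
```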
3 Asymptotic properties
In this section, we state our main theoretical results: the consistency of the lasso estimators and defined in (4) and (5), respectively, as well as the asymptotic normality of the online debiased estimator .
Before stating the main results, some notation is needed. Recall that is the true coefficient vector. Consider a random design matrix with i.i.d. rows. Let denote the population version of the information matrix in the GLM, i.e., , and let be its inverse. Then, the ground truth of given in (5) is defined as
Recall that and . In addition, and for . The following assumptions are needed to establish the consistency of the lasso estimators and defined in (4) and (5), respectively.
Assumption 1.
Suppose that
(A1) The pairs of random variables are i.i.d. The covariates are bounded by some finite constant , i.e.,
(A2) and where is the sub-vector of with -th element deleted.
(A3) The derivative exists. For some -neighborhood , is locally Lipschitz with constant , that is,
In addition, for all , and
(A4) The smallest eigenvalue of is bounded away from zero.
The above assumptions are regularity conditions commonly used in studying the properties of the lasso (van de Geer, 2008; van de Geer et al., 2014). (A1) assumes that the streaming data are homogeneous and that the covariates are sampled from a bounded distribution. (A2) is imposed to rule out some extreme cases. (A3) requires a local Lipschitz property of the derivative of the mean function around the true value; it is easily verified that logistic regression, a popular special case of GLMs, satisfies this condition. (A4) is needed to ensure that the compatibility condition holds.
Theorem 1.
Remark 3.
Theorem 1 provides upper bounds on the estimation error and the prediction error of the online lasso estimator . The constants and possibly depend on the batch step . Recall that the lasso estimator depends on , so the estimation error in the previous step is carried over to the updated estimators. As a result, it is inevitable that some constants in the oracle inequality depend on ; nonetheless, they are well under control as long as
The next corollary shows the consistency of the proposed online lasso estimator in (4).
Corollary 1.
Similarly, we present the oracle inequality for in the next theorem.
Theorem 2.
Combining the results in Theorem 1 and Theorem 2, we are ready to establish the asymptotic normality of the ODL estimator.
Theorem 3.
According to Theorem 3, the asymptotic expression of is a sum of and , where converges in distribution to a normal distribution by the martingale central limit theorem and diminishes as goes to infinity.
Remark 4.
Theorem 3 implies that the total data size could be as small as the logarithm of the dimensionality , which is a common condition for the offline debiased lasso in the literature (Zhang & Zhang, 2014; van de Geer et al., 2014). However, the setting of the offline debiased lasso is quite different from the online framework considered in our paper. Due to the lack of access to the whole dataset, it is considerably more difficult to derive the asymptotic properties of the online debiased lasso. One major difficulty is that we approximate the information matrix by , which involves estimators obtained in previous steps; as a result, there is dependence among . Another difficulty is to handle the approximation error that accumulates in the online updating, especially under high-dimensional settings. In contrast, the classical offline lasso does not face these two problems. Even for the online debiased lasso in the linear model (Han et al., 2021), the above two problems can be bypassed by exploiting the special structure of least squares in the linear model.
4 Simulation studies
4.1 Setup
In this section, we conduct simulation studies to examine the finite-sample performance of the proposed ODL procedure in high-dimensional GLMs. We randomly generate a total of samples arriving in a sequence of data batches, denoted by , from the logistic regression model. Specifically,
where and is a -dimensional sparse parameter vector. Recall that is the number of nonzero components of . We set half of the nonzero coefficients to be (relatively strong signals) and the other half to be (weak signals). We consider the following settings: (i) , , for , and ; (ii) , , for , and . Under each setting, two types of are considered: (a) ; (b) . We set the step size in gradient descent to .
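A data-generating sketch for such a streaming logistic design is shown below; all numeric defaults (number of batches, batch size, dimension, sparsity, signal strengths, and the AR(1)-type covariate correlation) are placeholders rather than the exact settings used in our simulations.

```python
import numpy as np

def generate_stream(n_batches=12, batch_size=100, p=500, s=10, rho=0.5,
                    strong=1.0, weak=0.01, seed=0):
    # Streaming logistic-regression batches with a sparse coefficient vector:
    # half of the s nonzero coefficients are "strong" and half are "weak".
    rng = np.random.default_rng(seed)
    beta = np.zeros(p)
    beta[: s // 2] = strong
    beta[s // 2: s] = weak
    # AR(1)-type correlation among covariates (an illustrative design choice).
    cov = rho ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    L = np.linalg.cholesky(cov)
    for _ in range(n_batches):
        X = rng.standard_normal((batch_size, p)) @ L.T
        prob = 1.0 / (1.0 + np.exp(-X @ beta))
        y = rng.binomial(1, prob)
        yield X, y
```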
The objective is to conduct both estimation and inference along the arrival of a sequence of data batches. The evaluation criteria include: averaged absolute bias in estimating (A.bias); averaged estimated standard error (ASE); empirical standard error (ESE); coverage probability (CP) of the 95% confidence intervals; and averaged length of the 95% confidence intervals (ACL). These metrics are evaluated separately for three groups: (i) , (ii) , and (iii) . Comparisons are made among (i) the maximum likelihood estimator obtained by fitting the conventional GLM at the terminal point where , (ii) the offline debiased -penalized estimator at the terminal point, which serves as the benchmark, and (iii) our proposed ODL estimator at several intermediate points from . We include the two offline methods in the comparison using the R packages hdi (Dezeure et al., 2015) and glm, respectively. The results are reported in Tables 1-4.
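For reference, these evaluation metrics can be computed from simulation replicates as in the sketch below, assuming normal-quantile confidence intervals; the function and argument names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def summarize(estimates, std_errors, beta_true, level=0.95):
    # estimates, std_errors: arrays of shape (n_replicates, p); beta_true: length-p vector.
    # Returns A.bias, ASE, ESE, CP and ACL for each coefficient, based on
    # normal-quantile confidence intervals.
    z = norm.ppf(0.5 + level / 2.0)
    abias = np.mean(np.abs(estimates - beta_true), axis=0)   # averaged absolute bias
    ase = np.mean(std_errors, axis=0)                        # averaged estimated standard error
    ese = np.std(estimates, axis=0, ddof=1)                  # empirical standard error
    lower = estimates - z * std_errors
    upper = estimates + z * std_errors
    cp = np.mean((lower <= beta_true) & (beta_true <= upper), axis=0)  # coverage probability
    acl = np.mean(upper - lower, axis=0)                     # averaged confidence-interval length
    return abias, ase, ese, cp, acl
```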
4.2 Bias and coverage probability
It can be seen from Tables 1-4 that the estimation bias of the ODL estimator decreases rapidly as the number of data batches increases from to . Both the estimated standard errors and the averaged lengths of the 95% confidence intervals exhibit a similar decreasing trend over time, and almost coincide with those of the offline benchmark method at the terminal points. Moreover, ODL enjoys great computational efficiency: it is almost 16 times faster than its offline counterpart.
It is worth noting that, even though the cumulative sample size at the terminal point is only slightly larger than , we can still fit the conventional GLM to obtain the MLE. However, the MLE fails to provide reliable coverage probabilities due to severely large biases and estimated standard errors. In particular, the estimation bias of the MLE becomes 50 times that of the offline or online debiased lasso when , as shown in Tables 1 and 2, and it further increases to 80 times that of the debiased estimators when . Furthermore, as clearly indicated by the large empirical standard errors, the MLE under this setting suffers from severe instability. Such invalid estimation and inference results from the MLE further demonstrate the advantage of our proposed online debiased method under the high-dimensional sparse logistic regression setting with streaming datasets.
MLE | offline | ODL | |||||||
data batch index | 2 | 4 | 6 | 8 | 10 | 12 | |||
0.05 | 0.05 | 0.05 | 0.05 | ||||||
A.bias | 0 | 2.290 | 0.035 | 0.080 | 0.056 | 0.049 | 0.038 | 0.036 | 0.033 |
0.01 | 1.150 | 0.038 | 0.131 | 0.036 | 0.035 | 0.033 | 0.020 | 0.034 | |
1 | 17.87 | 0.057 | 0.176 | 0.125 | 0.125 | 0.125 | 0.120 | 0.109 | |
ASE | 0 | 0.595 | 1.336 | 0.949 | 0.777 | 0.679 | 0.609 | 0.559 | |
0.01 | 0.591 | 1.323 | 0.939 | 0.771 | 0.674 | 0.604 | 0.555 | ||
1 | 0.598 | 1.341 | 0.950 | 0.778 | 0.680 | 0.609 | 0.561 | ||
ESE | 0 | 34.90 | 0.591 | 1.367 | 0.962 | 0.786 | 0.681 | 0.611 | 0.559 |
0.01 | 36.25 | 0.581 | 1.341 | 0.996 | 0.792 | 0.686 | 0.597 | 0.553 | |
1 | 33.49 | 0.585 | 1.327 | 0.903 | 0.755 | 0.664 | 0.593 | 0.548 | |
CP | 0 | 1.000 | 0.953 | 0.951 | 0.949 | 0.948 | 0.949 | 0.950 | 0.950 |
0.01 | 1.000 | 0.960 | 0.943 | 0.938 | 0.943 | 0.947 | 0.960 | 0.953 | |
1 | 1.000 | 0.965 | 0.955 | 0.957 | 0.962 | 0.953 | 0.942 | 0.953 | |
ACL | 0 | 2.487 | 5.239 | 3.721 | 3.047 | 2.663 | 2.385 | 2.190 | |
0.01 | 2.478 | 5.187 | 3.681 | 3.022 | 2.641 | 2.366 | 2.176 | ||
1 | 2.479 | 5.258 | 3.725 | 3.051 | 2.666 | 2.387 | 2.197 | ||
C.Time (s) | 0.03 | 29.97 | 2.026 |
MLE | offline | ODL | |||||||
data batch index | 2 | 4 | 6 | 8 | 10 | 12 | |||
0.05 | 0.05 | 0.05 | 0.05 | ||||||
A.bias | 0 | 2.467 | 0.053 | 0.119 | 0.093 | 0.084 | 0.078 | 0.073 | 0.068 |
0.01 | 1.546 | 0.048 | 0.138 | 0.070 | 0.093 | 0.068 | 0.065 | 0.064 | |
1 | 17.54 | 0.026 | 0.122 | 0.122 | 0.113 | 0.087 | 0.095 | 0.084 | |
ASE | 0 | 0.674 | 1.341 | 0.955 | 0.781 | 0.680 | 0.614 | 0.567 | |
0.01 | 0.673 | 1.332 | 0.948 | 0.780 | 0.679 | 0.612 | 0.565 | ||
1 | 0.667 | 1.344 | 0.962 | 0.783 | 0.682 | 0.616 | 0.567 | ||
ESE | 0 | 45.93 | 0.671 | 1.358 | 0.953 | 0.781 | 0.679 | 0.608 | 0.556 |
0.01 | 45.47 | 0.667 | 1.387 | 0.935 | 0.765 | 0.644 | 0.595 | 0.557 | |
1 | 45.87 | 0.657 | 1.334 | 0.940 | 0.774 | 0.659 | 0.592 | 0.544 | |
CP | 0 | 1.000 | 0.951 | 0.951 | 0.951 | 0.948 | 0.947 | 0.947 | 0.949 |
0.01 | 1.000 | 0.950 | 0.945 | 0.960 | 0.953 | 0.958 | 0.950 | 0.947 | |
1 | 1.000 | 0.963 | 0.953 | 0.960 | 0.945 | 0.947 | 0.960 | 0.955 | |
ACL | 0 | 2.643 | 5.258 | 3.745 | 3.063 | 2.667 | 2.406 | 2.222 | |
0.01 | 2.640 | 5.222 | 3.717 | 3.057 | 2.661 | 2.398 | 2.214 | ||
1 | 2.614 | 5.267 | 3.773 | 3.068 | 2.672 | 2.414 | 2.222 | ||
C.Time (s) | 0.03 | 29.02 | 2.166 |
MLE | offline | ODL | |||||||
data batch index | 2 | 4 | 6 | 8 | 10 | 12 | |||
0.05 | 0.01 | 0.01 | 0.01 | ||||||
A.bias | 0 | 1.101 | 0.014 | 0.034 | 0.024 | 0.020 | 0.018 | 0.015 | 0.014 |
0.01 | 1.553 | 0.011 | 0.038 | 0.014 | 0.012 | 0.009 | 0.009 | 0.010 | |
1 | 11.99 | 0.069 | 0.168 | 0.152 | 0.152 | 0.141 | 0.131 | 0.121 | |
ASE | 0 | 0.259 | 0.573 | 0.412 | 0.341 | 0.297 | 0.268 | 0.245 | |
0.01 | 0.259 | 0.572 | 0.411 | 0.340 | 0.297 | 0.268 | 0.246 | ||
1 | 0.261 | 0.573 | 0.411 | 0.341 | 0.297 | 0.268 | 0.246 | ||
ESE | 0 | 19.94 | 0.254 | 0.578 | 0.041 | 0.034 | 0.030 | 0.028 | 0.025 |
0.01 | 19.72 | 0.257 | 0.572 | 0.397 | 0.325 | 0.287 | 0.264 | 0.251 | |
1 | 20.37 | 0.262 | 0.579 | 0.407 | 0.334 | 0.297 | 0.262 | 0.244 | |
CP | 0 | 1.000 | 0.955 | 0.947 | 0.947 | 0.949 | 0.948 | 0.950 | 0.950 |
0.01 | 1.000 | 0.953 | 0.945 | 0.959 | 0.954 | 0.950 | 0.956 | 0.947 | |
1 | 1.000 | 0.943 | 0.933 | 0.935 | 0.933 | 0.925 | 0.927 | 0.928 | |
ACL | 0 | 1.016 | 2.246 | 1.613 | 1.335 | 1.163 | 1.050 | 0.962 | |
0.01 | 1.016 | 2.241 | 1.610 | 1.334 | 1.164 | 1.050 | 0.962 | ||
1 | 1.022 | 2.246 | 1.613 | 1.335 | 1.164 | 1.050 | 0.963 | ||
C.Time (min) | 0.05 | 117.5 | 7.08 |
MLE | offline | ODL | |||||||
data batch index | 2 | 4 | 6 | 8 | 10 | 12 | |||
0.05 | 0.05 | ||||||||
A.bias | 0 | 1.594 | 0.018 | 0.042 | 0.034 | 0.031 | 0.029 | 0.027 | 0.025 |
0.01 | 1.554 | 0.016 | 0.044 | 0.031 | 0.026 | 0.012 | 0.010 | 0.010 | |
1 | 11.85 | 0.077 | 0.192 | 0.168 | 0.159 | 0.147 | 0.137 | 0.134 | |
ASE | 0 | 0.307 | 0.576 | 0.416 | 0.347 | 0.304 | 0.274 | 0.251 | |
0.01 | 0.307 | 0.577 | 0.416 | 0.347 | 0.304 | 0.274 | 0.251 | ||
1 | 0.308 | 0.576 | 0.416 | 0.347 | 0.304 | 0.275 | 0.251 | ||
ESE | 0 | 27.18 | 0.300 | 0.576 | 0.411 | 0.341 | 0.298 | 0.269 | 0.246 |
0.01 | 27.68 | 0.296 | 0.576 | 0.403 | 0.330 | 0.287 | 0.266 | 0.240 | |
1 | 27.49 | 0.291 | 0.559 | 0.395 | 0.326 | 0.295 | 0.257 | 0.240 | |
CP | 0 | 1.000 | 0.955 | 0.947 | 0.949 | 0.949 | 0.949 | 0.948 | 0.947 |
0.01 | 1.000 | 0.957 | 0.949 | 0.958 | 0.951 | 0.958 | 0.951 | 0.956 | |
1 | 1.000 | 0.951 | 0.946 | 0.939 | 0.929 | 0.925 | 0.931 | 0.926 | |
ACL | 0 | 1.202 | 2.259 | 1.631 | 1.361 | 1.193 | 1.076 | 0.982 | |
0.01 | 1.203 | 2.261 | 1.632 | 1.361 | 1.193 | 1.076 | 0.983 | ||
1 | 1.206 | 2.257 | 1.629 | 1.361 | 1.193 | 1.077 | 0.983 | ||
C.Time (min) | 0.06 | 112.3 | 6.99 |
5 Real data analysis
Graduated driver licensing (GDL) programs are designed to restrict the driving privileges of new drivers, typically including restrictions such as nighttime, expressway, and unsupervised driving. To monitor the effectiveness of such programs in reducing fatal teen crashes, we apply our proposed ODL procedure to analyze streaming data from the National Automotive Sampling System-Crashworthiness Data System (NASS CDS). Our primary interest is to evaluate the effectiveness of graduated driver licensing, a nationwide legislation that restricts novice drivers of age 21 or younger under various conditions of vehicle operation. In contrast, there are no restrictions on vehicle operation for older drivers (say, older than 65) in the current policy. To assess the effect of driver’s age on driving safety, we compared age groups with respect to the risk of a fatal crash when an accident happened. We first categorized the “Age” variable into three groups: “Age21”, representing the young group under restricted graduated driver licensing; “Age65”, for the older group with a regular full driver’s license; and those in between, treated as the reference group. Extent of “Injury” in a crash is the binary outcome of interest, with 1 for a moderate or severe injury and 0 for a minor or no injury. This outcome variable was created from the variable Maximum Known Occupant Ais (MAIS), which indicates the single most severe injury level reported for each occupant. Other potential risk factors were also included in the model, including seat belt use, alcohol, speed limit, vehicle weight, air bag deployment, and drug involvement in the accident. Some interaction terms were also included, such as driver’s age group with alcohol and age group with sex.
Streaming data were formed from monthly accident data over the 7-year period from January 2009 to December 2015, with data batches and a total of crashed vehicles with complete records on both the outcome and the predictors. The number of predictors is , including an intercept, 46 main effects, and 15 interaction terms. Note that the first batch is of size , and the smallest batch contains only 6 samples. We applied our proposed method to sequentially update the parameter estimates and standard errors of the regression coefficients in a penalized logistic regression model.
As shown in Figure 2, the 95% pointwise confidence bands over the 84 batches become narrower for the older group but not for the young one. This phenomenon clearly differentiates the significance levels of these two age groups with respect to the severity of injury. Starting at almost the same estimated risk, the trace for the young age group gradually moves up and stays around 0 over the 84-month period. The trace plot ends at , meaning that the young age group has a slightly lower adjusted odds of fatal crash than the middle age group. This finding is consistent with reported results in the literature that GDL is an effective policy for protecting novice drivers from fatal injuries (Chen et al., 2014). In contrast, the trace plot for the older age group shows an upward trend around year 2013 and stabilizes at a positive value of 0.51 from 2014 onward. This indicates that the adjusted odds of fatality in a vehicle crash for the older group become significantly higher than those of the middle age group as more data accumulate over time, which may suggest a need to modify the policy on restricted vehicle operation for older drivers.
Figure 3 shows the trends of the -values of the Wald test, constructed using the online debiased lasso estimator and its standard error and displayed on the base-10 logarithmic scale, for nine of the risk factors over the 84 months. The trajectory of each covariate effect estimate shows growing evidence against the null as data accumulate over time. The interaction term between “Age” and “Alcohol” turns out to have the strongest association with the odds of fatality in a crash among all covariates shown in this plot. This is an overwhelming confirmation of the finding that driving under the influence of alcohol is significantly associated with fatal crashes even in the presence of graduated driver licensing to protect young drivers. This also explains why the main effect of the young age group does not show a significant association with the odds of injury while the interaction term does. Moreover, spikes appear in the trajectories of most risk factors between 2010 and 2014, but they smooth out and stabilize thereafter; this might be related to small batch sizes or unbalanced categorical values in some monthly data batches. In addition, to characterize the overall significance level of each risk factor over the 84-month period, we propose to calculate a summary statistic as the area under the -value curve. These values are included in the brackets after the name of each covariate in Figure 3. They provide a clear summary of the overall significance level in the presence of those spikes, and most of them are well aligned with the ranking of the -values at the end of 2015.
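A minimal sketch of this summary statistic is given below, assuming the curve being integrated is the negative base-10 log p-value trajectory plotted in Figure 3 and that consecutive monthly batches are unit-spaced.

```python
import numpy as np

def area_under_pvalue_curve(pvalues):
    # Overall-significance summary over the stream: area under the -log10 p-value
    # trajectory, computed by the trapezoid rule with unit spacing between batches.
    neg_log_p = -np.log10(np.asarray(pvalues, dtype=float))
    return float(np.sum((neg_log_p[1:] + neg_log_p[:-1]) / 2.0))
```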
Applying the proposed online debiased lasso procedure to the above CDS data analysis enables us to visualize time-course patterns of evidence accrual as well as the stability and reproducibility of inference. As shown clearly in Figures 2 and 3, at the early stage of the data streams, due to limited sample sizes and possibly sampling bias, both parameter estimates and test power may be unstable and even misleading. These potential shortcomings are convincingly overcome as estimates and inferential quantities are continuously updated along the data streams, eventually reaching stability and reliable conclusions.
Figure 2: 95% pointwise confidence bands for the effects of the young and older age groups over the 84 monthly data batches.
Figure 3: Trends of the Wald test -values (on the base-10 logarithmic scale) for nine risk factors over the 84 months; the overall significance summary for each covariate is given in brackets.
6 Discussion
In this paper, we propose an online debiased lasso method for statistical inference in high-dimensional GLMs. The method is applicable to streaming data in that only historical summary statistics, not the raw historical data, are needed to update the estimate at the current stage. Under regularity conditions similar to those in the offline setting and mild conditions on the batch sizes, we prove that the online debiased lasso estimator (with an online correction term) is asymptotically normal. The numerical studies further demonstrate the effectiveness of our algorithm and support the theoretical results.
There are still many open questions in the area of online inference. First, the method we develop focuses on homogeneous data, where the streaming data are assumed to be i.i.d. samples; it would be interesting to determine whether similar results can be obtained in non-homogeneous settings. Second, the loss function considered in this paper is the negative log-likelihood; it is unclear whether other loss functions, including nonsmooth robust losses such as Huber’s loss (Huber, 1964), could be used in online inference. Third, we did not address the issue of online variable selection, where the major difficulty is how to recover significant variables that may have been dropped at early stages of the stream. We hope to address these interesting questions in the future.
Appendix
Proof of Theorem 1.
For the first data batch , we have , where is the offline lasso estimator defined in (1). Since the consistency of is well established in van de Geer et al. (2014), (11) holds when . We now prove the consistency of for an arbitrary by mathematical induction.
Suppose that satisfies the statement in (11) with constants and . We claim that . Otherwise, we consider the following linear combination:
(13)
Then . Note that if and only if . Therefore, it suffices to show .
Let , . Since the objective function defined in (4) is convex, we have
(14)
where . Recall that . A Taylor’s expansion gives that
where and for some . Then
Substituting the above expansion into (14), we obtain
(15)
We introduce two notations,
as the excess risk and the empirical process. Note that does not depend on since the data is i.i.d. sampled. With the above notation, (15) could be further written as
Recall that . The next lemma provides the upper bound of , whose proof is given at the end of the Appendix.
Lemma 1.
Note that and the upper bound of could be absorbed in the upper bound of . According to Lemma 1,
Consequently,
(16)
The remaining argument is standard. Recall that . For , we define , where . Then . From (16), we have
We now discuss the magnitude of .
Case I. Suppose that . Then,
which implies . Then, we can adopt the empirical compatibility condition, that is,
where is the compatibility constant. Thus,
Let . Based on Lemma 1, we have
which is also known as the margin condition (van de Geer, 2008). Next, we apply the arithmetic mean-geometric mean inequality and obtain
(17)
Then, it follows that
(18)
On the one hand, since , we obtain that
With a suitable choice of ,
we have
Here we require that , and . On the other hand, combining (17) and (18), we obtain
Again, since , we obtain
(19)
Case II. Suppose that . Then,
Then, by the margin condition, it is straightforward to show that
(20)
Note that in both cases we obtain . According to (13), this establishes our claim . The remaining step is to repeat the above argument and obtain the upper bound on the estimation error in (19) or (20). It is worth pointing out that and , where and do not depend on . The proof is finished by taking a union bound over the events considered in Lemma 1. ∎
Lemma 2.
(Han et al., 2021) Let and be the batch size and the cumulative sample size, respectively, when the -th data batch arrives, . Then,
(21)
(22)
Proof of Theorem 2.
Recall that
is a standard lasso estimator. The proof follows the standard argument as long as the estimated information matrix satisfies the compatibility condition; therefore, it suffices to show that the compatibility condition holds. Recall that . Then we have
According to the error bound of provided in Theorem 1 and (22), we obtain
where is defined in (A1) of Assumption 1. Meanwhile, by Hoeffding’s inequality, it follows that
In summary,
Consequently, the compatibility condition holds by Corollary 6.8 in Bühlmann & van de Geer (2011). ∎
The remaining part is to establish the asymptotic normality of the online debiased lasso estimator.
Proof of Theorem 3.
We first deal with the debiasing term. Recall that is the extension of . Then
Note that for , we let denote the -th row of , namely the covariates of the -th observation in the -th batch. By the mean value theorem,
where . Let and let be the diagonal matrix with diagonal elements . As a result,
where . Now, we focus on the online debiased lasso estimator defined in (2.2). According to the above results,
where
Therefore, the remaining part is to demonstrate that
First of all, according to the Lipschitz condition of
Then,
where the last inequality follows from Theorem 1 and the boundedness assumption on . Meanwhile, the boundedness of follows from in Assumption 1 and Theorem 2. Therefore, we obtain an upper bound for :
Next, we use the Karush-Kuhn-Tucker (KKT) conditions. Let denote the vector of zeros except that the -th element is one.
Finally, we apply Theorem 2 and (22) in Lemma 2.
Based on the conditions in Theorem 3, we conclude that . In summary,
where the terms on the right-hand side correspond to and in Theorem 3.
∎
For the sake of completeness, we provide the proof of Lemma 1 at the end of Appendix.
Proof of Lemma 1.
We start with . Recall that
Let , and let denote the diagonal matrix with diagonal elements . Then,
where and . As a result,
It remains to find the upper bound of , that is,
For ease of exposition, we may assume that , since the first data batch contains the least information and may lead to the largest estimation error. This assumption does not affect the result: the upper bound of can always be absorbed into the upper bound of due to .
For , the proof is similar to the above and is omitted.
Recall that . Note that can be written as a sum of random variables, that is, . Since and , by Hoeffding’s inequality, with probability at least ,
The result for follows directly from Theorem 14.5 in Bühlmann & van de Geer (2011).
∎
References
- Belloni et al. , (2015) Belloni, A., Chernozhukov, V., & Kato, K. 2015. Uniform post-selection inference for least absolute deviation regression and other Z-estimation problems. Biometrika, 102(1), 77–94.
- Belloni et al. , (2019) Belloni, A., Chernozhukov, V., & Kato, K. 2019. Valid post-selection inference in high-dimensional approximately sparse quantile regression models. Journal of the American Statistical Association, 114(526), 749–758.
- Bühlmann & van de Geer, (2011) Bühlmann, P., & van de Geer, S. 2011. Statistics for high-dimensional data: methods, theory and applications. Springer Science & Business Media.
- Cai & Guo, (2017) Cai, T. T., & Guo, Z. 2017. Confidence intervals for high-dimensional linear regression: Minimax rates and adaptivity. The Annals of Statistics, 45(2), 615–646.
- Cai & Guo, (2020) Cai, T. T., & Guo, Z. 2020. Semisupervised inference for explained variance in high dimensional linear regression and its applications. Journal of the Royal Statistical Society: Series B, 82(2), 391–419.
- Cardot & Degras, (2018) Cardot, H., & Degras, D. 2018. Online Principal Component Analysis in High Dimension: Which Algorithm to Choose? International Statistical Review, 86, 29 – 50.
- Chen et al. , (2014) Chen, Y., Berrocal, V. J., Bingham, C. R., & Song, P. X.-K. 2014. Analysis of spatial variations in the effectiveness of graduated driver’s licensing (GDL) program in the state of Michigan. Spatial and Spatio-temporal Epidemiology, 8, 11–22.
- Csala et al. , (2017) Csala, A., Voorbraak, F. P., Zwinderman, A. H., & Hof, M. H. 2017. Sparse redundancy analysis of high-dimensional genetic and genomic data. Bioinformatics, 33(20), 3228 – 3234.
- Deshpande et al. , (2019) Deshpande, Y., Javanmard, A., & Mehrabi, M. 2019. Online debiasing for adaptively collected high-dimensional data. arXiv preprint arXiv:1911.01040.
- Dezeure et al. , (2015) Dezeure, R., Bühlmann, P., Meier, L., & Meinshausen, N. 2015. High-dimensional inference: confidence intervals, -values and R-software hdi. Statistical Science, 30(4), 533 – 558.
- Fan & Li, (2001) Fan, J., & Li, R. 2001. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96(456), 1348–1360.
- Fang, (2019) Fang, Y. 2019. Scalable statistical inference for averaged implicit stochastic gradient descent. Scandinavian Journal of Statistics, 1–16.
- Fei & Li, (2021) Fei, Z., & Li, Y. 2021. Estimation and inference for high dimensional generalized linear models: a splitting and smoothing approach. Journal of Machine Learning Research, 22(58), 1–32.
- Han et al. , (2021) Han, R., Luo, L., Lin, Y., & Huang, J. 2021. Online debiased lasso. arXiv preprint arXiv:2106.05925.
- Huber, (1964) Huber, P. J. 1964. Robust estimation of a location parameter. The Annals of Mathematical Statistics, 35, 73–101.
- Javanmard & Montanari, (2014) Javanmard, A., & Montanari, A. 2014. Confidence intervals and hypothesis testing for high-dimensional regression. Journal of Machine Learning Research, 15(1), 2869–2909.
- Luo & Song, (2020) Luo, L., & Song, P. X.-K. 2020. Renewable estimation and incremental inference in generalized linear models with streaming datasets. Journal of the Royal Statistical Society: Series B, 82, 69–97.
- Ning & Liu, (2017) Ning, Y., & Liu, H. 2017. A general theory of hypothesis tests and confidence regions for sparse high dimensional models. The Annals of Statistics, 45(1), 158–195.
- Robbins & Monro, (1951) Robbins, H., & Monro, S. 1951. A stochastic approximation method. The Annals of Mathematical Statistics, 22(3), 400–407.
- Sakrison, (1965) Sakrison, D. J. 1965. Efficient recursive estimation: application to estimating the parameter of a covariance function. International Journal of Engineering Science, 3(4), 461–483.
- Schifano et al. , (2016) Schifano, E. D., Wu, J., Wang, C., Yan, J., & Chen, M. H. 2016. Online updating of statistical inference in the big data setting. Technometrics, 58(3), 393–403.
- Shi et al. , (2020) Shi, C., Song, R., Lu, W., & Li, R. 2020. Statistical inference for high-dimensional models via recursive online-score estimation. Journal of the American Statistical Association, 1–12.
- Sun et al. , (2020) Sun, L., Wang, M., Guo, Y., & Barbu, A. 2020. A novel framework for online supervised learning with feature selection. arXiv preprint arXiv:1803.11521.
- Sur & Candès, (2019) Sur, P., & Candès, E. J. 2019. A modern maximum-likelihood theory for high-dimensional logistic regression. Proceedings of the National Academy of Sciences, 116(29), 14516 – 14525.
- Tibshirani, (1996) Tibshirani, R. 1996. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B, 58(1), 267–288.
- Toulis & Airoldi, (2017) Toulis, P., & Airoldi, E. M. 2017. Asymptotic and finite-sample properties of estimators based on stochastic gradients. The Annals of Statistics, 45(4), 1694–1727.
- van de Geer, (2008) van de Geer, S. 2008. High-dimensional generalized linear models and the lasso. The Annals of Statistics, 36(2), 614 – 645.
- van de Geer et al. , (2014) van de Geer, S., Bühlmann, P., Ritov, Y. A., & Dezeure, R. 2014. On asymptotically optimal confidence regions and tests for high-dimensional models. The Annals of Statistics, 42(3), 1166–1202.
- Wang et al. , (2017) Wang, D., Fong, S., Wong, R. K., Mohammed, S., Fiaidhi, J., & Wong, K. K. 2017. Robust high-dimensional bioinformatics data streams mining by ODR-ioVFDT. Scientific Reports, 7, 43167.
- Wen et al. , (2018) Wen, H., Shi, J., Chen, W., & Liu, Z. 2018. Transferring and generalizing deep-learning-based neural encoding models across subjects. NeuroImage, 176, 152 – 163.
- Zhang, (2010) Zhang, C. H. 2010. Nearly unbiased variable selection under minimax concave penalty. The Annals of Statistics, 38(2), 894–942.
- Zhang & Zhang, (2014) Zhang, C. H., & Zhang, S. S. 2014. Confidence intervals for low dimensional parameters in high dimensional linear models. Journal of the Royal Statistical Society: Series B, 76(1), 217 – 242.
- Zhang et al. , (2020) Zhang, Y., Han, K., Worth, R., & Liu, Z. 2020. Connecting concepts in the brain by mapping cortical representations of semantic relations. Nature Communications, 11(1), 1877.
- Zou, (2006) Zou, H. 2006. The adaptive lasso and its oracle properties. Journal of the American Statistical Association, 101(476), 1418–1429.
- Zou & Hastie, (2005) Zou, H., & Hastie, T. 2005. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B, 67, 301–320.