Bayesian Inference for Consistent Predictions
in Overparameterized Nonlinear Regression
Abstract.
The remarkable generalization performance of large-scale models has challenged the conventional wisdom of statistical learning theory. Although recent theoretical studies have shed light on this behavior in linear models and nonlinear classifiers, a comprehensive understanding of overparameterization in nonlinear regression models is still lacking. This study explores the predictive properties of overparameterized nonlinear regression within the Bayesian framework, extending an adaptive prior methodology that accounts for the intrinsic spectral structure of the data. Posterior contraction is established for generalized linear and single-neuron models with Lipschitz continuous activation functions, demonstrating the consistency of the predictions of the proposed approach. Moreover, the Bayesian framework enables uncertainty estimation of the predictions. The proposed method was validated via numerical simulations and a real data application, showing its ability to achieve accurate predictions and reliable uncertainty estimates. This work provides a theoretical understanding of the advantages of overparameterization and a principled Bayesian approach to large nonlinear models.
1. Introduction
In recent years, the field of machine learning has witnessed a paradigm shift in understanding generalization performance. Traditional statistical theory emphasizes the importance of controlling model complexity and avoiding overfitting by balancing the bias-variance trade-off [56]. However, with the emergence of large-scale deep neural networks, which often have considerably more parameters than training data samples, this conventional wisdom has been challenged. These models have demonstrated the ability to achieve high accuracy on test data while completely interpolating the training data. This discrepancy between empirical observations and traditional learning theory has necessitated a new explanation [62].
A groundbreaking study by [6] provides a theoretical foundation for overparameterized linear regression models through a non-asymptotic analysis of the generalization error of a minimum norm interpolator. This discovery has reshaped our understanding of generalization performance in large statistical models, suggesting that overparameterization does not necessarily lead to poor prediction, even in simple models. Instead, the key to good performance lies in the model’s expressiveness and its ability to capture the essential structure of the data. Recent theoretical studies have demonstrated that overparameterization leads to accurate predictions in various statistical models, shedding light on the behavior of modern machine learning models.
While the theoretical understanding of the benefits of overparameterization has advanced significantly for linear estimators, such as linear regression [6, 53] and kernel methods [32, 34], the analysis of nonlinear estimators remains limited. Although some studies have investigated this issue in nonlinear neural networks for classification problems [14, 27], a comprehensive understanding is lacking as to why and under what conditions overparameterized nonlinear regression can achieve good generalization performance. Addressing this question is crucial for reinforcing the theoretical foundations of modern machine learning.
In this paper, the predictive properties of overparameterized nonlinear regression models within the Bayesian framework are explored. Bayesian inference combines the prior distribution of parameters with a likelihood function to obtain the posterior distribution. [60] proposed a methodology that incorporates the structure of the data into the prior distribution while capturing the linear model structure through the likelihood, which results in accurate predictions even in non-sparse high-dimensional models. The key idea is to adaptively adjust the prior variance based on the intrinsic structure (spectra) of the data, which is found to be critical for the convergence of the prediction error in the sense of posterior contraction. We extend this approach to nonlinear regression models, specifically single-neuron models [61, 48] and generalized linear models (GLMs) [38], and theoretically analyze the prediction error of the resulting posterior distribution. The main contributions of this study are summarized as follows:
• For generalized linear models based on one-parameter exponential families, including logistic and Poisson regressions, the conditions under which the posterior distribution converges to the true parameter are established in terms of predictive risk, providing an upper bound on the convergence rate (Theorem 1). This finding highlights the applicability of the proposed approach to discrete response-variable regression, such as those encountered in classification problems.
• The proposed method is demonstrated to generalize well to single-neuron models with a Lipschitz continuous nonlinear activation function and Gaussian noise (Theorem 2). This result suggests that even in more general models involving nonlinearity, overparameterization allows benign prediction under appropriate prior distributions.
• The proposed Bayesian approach not only provides a theoretical guarantee of the generalization performance of the overparameterized models but also estimates the uncertainty in the predictions. By sampling from the posterior distribution, a confidence level can be constructed for each prediction, which is essential in decision-making processes.
1.1. Related work
Bayesian GLMs have been studied intensively in high dimensions. The first paper to study high-dimensional Bayesian GLMs [19] established posterior concentration for the parameter without assuming any sparsity structure, by imposing a restricted growth condition on the parameter dimension. This underparameterized setting differs from the present work. Subsequently, several studies explored the GLM setting where the number of parameters may exceed the sample size [25, 35, 24]. These works demonstrated asymptotic properties of the posterior distribution; however, they assume sparsity, implying that the parameters are inherently low-dimensional. The properties of Bayesian variable selection in GLMs have also been investigated in similar settings [9, 10, 43, 31, 23]. However, the assumption of parameter sparsity can be stringent, and the current study is conducted in a setting that does not rely on this assumption, thus expanding the scope of research in this area. The methods and theory of Bayesian inference in high-dimensional models are summarized in [2].
Numerous theoretical results suggest that overparameterization leads to good generalization performance. [6] demonstrated that the excess risk converges to zero in overparameterized linear models. This theory has been widely extended to other linear estimators, such as regularized regression [53, 30, 59, 37, 12], kernel regression [32, 33], linear neural networks [11, 51], and regression for dependent data [41, 54]. [14, 7, 27] ventured beyond these linear frameworks to investigate the benign nature of overparameterization, but their focus was limited to classification problems. [48] tackled nonlinear regression and demonstrated the absence of benign overfitting, but their assumptions differ from the present work, mainly in terms of the covariate distribution.
1.2. Organization
The remainder of this paper is structured as follows. In Section 2, the problem setting for one-parameter generalized linear models under the overparameterization assumption is introduced, providing a detailed description of the proposed methodology. Section 3 presents the theoretical results on posterior contraction for GLMs. Section 4 extends the results to the two-parameter case. Section 5 reports on the simulation studies conducted to validate the theoretical findings and assess the uncertainty estimation. Section 6 showcases a real data application, comparing the proposed approach with baseline methods in terms of classification performance. Section 7 concludes the paper, discussing the contribution and limitations of this work. The proof and discussion of the main theorems and details of the computational algorithms used in the analysis are provided in the Appendix.
1.3. Notation
For $p \ge 1$ and any vector $v$, $\|v\|_p$ denotes the $\ell_p$-norm of $v$. For any vector $v$ and symmetric matrix $A$ of the same dimension, we define $\|v\|_A^2 = v^\top A v$. For sequences $\{a_n\}$ and $\{b_n\}$, $a_n \lesssim b_n$ denotes the existence of a constant $c > 0$ such that $a_n \le c\, b_n$ for all $n \ge N$ with some finite $N$, and $a_n \gtrsim b_n$ denotes the opposite; $a_n = o(b_n)$ indicates that $a_n / b_n \to 0$ as $n \to \infty$, whereas $a_n = \omega(b_n)$ indicates that $a_n / b_n \to \infty$ as $n \to \infty$; $a_n \ll b_n$ indicates that $a_n \le c\, b_n$ holds for every $c > 0$, and $a_n \gg b_n$ denotes the opposite. For natural numbers $a \le b$, we define $[a:b] = \{a, a+1, \dots, b\}$. For any square matrix $A$, let $A^{-1}$ be the inverse matrix of $A$, $A^{\dagger}$ be the Moore–Penrose pseudoinverse matrix, and $\mathrm{tr}(A)$ be the sum of the diagonals of $A$. For a pseudo-metric space $(\mathcal{F}, d)$ and a positive value $\epsilon$, $N(\epsilon, \mathcal{F}, d)$ denotes the covering number, that is, the minimum number of balls of radius $\epsilon$ in terms of $d$ that cover $\mathcal{F}$. For two probability measures $P$ and $Q$ on the same measurable space $(\mathcal{X}, \mathcal{B})$, the total variation distance is given by $d_{\mathrm{TV}}(P, Q) = \sup_{B \in \mathcal{B}} |P(B) - Q(B)|$, and the Kullback–Leibler (KL) divergence and the KL variation (also referred to as the centered second-order KL divergence) between them are respectively defined as $\mathrm{KL}(P \| Q) = \int \log(\mathrm{d}P/\mathrm{d}Q)\, \mathrm{d}P$ and $V(P \| Q) = \int \big( \log(\mathrm{d}P/\mathrm{d}Q) - \mathrm{KL}(P \| Q) \big)^2 \mathrm{d}P$ if $P$ is absolutely continuous with respect to $Q$; otherwise, both are defined as infinity. The Hellinger distance between two probability distributions $P$ and $Q$ is defined as $d_{\mathrm{H}}(P, Q) = \big( \tfrac{1}{2} \int (\sqrt{\mathrm{d}P} - \sqrt{\mathrm{d}Q})^2 \big)^{1/2}$.
2. One-parameter Generalized Linear Regression Model
This section presents a comprehensive overview of the one-parameter generalized linear regression model, including the problem setting (§2.1) and the composition of the predictor (§2.2).
2.1. Setting
Consider the pair $(X, Y)$, where $X$ is an $\mathbb{R}^p$-valued centered random covariate and $Y$ is a univariate real-valued response. The GLM is defined as follows:
(1) $\quad Y \mid X \sim p\big(\cdot \,;\, f(X^\top \theta^*)\big),$
where $\theta^* \in \mathbb{R}^p$ is the true unknown $p$-dimensional regression coefficient, and $f$ is a known Lipschitz continuous function. In this setting, the response variable follows a general distribution belonging to the one-parameter exponential family with a density function given by
(2) $\quad p(y; \mu) = h(y) \exp\{ a(\mu)\, y - b(\mu) \},$
where $\mu$ is the parameter, $a$ and $h$ are known continuously differentiable functions with $a$ having a nonzero derivative, and the function $b$ is chosen such that $p(\cdot\,; \mu)$ is a proper probability density.
This formulation encompasses several well-known models. Normal linear model: $Y$ follows a Gaussian distribution with mean $f(X^\top \theta^*)$ and fixed variance, and $f$ is the identity function. Logistic regression model: $Y$ follows a Bernoulli distribution with parameter $f(X^\top \theta^*)$, and $f$ is the logistic function. Poisson regression model: $Y$ follows a Poisson distribution with parameter $f(X^\top \theta^*)$, and $f$ is the exponential function. Note that the link function is not restricted to being canonical, thereby providing flexibility in modeling the relationship between the linear predictor and the response variable.
This study focuses on a non-sparse overparameterization setting, where $n$ independent and identically distributed copies of $(X, Y)$ are generated and their realizations are observed. From these observations, a $p$-dimensional estimate is constructed for prediction, with the dimension $p$ being larger than the sample size $n$. In the asymptotic setting, where $n$ diverges, $p$ possesses a larger order of divergence, i.e., $p = \omega(n)$.
The primary objective of this study is to examine the generalization performance of a predictor trained on the observations in the overparameterized regime. We adopt as the predictor the mean $f(X^\top \theta)$, with the estimated parameter $\theta$ plugged in. Then, the notion of excess risk is introduced to quantify the generalization performance, measuring the difference between the oracle predictor $f(X^\top \theta^*)$ and the predictor $f(X^\top \theta)$:
$\quad \big\| f(X^\top \theta) - f(X^\top \theta^*) \big\|_{L^2(P_X)}^2,$
where $\| \cdot \|_{L^2(P_X)}$ denotes the $L^2$ norm with respect to the distribution of the covariates. As the later analysis shows, the convergence of the excess risk is related to the distributional convergence of the estimated parameter $\theta$.
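To make the excess risk concrete, the following is a minimal Monte Carlo sketch of the quantity above (the function names and the use of Monte Carlo integration over the covariate law are illustrative choices, not part of the paper's methodology).

```python
import numpy as np

def excess_risk_mc(theta_hat, theta_star, f, sample_covariate, n_mc=100_000, seed=0):
    """Monte Carlo approximation of the excess risk
    || f(X^T theta_hat) - f(X^T theta_star) ||^2 in L2(P_X), as defined above."""
    rng = np.random.default_rng(seed)
    X = sample_covariate(n_mc, rng)              # (n_mc, p) draws from the covariate law P_X
    diff = f(X @ theta_hat) - f(X @ theta_star)  # predictor minus oracle predictor
    return float(np.mean(diff ** 2))

# Example usage with an identity link and a standard Gaussian covariate law.
p = 50
rng = np.random.default_rng(1)
theta_star = rng.standard_normal(p)
theta_hat = theta_star + 0.1 * rng.standard_normal(p)
risk = excess_risk_mc(theta_hat, theta_star, f=lambda t: t,
                      sample_covariate=lambda n, r: r.standard_normal((n, p)))
```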
2.2. Method
Predictor construction follows a systematic approach: first, the data are divided into two sets; then, a prior distribution of $\theta$ is developed based on the empirical spectral information of the covariates from one set; next, the likelihood is computed on the other set; finally, the posterior distribution of $\theta$ is computed based on the prior and likelihood.
Initially, we divide the dataset into two equal parts, $D_1$ and $D_2$, each containing $n/2$ observations. Although the split ratio is arbitrary, it should not depend on the sample size $n$. The data splitting allows the use of $D_1$ for constructing the prior distribution and $D_2$ for evaluating the likelihood function, ensuring that the data used in these two steps are independent.
Next, we consider the empirical covariance matrix of the covariates in $D_1$, defined as:
(3) $\quad \hat{\Sigma} = \frac{1}{n/2 - 1} \sum_{i \in D_1} (X_i - \bar{X})(X_i - \bar{X})^\top,$
where $\bar{X}$ is the sample mean of the covariates from $D_1$. The factor $1/(n/2 - 1)$ ensures that $\hat{\Sigma}$ is an unbiased estimator of the population covariance matrix $\Sigma$. A spectral decomposition of $\hat{\Sigma}$ is then performed:
(4) $\quad \hat{\Sigma} = \sum_{j=1}^{p} \hat{\lambda}_j \hat{v}_j \hat{v}_j^\top,$
where $\hat{\lambda}_1 \ge \hat{\lambda}_2 \ge \cdots \ge \hat{\lambda}_p \ge 0$ and $\hat{v}_1, \dots, \hat{v}_p$ are the eigenvalues and eigenvectors of $\hat{\Sigma}$, respectively. Using the empirical spectra, we define a low-rank approximation of $\hat{\Sigma}$ as:
(5) $\quad \hat{\Sigma}_K = \sum_{j=1}^{K} \hat{\lambda}_j \hat{v}_j \hat{v}_j^\top.$
This approximation captures the $K$ most significant eigenvalues and eigenvectors of $\hat{\Sigma}$, effectively reducing the dimensionality of the covariate space while retaining the most informative directions of variation. Although $K$ serves as a tuning parameter for managing the computational cost, it must be fixed within an interval such that Assumption 2 is satisfied.
We then employ the following prior distribution for $\theta$.
Definition 1 (Prior Distribution with Effective Spectra).
(6) $\quad \pi(\theta) \propto \exp\!\left( -\tfrac{1}{2}\, \theta^\top \hat{\Sigma}_K^{\dagger}\, \theta \right) \mathbf{1}\{ \|\theta\|_2 \le R \},$
where $R > 0$ is a fixed constant that determines the radius of the truncation sphere.
Although this prior distribution resembles a $p$-variate Gaussian distribution with mean $0$ and covariance $\hat{\Sigma}_K$, there are some important distinctions. First, the distribution is truncated, with the indicator function restricting the domain to a centered sphere of radius $R$. This truncation is necessary because the low-rank approximation $\hat{\Sigma}_K$ is singular, and defining the density function without support restrictions would cause the integral over $\mathbb{R}^p$ to diverge. Second, unlike [43, 24, 55], we impose a non-sparse prior distribution, even though some parameters may be redundant. This choice is motivated by the desire to retain all potentially relevant covariates in the data and to avoid the challenges associated with sparse priors, such as the need to specify sparsity-inducing hyperparameters.
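For concreteness, the following NumPy sketch traces the construction in (3)–(6): the empirical covariance from the first split, its spectral decomposition, the rank-$K$ truncation, and a draw from the resulting truncated degenerate Gaussian prior. The function name, the rejection-sampling step, and the particular values of $K$ and $R$ are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def sample_effective_spectra_prior(X1, K, R, rng):
    """Build the 'effective spectra' prior from the first data split D1 and draw one sample."""
    n1, p = X1.shape
    Xc = X1 - X1.mean(axis=0)                       # centre the covariates
    Sigma_hat = Xc.T @ Xc / (n1 - 1)                # unbiased empirical covariance, cf. (3)
    eigval, eigvec = np.linalg.eigh(Sigma_hat)      # spectral decomposition, cf. (4)
    order = np.argsort(eigval)[::-1][:K]            # indices of the K largest eigenvalues
    lam, V = eigval[order], eigvec[:, order]        # low-rank approximation, cf. (5)
    # Draw from N(0, V diag(lam) V^T) and reject draws outside the sphere of radius R, cf. (6).
    while True:
        theta = V @ (np.sqrt(np.clip(lam, 0.0, None)) * rng.standard_normal(K))
        if np.linalg.norm(theta) <= R:
            return theta

rng = np.random.default_rng(0)
X1 = rng.standard_normal((100, 1000))               # n/2 = 100 observations, p = 1000 covariates
theta_draw = sample_effective_spectra_prior(X1, K=50, R=100.0, rng=rng)
```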
Next, we consider the posterior distribution, which combines the prior distribution and the likelihood function using Bayes’ Theorem.
Definition 2 (Posterior Distribution).
The probability density function of the posterior distribution is given by:
(7) $\quad \pi(\theta \mid D_2) = \frac{\pi(\theta) \prod_{i \in D_2} p\big(Y_i ; f(X_i^\top \theta)\big)}{\int \pi(\theta') \prod_{i \in D_2} p\big(Y_i ; f(X_i^\top \theta')\big) \, d\theta'}.$
For any measurable set $A$, the posterior probability of $A$ is given by:
(8) $\quad \Pi(A \mid D_2) = \int_A \pi(\theta \mid D_2) \, d\theta.$
The Bayesian framework allows the quantification of events of interest, such as the probability that predictions are close to the true values, through probability measures defined in this manner. Moreover, as demonstrated later, the distributional information provided by the posterior is valuable in practice.
Note that the information about the response variable in $D_1$ is discarded, since this subset is only used to compute the sample covariance matrix of the covariates. If excluding this information seems inefficient, the roles of $D_1$ and $D_2$ can be switched to construct a second estimator (posterior distribution), and the two estimators can be averaged in a manner similar to the sample splitting scheme of [8]. The theoretical results presented next remain valid under this averaging.
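As a small illustration of how (7)–(8) combine the two splits, the sketch below evaluates the unnormalized log posterior for the logistic special case (the logistic likelihood and all variable names are illustrative assumptions, and the normalizing constant in (7) is omitted).

```python
import numpy as np

def log_unnormalized_posterior(theta, X2, y2, Sigma_K_pinv, R):
    """Unnormalized log posterior density for a logistic GLM: effective-spectra prior
    built from D1 (through its pseudoinverse) plus the Bernoulli likelihood on D2."""
    if np.linalg.norm(theta) > R:                        # truncation of the prior support
        return -np.inf
    log_prior = -0.5 * theta @ Sigma_K_pinv @ theta
    eta = X2 @ theta                                     # linear predictors on the second split
    log_lik = np.sum(y2 * eta - np.logaddexp(0.0, eta))  # numerically stable Bernoulli log-likelihood
    return log_prior + log_lik
```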
3. Main Results
In this section, the main theoretical results are presented. Initially, the key assumptions that underlie our analysis (§3.1) are introduced. These assumptions concern the spectral properties of the covariance matrix and the concentration of the empirical covariance matrix around its population counterpart. Subsequently, the posterior contraction for the GLM is established (§3.2). The theoretical results characterize the posterior contraction, elucidating the intricate interplay between dimensionality, sample size, and eigenvalue decay of the covariance matrix.
3.1. Assumptions
The initial focus is on the boundedness of the operator norm of $\Sigma$ and the concentration of the empirical covariance matrix around $\Sigma$.
Assumption 1.
The operator norm $\|\Sigma\|_{\mathrm{op}}$ is bounded, and there exist sequences $(\delta_n)$ and $(\beta_n)$ of non-negative reals with $\delta_n, \beta_n \to 0$ as $n \to \infty$, such that for any sufficiently large $n$,
(9) $\quad \mathbb{P}\big( \|\hat{\Sigma} - \Sigma\|_{\mathrm{op}} \le \delta_n \big) \ge 1 - \beta_n.$
This assumption ensures that the empirical covariance matrix concentrates around the population covariance matrix. The sequence $\delta_n$ quantifies the rate of this concentration, whereas $\beta_n$ controls the probability of the concentration event. Notably, this assumption is mild: it covers not only the sub-Gaussian distributions commonly imposed in high-dimensional statistics but also log-concave distributions, which encompass most unimodal parametric distributions [57, 58], including heavier-tailed ones such as the Laplace distribution. In contrast, related literature imposes more restrictive conditions such as sub-Gaussianity [6] or strong log-concavity [14]. In a high-dimensional setting, where many covariates are incorporated, the mildness of the distributional assumption is particularly important, as it allows for a wider range of applicable scenarios.
The next assumption concerns the behavior of the eigenvalues of $\Sigma$.
Assumption 2.
This assumption imposes certain regularity conditions on the eigenvalues of $\Sigma$. Intuitively, conditions (i) and (ii) control the rate at which the bulk of the eigenvalues are allowed to decay, requiring the decay to be moderate. Condition (iii) requires that the eigenvalues beyond the bulk be sufficiently small, whereas condition (iv) constrains the size of the bulk relative to the sample size and the concentration rate. These conditions are crucial for deriving the posterior contraction rates in Theorem 1.
Finally, we assume a certain local Lipschitz condition on the link function in terms of the Kullback–Leibler (KL) divergence and the KL variation between the corresponding probability distributions.
Assumption 3.
For any pair of parameter values, the GLM with link function $f$ satisfies
where and are the probability distributions corresponding to the densities and , respectively.
This assumption limits the amount of variation between distributions with different parameters, thereby controlling the complexity of the model, and is satisfied by several common GLMs, as illustrated in the following examples.
Example 1.
(1) For normal linear regression with known variance, we have
where the last inequality holds if the link function is Lipschitz continuous.
(2) For binary regression with the logistic link, we have
which are both bounded by . In contrast, for the probit link, Assumption 3 does not hold in general.
(3) For Poisson regression with a bounded and positive link function, we have
Although the exponential function is the canonical link function for Poisson regression, it is unbounded, and Assumption 3 does not hold in that case.
These examples demonstrate that Assumption 3 is a reasonable condition that holds for several widely used GLMs.
3.2. Posterior Contraction
We now present the main result, which establishes the posterior contraction for the GLM under the assumptions introduced in the previous subsection.
Theorem 1.
Consider the GLM (2) with a covariate distribution satisfying Assumptions 1-3, and let the prior distribution on $\theta$ be (6). Then, for a sufficiently large constant $M > 0$, the posterior distribution contracts around the true parameter as
Furthermore, if the eigenvalues of $\Sigma$ satisfy an additional decay condition, then the contraction rate can be explicitly characterized as
(12)
This theorem shows that, under suitable conditions on the covariance matrix and the prior distribution, the posterior distribution of the predictor concentrates around the oracle predictor at a certain rate with respect to the $L^2(P_X)$ distance. The rate depends on the interplay between the sample size, the concentration rate of the empirical covariance matrix, and the eigenvalues of $\Sigma$ in the bulk region.
3.2.1. Consistency as a distribution
While Theorem 1 derives the posterior consistency of the predictor with respect to the $L^2(P_X)$ distance, consistency also holds for other commonly used distances between probability distributions, such as the total variation distance, the Hellinger distance, and the KL divergence, in several cases, including the distributions listed in Example 1. The contraction in these divergences is particularly useful for assessing the quality of the posterior distribution in approximating the true data-generating distribution.
3.2.2. Consistency of Point Estimators
The posterior contraction result in Theorem 1 not only provides a theoretical justification for using the posterior distribution as a reliable approximation of the true data-generating distribution but also has important implications for point estimators in the usual sense of estimator consistency. Two commonly used point estimators in Bayesian inference are the maximum a posteriori (MAP) estimator and the posterior mean.
The MAP estimator, denoted as $\hat{\theta}_{\mathrm{MAP}}$, is defined as the mode of the posterior distribution: $\hat{\theta}_{\mathrm{MAP}} = \arg\max_{\theta} \pi(\theta \mid D_2)$.
Under the same assumptions as in Theorem 1, the MAP estimator can be shown to be consistent with respect to the predictive norm.
Another widely used point estimator is the posterior mean, denoted as $\hat{\theta}_{\mathrm{mean}}$, which is defined as the expectation of the parameter with respect to the posterior distribution:
(13) $\quad \hat{\theta}_{\mathrm{mean}} = \int \theta \, \pi(\theta \mid D_2) \, d\theta.$
Under the assumptions of Theorem 1 and the additional condition that posterior consistency with respect to the total variation distance or Hellinger distance holds (as discussed in §3.2.1), the posterior mean is also consistent:
(14) |
As the posterior mean aggregates the entire distribution (it is defined through an integral), its consistency requires the additional condition that the posterior distribution is not ill-behaved. Further discussion is deferred to the Appendix.
4. Extension to Two-Parameter Gaussian Case
In this section, the analysis is extended to the setting of a GLM with a Gaussian distribution for the measurement error. Specifically, we consider the single-neuron model [48, 61]:
(15) $\quad Y = f(X^\top \theta^*) + \varepsilon, \qquad \varepsilon \sim N(0, \sigma^{*2}),$
where $\sigma^* > 0$ is the standard deviation of the error term. This model is more flexible than the standard linear regression model and can accommodate various link functions $f$. For instance, if $f$ is a censoring-type link function such as the ReLU, tanh, or softplus, then (15) represents a censored regression model.
Here, prior distributions are specified for the parameters $\theta$ and $\sigma^2$: the prior distribution (6) is used for $\theta$. Regarding the variance parameter, we employ as the prior distribution the inverse Gaussian distribution, with density of the form:
(16) $\quad \pi(\sigma^2) = \sqrt{\frac{\lambda_0}{2\pi (\sigma^2)^3}} \exp\!\left\{ -\frac{\lambda_0 (\sigma^2 - \mu_0)^2}{2 \mu_0^2 \sigma^2} \right\},$
where $\mu_0 > 0$ and $\lambda_0 > 0$ denote the mean and shape parameters, respectively, and these hyperparameters are arbitrarily assigned. An important property of the inverse Gaussian prior is that the tails on both sides decrease rapidly, which is critical for posterior contraction. In contrast, the inverse gamma distribution, the conjugate prior for the Gaussian likelihood, has a light lower tail but a heavy upper tail. From a Bayesian perspective, the inverse Gaussian prior reflects the analyst's belief that the parameter will not take extremely large or small values. This prior distribution for the variance parameter has also been employed in [52, 42, 60].
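The inverse Gaussian density, written as reconstructed in (16), can be evaluated directly; the sketch below is meant only to illustrate its rapidly decaying tails on both sides (the hyperparameter values and the evaluation grid are arbitrary illustrative choices).

```python
import numpy as np

def inverse_gaussian_pdf(x, mean, shape):
    """Density of the inverse Gaussian distribution IG(mean, shape), cf. (16)."""
    x = np.asarray(x, dtype=float)
    return np.sqrt(shape / (2.0 * np.pi * x**3)) * np.exp(
        -shape * (x - mean) ** 2 / (2.0 * mean**2 * x)
    )

# Both tails decay quickly, in contrast to the inverse gamma prior,
# whose upper tail is heavy.
grid = np.array([0.05, 0.5, 1.0, 5.0, 20.0])
print(inverse_gaussian_pdf(grid, mean=1.0, shape=1.0))
```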
Theorem 2.
Consider the regression model (15) with a covariate distribution satisfying Assumptions 1-3 and prior distributions on $(\theta, \sigma^2)$ given by (6) and (16). Then, for a sufficiently large constant $M > 0$, the following convergences hold as $n \to \infty$:
(17)
(18)
where $(\theta^*, \sigma^{*2})$ denotes the true parameter pair. Furthermore, if the eigenvalues of $\Sigma$ satisfy an additional decay condition, then the contraction rate can be explicitly characterized as
(19)
This theorem provides the posterior contraction for both the predictor and the error variance in the two-parameter Gaussian model. The contraction of the predictor is with respect to the $L^2(P_X)$ distance, while that of the error variance is with respect to the absolute difference.
The extension to the two-parameter Gaussian model is particularly relevant to applications where the error variance is unknown and needs to be estimated from the data. By establishing joint posterior contraction for $(\theta, \sigma^2)$, Theorem 2 presents a theoretical justification for Bayesian inference in such models and offers guidance on the choice of prior distributions that can adapt to the unknown error variance.
5. Simulation
This section confirms, through simulations, the statements of the theorems presented in §3 and §4. The posterior predictive distribution of the proposed method is computed using variational inference; the detailed algorithm is deferred to the Appendix.
5.1. Logistic Regression
Here, we consider a binary classification scenario where the response variable takes values in $\{0, 1\}$. The data are generated according to the following model:
(20) $\quad Y_i \mid X_i \sim \mathrm{Bernoulli}\big( \sigma(X_i^\top \theta^*) \big),$
where $\sigma(t) = 1/(1 + e^{-t})$ is the logistic function. To ensure an overparameterized setting, the coefficient vector $\theta^*$ and the covariates are generated with the dimension $p$ exceeding the sample size $n$. Two settings are considered for the covariate: (A) a Gaussian covariate; (B) a Laplacian covariate, where the eigenvalues of $\Sigma$ follow a prescribed decay. Then, each $Y_i$ corresponding to $X_i$ and $\theta^*$ is sampled from (20).
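A minimal sketch of the data-generating process (20) is given below; the polynomially decaying spectrum and the rescaling of the Laplace draws are illustrative stand-ins for the paper's settings (A) and (B), whose exact specifications are not reproduced here.

```python
import numpy as np

def generate_logistic_data(n, p, theta_star, rng, covariate="gaussian"):
    """Generate covariates with a decaying spectrum and binary responses as in (20)."""
    lam = 1.0 / np.arange(1, p + 1)                    # assumed eigenvalue decay
    if covariate == "gaussian":
        Z = rng.standard_normal((n, p))
    else:                                              # Laplacian covariate with unit variance
        Z = rng.laplace(scale=1.0 / np.sqrt(2.0), size=(n, p))
    X = Z * np.sqrt(lam)                               # coordinates scaled by sqrt of eigenvalues
    prob = 1.0 / (1.0 + np.exp(-(X @ theta_star)))     # logistic link
    y = rng.binomial(1, prob)
    return X, y

rng = np.random.default_rng(0)
n, p = 200, 1000
X, y = generate_logistic_data(n, p, theta_star=rng.standard_normal(p), rng=rng)
```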
For every sample size, multiple datasets were repeatedly generated through the above procedure, and each was analyzed to determine the behavior of the proposed method as the sample size increases. In each case, a test dataset was generated to evaluate the performance of the trained classifier. The performance of the Bayesian predictions was evaluated employing four widely used metrics: the 0-1 loss, which quantifies the misclassification rate; the area under the ROC curve (AUC), which assesses the discriminative ability of the classifier; the rate of unconfident misclassifications (UM), which measures the proportion of unconfident predictions among misclassified instances; and the confident correct (CC) prediction rate, which calculates the proportion of accurate predictions among confident ones. We refer to a prediction as confident when its predictive interval does not cross the threshold of 0.5, and as unconfident otherwise. The latter two metrics are helpful when building systems that defer action when there is no confidence in the prediction. Additionally, principal component regression (PCR) [5] was performed as a baseline method, with the number of principal components selected by 5-fold cross-validation.
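The two uncertainty-related metrics can be computed from posterior-predictive draws as sketched below; the 95% predictive interval is an illustrative choice, and `prob_samples` is assumed to hold success probabilities drawn from the posterior predictive for each test point.

```python
import numpy as np

def confidence_metrics(prob_samples, y_true, level=0.95):
    """UM and CC from posterior-predictive success probabilities of shape (n_draws, n_test).
    A prediction is 'confident' when its predictive interval does not cross 0.5."""
    alpha = 1.0 - level
    lo, hi = np.quantile(prob_samples, [alpha / 2, 1 - alpha / 2], axis=0)
    point = prob_samples.mean(axis=0) > 0.5            # point prediction per test instance
    confident = (lo > 0.5) | (hi < 0.5)                # interval entirely on one side of 0.5
    misclassified = point != y_true
    um = np.mean(~confident[misclassified]) if misclassified.any() else np.nan
    cc = np.mean(point[confident] == y_true[confident]) if confident.any() else np.nan
    return um, cc
```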

Figure 1 illustrates the results, where the reported values are averaged over the datasets. As the sample size increases, the 0-1 loss exhibits a consistent decrease in both settings, indicating an improvement in classification accuracy. This trend persists even in the overparameterized setting, aligning with the theoretical findings. The AUC metric also shows a steady increase with sample size, suggesting an enhanced ability to discriminate between the two classes. Our method outperformed PCR, and these results underscore the importance of overparameterization for prediction. The UM and CC metrics provide insights into the uncertainty estimation of the proposed method. UM consistently presents a high value, indicating that the method can warn us of a lack of confidence when it misclassifies. The CC also presents a similarly high value, suggesting that confident predictions are trustworthy. These observations suggest that the proposed method effectively captures the uncertainty associated with predictions, thereby leveraging it to make more informed decisions.
5.2. Nonlinear regression with Gaussian noise
In this subsection, the scenario of true responses taking values along the positive real line is considered. Specifically, the data are generated as follows:
(21) $\quad Y_i = f(X_i^\top \theta^*) + \varepsilon_i, \qquad \varepsilon_i \sim N(0, \sigma^{*2}),$
where $f(t) = \log(1 + e^t)$ is the softplus function. For each sample size, we set the dimension larger than the sample size and sample the coefficient and covariate vectors in the same manner as in the previous experiment. Then, the scalar response $Y_i$ is generated by (21).
For every sample size, distinct datasets were generated using the above procedure, and on each one, the change in Bayesian prediction performance was examined as the training data size increased. For a newly generated test dataset, the accuracy of the Bayesian predictions was assessed using three widely adopted indicators: the root-mean-squared error (RMSE), measuring the average deviation between predicted and true values; the coverage probability (CP), evaluating the reliability of the prediction intervals by calculating the proportion of true values falling within them; and the average length (AL), quantifying the width of the prediction intervals and providing insight into the precision of the predictions. Moreover, PCR was implemented, selecting the number of principal components by 5-fold cross-validation.
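These three indicators can likewise be computed from posterior-predictive draws, as in the sketch below (the 95% interval level and the array layout are assumptions made for illustration).

```python
import numpy as np

def regression_metrics(pred_samples, y_true, level=0.95):
    """RMSE, coverage probability, and average interval length from posterior-predictive
    draws of shape (n_draws, n_test)."""
    alpha = 1.0 - level
    mean_pred = pred_samples.mean(axis=0)
    rmse = np.sqrt(np.mean((mean_pred - y_true) ** 2))
    lo, hi = np.quantile(pred_samples, [alpha / 2, 1 - alpha / 2], axis=0)
    cp = np.mean((y_true >= lo) & (y_true <= hi))      # coverage probability
    al = np.mean(hi - lo)                              # average interval length
    return rmse, cp, al
```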

The results, averaged over the datasets, are shown in Figure 2. First, the RMSE decreases monotonically as the sample size increases. Even when the dimension is large and the setting is clearly overparameterized, the fact that accuracy continues to improve supports the theoretical results experimentally. Also, our method outperforms PCR, underscoring its advantages for prediction problems. Next, we focus on CP and AL, which are indicators related to prediction intervals, with CP representing the reliability of the prediction interval and AL representing its width. In the figure, the coverage is not sufficient at the smallest sample size; after that, CP stabilizes at a high value. AL also increases slightly but not by much, implying that the intervals are not trivial (regardless of the sample size, the mean and variance of the response in the test data remain of roughly the same order). These results suggest that the proposed method properly quantifies the uncertainty when constructing prediction intervals.
6. Data Analysis
6.1. Dataset
We apply the proposed method to the ARCENE dataset, which was originally curated for the NIPS 2003 feature selection challenge as a binary classification problem. The dataset consists of mass-spectrometric measurements from cancer patients and healthy individuals, with the goal of distinguishing between the two groups based on these measurements. The ARCENE dataset, accessible through the UC Irvine Machine Learning Repository [17], was constructed by merging three separate datasets: National Cancer Institute (NCI) ovarian cancer data, NCI prostate cancer data, and Eastern Virginia Medical School (EVMS) prostate cancer data. In total, the public dataset contains 200 labeled samples, comprising cancer and normal (control) samples, each described by 10,000 continuous covariates, of which 7000 dimensions correspond to real mass-spectrometry peaks and 3000 dimensions represent random noise variables (probes). The primary challenges posed by the dataset lie in its high dimensionality and the presence of overlapping classes. The objective here is to discriminate between cancerous and normal samples.
6.2. Experimental Setting
The performance of six distinct methods was compared.
• Logistic Regression with the effective spectral (ES) prior distribution: The proposed Bayesian methodology, as described in Section 2.2, was applied to the logistic regression model to make predictions on unseen data.
• Logistic Regression with the horseshoe (HS) prior: Logistic regression was combined with the horseshoe prior distribution [13], a standard sparsity-inducing choice for high-dimensional Bayesian regression.
• Logistic Regression with the normal (NM) prior: We investigated the performance of logistic regression with a standard normal prior distribution, which is the most commonly used prior and serves as a ridge-like regularization [16]. This prior provides flexibility in modeling the coefficient distribution and can be easily implemented using the Pyro probabilistic programming language [3].
• Logistic Regression with the Laplace (LP) prior: The performance of logistic regression was also explored using a standard Laplace prior distribution, which promotes sparsity in the coefficient estimates, as in the Bayesian Lasso [44]. This prior has proven effective in high-dimensional settings and can be efficiently implemented using Pyro, leveraging its support for non-Gaussian prior distributions [3].
• LightGBM: As a baseline method for tabular prediction tasks, LightGBM [28], an efficient and scalable gradient boosting framework, was implemented using the "lightgbm" package in Python; its hyperparameters were tuned by Bayesian optimization [49, 15] via the "Optuna" package [1].
• Principal Component Regression: As a representative method for high-dimensional data, PCR was implemented, with the number of principal components selected by 5-fold cross-validation.
For all methods, performance evaluation was conducted using 5-fold cross-validation. In each fold, the model was trained on the training data, and predictions were made on the test data. As in the previous section, the evaluation metrics used to assess predictive performance were Accuracy, AUC, UM, and CC.
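A sketch of the evaluation loop is given below; `fit` and `predict_proba_samples` stand in for any of the compared methods, the stratified splitting is an illustrative choice, and only the accuracy computation is shown.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def cross_validate_accuracy(fit, predict_proba_samples, X, y, n_splits=5, seed=0):
    """5-fold evaluation: fit on the training folds, predict on the held-out fold,
    and average the fold-wise accuracies."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    accs = []
    for train_idx, test_idx in skf.split(X, y):
        model = fit(X[train_idx], y[train_idx])
        prob_samples = predict_proba_samples(model, X[test_idx])   # (n_draws, n_test)
        point = prob_samples.mean(axis=0) > 0.5
        accs.append(np.mean(point == y[test_idx]))
    return float(np.mean(accs))
```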
6.3. Results
The results of the experiment are presented in Table 1. The entries represent the average values of the metrics over 5-fold cross-validation. Overall, among all the methods, logistic regression with the ES prior distribution achieves the highest accuracy and AUC, demonstrating its effectiveness in capturing the underlying structure of the high-dimensional ARCENE dataset. The superior performance of the ES prior can be attributed to its ability to adapt to the intrinsic dimensionality of the data by incorporating the empirical spectral information of the covariance matrix [60].
Let us take a closer look at the comparison. The horseshoe prior, known for its flexibility and sparsity-inducing properties [13], performs well, with the second-highest accuracy and AUC. This suggests that the regression on the ARCENE dataset may exhibit some degree of sparsity, which the HS prior is able to capture effectively. However, the ES prior outperforms the HS prior, indicating that the adaptive nature of the ES prior is more beneficial in this context. The fact that some covariates are essentially irrelevant to the regression and have zero coefficients, while the coefficient vector as a whole is not highly sparse, may also explain why the proposed method prevails. The LP and NM priors, while still providing competitive results, do not perform as well as the ES and HS priors. Nevertheless, their performance is still superior to the LightGBM baseline, highlighting the advantages of the Bayesian approach in handling excess covariates. PCR effectively selects relevant components from a large number of covariates, although its results are only moderate and do not match those of our method, which retains all covariates without reduction.
A notable aspect of the results is the uncertainty estimation of the Bayesian methods. The ES prior achieves the highest UM and CC values, indicating that it is highly effective in identifying uncertainty and providing reliable confidence estimates. This is particularly valuable in practical applications, where the ability to defer decision-making when the model is uncertain can reduce the risk of misclassification. The HS prior also demonstrates good uncertainty quantification, with the second-highest UM value. However, the LP and NM priors exhibit very low UM values, suggesting that they may be overconfident in their predictions. Note that the LightGBM method does not provide uncertainty estimation, as it is not a probabilistic model.
Model | Accuracy | AUC | UM | CC |
---|---|---|---|---|
Logistic with ES | 0.895 | 0.950 | 0.860 | 0.963 |
Logistic with HS | 0.845 | 0.900 | 0.632 | 0.908 |
Logistic with LP | 0.805 | 0.886 | 0.009 | 0.936 |
Logistic with NM | 0.815 | 0.883 | 0.008 | 0.936 |
LightGBM | 0.775 | 0.858 | – | – |
PCR | 0.810 | 0.878 | – | – |
7. Discussion
This study introduces a novel Bayesian approach for prediction in overparameterized generalized linear models, providing a theoretical foundation for practical applications. The primary theoretical contribution lies in establishing posterior contraction under appropriate conditions, validating the use of the posterior distribution as a reliable approximation of the true data-generating distribution. Empirical evidence from simulations and real-world datasets corroborates the effectiveness of the proposed method in delivering accurate predictions and reliable uncertainty estimates.
In the field of high-dimensional Bayesian inference, the proposed approach stands out by directly addressing non-sparsity while providing theoretical guarantees for prediction under nonlinearity. This unique perspective paves the way for Bayesian inference in overparameterized models.
This study marks a significant advance in overparameterization theory by achieving consistency in nonlinear regression problems. The robustness of posterior contraction theory to model complexity (nonlinearity) plays a pivotal role in attaining consistency, necessitating control of the covering number, decay of the prior distribution in the tail region, and concentration of the prior distribution around the true value of the parameter.
Finally, the limitations of the paper can help identify potential areas for improvement. The theoretical analysis hinges on specific assumptions regarding the covariance matrix, which may not always hold in real-world scenarios. Future research can focus on relaxing these assumptions to extend the results to more general settings, as deep neural networks generalize to a wide range of data. Moreover, the computational complexity associated with sampling from the posterior distribution can pose challenges for large-scale datasets. Further improvements in uncertainty estimation can also be combined with more flexible approximation approaches, such as normalizing flow [45, 46] and Stein variational gradient descent [36].
Computer Programs
The computer programs used in the paper for the numerical experiments in Section 5 and the application in Section 6 are publicly available on GitHub repository https://github.com/TomWaka/BA-Overparameterized-NonLinReg. The computational algorithm is outlined in the appendix.
Acknowledgements
I would like to express my sincere gratitude to my advisor, Masaaki Imaizumi, for his patient and rigorous supervision. I am also grateful to Shonosuke Sugasawa for his helpful suggestions. This work was supported by JSPS KAKENHI (22J21090) and JST ACT-X (JPMJAX23CS).
Appendix A Proof of the Main Results
In this section, proofs are provided for the main results presented in the previous sections. The key is to establish the posterior contraction rate for the squared norm of the difference between the true and estimated mean functions, i.e., $\| f(X^\top \theta) - f(X^\top \theta^*) \|_{L^2(P_X)}^2$. Since the link function is assumed to be Lipschitz continuous, we have the following upper bound (up to a constant factor):
(22) $\quad \big\| f(X^\top \theta) - f(X^\top \theta^*) \big\|_{L^2(P_X)}^2 \lesssim \| \theta - \theta^* \|_{\Sigma}^2,$
where $\| \cdot \|_{\Sigma}$ denotes the norm induced by the covariance matrix $\Sigma$. This bound relates the contraction of the posterior distribution in the $L^2(P_X)$ norm to the contraction in the $\Sigma$-weighted norm, which is similar to the predictive risk in linear regression models [6, 40, 12]. Therefore, it suffices to show that the posterior distribution of $\| \theta - \theta^* \|_{\Sigma}^2$ concentrates around zero at a certain rate.
To this end, we invoke a general result on posterior contraction rates from [18]. Let $\Theta$ be the parameter space equipped with the distance induced by the $\Sigma$-weighted norm, and define a sequence of restricted parameter spaces
(23) $\quad \Theta_n = \{ \theta \in \Theta : \|\theta\|_2 \le r_n \},$
where $r_n$ is a radius that grows appropriately as $n \to \infty$. Recall from Assumption 1 that the empirical covariance matrix concentrates around $\Sigma$ with probability at least $1 - \beta_n$, which controls the randomness of the posterior distribution.
Theorem 2.1 of [18] states that the posterior contraction in Theorem 1 holds if there exists a sequence $\epsilon_n \to 0$ with $n \epsilon_n^2 \to \infty$ such that the following three conditions are satisfied for any sufficiently large $n$ and some constant $C > 0$:
(24) $\quad \log N(\epsilon_n, \Theta_n, \|\cdot\|_{\Sigma}) \le n \epsilon_n^2,$
(25) $\quad \Pi(\Theta \setminus \Theta_n) \le \exp\{-(C + 4)\, n \epsilon_n^2\},$
(26) $\quad \Pi\big( \theta : \mathrm{KL}(P_{\theta^*} \| P_{\theta}) \le \epsilon_n^2,\ V(P_{\theta^*} \| P_{\theta}) \le \epsilon_n^2 \big) \ge \exp(-C n \epsilon_n^2).$
The first two conditions (24) and (25) are determined by the choice of the restricted parameter spaces and the prior distribution. Since these conditions have been verified in [60] for the same choice of parameter spaces and prior distribution, we omit their proofs here.
To establish the remaining condition (26), we observe that, by Assumption 3, the KL divergence and the KL variation between $P_{\theta^*}$ and $P_{\theta}$ are bounded, up to constant factors, by $\| \theta - \theta^* \|_{\Sigma}^2$. Therefore, it suffices to show that
(27) $\quad \Pi\big( \theta : \| \theta - \theta^* \|_{\Sigma}^2 \le \epsilon_n^2 \big) \ge \exp(-C n \epsilon_n^2).$
The proof of (27) follows from a similar argument as in [60].
The extension to the case where the error variance is unknown is straightforward. Taking the inverse Gaussian distribution as the prior for the variance parameter, the posterior contraction still holds, because the likelihood dominates as the sample size increases and the remaining part of the parameter space carries only negligible prior mass.
Combining the above arguments, we conclude that the conditions of Theorem 2.1 in [18] are satisfied, and hence the posterior contraction in Theorem 1 holds. The explicit characterization of the contraction rate in terms of the eigenvalues of $\Sigma$ follows from a careful analysis of the interplay between the concentration rate in Assumption 1, the upper and lower bounds in Assumption 2, and the additional assumption on the eigenvalue decay stated in Theorem 1.
Appendix B Consistency as a Distribution and Point Estimator
Following the discussion on posterior distribution contraction, the convergence of distributions and estimators is further explored using various metrics.
To extend the discussion, let us revisit Example 1.
Example 2.
Recall that $\mathrm{KL}$ denotes the KL divergence, $V$ the KL variation, and $d_{\mathrm{H}}$ the Hellinger distance between models with different parameters.
(1) For normal linear regression with known variance, we have
(2) For binary regression with a logistic link, we have
(3) For Poisson regression with a link function, we have
where , , and . Note that the factors, and , are not necessarily bounded.
As shown in the previous section, the consistency of the predictor holds. Consequently, in the above examples, if the distance between distributions can be bounded by a constant multiple of this quantity, the consistency of the posterior distribution to the true data-generating distribution is obtained with respect to the KL divergence, the KL variation, and the Hellinger distance. Furthermore, because the total variation and Hellinger distances induce the same topology, posterior consistency also holds in terms of the total variation distance.
In connection with distributional convergence, we now shift our focus to the convergence of the posterior mean. If the parameter space is convex and the distance on it is bounded and convex, then the posterior mean serves as a consistent estimator (e.g., Theorem 6.8 in [21]). The convexity assumption on the parameter space is often satisfied, and the boundedness and convexity of the distance hold for the total variation distance. Hence, in the above examples, the consistency of the posterior mean is achieved.
Appendix C Computational Method
In this section, the computational method for the proposed approach is described. Stochastic variational inference [20, 22, 4] is employed to approximate the posterior distribution of the model parameters. Since a closed-form expression of the KL divergence is not available, the reparameterization trick [29, 47] was used to enable efficient gradient-based optimization. The reparameterization trick allows the variational distribution to be expressed in terms of a deterministic function of the variational parameters and a random variable, making the gradients with respect to the variational parameters computable. The optimization was performed using the Adam optimizer [26].
C.1. Logistic Regression Example
To illustrate the computational procedure, let us consider an example of logistic regression. The model components are as follows:
(1) Likelihood: $p(y_i \mid X_i, \theta) = \sigma(X_i^\top \theta)^{y_i} \{ 1 - \sigma(X_i^\top \theta) \}^{1 - y_i}$, where $\sigma$ is the logistic function;
(2) Prior: $\pi(\theta) \propto \exp\big( -\tfrac{1}{2} \theta^\top \hat{\Sigma}_K^{\dagger} \theta \big)\, \mathbf{1}\{ \|\theta\|_2 \le R \}$, as in (6);
(3) Posterior: $\pi(\theta \mid D_2) \propto \pi(\theta) \prod_{i \in D_2} p(y_i \mid X_i, \theta)$;
(4) Approximator: $q_{m, S}(\theta) = N(\theta; m, S)$, a Gaussian variational family with mean $m$ and covariance $S$.
Indeed, the prior distribution is not simply a normal distribution but has a support constraint. However, it can be approximated by a normal distribution if the truncation radius $R$ is taken large enough. Moreover, a procedure that computes the posterior distribution without support constraints and then projects it onto the constrained region was justified by [50].
Our objective is to find the optimal variational parameters $m$ and $S$ that minimize the KL divergence between the variational approximation $q_{m, S}$ and the true posterior $\pi(\cdot \mid D_2)$:
(28)
where $\mathrm{KL}(q_{m,S} \,\|\, \pi(\cdot \mid D_2))$ denotes the KL divergence between the two probability measures with densities $q_{m,S}$ and $\pi(\cdot \mid D_2)$. The KL divergence can be expressed as
(29)
To compute the gradients of the KL divergence with respect to the variational parameters, we employ the reparameterization trick. Specifically, denoting by $\theta$ a sample from the variational distribution $q_{m,S}$, we parameterize it as $\theta = m + L\varepsilon$, where $\varepsilon$ is a sample from a standard Gaussian distribution and $L$ is obtained from a decomposition of $S$, i.e., $S = LL^\top$. With this reparameterization, the KL divergence can be approximated by an unbiased estimate
(30)
Now, the gradients of the KL divergence can be computed with respect to the variational parameters. The gradient with respect to $m$ is given by
(31)–(35)
where $\sigma$ is the logistic function. Similarly, the gradient with respect to the covariance parameter is given by
(36)–(44)
where $\hat{\Sigma}_K^{\dagger}$ denotes the pseudoinverse of $\hat{\Sigma}_K$.
The derived gradients can be used in conjunction with the Adam optimizer to update the variational parameters $m$ and $S$. The optimization procedure iteratively updates the variational parameters until convergence or until the maximum number of iterations is reached.
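A compact PyTorch sketch of this procedure for the logistic example is given below. It uses automatic differentiation in place of the hand-derived gradients, a diagonal covariance factor instead of a full decomposition $S = LL^\top$ for brevity, and ignores the truncation of the prior as discussed above; all names and hyperparameter values are illustrative assumptions.

```python
import torch

def fit_variational_logistic(X2, y2, Sigma_K_pinv, n_iter=2000, n_mc=8, lr=1e-2, seed=0):
    """Stochastic variational inference with the reparameterization trick and Adam:
    q(theta) = N(m, diag(s^2)), theta = m + s * eps, minimizing a Monte Carlo
    estimate of KL(q || posterior) up to an additive constant."""
    torch.manual_seed(seed)
    X2 = torch.as_tensor(X2, dtype=torch.float64)
    y2 = torch.as_tensor(y2, dtype=torch.float64)
    P = torch.as_tensor(Sigma_K_pinv, dtype=torch.float64)
    p = X2.shape[1]
    m = torch.zeros(p, dtype=torch.float64, requires_grad=True)
    log_s = torch.full((p,), -2.0, dtype=torch.float64, requires_grad=True)
    opt = torch.optim.Adam([m, log_s], lr=lr)
    for _ in range(n_iter):
        opt.zero_grad()
        eps = torch.randn(n_mc, p, dtype=torch.float64)
        theta = m + eps * torch.exp(log_s)                    # reparameterization trick
        eta = theta @ X2.T                                    # (n_mc, n_obs) linear predictors
        log_lik = (y2 * eta - torch.nn.functional.softplus(eta)).sum(dim=1)
        log_prior = -0.5 * torch.einsum("bi,ij,bj->b", theta, P, theta)
        entropy = log_s.sum()                                 # Gaussian entropy up to a constant
        loss = -(log_lik + log_prior).mean() - entropy        # Monte Carlo KL up to a constant
        loss.backward()
        opt.step()
    return m.detach(), torch.exp(log_s).detach()
```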
References
- ASY+ [19] Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. Optuna: A Next-generation Hyperparameter Optimization Framework. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2019.
- BCG [21] Sayantan Banerjee, Ismaël Castillo, and Subhashis Ghosal. Bayesian inference in high-dimensional models. arXiv preprint arXiv:2101.04491, 2021.
- BCJ+ [19] Eli Bingham, Jonathan P. Chen, Martin Jankowiak, Fritz Obermeyer, Neeraj Pradhan, Theofanis Karaletsos, Rohit Singh, Paul Szerlip, Paul Horsfall, and Noah D. Goodman. Pyro: Deep Universal Probabilistic Programming. Journal of Machine Learning Research, 20(28):1–6, 2019.
- BCKW [15] Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight Uncertainty in Neural Network. In Proceedings of the 32nd International Conference on Machine Learning, pages 1613–1622. PMLR, 2015.
- Bis [06] Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
- BLLT [20] Peter L Bartlett, Philip M Long, Gábor Lugosi, and Alexander Tsigler. Benign overfitting in linear regression. Proceedings of the National Academy of Sciences, 117(48):30063–30070, 2020.
- CCBG [22] Yuan Cao, Zixiang Chen, Misha Belkin, and Quanquan Gu. Benign Overfitting in Two-layer Convolutional Neural Networks. In Advances in Neural Information Processing Systems, 2022.
- CCD+ [18] Victor Chernozhukov, Denis Chetverikov, Mert Demirer, Esther Duflo, Christian Hansen, Whitney Newey, and James Robins. Double/debiased machine learning for treatment and structural parameters. The Econometrics Journal, 21(1):C1–C68, 01 2018.
- CHIK [08] Ming-Hui Chen, Lan Huang, Joseph G Ibrahim, and Sungduk Kim. Bayesian Variable Selection and Computation for Generalized Linear Models with Conjugate Priors. Bayesian Analysis, 3(3):585, 2008.
- CL [20] Xuan Cao and Kyoungjae Lee. Variable Selection Using Nonlocal Priors in High-Dimensional Generalized Linear Models With Application to fMRI Data Analysis. Entropy, 22(8):807, 2020.
- CLB [22] Niladri S. Chatterji, Philip M. Long, and Peter L. Bartlett. The Interplay Between Implicit Bias and Benign Overfitting in Two-Layer Linear Networks. Journal of Machine Learning Research, 23(263):1–48, 2022.
- CM [22] Chen Cheng and Andrea Montanari. Dimension free ridge regression. arXiv preprint arXiv:2210.08571, 2022.
- CPS [09] Carlos M. Carvalho, Nicholas G. Polson, and James G. Scott. Handling Sparsity via the Horseshoe. In Proceedings of the Twelth International Conference on Artificial Intelligence and Statistics, volume 5 of Proceedings of Machine Learning Research, pages 73–80. PMLR, 2009.
- FCB [22] Spencer Frei, Niladri S Chatterji, and Peter Bartlett. Benign Overfitting without Linearity: Neural Network Classifiers Trained by Gradient Descent for Noisy Linear Data. In Proceedings of Thirty Fifth Conference on Learning Theory, volume 178, pages 2668–2703. PMLR, 2022.
- Fra [18] Peter I Frazier. A tutorial on Bayesian Optimization. arXiv preprint arXiv:1807.02811, 2018.
- GCS+ [13] A. Gelman, J.B. Carlin, H.S. Stern, D.B. Dunson, A. Vehtari, and D.B. Rubin. Bayesian Data Analysis. Chapman and Hall/CRC, 3 edition, 2013.
- GGBHD [08] Isabelle Guyon, Steve Gunn, Asa Ben-Hur, and Gideon Dror. Arcene. UCI Machine Learning Repository, 2008. DOI: https://doi.org/10.24432/C58P55.
- GGvdV [00] Subhashis Ghosal, Jayanta K Ghosh, and Aad W van der Vaart. Convergence rates of posterior distributions. The Annals of Statistics, pages 500–531, 2000.
- Gho [97] Subhashis Ghosal. Normal approximation to the posterior distribution for generalized linear models with many covariates. Mathematical Methods of Statistics, 6(3):332–348, 1997.
- Gra [11] Alex Graves. Practical Variational Inference for Neural Networks. Advances in Neural Information Processing Systems, 24, 2011.
- GvdV [17] Subhashis Ghosal and Aad van der Vaart. Fundamentals of nonparametric Bayesian inference, volume 44. Cambridge University Press, 2017.
- HBWP [13] Matthew D Hoffman, David M Blei, Chong Wang, and John Paisley. Stochastic variational inference. Journal of Machine Learning Research, 2013.
- Jeo [22] Seonghyun Jeong. Posterior contraction in group sparse logit models for categorical responses. Journal of Statistical Planning and Inference, 219:266–278, 2022.
- JG [21] Seonghyun Jeong and Subhashis Ghosal. Posterior contraction in sparse generalized linear models. Biometrika, 108(2):367–379, 2021.
- Jia [07] Wenxin Jiang. Bayesian variable selection for high dimensional generalized linear models: Convergence rates of the fitted densities. The Annals of Statistics, 35(4):1487 – 1511, 2007.
- KB [14] Diederik P Kingma and Jimmy Lei Ba. Adam: A method for Stochastic Optimization. In The Third International Conference on Learning Representations, pages 1–15, 2014.
- KCCG [23] Yiwen Kou, Zixiang Chen, Yuanzhou Chen, and Quanquan Gu. Benign Overfitting in Two-layer ReLU Convolutional Neural Networks. In Proceedings of the 40th International Conference on Machine Learning, pages 17615–17659. PMLR, 2023.
- KMF+ [17] Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, and Tie-Yan Liu. LightGBM: A Highly Efficient Gradient Boosting Decision Tree. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.
- KW [14] Diederik P Kingma and Max Welling. Auto-Encoding Variational Bayes. In The Third International Conference on Learning Representations, 2014.
- KZSS [21] Frederic Koehler, Lijia Zhou, Danica J. Sutherland, and Nathan Srebro. Uniform Convergence of Interpolators: Gaussian Width, Norm Bounds and Benign Overfitting. In Advances in Neural Information Processing Systems, volume 34. Curran Associates, Inc., 2021.
- LC [21] Kyoungjae Lee and Xuan Cao. Bayesian group selection in logistic regression with application to MRI data analysis. Biometrics, 77(2):391–400, 2021.
- LR [20] Tengyuan Liang and Alexander Rakhlin. Just interpolate: Kernel “Ridgeless” regression can generalize. The Annals of Statistics, 48(3):1329–1347, 2020.
- LRZ [20] Tengyuan Liang, Alexander Rakhlin, and Xiyu Zhai. On the Multiple Descent of Minimum-Norm Interpolants and Restricted Lower Isometry of Kernels. In Proceedings of Thirty Third Conference on Learning Theory, volume 125, pages 2683–2711. PMLR, 2020.
- LSS [23] Zhu Li, Weijie J Su, and Dino Sejdinovic. Benign Overfitting and Noisy Features. Journal of the American Statistical Association, 118(544):2876–2888, 2023.
- LSY [13] Faming Liang, Qifan Song, and Kai Yu. Bayesian Subset Modeling for High-Dimensional Generalized Linear Models. Journal of the American Statistical Association, 108(502):589–606, 2013.
- LW [16] Qiang Liu and Dilin Wang. Stein Variational Gradient Descent: A General Purpose Bayesian Inference Algorithm. In Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc., 2016.
- LW [21] Yue Li and Yuting Wei. Minimum $\ell_1$-norm interpolators: Precise asymptotics and multiple descent. arXiv preprint arXiv:2110.09502, 2021.
- McC [19] Peter McCullagh. Generalized Linear Models. Routledge, 2 edition, 2019.
- MCM [19] Arnab Kumar Maity, Raymond J. Carroll, and Bani K. Mallick. Integration of Survival and Binary Data for Variable Selection and Prediction: A Bayesian Approach. Journal of the Royal Statistical Society Series C: Applied Statistics, 68(5):1577–1595, 2019.
- MM [23] Theodor Misiakiewicz and Andrea Montanari. Six lectures on linearized neural networks. arXiv preprint arXiv:2308.13431, 2023.
- NI [22] Shogo Nakakita and Masaaki Imaizumi. Benign overfitting in time series linear model with over-parameterization. arXiv preprint arXiv:2204.08369, 2022.
- NJG [20] Bo Ning, Seonghyun Jeong, and Subhashis Ghosal. Bayesian linear regression for multivariate responses under group sparsity. Bernoulli, 26(3):2353–2382, 2020.
- NSH [18] Naveen N Narisetty, Juan Shen, and Xuming He. Skinny Gibbs: A Consistent and Scalable Gibbs Sampler for Model Selection. Journal of the American Statistical Association, 2018.
- PC [08] Trevor Park and George Casella. The Bayesian lasso. Journal of the American Statistical Association, 103(482):681–686, 2008.
- PNR+ [21] George Papamakarios, Eric Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, and Balaji Lakshminarayanan. Normalizing Flows for Probabilistic Modeling and Inference. Journal of Machine Learning Research, 22(57):1–64, 2021.
- RM [15] Danilo Rezende and Shakir Mohamed. Variational Inference with Normalizing Flows. In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 1530–1538, Lille, France, 07–09 Jul 2015. PMLR.
- RMW [14] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic Backpropagation and Approximate Inference in Deep Generative Models. In Proceedings of the 31st International Conference on Machine Learning, pages 1278–1286. PMLR, 2014.
- Sha [22] Ohad Shamir. The Implicit Bias of Benign Overfitting. In Proceedings of Thirty Fifth Conference on Learning Theory, volume 178, pages 448–478. PMLR, 2022.
- SLA [12] Jasper Snoek, Hugo Larochelle, and Ryan P Adams. Practical Bayesian Optimization of Machine Learning Algorithms. In Advances in Neural Information Processing Systems, volume 25. Curran Associates, Inc., 2012.
- SPD [22] Deborshee Sen, Sayan Patra, and David Dunson. Constrained inference through posterior projections. arXiv preprint arXiv:1812.05741, 2022.
- SS [24] Keita Suzuki and Taiji Suzuki. Optimal criterion for feature learning of two-layer linear neural network in high dimensional interpolation regime. In The Twelfth International Conference on Learning Representations, 2024.
- SvdVvZ [13] BT Szabó, AW van der Vaart, and JH van Zanten. Empirical Bayes scaling of Gaussian priors in the white noise model. Electronic Journal of Statistics, 7:991–1018, 2013.
- TB [23] Alexander Tsigler and Peter L. Bartlett. Benign overfitting in ridge regression. Journal of Machine Learning Research, 24(123):1–76, 2023.
- TI [23] Toshiki Tsuda and Masaaki Imaizumi. Benign overfitting of non-sparse high-dimensional linear regression with correlated noise. arXiv preprint arXiv:2304.04037, 2023.
- TM [23] Yiqi Tang and Ryan Martin. Empirical Bayes inference in sparse high-dimensional generalized linear models. arXiv preprint arXiv:2303.07854, 2023.
- Vap [99] Vladimir N Vapnik. An overview of statistical learning theory. IEEE Transactions on Neural Networks, 10(5):988–999, 1999.
- Wal [02] Guenther Walther. Detecting the Presence of Mixing with Multiscale Maximum Likelihood. Journal of the American Statistical Association, 97(458):508–513, 2002.
- Wal [09] Guenther Walther. Inference and Modeling with Log-concave Distributions. Statistical Science, pages 319–327, 2009.
- WDY [22] Guillaume Wang, Konstantin Donhauser, and Fanny Yang. Tight bounds for minimum $\ell_1$-norm interpolation of noisy data. In Gustau Camps-Valls, Francisco J. R. Ruiz, and Isabel Valera, editors, Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, volume 151 of Proceedings of Machine Learning Research, pages 10572–10602. PMLR, 28–30 Mar 2022.
- WI [23] Tomoya Wakayama and Masaaki Imaizumi. Bayesian Analysis for Over-parameterized Linear Model without Sparsity. arXiv preprint arXiv:2305.15754, 2023.
- YO [20] Gilad Yehudai and Ohad Shamir. Learning a Single Neuron with Gradient Methods. In Proceedings of Thirty Third Conference on Learning Theory, volume 125 of Proceedings of Machine Learning Research, pages 3756–3786. PMLR, 09–12 Jul 2020.
- ZBH+ [21] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning (still) requires rethinking generalization. Communications of the ACM, 64(3):107–115, 2021.