Demystify Optimization and Generalization of Over-parameterized PAC-Bayesian Learning
Abstract
PAC-Bayes is an analysis framework in which the training error can be expressed as the weighted average of the hypotheses in the posterior distribution while incorporating prior knowledge. Beyond being a pure tool for generalization bound analysis, a PAC-Bayesian bound can also be turned into an objective function to train a probabilistic neural network, making this a powerful and relevant framework that can numerically provide a tight generalization bound for supervised learning. For simplicity, we refer to the training of probabilistic neural networks with objectives derived from PAC-Bayesian bounds as PAC-Bayesian learning. Despite its empirical success, the theoretical analysis of PAC-Bayesian learning for neural networks is rarely explored. This paper proposes a new class of convergence and generalization analyses for PAC-Bayesian learning when it is used to train over-parameterized neural networks by gradient descent. For a wide probabilistic neural network, we show that when PAC-Bayesian learning is applied, the training converges to the solution of a kernel ridge regression whose kernel is the probabilistic neural tangent kernel (PNTK). Based on this finding, we further characterize a uniform PAC-Bayesian generalization bound that improves over the Rademacher complexity-based bound for non-probabilistic neural networks. Finally, drawing on our theoretical results, we propose a proxy measure for efficient hyperparameter selection, which is shown to be time-saving.
1 Introduction
Deep learning has demonstrated powerful learning capability thanks to its over-parameterized structure, and various network architectures have been responsible for the great leap in performance (LeCun et al., 2015). Because over-fitting and complex hyperparameter tuning are two of the major pain points in deep learning, designing generalization guarantees for deep networks is a long-term pursuit for researchers (Zhang et al., 2021). To this end, a learning framework that trains a probabilistic neural network with an objective function derived from a PAC-Bayesian bound has been proposed (Bégin et al., 2016; Dziugaite & Roy, 2017; Neyshabur et al., 2017b; Raginsky et al., 2017; Neyshabur et al., 2017a; London, 2017; Smith & Le, 2017; Liu et al., 2021). We dub this learning method PAC-Bayesian learning. PAC-Bayesian learning has been shown to achieve competitive expected test set error while providing a tight generalization bound. Furthermore, this generalization bound, computed from the training error, can obviate the need for splitting data into training, test, and validation sets, which is highly useful for training a deep network with scarce data (Pérez-Ortiz et al., 2020; Grünwald & Mehta, 2020). Meanwhile, these advancements on PAC-Bayesian bounds have been widely adapted to different deep neural network structures, including meta-learning (Amit & Meir, 2018), convolutional neural networks (Zhou et al., 2018), binary activated multilayer networks (Letarte et al., 2019), partially aggregated neural networks (Biggs & Guedj, 2020), and graph neural networks (Liao et al., 2020).
Due to the tremendous empirical success of PAC-Bayesian learning, there is increasing interest in understanding its theoretical properties. However, existing analyses are either restricted to a specific model variant (Dziugaite & Roy, 2018a; Dziugaite et al., 2020) or rely heavily on empirical exploration (Neyshabur et al., 2017a). To the best of our knowledge, there is neither an investigation of why the training in PAC-Bayesian learning succeeds nor an explanation of why the PAC-Bayesian bound remains tight on unseen data. Therefore, it is natural to ask: for a probabilistic neural network with an objective function derived from a PAC-Bayesian bound, when gradient descent is adopted,
- Q1: How effective is the training (i.e., the trainability) on the training set?
- Q2: How tight is the generalization bound (i.e., the generalization) compared to that of learning frameworks using non-probabilistic neural networks?
Exploring the answers can be highly non-trivial due to the inherent non-convexity of over-parameterized models (Jain & Kar, 2017), the additional randomness introduced by probabilistic neural networks (Specht, 1990), and the additional challenges brought by the Kullback-Leibler (KL) divergence between the posterior and prior distributions.
In this paper, we answer the above questions by leveraging recent advances in the theory of deep learning in over-parameterized settings with extremely (or infinitely) wide networks. It can be shown that ultra-wide networks optimized with gradient descent achieve near-zero training error, and the critical factor that governs the training process is the so-called neural tangent kernel (NTK), which provably remains unchanged during gradient descent training (Jacot et al., 2018), thus providing a guarantee for reaching the global minimum (Du et al., 2019a; Allen-Zhu et al., 2019). Under the PAC-Bayesian framework, the NTK is no longer calculated from the derivatives with respect to the weights directly, but instead from the gradients with respect to the distribution parameters of the weights. We call it the probabilistic NTK (PNTK), based on which we build a convergence analysis characterizing the optimization process of PAC-Bayesian learning. Thanks to the explicit solution obtained from the optimization analysis, we further formulate the generalization bound of PAC-Bayesian learning for the first time and demonstrate its advantage by comparing it with the theoretical generalization bound of learning frameworks with non-stochastic neural networks (Arora et al., 2019a; Cao & Gu, 2019; Hu et al., 2019). We summarize our contributions as follows:
- With a detailed characterization of gradient descent training of the PAC-Bayes objective function, we conclude that the final solution is a kernel ridge regression whose kernel is the PNTK.
- Based on the optimization solution, we derive a PAC-Bayesian bound for deep networks and find that it improves over the Rademacher complexity bound for non-probabilistic neural networks under a fair comparison.
- PAC-Bayesian learning depends on a large number of hyperparameters. We design a training-free proxy based on our theoretical bound and show that it is effective and time-saving.
- Technically, our proof of convergence is similar to (Du et al., 2018; Arora et al., 2019a), but there are essential differences. The main difference is that our network architecture is much more complex (e.g., a probabilistic network contains two sets of parameters), and each set involves its own randomness, which requires bounding many terms differently and more elaborately. Given the particularly wide range of application scenarios of probabilistic neural networks, such as the variational auto-encoder (Pu et al., 2016) and deep Bayesian networks (Nie et al., 2018), we hope our techniques can provide a basis for the analysis of wide probabilistic neural networks.
2 Related Work
PAC-Bayesian Analysis.
The Probably Approximately Correct (PAC) Bayes framework (McAllester, 1999a, b) can incorporate knowledge about the learning algorithm and a probability distribution over a set of hypotheses, thus providing a test performance (generalization) guarantee. Subsequently, the PAC-Bayesian method was adopted to analyze the generalization bound of probabilistic neural networks (Langford & Caruana, 2002b). The original PAC-Bayes theory only works with bounded loss functions; Haddouche et al. (2021) extend the PAC-Bayesian theory to learning problems with unbounded loss functions. Besides, several improved PAC-Bayesian bounds suitable for different scenarios are introduced by (Bégin et al., 2014, 2016). As a result of the flexibility and generalization properties of PAC-Bayes, it is widely used to analyze complex, non-convex, and over-parameterized optimization problems, especially over-parameterized neural networks (Guedj, 2019). Neyshabur et al. (2017b) present a generalization bound for feedforward neural networks with ReLU activations in terms of the product of the spectral norms of the layers and the Frobenius norms of the weights. London (2017) studies the generalization error of randomized learning algorithms – focusing on stochastic gradient descent (SGD) – using a novel combination of PAC-Bayes and algorithmic stability.
PAC-Bayesian Learning.
Apart from providing theoretical analysis of the generalization properties of deep learning, PAC-Bayes has received increasing attention as a way to provide numerical generalization bounds (certificates) for practical deep learning algorithms. Langford & Caruana (2002a) first introduced a method to train a Bayesian neural network and use a refined PAC-Bayesian bound for computing an upper bound on the error. Later, Neyshabur et al. (2017a) extended Langford & Caruana (2002a)'s work by developing a training objective function derived from a relaxed PAC-Bayesian bound. In the standard application of PAC-Bayes, the prior is typically chosen to be a spherical Gaussian centered at the origin. However, without incorporating information from the data, the KL divergence might be unreasonably large, limiting the performance of the PAC-Bayes method. To fill this gap, a large body of literature proposes to obtain localized PAC-Bayes bounds via distribution-dependent priors learned from data (Ambroladze et al., 2007; Negrea et al., 2019; Dziugaite et al., 2020; Perez-Ortiz et al., 2021). Furthermore, Dziugaite & Roy (2018b) show how a differentially private data-dependent prior yields a valid PAC-Bayes bound when the data distribution is presumed unknown. More recently, research has focused on providing PAC-Bayesian bounds for more realistic architectures, such as ImageNet-scale networks (Zhou et al., 2018), binary activated multilayer networks (Letarte et al., 2019), partially aggregated neural networks (Biggs & Guedj, 2020), and graph neural networks (Liao et al., 2020). We refer to the practical use of PAC-Bayesian algorithms to train over-parameterized neural networks as PAC-Bayesian learning, and the goal of this work is to demystify the success behind deep networks trained via PAC-Bayesian bounds.
Neural Tangent Kernel.
One of the greatest recent advances in deep learning theory is the result that gradient descent training can find a global minimum of deep neural networks thanks to over-parameterization (Du et al., 2019a, 2018; Allen-Zhu et al., 2019). Technically, Jacot et al. (2018) resort to the NTK, which stays constant during training in the infinite-width limit. Under the mild assumption that the smallest eigenvalue of the NTK is greater than zero, any loss that is convex in the network output converges to the global minimum. Furthermore, the generalization ability of trained ultra-wide networks on unseen data has been characterized by Rademacher complexity analysis (Cao & Gu, 2019; Arora et al., 2019a). In this work, we compare the Rademacher bound and the PAC-Bayesian bound for ultra-wide networks, where the Rademacher bound for kernel ridge regression is formulated by Hu et al. (2019). Since its origin, the NTK has been applied to different deep network structures, such as orthogonal initialization (Huang et al., 2020), convolutional networks (Arora et al., 2019b), graph neural networks (Du et al., 2019b; Huang et al., 2021), and transformers (Hron et al., 2020).
3 Preliminary
3.1 Notation
We use bold-faced letters for vectors and matrices and regular letters for scalars. We use $\|\cdot\|_2$ to denote the Euclidean norm of a vector or the spectral norm of a matrix, and $\|\cdot\|_F$ to denote the Frobenius norm of a matrix. For a neural network, we denote the activation function by $\sigma(\cdot)$ and adopt the ReLU activation $\sigma(z)=\max(z,0)$ in this work. Given a logical variable $A$, $\mathbb{I}\{A\}=1$ if $A$ is true and $\mathbb{I}\{A\}=0$ otherwise. Let $\mathbf{I}_d$ be the identity matrix of dimension $d$. We denote $[n]=\{1,\dots,n\}$. The least eigenvalue of a matrix $\mathbf{A}$ is denoted by $\lambda_{\min}(\mathbf{A})$.
3.2 Over-parameterized Probabilistic Neural Network
In PAC-Bayesian learning we use probabilistic neural networks instead of deterministic networks, where the weights follow a certain distribution. In this work, we adopt a Gaussian distribution for the weights and define a two-layer probabilistic neural network governed by the following expression,
(1)  $f_{\mathbf{W},\mathbf{a}}(\mathbf{x}) = \frac{1}{\sqrt{m}}\sum_{r=1}^{m} a_r\,\sigma\!\left(\mathbf{w}_r^{\top}\mathbf{x}\right)$
where $\mathbf{x}\in\mathbb{R}^d$ is the input, $\mathbf{W}\in\mathbb{R}^{m\times d}$ is the weight matrix in the first layer, and $\mathbf{a}\in\mathbb{R}^m$ is the weight vector in the second layer, with $m$ denoting the network width. For simplicity, we represent the weight matrix as a set of column vectors, i.e., $\mathbf{W}=(\mathbf{w}_1,\dots,\mathbf{w}_m)$. In particular, we use the reparametrization trick to treat the weights in the first layer and leave the weights in the second layer to be initialized by a uniform distribution,
(2)  $\mathbf{w}_r = \boldsymbol{\mu}_r + \boldsymbol{\sigma}_r \odot \boldsymbol{\varepsilon}_r, \quad \boldsymbol{\varepsilon}_r \sim \mathcal{N}(\mathbf{0},\mathbf{I}_d), \qquad a_r \sim \mathrm{Unif}\{-1,+1\}, \quad r\in[m]$
where $\odot$ denotes the element-wise product operation, and $\boldsymbol{\mu}_r$ and $\boldsymbol{\sigma}_r$ are the mean and variance parameters, respectively. We then fix the second layer and only optimize the first layer. This trick has been widely used in NTK papers (Arora et al., 2019a; Du et al., 2018) to simplify the analysis; the same results can be obtained for deep probabilistic neural networks.
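For concreteness, the following is a minimal NumPy sketch of the model in Equations (1)-(2). The $1/\sqrt{m}$ scaling, the per-weight standard deviation, and the fixed $\pm 1$ output weights follow the setup above; variable names and the numerical constants are illustrative assumptions rather than the paper's exact choices.

```python
import numpy as np

def init_prob_net(d, m, sigma0=0.1, rng=None):
    """Initialize means, per-weight std deviations, and fixed output signs of the two-layer net."""
    rng = np.random.default_rng() if rng is None else rng
    mu = rng.normal(0.0, 1.0, size=(m, d))        # means of the first-layer weight vectors
    sigma = np.full((m, d), sigma0)               # per-weight standard deviations
    a = rng.choice([-1.0, 1.0], size=m)           # second layer, fixed during training
    return mu, sigma, a

def forward(x, mu, sigma, a, eps):
    """f(x) = (1 / sqrt(m)) * sum_r a_r * relu(w_r^T x), with w_r = mu_r + sigma_r * eps_r."""
    m = mu.shape[0]
    w = mu + sigma * eps                          # reparametrization trick, eps_r ~ N(0, I_d)
    pre = w @ x                                   # pre-activations, shape (m,)
    return a @ np.maximum(pre, 0.0) / np.sqrt(m)

rng = np.random.default_rng(0)
d, m = 10, 512
mu, sigma, a = init_prob_net(d, m, rng=rng)
x = rng.normal(size=d)
eps = rng.normal(size=(m, d))                     # fresh noise sampled for each forward pass
print(forward(x, mu, sigma, a, eps))
```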
3.3 The PAC-Bayesian Learning
Suppose the data $S=\{(\mathbf{x}_i, y_i)\}_{i=1}^{n}$ are i.i.d. samples from a non-degenerate distribution $\mathcal{D}$. Let $\mathcal{H}$ denote the hypothesis space and $h(\mathbf{x})$ the prediction of hypothesis $h$ on sample $\mathbf{x}$. Let $L_{\mathcal{D}}(h)=\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{D}}\,\ell(h(\mathbf{x}),y)$ denote the generalization error of classifier $h$ and $L_{S}(h)=\frac{1}{n}\sum_{i=1}^{n}\ell(h(\mathbf{x}_i),y_i)$ the empirical error of classifier $h$, where $\ell$ is the loss function.
Theorem 3.1.
Let $P$ be some prior distribution over $\mathcal{H}$. Then for any $\delta\in(0,1)$, the following inequality holds uniformly for all posterior distributions $Q$ with probability at least $1-\delta$,
(3)  $\mathrm{kl}\!\left(L_{S}(Q)\,\|\,L_{\mathcal{D}}(Q)\right) \le \frac{\mathrm{KL}(Q\,\|\,P)+\log\frac{2\sqrt{n}}{\delta}}{n}$
Here $L_{\mathcal{D}}(Q)=\mathbb{E}_{h\sim Q}\,L_{\mathcal{D}}(h)$ and $L_{S}(Q)=\mathbb{E}_{h\sim Q}\,L_{S}(h)$. $\mathrm{KL}(Q\,\|\,P)$ is the Kullback-Leibler (KL) divergence and $\mathrm{kl}(p\,\|\,q)=p\log\frac{p}{q}+(1-p)\log\frac{1-p}{1-q}$ is the binary KL divergence. Furthermore, combining the above with Pinsker's inequality for the binary KL divergence, $\mathrm{kl}(p\,\|\,q)\ge 2(p-q)^2$, yields,
(4)  $L_{\mathcal{D}}(Q) \le L_{S}(Q) + \sqrt{\frac{\mathrm{KL}(Q\,\|\,P)+\log\frac{2\sqrt{n}}{\delta}}{2n}}$
The above bound is the classical bound. On the other hand, combining Equation (3) with the inequality $\sqrt{xy}\le \frac{\lambda x}{2}+\frac{y}{2\lambda}$, which holds for all $\lambda>0$, leads to a PAC-Bayes-$\lambda$ bound, proposed by Thiemann et al. (2017),
Theorem 3.2.
Let $P$ be some prior distribution over $\mathcal{H}$. Then for any $\delta\in(0,1)$, the following inequality holds uniformly for all posterior distributions $Q$ and all $\lambda\in(0,2)$ with probability at least $1-\delta$,
(5)  $L_{\mathcal{D}}(Q) \le \frac{L_{S}(Q)}{1-\lambda/2} + \frac{\mathrm{KL}(Q\,\|\,P)+\log\frac{2\sqrt{n}}{\delta}}{n\,\lambda\,(1-\lambda/2)}$
In this work, we study PAC-Bayesian learning by training the probabilistic network described by Equation (1) with an objective function derived from the PAC-Bayes bound (5). Specifically, the objective function, denoted $\mathcal{L}$, can be expressed as,
(6) |
where a hyperparameter is introduced in a heuristic manner as a weight on the KL term to make the method more flexible. Since the term involving only $n$ and $\delta$ is a constant, we omit it in the objective function. Instead of optimizing the weights $\mathbf{w}_r$ directly, we use the reparameterization trick (Kingma et al., 2015) to perform gradient descent,
(7)  $\boldsymbol{\mu}_r(t+1)=\boldsymbol{\mu}_r(t)-\eta\,\nabla_{\boldsymbol{\mu}_r}\mathcal{L}(t), \qquad \boldsymbol{\sigma}_r(t+1)=\boldsymbol{\sigma}_r(t)-\eta\,\nabla_{\boldsymbol{\sigma}_r}\mathcal{L}(t)$
where $\eta$ is the learning rate.
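A hedged sketch of what such an objective can look like in code is given below; the exact form of Equation (6), including how the heuristic KL weight enters, is an assumption modeled on the bound (5).

```python
import numpy as np

def gaussian_kl(mu_q, sigma_q, mu_p, sigma_p):
    """KL( N(mu_q, diag(sigma_q^2)) || N(mu_p, diag(sigma_p^2)) ), summed over all weights."""
    return float(np.sum(np.log(sigma_p / sigma_q)
                        + (sigma_q**2 + (mu_q - mu_p)**2) / (2.0 * sigma_p**2) - 0.5))

def pac_bayes_lambda_objective(emp_loss, kl, n, lam, kl_weight=1.0):
    """Objective of the form  L_S / (1 - lam/2)  +  kl_weight * KL / (n * lam * (1 - lam/2)).

    `kl_weight` stands in for the heuristic penalty on the KL term mentioned in the text;
    the constant log(2*sqrt(n)/delta) term of the bound (5) is dropped, as in the paper.
    """
    assert 0.0 < lam < 2.0
    return emp_loss / (1.0 - lam / 2.0) + kl_weight * kl / (n * lam * (1.0 - lam / 2.0))

# usage with the network of Section 3.2 (prior = distribution at initialization):
# loss = pac_bayes_lambda_objective(mse, gaussian_kl(mu, sigma, mu0, sigma0), n=60000, lam=1.0)
```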
3.4 Neural Tangent Kernel
The neural tangent kernel originated from Jacot et al. (2018) and is defined by,
(8)  $\Theta_t(\mathbf{x},\mathbf{x}') = \left\langle \frac{\partial f(\mathbf{x};\boldsymbol{\theta}_t)}{\partial\boldsymbol{\theta}_t},\, \frac{\partial f(\mathbf{x}';\boldsymbol{\theta}_t)}{\partial\boldsymbol{\theta}_t} \right\rangle$
where $\boldsymbol{\theta}_t\in\mathbb{R}^{p}$ collects all network parameters at time $t$, with $p$ being the number of parameters in the model. We demonstrate how it is used to describe the training dynamics of deep neural networks.
The dynamics of the parameters under gradient flow are given by,
(9)  $\frac{d\boldsymbol{\theta}_t}{dt} = -\eta\,\nabla_{\boldsymbol{\theta}_t}\mathcal{L}(\boldsymbol{\theta}_t)$
Then the dynamics of the output function follow,
(10)  $\frac{d f(\mathbf{x};\boldsymbol{\theta}_t)}{dt} = -\eta\,\Theta_t(\mathbf{x},\mathbf{X})\,\nabla_{f(\mathbf{X};\boldsymbol{\theta}_t)}\mathcal{L}$
It has been proven that when the width of the neural network is infinite, $\Theta_t$ converges in probability to a deterministic limiting kernel $\Theta^{\infty}$ (Jacot et al., 2018), i.e., $\Theta_t\to\Theta^{\infty}$. Assuming that the loss is the mean squared error over the training data $(\mathbf{X},\mathbf{y})$, $\mathcal{L}=\frac{1}{2}\sum_{i=1}^{n}\left(f(\mathbf{x}_i;\boldsymbol{\theta}_t)-y_i\right)^2$, we then have,
(11)  $\frac{d f(\mathbf{X};\boldsymbol{\theta}_t)}{dt} = -\eta\,\Theta^{\infty}\!\left(f(\mathbf{X};\boldsymbol{\theta}_t)-\mathbf{y}\right)$
At the end of training, namely $t\to\infty$, the solution is equivalent to the well-known kernel regression whose kernel is the NTK.
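For reference, the linear dynamics in Equation (11) admit the standard closed-form solution below (our notation, following the usual NTK derivation); the kernel regression predictor is the limit when the output at initialization is negligible.

```latex
\begin{aligned}
f(\mathbf{X};\boldsymbol{\theta}_t) &= \mathbf{y} + e^{-\eta\,\Theta^{\infty} t}\left(f(\mathbf{X};\boldsymbol{\theta}_0)-\mathbf{y}\right),\\
\lim_{t\to\infty} f(\mathbf{x};\boldsymbol{\theta}_t) &\approx \Theta^{\infty}(\mathbf{x},\mathbf{X})\left(\Theta^{\infty}(\mathbf{X},\mathbf{X})\right)^{-1}\mathbf{y}
\qquad \text{when } f(\cdot\,;\boldsymbol{\theta}_0)\approx 0 .
\end{aligned}
```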
4 Theoretical Results
4.1 Optimization analysis
To simplify the analysis, we first consider the optimization of probabilistic neural networks of the form (1) with the empirical risk alone as the objective, adopting the mean squared error loss. In other words, we neglect the KL divergence term at this stage and will show that the corresponding results can be extended to the objective with the KL divergence. To perform the optimization analysis, we define the neural tangent kernel for probabilistic neural networks and name it the probabilistic neural tangent kernel (PNTK).
Definition 4.1 (Probabilistic Neural Tangent Kernel).
The tangent kernels associated with the output function at the parameters $\boldsymbol{\mu}$ and $\boldsymbol{\sigma}$ are defined as,
(12) | ||||
Different from a standard (deterministic) neural network, the probabilistic network consists of two sets of parameters, i.e., the means $\boldsymbol{\mu}$ and the variance parameters $\boldsymbol{\sigma}$. Gradient descent based on the reparametrization trick (Equation 7) is performed on each of the two parameter sets separately. As a result, we have two corresponding tangent kernels. It has been shown that the NTK converges in probability to a deterministic kernel in the infinite-width limit (Jacot et al., 2018); as we will show in the later derivation, the PNTK also converges to a limiting kernel at initialization and during training under a large-width condition. The detailed expression for the limiting kernel is given as follows.
Definition 4.2 (Limiting Neural Tangent Kernel with ReLU Activation).
Consider a probabilistic network of the form (1) with Gaussian-distributed weights. Then the tangent kernels in the infinite-width limit follow the expression,
(13) |
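For reference, the limiting NTK of a two-layer ReLU network of the form (1) with Gaussian weights is typically the arc-cosine-type kernel below (e.g., Du et al., 2018); we state it here as the presumed instantiation of Definition 4.2, since the indicator is invariant to the scale of the Gaussian weights.

```latex
\Theta^{\infty}(\mathbf{x},\mathbf{x}')
  = \mathbb{E}_{\mathbf{w}\sim\mathcal{N}(\mathbf{0},\mathbf{I}_d)}
    \left[\mathbf{x}^{\top}\mathbf{x}'\,
    \mathbb{I}\{\mathbf{w}^{\top}\mathbf{x}\ge 0,\ \mathbf{w}^{\top}\mathbf{x}'\ge 0\}\right]
  = \frac{\mathbf{x}^{\top}\mathbf{x}'\left(\pi-\arccos\frac{\mathbf{x}^{\top}\mathbf{x}'}{\|\mathbf{x}\|_2\,\|\mathbf{x}'\|_2}\right)}{2\pi}.
```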
The definition above gives the exact formulation of the limiting NTK for random networks. In addition, its positive definiteness is a critical factor in guaranteeing the convergence of gradient descent; thus we state the following assumption.
Assumption 4.3.
Assume the smallest eigenvalue of the limiting NTK is greater than zero, i.e., $\lambda_{\min}(\Theta^{\infty})>0$.
With all the preliminaries above, we state our main theorem,
Theorem 4.4 (Convergence of Probabilistic Networks with large width).
Suppose the network's width $m$ is sufficiently large under the initialization below. Then with probability at least $1-\delta$ over the initialization, we have,
(14) |
where we adopt the initialization of the mean and variance weights described above, for $r\in[m]$; the remaining symbols denote entries of the mean and variance weight matrices and positive constants.
Our theorem establishes that if the width $m$ is large enough, the expected training error converges to zero at a linear rate. In particular, the least eigenvalue of the PNTK governs the convergence rate. To prove the theorem, we split the analysis into three parts, namely, the PNTK at initialization, the PNTK during training, and the change of the weights during training; all detailed proofs of our theorems and lemmas can be found in the Appendix.
We first study the behavior of the tangent kernels at initialization under the ultra-wide condition on $m$. The following lemma demonstrates that if $m$ is large, then the tangent kernels with respect to $\boldsymbol{\mu}$ and $\boldsymbol{\sigma}$ have a lower bound on their smallest eigenvalue with high probability. The proof is by a standard concentration bound.
Lemma 4.5 (PNTK at initialization).
If $m$ is sufficiently large, while the mean and variance weights are initialized by the form in Theorem 4.4, then with probability at least $1-\delta$, both tangent kernels are close to the limiting kernel in spectral norm and their smallest eigenvalues are bounded away from zero.
Remark 4.6.
The concentration bound is over the randomness of the initialization of the mean weights and the randomness of the sampled noise in the variance term.
From the above lemma, we implicitly show that the tangent kernels with respect to $\boldsymbol{\mu}$ and $\boldsymbol{\sigma}$ are equivalent in the infinite-width limit. This holds in general as long as the reparametrization trick is adopted for training probabilistic neural networks. The next problem is that the PNTKs are time-dependent matrices and thus vary during gradient descent training. To account for this, we build a lemma stating that if the weights stay close to their initialization during gradient descent training, then the corresponding PNTKs stay close to the limiting kernel and retain a lower bound on their smallest eigenvalue. Here we introduce surrogate weights $\tilde{\mathbf{w}}_r(t)$, which copy the sampled noise values from initialization. Notice that $\tilde{\mathbf{w}}_r(t)$ is not the exact weight at training step $t$, but we introduce it to facilitate our proof.
Lemma 4.7 (PNTK during training).
If the mean and variance weights are initialized the same as in Theorem 4.4, then for any set of weight vectors that stay within a fixed small distance of their initialization, with probability at least $1-\delta$, the corresponding tangent kernels are close to the limiting kernel and their smallest eigenvalues are bounded away from zero.
By the above lemma, we demonstrate that if the change of the weights is bounded, then the tangent kernel matrix is close to its expectation. The next lemma shows that the change of the weights is indeed bounded during gradient descent training when the PNTK is close to the limiting kernel in Equation (13).
Lemma 4.8 (Change of weights during training).
Suppose the smallest eigenvalues of the tangent kernels remain bounded below for all times up to $t$. Then the expected training error decays at a linear rate, and with probability at least $1-\delta$ over the initialization, the movement of each weight vector from its initialization is bounded by a quantity that vanishes as the width $m$ grows.
The main gap between the PNTK and the final result for the output function can be filled by writing down the gradient flow dynamics,
(15) |
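Concretely, under the MSE loss the standard NTK-style argument gives the following decay of the training error (a sketch in our notation, writing $\mathbf{u}(t)$ for the vector of training outputs and $\mathbf{H}^{\boldsymbol{\mu}}(t)$, $\mathbf{H}^{\boldsymbol{\sigma}}(t)$ for the two PNTK Gram matrices):

```latex
\frac{d}{dt}\left\|\mathbf{y}-\mathbf{u}(t)\right\|_2^{2}
 = -2\eta\left(\mathbf{y}-\mathbf{u}(t)\right)^{\top}
   \left(\mathbf{H}^{\boldsymbol{\mu}}(t)+\mathbf{H}^{\boldsymbol{\sigma}}(t)\right)
   \left(\mathbf{y}-\mathbf{u}(t)\right)
 \;\le\; -2\eta\,\lambda_{\min}\left\|\mathbf{y}-\mathbf{u}(t)\right\|_2^{2},
```

so that $\|\mathbf{y}-\mathbf{u}(t)\|_2^2 \le e^{-2\eta\lambda_{\min} t}\,\|\mathbf{y}-\mathbf{u}(0)\|_2^2$ whenever $\lambda_{\min}$ lower-bounds the smallest eigenvalue of the kernel sum along the trajectory.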
Combined with the fact that the PNTKs are bounded during training, the behavior of the loss becomes traceable. To finalize the proof of Theorem 4.4, we construct a final lemma.
Lemma 4.9.
If , we have
4.2 Training with KL divergence
According to Equation (6), there is a KL divergence term in the objective function. We expand the KL divergence between the two Gaussian distributions, the posterior $\mathcal{N}(\boldsymbol{\mu},\boldsymbol{\sigma}^2)$ and the prior $\mathcal{N}(\boldsymbol{\mu}_0,\boldsymbol{\sigma}_0^2)$,
(16)  $\mathrm{KL}(Q\,\|\,P) = \sum_{r=1}^{m}\sum_{k=1}^{d}\left[\log\frac{\sigma_{0,rk}}{\sigma_{rk}} + \frac{\sigma_{rk}^{2}+(\mu_{rk}-\mu_{0,rk})^{2}}{2\,\sigma_{0,rk}^{2}} - \frac{1}{2}\right]$
We compare the gradients with respect to the mean weights $\boldsymbol{\mu}_r$ and the variance weights $\boldsymbol{\sigma}_r$. A direct calculation shows that the gradient with respect to $\boldsymbol{\sigma}_r$ carries one additional random variable, the noise $\boldsymbol{\varepsilon}_r$, compared to the gradient with respect to $\boldsymbol{\mu}_r$, which drives the expected gradient with respect to the variance weights towards zero. To verify this result, we conduct an experiment and leave the results to Appendix E.2. Therefore, it is approximately equivalent to fixing $\boldsymbol{\sigma}_r$ during gradient descent training. With this assumption we arrive at the following conclusion:
Theorem 4.10.
The theorem above reveals the regularization effect of the KL term in PAC-Bayesian learning and presents an explicit expression for the convergence point of the output function, namely a kernel ridge regression with the PNTK as its kernel.
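Schematically, such a convergence point takes the kernel-ridge-regression form below, where $\Theta^{\infty}$ is the limiting kernel; the effective ridge parameter $\beta$, which aggregates the KL weight and the sample size, is our shorthand rather than the paper's exact constant, and terms involving the prior mean and the output at initialization are omitted.

```latex
\mathbb{E}\left[\lim_{t\to\infty} f(\mathbf{x};\boldsymbol{\theta}_t)\right]
 \;\approx\; \Theta^{\infty}(\mathbf{x},\mathbf{X})
 \left(\Theta^{\infty}(\mathbf{X},\mathbf{X})+\beta\,\mathbf{I}\right)^{-1}\mathbf{y}.
```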
4.3 Generalization analysis
Recall that Theorem 3.2 gives the PAC-Bayesian bound with respect to the prior distribution at initialization and the posterior distribution after optimization. Therefore, combined with the results of Theorem 4.10, we can directly provide a generalization bound for PAC-Bayesian learning with ultra-wide networks.
Theorem 4.11 (PAC-Bayesian bound with NTK).
Suppose the data are i.i.d. samples from a non-degenerate distribution $\mathcal{D}$ and the width $m$ is sufficiently large. Consider any loss function $\ell$. Then with probability at least $1-\delta$ over the random initialization and the training samples, the probabilistic network trained by gradient descent for a sufficient number of iterations has population risk bounded as follows:
(18) | ||||
In this theorem, we establish a reasonable generalization bound for the PAC-Bayesian learning framework, thus providing a theoretical guarantee. We further demonstrate the advantage of PAC-Bayesian learning by comparing it with the Rademacher complexity-based generalization bound for deterministic neural networks with the kernel ridge regression solution.
Theorem 4.12 (Rademacher bound with NTK).
Suppose the data are i.i.d. samples from a non-degenerate distribution $\mathcal{D}$ and the width $m$ is sufficiently large. Consider any loss function $\ell$. Then with probability at least $1-\delta$ over the random initialization and the training samples, the network trained by gradient descent for a sufficient number of iterations has population risk bounded as follows:
(19) | ||||
Theorem 4.12 is obtained by following Theorem 5.1 in (Hu et al., 2019), which presents a Rademacher complexity-based generalization bound for ultra-wide neural networks with the kernel ridge regression solution. Similar analyses for kernel regression with the NTK can be found in (Arora et al., 2019a; Cao & Gu, 2019).
The main difference between the two generalization bounds lies in their complexity terms: the PAC-Bayesian bound counts the KL divergence while the Rademacher bound calculates the reproducing kernel Hilbert space (RKHS) norm. We also find that the two bounds have different rates of convergence in the sample size. Therefore, we conclude that the PAC-Bayesian bound improves over the Rademacher complexity-based bound.
5 Experiments
In this section, we first provide empirical support showing that the training dynamics of wide probabilistic neural networks using the training objective derived from a PAC-Bayes bound are captured by the PNTK, which validates our Lemma 4.8. Second, as an extension of our PAC-Bayesian bound in Theorem D.1, we provide a training-free metric to approximate the PAC-Bayesian bound via the NTK, which can be used to select the best hyperparameters without any training, eliminating excessive computation time.
5.1 Experimental setup
In all experiments, the NTK parameterization following Equation (1) is chosen for initializing parameters. Specifically, the initial mean of the weights is sampled from a truncated Gaussian distribution with a mean of 0 and a given standard deviation, truncating at 2 standard deviations. To make sure that the variance stays positive, the initial variance parameter of the weights is obtained from a given initial value through a transformation that guarantees positivity.
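A minimal sketch of this initialization is given below. The softplus transform $\sigma=\log(1+e^{\rho})$ and the $1/\sqrt{d}$ standard deviation of the truncated Gaussian are assumptions (the former follows Pérez-Ortiz et al. (2020)); the exact values used in the experiments are not reproduced here.

```python
import numpy as np

def truncated_normal(shape, std, rng, truncation=2.0):
    """Sample N(0, std^2) truncated at `truncation` standard deviations by resampling."""
    out = rng.normal(0.0, std, size=shape)
    bad = np.abs(out) > truncation * std
    while bad.any():
        out[bad] = rng.normal(0.0, std, size=int(bad.sum()))
        bad = np.abs(out) > truncation * std
    return out

def init_params(d, m, rho_init, rng):
    mu = truncated_normal((m, d), std=1.0 / np.sqrt(d), rng=rng)  # the std value is an assumption
    sigma = np.log1p(np.exp(rho_init)) * np.ones((m, d))          # softplus keeps sigma > 0
    return mu, sigma

rng = np.random.default_rng(0)
mu, sigma = init_params(d=784, m=600, rho_init=-3.0, rng=rng)     # rho_init is illustrative
```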
For the experiment in Section 5.2, we consider a ReLU fully-connected network with three hidden layers trained with the objective derived from the PAC-Bayes-$\lambda$ bound in Equation (5), using an ordinary MSE loss and full-batch gradient descent with a learning rate equal to 1 on a fixed subset of MNIST for ten-class classification. A randomly initialized prior with no connection to the data is used, since this is in line with our theoretical setting and we only intend to observe the change in parameters rather than the performance of the actual bound.
In Section 5.3, we use both fully-connected and convolutional neural network structures to perform experiments on the MNIST and CIFAR-10 datasets, demonstrating the effectiveness of our training-free PAC-Bayesian bound for searching hyperparameters under different datasets and network structures. We build a 3-layer fully-connected neural network with 600 neurons in each layer, and a convolutional architecture with a total of 13 layers and around 10 million learnable parameters. We adopt a data-dependent prior since it is a practical and popular method (Dziugaite et al., 2020; Perez-Ortiz et al., 2021). Specifically, this data-dependent prior is pre-trained on a subset of the total training data with empirical risk minimization. The network for posterior training is then initialized with the weights learned for the prior. At last, the generalization bound is computed using Equation (5). Settings related to bound computation, such as the confidence parameters for the risk certificate and the Chernoff bound and the number of Monte Carlo samples for estimating the risk certificate, follow Pérez-Ortiz et al. (2020).
5.2 Validation of theoretical results
After a number of gradient descent updates from different random initializations, we plot in Figure 1 the changes of the mean and variance weights (for the input/output/hidden layers) against the variation in width for each layer. We observe that the relative Frobenius norm change of the input/output layers' weights and of the hidden layers' weights both decay at the predicted rates as the width grows during training, which verifies our Lemma 4.8.
[Figure 1: relative Frobenius norm change of the mean and variance weights versus network width for each layer.]
Table 1: Comparison between brute-force grid search and the training-free proxy for hyperparameter selection.

| Method | Network & dataset | Generalization bound | Testing error | Computation time |
|---|---|---|---|---|
| Brute-Force | MNIST + FCN | 0.0427 | 0.0192 | 3888 mins |
| Brute-Force | MNIST + CNN | 0.0247 | 0.0091 | 20736 mins |
| Brute-Force | CIFAR10 + FCN | 0.5377 | 0.4840 | 8424 mins |
| Brute-Force | CIFAR10 + CNN | 0.1969 | 0.1514 | 55080 mins |
| Training-free proxy | MNIST + FCN | 0.0478 | 0.0213 | 1296 mins |
| Training-free proxy | MNIST + CNN | 0.0346 | 0.0168 | 1296 mins |
| Training-free proxy | CIFAR10 + FCN | 0.5490 | 0.4905 | 1296 mins |
| Training-free proxy | CIFAR10 + CNN | 0.1970 | 0.1510 | 1296 mins |
5.3 Selecting hyperparameters via training-free metric
The PAC-Bayesian learning framework provides competitive performance with non-vacuous generalization bounds. However, the tightness of this generalization bound depends on the hyperparameters used, such as the proportion of data used for the prior, the initialization of the weight variance, and the KL penalty weight. Since these three values do not change during training, we refer to them as hyperparameters. Choosing the right hyperparameters via grid search is obviously prohibitive, as each attempt to compute the generalization bound can involve significant computational resources.
Another plausible approach is to design some kind of predictive, so-called “training-free” metric such that we can approximate the error bound without going through an expensive training process. In light of this goal, we have already developed a generalization bound via the NTK in Theorem D.1. Since the NTK is held (nearly) constant during training, we can predict the generalization bound with this proxy metric, which can be formulated as follows:
(20) | ||||
where $\boldsymbol{\Theta}$ denotes the NTK Gram matrix, $\mathbf{Y}$ is a label similarity matrix (if two data points have the same label, their joint entry in the matrix is 1 and 0 otherwise), and $n$ is the number of data points used. To demonstrate the computational practicality of this training-free metric, we compute the proxy using only a subset of the data for each class (325 per class for FCN and 75 per class for CNN). We should also mention that training-free methods for searching neural architectures are not new; they can be found in NAS (Chen et al., 2021; Deshpande et al., 2021), MAE random sampling (Camero et al., 2021), and pruning at initialization (Abdelfattah et al., 2021). To the best of our knowledge, there is currently no training-free method for selecting hyperparameters in the PAC-Bayesian framework, which we consider to be one of the novelties of this paper.
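The sketch below shows one plausible instantiation of such a proxy, combining the limiting ReLU NTK Gram matrix with the label similarity matrix on a small subset. The exact functional form of Equation (20), including any KL-dependent term and constants, is not reproduced here; the score shown captures only the kernel-complexity part, and all names and values are illustrative.

```python
import numpy as np

def relu_ntk_gram(X):
    """Limiting NTK Gram matrix of a two-layer ReLU net (arc-cosine form); rows of X are inputs."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    cos = np.clip(Xn @ Xn.T, -1.0, 1.0)
    return (X @ X.T) * (np.pi - np.arccos(cos)) / (2.0 * np.pi)

def label_similarity(y):
    """Y_ij = 1 if samples i and j share a label, else 0."""
    return (y[:, None] == y[None, :]).astype(float)

def pac_bayes_proxy(X, y, ridge=1e-6):
    """Kernel-complexity score of the form sqrt( tr(Y^T H^{-1} Y) / n ) on a small subset."""
    H = relu_ntk_gram(X)
    Y = label_similarity(y)
    n = X.shape[0]
    H_reg = H + ridge * np.eye(n)                 # small ridge for numerical stability
    return float(np.sqrt(np.trace(Y.T @ np.linalg.solve(H_reg, Y)) / n))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                    # a class-balanced subset would be used in practice
y = rng.integers(0, 10, size=200)
print(pac_bayes_proxy(X, y))
```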
Figure 2 demonstrates a strong correlation between the proxy metric and the actual generalization bound. Finally, we demonstrate that by searching through all possible combinations of hyperparameters using the proxy, it is possible to select hyperparameters leading to a result that is comparable to the best generalization bound, but without the excessive computation. To put things in perspective, in Table 1 we show that grid search can be much more expensive than the proxy. For instance, under the setting of a CNN on the CIFAR10 dataset, we save 42.5 times the computation time to find a bound that differs from the best generalization bound by only 1%.
6 Discussion
In this work, we theoretically prove that the learning dynamics of deep probabilistic neural networks using training objectives derived from PAC-Bayes bounds are exactly described by the NTK in the over-parameterized setting. Empirical investigation reveals that this agrees well with the actual training process. Furthermore, the expected output function trained with the PAC-Bayesian bound converges to a kernel ridge regression solution under a mild assumption. Based on this finding, we obtain an explicit generalization bound with respect to the NTK for PAC-Bayesian learning, which improves over the generalization bound obtained through the NTK for non-probabilistic neural networks. Finally, we show that the PAC-Bayesian bound score, a training-free method, can effectively select hyperparameters that lead to a lower generalization bound without the excessive computation time that brute-force grid search would take. In summary, we establish our theoretical analysis of PAC-Bayes with a randomly initialized prior. One promising direction is to study PAC-Bayesian learning with data-dependent priors via the NTK.
References
- Abdelfattah et al. (2021) Abdelfattah, M. S., Mehrotra, A., Dudziak, Ł., and Lane, N. D. Zero-cost proxies for lightweight nas. arXiv preprint arXiv:2101.08134, 2021.
- Allen-Zhu et al. (2019) Allen-Zhu, Z., Li, Y., and Song, Z. A convergence theory for deep learning via over-parameterization. In International Conference on Machine Learning, pp. 242–252. PMLR, 2019.
- Ambroladze et al. (2007) Ambroladze, A., Parrado-Hernández, E., and Shawe-Taylor, J. Tighter pac-bayes bounds. Advances in neural information processing systems, 19:9, 2007.
- Amit & Meir (2018) Amit, R. and Meir, R. Meta-learning by adjusting priors based on extended pac-bayes theory. In International Conference on Machine Learning, pp. 205–214. PMLR, 2018.
- Arora et al. (2019a) Arora, S., Du, S., Hu, W., Li, Z., and Wang, R. Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks. In International Conference on Machine Learning, pp. 322–332. PMLR, 2019a.
- Arora et al. (2019b) Arora, S., Du, S. S., Hu, W., Li, Z., Salakhutdinov, R., and Wang, R. On exact computation with an infinitely wide neural net. arXiv preprint arXiv:1904.11955, 2019b.
- Bégin et al. (2014) Bégin, L., Germain, P., Laviolette, F., and Roy, J.-F. Pac-bayesian theory for transductive learning. In Artificial Intelligence and Statistics, pp. 105–113. PMLR, 2014.
- Bégin et al. (2016) Bégin, L., Germain, P., Laviolette, F., and Roy, J.-F. Pac-bayesian bounds based on the rényi divergence. In Artificial Intelligence and Statistics, pp. 435–444. PMLR, 2016.
- Biggs & Guedj (2020) Biggs, F. and Guedj, B. Differentiable pac-bayes objectives with partially aggregated neural networks. arXiv preprint arXiv:2006.12228, 2020.
- Camero et al. (2021) Camero, A., Wang, H., Alba, E., and Bäck, T. Bayesian neural architecture search using a training-free performance metric. Applied Soft Computing, 106:107356, 2021.
- Cao & Gu (2019) Cao, Y. and Gu, Q. Generalization bounds of stochastic gradient descent for wide and deep neural networks. Advances in Neural Information Processing Systems, 32:10836–10846, 2019.
- Chen et al. (2021) Chen, W., Gong, X., and Wang, Z. Neural architecture search on imagenet in four gpu hours: A theoretically inspired perspective. arXiv preprint arXiv:2102.11535, 2021.
- Deshpande et al. (2021) Deshpande, A., Achille, A., Ravichandran, A., Li, H., Zancato, L., Fowlkes, C., Bhotika, R., Soatto, S., and Perona, P. A linearized framework and a new benchmark for model selection for fine-tuning. arXiv preprint arXiv:2102.00084, 2021.
- Du et al. (2019a) Du, S., Lee, J., Li, H., Wang, L., and Zhai, X. Gradient descent finds global minima of deep neural networks. In International Conference on Machine Learning, pp. 1675–1685. PMLR, 2019a.
- Du et al. (2018) Du, S. S., Zhai, X., Poczos, B., and Singh, A. Gradient descent provably optimizes over-parameterized neural networks. arXiv preprint arXiv:1810.02054, 2018.
- Du et al. (2019b) Du, S. S., Hou, K., Salakhutdinov, R. R., Poczos, B., Wang, R., and Xu, K. Graph neural tangent kernel: Fusing graph neural networks with graph kernels. Advances in Neural Information Processing Systems, 32:5723–5733, 2019b.
- Dziugaite & Roy (2018a) Dziugaite, G. K. and Roy, D. Entropy-sgd optimizes the prior of a pac-bayes bound: Generalization properties of entropy-sgd and data-dependent priors. In International Conference on Machine Learning, pp. 1377–1386. PMLR, 2018a.
- Dziugaite & Roy (2017) Dziugaite, G. K. and Roy, D. M. Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data. CoRR, abs/1703.11008, 2017. URL https://arxiv.org/abs/1703.11008.
- Dziugaite & Roy (2018b) Dziugaite, G. K. and Roy, D. M. Data-dependent pac-bayes priors via differential privacy. arXiv preprint arXiv:1802.09583, 2018b.
- Dziugaite et al. (2020) Dziugaite, G. K., Hsu, K., Gharbieh, W., and Roy, D. M. On the role of data in pac-bayes bounds. arXiv preprint arXiv:2006.10929, 2020.
- Grünwald & Mehta (2020) Grünwald, P. D. and Mehta, N. A. Fast rates for general unbounded loss functions: From erm to generalized bayes. J. Mach. Learn. Res., 21:56–1, 2020.
- Guedj (2019) Guedj, B. A primer on pac-bayesian learning. arXiv preprint arXiv:1901.05353, 2019.
- Haddouche et al. (2021) Haddouche, M., Guedj, B., Rivasplata, O., and Shawe-Taylor, J. Pac-bayes unleashed: generalisation bounds with unbounded losses. Entropy, 23(10):1330, 2021.
- Hron et al. (2020) Hron, J., Bahri, Y., Sohl-Dickstein, J., and Novak, R. Infinite attention: Nngp and ntk for deep attention networks. In International Conference on Machine Learning, pp. 4376–4386. PMLR, 2020.
- Hu et al. (2019) Hu, W., Li, Z., and Yu, D. Simple and effective regularization methods for training on noisily labeled data with generalization guarantee. arXiv preprint arXiv:1905.11368, 2019.
- Huang et al. (2020) Huang, W., Du, W., and Da Xu, R. Y. On the neural tangent kernel of deep networks with orthogonal initialization. arXiv preprint arXiv:2004.05867, 2020.
- Huang et al. (2021) Huang, W., Li, Y., Du, W., Da Xu, R. Y., Yin, J., and Chen, L. Wide graph neural networks: Aggregation provably leads to exponentially trainability loss. arXiv preprint arXiv:2103.03113, 2021.
- Jacot et al. (2018) Jacot, A., Gabriel, F., and Hongler, C. Neural tangent kernel: Convergence and generalization in neural networks. arXiv preprint arXiv:1806.07572, 2018.
- Jain & Kar (2017) Jain, P. and Kar, P. Non-convex optimization for machine learning. arXiv preprint arXiv:1712.07897, 2017.
- Kingma et al. (2015) Kingma, D. P., Salimans, T., and Welling, M. Variational dropout and the local reparameterization trick. Advances in neural information processing systems, 28:2575–2583, 2015.
- Langford & Caruana (2002a) Langford, J. and Caruana, R. (not) bounding the true error. In Dietterich, T., Becker, S., and Ghahramani, Z. (eds.), Advances in Neural Information Processing Systems, volume 14. MIT Press, 2002a.
- Langford & Caruana (2002b) Langford, J. and Caruana, R. (not) bounding the true error. Advances in Neural Information Processing Systems, 2:809–816, 2002b.
- Langford & Seeger (2001) Langford, J. and Seeger, M. Bounds for averaging classifiers. 2001.
- LeCun et al. (2015) LeCun, Y., Bengio, Y., and Hinton, G. Deep learning. nature, 521(7553):436–444, 2015.
- Lee et al. (2019) Lee, J., Xiao, L., Schoenholz, S. S., Bahri, Y., Novak, R., Sohl-Dickstein, J., and Pennington, J. Wide neural networks of any depth evolve as linear models under gradient descent. arXiv preprint arXiv:1902.06720, 2019.
- Letarte et al. (2019) Letarte, G., Germain, P., Guedj, B., and Laviolette, F. Dichotomize and generalize: Pac-bayesian binary activated deep neural networks. arXiv preprint arXiv:1905.10259, 2019.
- Liao et al. (2020) Liao, R., Urtasun, R., and Zemel, R. A pac-bayesian approach to generalization bounds for graph neural networks. arXiv preprint arXiv:2012.07690, 2020.
- Liu et al. (2021) Liu, T., Lu, J., Yan, Z., and Zhang, G. Pac-bayes bounds for meta-learning with data-dependent prior. arXiv preprint arXiv:2102.03748, 2021.
- London (2017) London, B. A pac-bayesian analysis of randomized learning with application to stochastic gradient descent. arXiv preprint arXiv:1709.06617, 2017.
- Maurer (2004) Maurer, A. A note on the pac bayesian theorem. arXiv preprint cs/0411099, 2004.
- McAllester (1999a) McAllester, D. A. Pac-bayesian model averaging. In Proceedings of the twelfth annual conference on Computational learning theory, pp. 164–170, 1999a.
- McAllester (1999b) McAllester, D. A. Some PAC-Bayesian theorems. Machine Learning, 37(3):355–363, 1999b.
- Negrea et al. (2019) Negrea, J., Haghifam, M., Dziugaite, G. K., Khisti, A., and Roy, D. M. Information-theoretic generalization bounds for sgld via data-dependent estimates. arXiv preprint arXiv:1911.02151, 2019.
- Neyshabur et al. (2017a) Neyshabur, B., Bhojanapalli, S., McAllester, D., and Srebro, N. Exploring generalization in deep learning. arXiv preprint arXiv:1706.08947, 2017a.
- Neyshabur et al. (2017b) Neyshabur, B., Bhojanapalli, S., and Srebro, N. A pac-bayesian approach to spectrally-normalized margin bounds for neural networks. arXiv preprint arXiv:1707.09564, 2017b.
- Nie et al. (2018) Nie, S., Zheng, M., and Ji, Q. The deep regression bayesian network and its applications: Probabilistic deep learning for computer vision. IEEE Signal Processing Magazine, 35(1):101–111, 2018.
- Pérez-Ortiz et al. (2020) Pérez-Ortiz, M., Rivasplata, O., Shawe-Taylor, J., and Szepesvári, C. Tighter risk certificates for neural networks. CoRR, abs/2007.12911, 2020. URL https://arxiv.org/abs/2007.12911.
- Perez-Ortiz et al. (2021) Perez-Ortiz, M., Rivasplata, O., Guedj, B., Gleeson, M., Zhang, J., Shawe-Taylor, J., Bober, M., and Kittler, J. Learning pac-bayes priors for probabilistic neural networks. arXiv preprint arXiv:2109.10304, 2021.
- Pu et al. (2016) Pu, Y., Gan, Z., Henao, R., Yuan, X., Li, C., Stevens, A., and Carin, L. Variational autoencoder for deep learning of images, labels and captions. Advances in neural information processing systems, 29:2352–2360, 2016.
- Raginsky et al. (2017) Raginsky, M., Rakhlin, A., and Telgarsky, M. Non-convex learning via stochastic gradient langevin dynamics: a nonasymptotic analysis. In Conference on Learning Theory, pp. 1674–1703. PMLR, 2017.
- Seeger (2002) Seeger, M. Pac-bayesian generalisation error bounds for gaussian process classification. Journal of machine learning research, 3(Oct):233–269, 2002.
- Smith & Le (2017) Smith, S. L. and Le, Q. V. A bayesian perspective on generalization and stochastic gradient descent. arXiv preprint arXiv:1710.06451, 2017.
- Specht (1990) Specht, D. F. Probabilistic neural networks. Neural networks, 3(1):109–118, 1990.
- Thiemann et al. (2017) Thiemann, N., Igel, C., Wintenberger, O., and Seldin, Y. A strongly quasiconvex pac-bayesian bound. In International Conference on Algorithmic Learning Theory, pp. 466–492. PMLR, 2017.
- Zhang et al. (2021) Zhang, C., Bengio, S., Hardt, M., Recht, B., and Vinyals, O. Understanding deep learning (still) requires rethinking generalization. Communications of the ACM, 64(3):107–115, 2021.
- Zhou et al. (2018) Zhou, W., Veitch, V., Austern, M., Adams, R. P., and Orbanz, P. Non-vacuous generalization bounds at the imagenet scale: a pac-bayesian compression approach. arXiv preprint arXiv:1804.05862, 2018.
Appendix A Preliminary Lemmas
In this section, we list preliminary lemmas that will be used in the proofs.
Lemma A.1 (Markov’s inequality).
If $X$ is a nonnegative random variable and $a>0$, then the probability that $X$ is at least $a$ is at most the expectation of $X$ divided by $a$: $\Pr(X\ge a)\le \mathbb{E}[X]/a$.
Lemma A.2 (Hoeffding’s inequality).
Let $X_1,\dots,X_n$ be independent random variables such that $a_i\le X_i\le b_i$ almost surely. Consider the sum of these random variables, $S_n=X_1+\cdots+X_n$; then Hoeffding's theorem states that, for all $t>0$: $\Pr\!\left(\left|S_n-\mathbb{E}[S_n]\right|\ge t\right)\le 2\exp\!\left(-\frac{2t^2}{\sum_{i=1}^{n}(b_i-a_i)^2}\right)$.
Appendix B Missing Proofs of Section 4.1
This section contains detailed proofs of the results that are missing in the main paper.
Lemma B.1 (PNTK at initialization).
If , while and are initialized by the form in Theorem 4.4, then with probability at least , , ; and , .
Proof.
The derivative of output over the parameters and can be expressed as
(21) |
Since the probabilistic neural network contains two sets of parameters, we discuss the corresponding NTKs accordingly.
(1) .
Plugging the derivative result regarding the mean weights in Equation (21) into the definition of the PNTK (Equation 12) yields:
By an analysis, we find that for all pairs of , is the average of i.i.d. random variables bounded in , with the expectation . Here , and . Then with a simple argument, we show that is also a Gaussian variable. The variance of distribution for can be inferred through the following process,
Taking the density function of variable into , we can obtain,
Then by Hoeffding’s inequality, we know that the following inequality holds with probability at least ,
Because NTK matrix is of size , we then apply a union bound over all (by setting ), and obtain that
Thus we have,
Finally, if , which implies , then with probability at least ,
(2) .
Plugging the derivative result regarding the variance weights in Equation (21) into the definition of the PNTK (Equation 12) yields:
Note that the tangent kernel for differs from with an additional term . It is known the independently with . Because , the expectation of equals the expectation of . Thus for all pairs of , is the average of i.i.d. random variables with the expectation .
Now we calculate the concentration bound. It is known that is independent and sub-exponential. Then, by sub-exponential tail bound, we know that the following holds with probability at least ,
This bound is of the same order to concentration bound for . Thus we can take all the arguments for above to finalize the proof: If , which implies , then with probability at least ,
∎
Lemma B.2 (PNTK during training).
If and are initialized the same with Theorem 4.4, for any set of weight vectors that satisfy for any , , where is a constant, then with probability at least , , ; and , .
Proof.
Define the event . Then this event would only happen when . As calculated in the proof of Lemma B.1, we know that . Then with the condition that , we obtain
By integrating over the condition , we have
Then we can bound the entry of the matrix :
Summing over all entries of the matrix, we have . By Markov’s inequality, with probability of over distribution of for , . Then by the matrix perturbation theory, we have,
If , which implies , then with probability at least ,
On the other hand, we can bound the entry of the matrix with following inequality:
For the first inequality in the second line, we use the definition of the surrogate weights $\tilde{\mathbf{w}}_r(t)$. We argue that our definition is reasonable because the two noise variables involved are i.i.d. With our definition, we can compute the kernel directly, while keeping the corresponding conclusion applicable to the real weights at training step $t$.
The above inequality shows that with additional , we still have the same result. Once again, we can apply arguments for here, and conclude the proof: If , which implies , then with probability at least ,
∎
Lemma B.3 (Change of weights during training).
Suppose for . Then we have , and with probability at least over initialization, .
Proof.
According to the gradient flow of output function, we have
Then the dynamics of loss can be calculated,
where we define . are collections of features and labels. Then the loss can be bounded,
The above equation implies the linear convergence rate of gradient descent on over-parameterized probabilistic network.
Now we bound the change of the mean weights:
Integrating the gradient, we have
Next we bound the change of the variance weights:
In the above derivation, we use the definition of loss which is and interchange integration and differentiation. The result leads to:
Plugging the results for both the mean and variance weights together, we obtain
Finally, it is time to bound ,
Thus by Markov’s inequality, we have with probability at least , , and
∎
Lemma B.4.
If , we have
Proof.
Since and , we have . ∎
Appendix C Missing Proofs of Section 4.2
Theorem C.1.
Proof.
At a high level, our proof first establishes the result of kernel ridge regression in the infinite-width limit, and then bounds the perturbation on the prediction.
According to the linearization rule for infinitely wide networks (Lee et al., 2019), the output function can be expressed as,
where , and . Since does not change during training, then the KL divergence reduces to
Then the gradient flow equation for becomes,
By analyzing the above equation, we find that it is an ordinary differential equation for the mean weights, and the solution is,
Plugging this result into the linearization of the expected output function, we have,
When we take the time to infinity,
The next step is to show that
where . In which, we define and .
The proof relies on a careful analysis of the trajectories induced by gradient flow for optimizing the neural network and the NTK predictor. The detailed proof can be found in the proof of Theorem 3.2 in (Arora et al., 2019b), with kernel regression there replaced by the kernel ridge regression considered here.
∎
Appendix D Missing Proofs of Section 4.3
Theorem D.1 (PAC-Bayesian bound with NTK).
Suppose data are i.i.d. samples from a non-degenerate distribution , and . Consider any loss function . Then with probability at least over the random initialization and the training samples, the probabilistic network trained by gradient descent for iterations has population risk that is bounded as follows:
(23) |
Proof.
The generalization bound consists two terms, one is the empirical error, and another is KL divergence.
(1) We first bound the empirical error with the following inequality,
Then we can further bound the error as,
(2) The next step is to calculate the KL divergence. According to the solution of the differential equation, we have,
then yields . Therefore, the KL divergence is,
KL |
Finally, we achieve the PAC-Bayesian generalization bound,
∎
Theorem D.2 (Rademacher bound with NTK).
Suppose data are i.i.d. samples from a non-degenerate distribution , and . Consider any loss function . Then with probability at least over the random initialization and training samples, the network trained by gradient descent for iterations has population risk that is bounded as follows:
(24) |
Proof.
In this proof, we use Rademacher-complexity analysis. Let be the reproducing kernel Hilbert space (RKHS) corresponding to the kernel . It is known that the RKHS norm of a function is , where and . Then we can bound the .
For function class , it is shown that its empirical Rademacher complexity can be bounded as,
Assume that . Recall the standard generalization bound from Rademacher complexity, with probability at least , we have,
Thus we have,
∎
Appendix E Additional Experiments
This section contains additional experimental results.
E.1 Training dynamics of wide probabilistic network
We conduct an experiment similar to the experiment conducted in section 5.2 but without incorporating the KL term in the objective function. The same results can be observed, further verifying our theoretical results.
E.2 Comparison of gradient norm with respect to and
In Lemma 4.9, we assume that the variance weights are fixed. To verify whether this assumption is reasonable in the practical training process, we conduct an experiment comparing the gradient norms with respect to the mean weights and the variance weights. The result is shown in Figure S3. We can see that the gradient norm with respect to the mean weights is much larger than that with respect to the variance weights, which implies that the variance weights are effectively fixed during gradient descent training.
[Figure S3: gradient norms with respect to the mean weights and the variance weights during training.]
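To make the mechanism concrete, the following toy Monte Carlo sketch (not the experiment above; all sizes and constants are illustrative) uses the per-sample gradients of the model in Equation (1), $\partial f/\partial\boldsymbol{\mu}_r=(a_r/\sqrt{m})\,\sigma'(\mathbf{w}_r^{\top}\mathbf{x})\,\mathbf{x}$ and $\partial f/\partial\boldsymbol{\sigma}_r=(a_r/\sqrt{m})\,\sigma'(\mathbf{w}_r^{\top}\mathbf{x})\,(\boldsymbol{\varepsilon}_r\odot\mathbf{x})$, to show that averaging over the noise shrinks the variance-weight gradient but not the mean-weight gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, n_mc = 20, 256, 2000
x = rng.normal(size=d)
x /= np.linalg.norm(x)
mu = rng.normal(size=(m, d))
sig = 0.1 * np.ones((m, d))
a = rng.choice([-1.0, 1.0], size=m)

grad_mu = np.zeros((m, d))
grad_sig = np.zeros((m, d))
for _ in range(n_mc):
    eps = rng.normal(size=(m, d))
    w = mu + sig * eps
    act = (w @ x > 0).astype(float)                    # ReLU'(w_r^T x)
    g = (a * act)[:, None] * x[None, :] / np.sqrt(m)   # d f / d mu_r for every r
    grad_mu += g
    grad_sig += g * eps                                # d f / d sigma_r has an extra eps factor
grad_mu /= n_mc
grad_sig /= n_mc

# The averaged sigma-gradient is far smaller in norm than the averaged mu-gradient,
# which is the sense in which the variance weights are "effectively fixed".
print(np.linalg.norm(grad_mu), np.linalg.norm(grad_sig))
```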
E.3 Correlation between generalization bound proxy metric and generalization bound
In Figure 2, we observe a positive and significant correlation between the proxy metric and the generalization bound across different values of a selected hyperparameter while fixing the other hyperparameters. Furthermore, we provide Figure S2, presenting the correlation between aggregated values of the proxy metric and the bound in the setting where 50% of the data is used for prior training. We can clearly see that a lower proxy value corresponds to a lower bound, with a strong positive Kendall-tau correlation of 0.7.
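For reference, the rank correlation reported here can be computed as follows; the listed scores and bounds are illustrative placeholders, not values from the experiments.

```python
from scipy.stats import kendalltau

# proxy_scores[i] and bounds[i] belong to the same hyperparameter configuration
proxy_scores = [0.91, 0.85, 1.10, 0.78, 0.97]   # illustrative placeholder values
bounds       = [0.050, 0.046, 0.061, 0.043, 0.055]
tau, p_value = kendalltau(proxy_scores, bounds)
print(f"Kendall tau = {tau:.2f} (p = {p_value:.3f})")
```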
E.4 Grid search
For selecting hyperparameters, we conduct a grid search over the initial weight variance, the percentage of data used for the prior, and the KL penalty. Notably, we sweep the proportion of data used for prior training over [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9], since 0.2 is the minimum proportion required for obtaining a reasonably low generalization bound (Dziugaite et al., 2020). For the rest, we sweep the initial variance over [0.03, 0.05, 0.07, 0.09, 0.1, 0.3, 0.5, 0.7] for FCN ([0.05, 0.07, 0.09, 0.1, 0.3, 0.5, 0.7, 0.9] for CNN) and the KL penalty over [0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1] for both structures.
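A sketch of how the training-free selection over this grid could be organized is given below; `proxy_score` is a hypothetical placeholder that would wrap the NTK-based proxy of Section 5.3 evaluated at one configuration.

```python
import itertools

# Hyperparameter grid from Appendix E.4 (the initial-variance values shown are the FCN ones).
sigma_init_grid = [0.03, 0.05, 0.07, 0.09, 0.1, 0.3, 0.5, 0.7]
prior_fraction_grid = [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
kl_penalty_grid = [0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1]

grid = list(itertools.product(sigma_init_grid, prior_fraction_grid, kl_penalty_grid))
print(len(grid))  # 8 * 8 * 9 = 576 configurations

def select_by_proxy(grid, score_fn):
    """Return the configuration with the lowest training-free score, instead of training all of them."""
    return min(grid, key=lambda cfg: score_fn(*cfg))

# `score_fn` would evaluate the NTK-based proxy for one (sigma_init, prior_fraction, kl_penalty)
# configuration; brute-force search instead trains and certifies a posterior for every element of `grid`.
```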