Redundancy of Hidden Layers in Deep Learning: An Information Perspective
Abstract
Although deep structures guarantee the powerful expressivity of deep neural networks (DNNs), they may also trigger overfitting problems. To improve the generalization capacity of DNNs while retaining their expressivity, many strategies have been developed to improve the diversity among the hidden units. Following this research direction, we propose a label-based diversity measure (LDiversity), quantified as the gap between a canonical unsupervised diversity term and a newly added inductive-bias term, by formalizing the effect of the entanglement of the hidden units on the generalization capacity as mutual information. The existence of an inverse relationship between LDiversity and the generalization capacity is proved; i.e., a decrease in LDiversity generally improves the generalization capacity. Further, a regularization method is proposed that uses LDiversity as the regularizer. Experiments show that the new method can effectively reduce overfitting and decrease the generalization error, experimentally justifying our approach.
1 Introduction
Deep neural networks (DNNs) have achieved significant success in many practical applications due to their strong expression capacity and powerful learning ability. However, the deep structure of DNNs may lead to complicated nonlinear mappings from input to output, giving rise to the problem of overfitting. Therefore, many studies have been devoted to developing approaches to improve the generalization capacity of DNNs in order to address the overfitting problem. One research direction is to constrain the model complexity; these methods include dropout Srivastava et al. (2014), weight decay Krogh & Hertz (1991), and CIF Zhao et al. (2018). Despite the high effectiveness of these methods, they may not fully leverage the expression capability of the model because they generally reduce the effective number of model parameters.
To improve the generalization capacity of DNNs while simultaneously maintaining their expressivity, another research direction is to explore and improve the diversity among the hidden units of a single specified layer of DNNs, encouraging the hidden units to be as uncorrelated or independent of each other as possible. For instance, Cogswell et al. minimized the cross-covariance of hidden activations to obtain diverse representations of the hidden units and thereby reduce overfitting Cogswell et al. (2016). Gu et al. extended this method, treating nonoverlapping groups of hidden units as component learners in order to avoid the negative influence of the breakdown of correlations Gu et al. (2018). Impressively, by using the mutual information among the hidden units as the measure of diversity, Brakel et al. proposed a method to learn independent features Brakel & Bengio (2018). Many other studies in the literature have also investigated the positive role of feature independence in feature encoding Bengio et al. (2017); Hjelm et al. (2019).
This paper follows the second research direction and further focuses on the role of label information in defining the diversity measure. In fact, although boosting the diversity among the hidden units of DNNs has been shown to benefit performance in classification tasks, there is no single widely accepted formal definition of a diversity measure. It has been found that the role of inductive biases should be made explicit and enforced in the process of learning disentangled hidden representations for downstream tasks Locatello et al. (2019). However, the diversity measures in the current literature mostly consider only the correlations among the hidden units over the whole mixed data distribution and neglect the local clustering feature of the different classes, which may be harmful to the classification performance Grover & Ermon (2019). A feasible approach to understanding the role of label information in defining the diversity measure, and to obtaining an appropriate measure with embedded inductive biases, is to investigate the diversity measure under supervised settings by examining the relationship between diversity and generalization capacity.
To achieve this goal, we first introduce a generalization error bound of DNNs from an information perspective, following the work of Xu and Raginsky Xu & Raginsky (2017), which formalizes the bound as the mutual information between the activation values of the hidden units in the specified layer and the model parameters from this layer to the end layer (see Fig. 1). Intuitively, the new bound describes the information about the extracted features stored in the model parameters, which was proposed as a measure of the effective complexity of a network by Hinton and Van Camp and was used as a regularizer to simplify networks by Achille and Soatto Hinton & Van Camp (1993); Achille & Soatto (2018). However, the direct usage of this bound has usually been difficult due to its excessively complicated estimation in previous work. Nevertheless, compared to the traditional generalization error bounds that are based on the hypothesis space, e.g., the Rademacher complexity Boucheron et al. (2005) and the uniform stability Bousquet & Elisseeff (2002), the new bound is tight and simpler in form, making it sufficient for our further analysis.

We then decompose the new bound and naturally obtain a measure of diversity formulated as the difference of two terms, named the label-based diversity measure (LDiversity). The first term, which has been used to obtain independent features Brakel & Bengio (2018), is a canonical unsupervised diversity measure; the second term is a label-based term that has not been considered by the current diversity measures in DNNs. This second term is the main difference between our measure and the other measures, reflecting the inductive biases embedded in the hidden representations. Decreasing the defined label-based diversity measure is expected to improve the generalization capacity. Furthermore, if we regard the hidden units as base learners, the proposed diversity measure can also be viewed as the ensemble diversity proposed for ensemble learning Brown (2009); Zhou & Li (2010), where the ensemble diversity is shown to be part of the upper bound of the classification error. This means that decreasing LDiversity may also suppress the probability of classification error.
By using LDiversity as the regularizer, we develop a new regularization method named the LDiversity method (LDM), the goal of which is to minimize the classification loss together with the newly added LDiversity term. In particular, the process of minimizing LDiversity is the same as that of training generative adversarial networks (GANs), where two additional “discriminators” are involved to estimate LDiversity by maximizing their output values. Finally, we apply this method to fully connected neural networks and convolutional neural networks. Extensive experiments show that LDM can effectively reduce overfitting and decrease the generalization error compared to the methods without the LDiversity regularizer. Furthermore, LDiversity between the hidden units is demonstrated to be a crucial factor for reducing the generalization error in DNNs.
2 Preliminaries
Let $\mathcal{Z} = \mathcal{X} \times \mathcal{Y}$ be an instance space, where $\mathcal{X}$ is a feature space and $\mathcal{Y}$ is a label space. A training set of size $n$ is an $n$-tuple, i.e.,

$$S = (Z_1, Z_2, \ldots, Z_n) \qquad (1)$$

of i.i.d. random elements of $\mathcal{Z}$ with an unknown PDF $p(z)$. Given a neural network with multiple layers, let $T = \{T_1, \ldots, T_k\}$ be the set of all hidden units in the discussed layer and $W \in \mathcal{W}$ be the collection of model parameters from the specified hidden layer to the end layer, where $\mathcal{W}$ is the hypothesis space of $W$. Due to the randomness in the realization of the dataset $S$, the values in $T$ are considered to be random variables.
We will make frequent use of the following standard information theoretical quantities Cover & Thomas (1991). For a stochastic variable $X$, its Shannon entropy is defined as

$$H(X) = \mathbb{E}_X\big[-\log p(X)\big], \qquad (2)$$

where $\mathbb{E}_X[\cdot]$ denotes the expectation of the random object within the brackets w.r.t. the subscript random variable $X$.
The mutual information of two stochastic variables $X$ and $Y$ is

$$I(X;Y) = H(X) + H(Y) - H(X,Y), \qquad (3)$$

which, by capturing the nonlinear statistical dependencies between the variables, can be reformulated as the Kullback-Leibler (KL-)divergence between the joint density and the product of the marginal densities, i.e.,

$$I(X;Y) = D_{\mathrm{KL}}\big( p(x,y) \,\big\|\, p(x)p(y) \big), \qquad (4)$$

which is zero if and only if $X$ and $Y$ are independent. For more than two variables $X_1, \ldots, X_k$, the multivariate mutual information is defined as

$$I(X_1;\ldots;X_k) = \sum_{j=1}^{k} H(X_j) - H(X_1,\ldots,X_k) = D_{\mathrm{KL}}\Big( p(x_1,\ldots,x_k) \,\Big\|\, \prod_{j=1}^{k} p(x_j) \Big). \qquad (5)$$
We will later use it to measure the entanglement of the hidden units and, for consistency, still call it the mutual information in the following. Consequently, the conditional mutual information of multiple variables $X_1, \ldots, X_k$ given $Y$ is

$$I(X_1;\ldots;X_k \mid Y) = \sum_{j=1}^{k} H(X_j \mid Y) - H(X_1,\ldots,X_k \mid Y) \qquad (6)$$
and will be employed below to describe the class-conditional correlation.
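The quantities above are easy to make concrete for discrete variables. The following minimal NumPy sketch (the function names are ours) computes the entropy of Eq. (2), the multivariate mutual information of Eq. (5) as the sum of marginal entropies minus the joint entropy, and the conditional variant of Eq. (6) as a label-weighted average.

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(X) = E[-log p(X)] of a discrete distribution (Eq. 2)."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def multi_information(joint):
    """Multivariate mutual information I(X_1;...;X_k) of Eq. (5): the sum of
    the marginal entropies minus the joint entropy."""
    k = joint.ndim
    marginals = sum(entropy(joint.sum(axis=tuple(a for a in range(k) if a != j)))
                    for j in range(k))
    return marginals - entropy(joint)

def conditional_multi_information(joint_xy):
    """Conditional multi-information I(X_1;...;X_k | Y) of Eq. (6), with Y on
    the last axis: the p(y)-weighted multi-information of the conditionals."""
    p_y = joint_xy.reshape(-1, joint_xy.shape[-1]).sum(axis=0)
    return sum(p_y[y] * multi_information(joint_xy[..., y] / p_y[y])
               for y in range(joint_xy.shape[-1]) if p_y[y] > 0)

# A random joint distribution p(x1, x2, y) over three binary variables.
p = np.random.dirichlet(np.ones(8)).reshape(2, 2, 2)
print(multi_information(p.sum(axis=2)))    # I(X1; X2)
print(conditional_multi_information(p))    # I(X1; X2 | Y)
```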
3 Generalization Error Bound
This section gives an upper bound of the generalization error from an information-theoretical perspective, bounding the expected difference between the expected risk and the empirical risk in terms of the mutual information between the hidden units and the model parameters. We follow the framework proposed by Russo and Zou and by Xu and Raginsky Russo & Zou (2020); Xu & Raginsky (2017). Since both the expected risk and the empirical risk are initially related to the datum rather than to the values of the hidden units, it is convenient to think of each unit $T_j$ in the discussed layer as a mapping from the datum $Z$ to its activation value, i.e., $T_j = T_j(Z)$, in order to investigate the effect of the entanglement of the hidden units on the generalization capacity; in fact, we are more interested in the mechanism that produces the activation value than in the activation value itself (see Fig. 1). Consequently, let $T = (T_1, \ldots, T_k)$ and let $T(Z_i)$ denote the activations of the $i$th sample; then, given $W$, the loss function $\ell(W, Z)$ on the sample $Z$ can be restated as a function w.r.t. $W$ and $T$, i.e., $\ell(W, Z) = \tilde{\ell}(W, T(Z))$. Accordingly, let $T_{1:n} = (T(Z_1), \ldots, T(Z_n))$. Now, we are ready to obtain the upper bound.
The empirical risk of a hypothesis $W$ over the dataset $S$ is

$$L_S(W) = \frac{1}{n} \sum_{i=1}^{n} \tilde{\ell}\big(W, T(Z_i)\big). \qquad (7)$$

The expected risk of $W$ on $p(z)$ is

$$L_P(W) = \mathbb{E}\Big[\frac{1}{n} \sum_{i=1}^{n} \tilde{\ell}\big(W, T(Z'_i)\big)\Big], \qquad (8)$$

where $Z'_1, \ldots, Z'_n$ are i.i.d. random variables drawn from $p(z)$ and independent of $W$. Taking the expectation of the difference between $L_P(W)$ and $L_S(W)$ with respect to the joint distribution $P_{W, T_{1:n}}$, we obtain

$$\mathrm{gen}(P) = \mathbb{E}_{W, T_{1:n}}\big[L_P(W) - L_S(W)\big]. \qquad (9)$$

Then, the generalization error can be decomposed as

$$\mathbb{E}\big[L_P(W)\big] = \mathbb{E}\big[L_S(W)\big] + \mathrm{gen}(P). \qquad (10)$$
We focus on $\mathrm{gen}(P)$, which reflects the quality of the generalization of the output hypothesis. Some further steps show that

$$\mathrm{gen}(P) = \overline{\mathbb{E}}\big[L_S(W)\big] - \mathbb{E}_{W, T_{1:n}}\big[L_S(W)\big], \qquad (11)$$

where $\overline{\mathbb{E}} = \mathbb{E}_{P_W \otimes P_{T_{1:n}}}$ means taking the expectation w.r.t. the product of the marginal PDFs of $W$ and $T_{1:n}$.
Xu and Raginsky (Lemma 1 in Xu & Raginsky (2017)) have justified that, given two random variables $X$ and $Y$ with the joint PDF $p(x,y)$ and the product of the marginal PDFs $p(x)p(y)$, if the function $f(X,Y)$ is a $\sigma$-subgaussian function under $p(x)p(y)$, then

$$\Big| \mathbb{E}_{P_{X,Y}}\big[f(X,Y)\big] - \mathbb{E}_{P_X \otimes P_Y}\big[f(X,Y)\big] \Big| \le \sqrt{2\sigma^2\, I(X;Y)}, \qquad (12)$$

where a random variable $U$ is $\sigma$-subgaussian if $\log \mathbb{E}\big[e^{\lambda(U - \mathbb{E}U)}\big] \le \lambda^2\sigma^2/2$ for all $\lambda \in \mathbb{R}$. In fact, if the loss function $\tilde{\ell}$ in Eqs. (7) and (8) is restricted to functions bounded in $[0,1]$, e.g., the sigmoid function, it is a $\frac{1}{2}$-subgaussian function by Hoeffding’s lemma Massart (2007). The empirical average $L_S(W)$ in Eq. (11) is consequently a $\frac{1}{2\sqrt{n}}$-subgaussian function under $P_W \otimes P_{T_{1:n}}$, since the independence among the $T(Z_i)$ lets the moment-generating function factorize, $\mathbb{E}\big[e^{\lambda(L_S - \mathbb{E}L_S)}\big] = \prod_{i=1}^{n} \mathbb{E}\big[e^{\frac{\lambda}{n}(\tilde{\ell}_i - \mathbb{E}\tilde{\ell}_i)}\big] \le e^{\lambda^2 \left(\frac{1}{2\sqrt{n}}\right)^2 / 2}$, where $\tilde{\ell}_i = \tilde{\ell}(W, T(Z_i))$. Then, by setting $X$ and $Y$ in Eq. (12) as $T_{1:n}$ and $W$, respectively, we obtain the following lemma.
Lemma 1.
If the loss function $\tilde{\ell}$ is $\sigma$-subgaussian, then the absolute value of $\mathrm{gen}(P)$ is upper-bounded in terms of the mutual information between $T_{1:n}$ and $W$, i.e.,

$$\big|\mathrm{gen}(P)\big| \le \sqrt{\frac{2\sigma^2}{n}\, I(T_{1:n}; W)}. \qquad (13)$$
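To get a feel for Lemma 1, the following sketch evaluates the right-hand side of Eq. (13) for a loss bounded in $[0,1]$ (so $\sigma = 1/2$); the mutual-information value used here is an arbitrary illustrative number, not an estimate from a real network.

```python
import numpy as np

def gen_error_bound(mi_nats, n, sigma=0.5):
    """Right-hand side of Eq. (13): sqrt(2 * sigma^2 / n * I(T_{1:n}; W)).
    sigma = 0.5 corresponds to a loss function bounded in [0, 1]."""
    return np.sqrt(2.0 * sigma ** 2 * mi_nats / n)

# I(T_{1:n}; W) = 50 nats is an arbitrary illustrative value; the bound decays
# like 1/sqrt(n) whenever the stored information grows sublinearly in n.
for n in (100, 1000, 10000):
    print(n, gen_error_bound(mi_nats=50.0, n=n))
```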
4 Label-based Diversity Measure
The generalization error bound deduced in Lemma 1 may track the generalization error more tightly than some existing bounds because the new bound depends on almost all ingredients of a learning problem, including the distribution of the dataset, the hypothesis space and the learning algorithm. In contrast, some existing bounds, such as the VC dimension or the Rademacher complexity, depend mainly on the hypothesis space and neglect the learning algorithm, which potentially yields a looser bound that must cover the ignored ingredients of the learning problem. Moreover, Lemma 1 implies that regularizing the empirical risk with $I(T_{1:n}; W)$ may lead to improved generalization capacity. However, due to the high dimensionality of the hypothesis space $\mathcal{W}$, the direct usage of $I(T_{1:n}; W)$ is usually intractable. In this section, we decompose the upper bound in Eq. (13), remove the terms related to $W$ and naturally derive a label-based diversity measure among the hidden units.
Theorem 1.
If the loss function $\tilde{\ell}$ is $\sigma$-subgaussian, then

$$\big|\mathrm{gen}(P)\big| \le \sqrt{\frac{2\sigma^2}{n} \bigg( n\, I(T_1;\ldots;T_k) - n\, I(T_1;\ldots;T_k \mid Y) + n \sum_{j=1}^{k} I(T_j;Y) + n\, H(Y) + I\big(T_{1:n}; W \mid Y_{1:n}\big) \bigg)}. \qquad (14)$$
Proof.
Only a brief proof is given here (see the supplementary material for the details). First, introducing the labels into the mutual information of Eq. (13) gives

$$I(T_{1:n}; W) \le I(T_{1:n}; W, Y_{1:n}) = I(T_{1:n}; Y_{1:n}) + I(T_{1:n}; W \mid Y_{1:n}). \qquad (15)$$

Next, by adding nonnegative information terms to the right-hand side of the above equation, the label-related term is bounded through the entropy of the labels, the relevancies of the individual hidden units to the labels, and the (conditional) multivariate mutual information among the hidden units. Finally, considering that the samples in $S$ are sampled in an i.i.d. fashion, the sample-level quantities reduce to $n$ times their per-sample counterparts. Combining these steps gives Eq. (14), which completes the proof. ∎
Let us focus on Eq. (14). There are five terms in the square root. Only the first two terms are completely unrelated to the sample size and the model parameters $W$; they reflect the relationships among the hidden units. Since these two terms are part of the decomposed upper bound, regularizing the empirical risk with their sum is expected to reduce the upper bound of the absolute value of $\mathrm{gen}(P)$ as well as the generalization error. We argue that the two terms naturally quantify the diversity among the hidden units (see Definition 1). As for the remaining terms: the third term, the sum of the respective relevancies of the hidden units to the labels $Y$, demonstrates the classification ability of each hidden unit itself and, as a whole, is positively correlated with the second term to some extent; it is not considered by our diversity measure. The fourth term is non-optimizable w.r.t. the training process and is likewise not considered by the new diversity measure. Compared to the other terms, the last term is the only one related to the sample size $n$ and the model parameters $W$, which makes it unsuitable for describing the diversity among the hidden units. Moreover, when the sample size is relatively large, this term tends to be relatively small and consequently has a small effect on the generalization error, since the upper bound in Lemma 1 for any reasonable hypothesis necessarily declines to a very small value as the sample size increases. Thus, we do not consider optimizing this term in this work.
Definition 1.
The label-based diversity measure among the hidden units is defined as
$$\mathrm{LDiv}(T_1,\ldots,T_k) = I(T_1;\ldots;T_k) - I(T_1;\ldots;T_k \mid Y). \qquad (18)$$
As discussed in the introduction, the first term in the diversity measure is the canonical unsupervised diversity measure, which was used to learn independent data representations Brakel & Bengio (2018). It has also been shown that reducing such correlations among the hidden units leads to an improved generalization capability Cogswell et al. (2016); Hjelm et al. (2019). The second term in LDiversity is a label-dependent term; it describes the local clustering feature captured by the hidden units. Increasing this term may strengthen the class-conditional correlation and make the activations of the same class behave more collaboratively, which is usually important for a classification task. This term is the main difference between our diversity measure and the other measures.
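For discretized activations, LDiversity can be estimated with simple plug-in entropies. The sketch below is our own construction (not the paper's estimator, which is introduced in Section 5); it illustrates Eq. (18) on toy data in which two units are correlated overall but independent within each class, so the estimate comes out clearly positive.

```python
import numpy as np
from collections import Counter

def plug_in_entropy(rows):
    """Entropy of the empirical distribution of the given rows (as tuples)."""
    counts = np.array(list(Counter(map(tuple, rows)).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

def ldiversity(T, y):
    """Plug-in estimate of Eq. (18) for discretized activations T of shape
    (n, k) and labels y: I(T_1;...;T_k) - I(T_1;...;T_k | Y)."""
    def multi_info(rows):
        marginals = sum(plug_in_entropy(rows[:, [j]]) for j in range(rows.shape[1]))
        return marginals - plug_in_entropy(rows)
    conditional = sum(np.mean(y == c) * multi_info(T[y == c]) for c in np.unique(y))
    return multi_info(T) - conditional

# Two units that are correlated overall (through the label) but independent
# within each class: the estimate should be clearly positive.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=5000)
T = np.stack([y + rng.integers(0, 2, size=5000),
              y + rng.integers(0, 2, size=5000)], axis=1)
print(ldiversity(T, y))
```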
It is worth noting that the diversity measure is identical in form to the ensemble diversity measure proposed by Brown (2009); Zhou & Li (2010) for ensemble learning. They derived the ensemble measure by analyzing the upper bound of the probability of the classification error, which is Hellman & Raviv (1970)

$$P\big(g(T_1,\ldots,T_k) \ne Y\big) \le \frac{1}{2} H(Y \mid T_1,\ldots,T_k) = \frac{1}{2}\big( H(Y) - I(T_1,\ldots,T_k; Y) \big), \qquad (19)$$

where we continue to use the previous symbols and view $T_1,\ldots,T_k$ as a set of base classifiers for the sample $Z$; $g$ is any given combination function that minimizes the probability of error. By decomposing the mutual information term in Eq. (19), they obtained

$$I(T_1,\ldots,T_k; Y) = \sum_{j=1}^{k} I(T_j; Y) - I(T_1;\ldots;T_k) + I(T_1;\ldots;T_k \mid Y) \qquad (20)$$
and defined the sum of the last two terms as the ensemble diversity measure. Although the two measures have the same form, the ensemble diversity measure is based on the Bayesian learning framework, where only the 0-1 loss is permitted, which makes it less suitable for deep learning settings; moreover, no effective procedure for using the measure in practice was proposed. Nonetheless, their work implies that if we regard the hidden units as base classifiers, a decrease in LDiversity may well lead to a decrease of the classification error.
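The decomposition in Eq. (20) is an exact identity and can be checked numerically; the following sketch verifies it for two binary base classifiers and a binary label under a random joint distribution.

```python
import numpy as np

def H(p):
    """Shannon entropy of a (possibly multidimensional) distribution."""
    p = p.ravel()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

p = np.random.dirichlet(np.ones(8)).reshape(2, 2, 2)   # random p(t1, t2, y)
p_t, p_y = p.sum(2), p.sum((0, 1))
p_t1, p_t2 = p.sum((1, 2)), p.sum((0, 2))
p_t1y, p_t2y = p.sum(1), p.sum(0)

I_TY = H(p_t) + H(p_y) - H(p)                  # I(T1,T2; Y)
I_T1Y = H(p_t1) + H(p_y) - H(p_t1y)            # I(T1; Y)
I_T2Y = H(p_t2) + H(p_y) - H(p_t2y)            # I(T2; Y)
I_T = H(p_t1) + H(p_t2) - H(p_t)               # I(T1; T2)
I_TgY = sum(p_y[y] * (H(p_t1y[:, y] / p_y[y]) + H(p_t2y[:, y] / p_y[y])
                      - H(p[:, :, y] / p_y[y])) for y in range(2))

# Eq. (20): I(T1,T2;Y) = I(T1;Y) + I(T2;Y) - I(T1;T2) + I(T1;T2|Y)
print(np.isclose(I_TY, I_T1Y + I_T2Y - I_T + I_TgY))   # True
```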
5 Regularization Method
In this section, a new regularization method named the label-based diversity method (LDM) is proposed by using LDiversity as the regularizer. Its total loss function is formulated as
$$\mathcal{L} = \mathcal{L}_0 + \lambda \cdot \mathrm{LDiv}(T_1,\ldots,T_k), \qquad (21)$$

where $\mathcal{L}_0$ is the premier loss function of the DNNs without any regularizer, for instance, the cross-entropy between the outputs of the DNNs and the labels; $\mathrm{LDiv}(T_1,\ldots,T_k)$ controls the label-based diversity among the hidden units in one specified layer; and $\lambda$ is the balance parameter.
The regularizer is actually the difference of two mutual information terms. Although the estimation of mutual information has long been recognized as a very difficult problem due to the continuity and high dimensionality of the data, recent studies Belghazi et al. (2018); Brakel & Bengio (2018) revealed that this problem can be solved in terms of the Donsker-Varadhan representation Donsker & Varadhan (1975) of the KL-based mutual information, which is

$$I(X;Y) = \sup_{F}\ \mathbb{E}_{P_{X,Y}}\big[F(X,Y)\big] - \log \mathbb{E}_{P_X \otimes P_Y}\big[e^{F(X,Y)}\big], \qquad (22)$$

where $F$ is usually realized as a neural network such that the two expectations are finite. Then, by Eq. (5), the mutual information is estimated by optimizing $F$ to narrow the divergence between the joint distribution and the product of the marginals. However, such a strategy for the KL-based mutual information may suffer from an instability problem; an alternative approach, proposed by Brakel and Bengio, is to replace the KL-based mutual information with the Jensen-Shannon (JS-)divergence-based mutual information, where the possible deviation introduced by using the JS-divergence is usually acceptable Brakel & Bengio (2018). That is,

$$I_{\mathrm{JS}}(T_1;\ldots;T_k) = D_{\mathrm{JS}}\Big( p(t_1,\ldots,t_k) \,\Big\|\, \prod_{j=1}^{k} p(t_j) \Big), \qquad (23)$$

where $D_{\mathrm{JS}}$ is the JS-divergence. It is estimated by

$$\widehat{D}_{\mathrm{JS}} = \max_{D_1}\ \mathbb{E}_{t \sim p(t_1,\ldots,t_k)}\big[\log \sigma\big(D_1(t)\big)\big] + \mathbb{E}_{\tilde{t} \sim \prod_{j} p(t_j)}\big[\log\big(1 - \sigma\big(D_1(\tilde{t})\big)\big)\big], \qquad (24)$$
where $t$ obeys the joint distribution $p(t_1,\ldots,t_k)$, $\tilde{t}$ obeys the product of the marginals, and $\sigma$ hereinafter represents the sigmoid function. Similarly, for the conditional mutual information, we have

$$I_{\mathrm{JS}}(T_1;\ldots;T_k \mid Y) = \mathbb{E}_Y\Big[ D_{\mathrm{JS}}\Big( p(t_1,\ldots,t_k \mid Y) \,\Big\|\, \prod_{j=1}^{k} p(t_j \mid Y) \Big) \Big], \qquad (25)$$

where taking the expectation over $Y$ requires using Eq. (24) to first obtain the corresponding JS-divergence for each given label and then combining the obtained divergences according to the prior probability of $Y$, which is estimated by the proportion of samples of each class in the total. To distinguish the networks used in Eqs. (23) and (25), they are denoted by $D_1$ and $D_2$, respectively. For brevity, the JS-divergence estimate for the conditional mutual information is abbreviated as $\widehat{D}_{\mathrm{JS}}^{\,c}$.
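Concretely, the estimation can be realized with a small discriminator network, as in adversarial training. Below is a minimal PyTorch sketch of the objective in Eq. (24); the paper's implementation used TensorFlow, and the architecture and names here are illustrative. The additive and multiplicative constants of the exact JS-divergence are dropped since they do not affect the optimization.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    """A small network whose maximized objective approximates the
    JS-divergence between joint and product-of-marginal samples (Eq. 24)."""
    def __init__(self, k, hidden=200):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(k, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, t):
        return self.net(t)

def js_objective(disc, t_joint, t_marginal):
    """E_joint[log sigma(D)] + E_marginal[log(1 - sigma(D))]; note that
    log(1 - sigma(x)) = logsigmoid(-x)."""
    return (F.logsigmoid(disc(t_joint)).mean()
            + F.logsigmoid(-disc(t_marginal)).mean())
```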
The learning algorithm is finally shown as an iterative min-max process:
$$\max_{D_1}\ \widehat{D}_{\mathrm{JS}}(D_1), \qquad \max_{D_2}\ \widehat{D}_{\mathrm{JS}}^{\,c}(D_2), \qquad \min_{\theta}\ \mathcal{L}_0(\theta) + \lambda\Big( \widehat{D}_{\mathrm{JS}}(D_1) - \widehat{D}_{\mathrm{JS}}^{\,c}(D_2) \Big), \qquad (26)$$

where the maximization processes guarantee a sufficient approximation to LDiversity by $D_1$ and $D_2$, and the minimization process is the training process of the regularized DNNs to obtain the classifier $f_\theta$. The overall training process of LDM is similar to that of generative adversarial networks (GANs) Goodfellow et al. (2014). In fact, both $D_1$ and $D_2$ in LDM play the same role as the discriminator in GANs (see Fig. 2).

To implement the learning algorithm presented in Eq. (26), it is important to estimate the expectations first. The expectation taken over the joint distribution $p(t_1,\ldots,t_k)$ or $p(t_1,\ldots,t_k \mid y)$ can be estimated directly by its average value on the samples from the joint distribution. However, taking the expectation over the product of marginals $\prod_j p(t_j)$ or $\prod_j p(t_j \mid y)$ is not straightforward because there are no samples from such a distribution for the empirical estimation. In this work, two strategies are established to approximately obtain samples from the product of marginals according to the type of network. For fully connected neural networks, each sample from the product of marginals with $k$ dimensions is obtained by randomly selecting $k$ samples from the joint distribution, taking the $j$th element from the $j$th selected sample and combining them. For convolutional neural networks (CNNs), after viewing each filter in the discussed layer as a map, each group of the mapped values of all the filters is seen as a sample from the joint distribution. For instance, when there are 3 filters in the specified CNN layer, the samples from the joint distribution are the vectors composed of the 3 mapped values of the 3 filters. Then, by the same method applied to the fully connected neural networks, we can obtain the samples from the product of marginals. The implementation of LDM is presented in Algorithm 1 (also see Fig. 2).
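The resampling strategy for fully connected networks amounts to shuffling each coordinate of a batch independently, which recombines the $j$th element of randomly selected samples exactly as described above; a short NumPy sketch under this reading:

```python
import numpy as np

def product_of_marginals(t_joint, rng):
    """Shuffle each column of a batch of joint samples independently; picking
    the j-th coordinate from an independently chosen sample in this way yields
    approximate samples from the product of the marginals."""
    t = t_joint.copy()
    for j in range(t.shape[1]):
        rng.shuffle(t[:, j])
    return t

rng = np.random.default_rng(0)
batch = rng.normal(size=(64, 32))              # activations of 32 hidden units
marginal_batch = product_of_marginals(batch, rng)
```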
Input: dataset $S$, classifier $f_\theta$ as well as auxiliary networks $D_1$ and $D_2$, hidden units $T_1, \ldots, T_k$ in one specified layer as maps, loss function $\mathcal{L}_0$
Parameter: balance parameter $\lambda$
Output: trained classifier $f_\theta$
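The body of Algorithm 1 is not reproduced here; the following self-contained PyTorch sketch of one LDM training step is our own rendering of the min-max process in Eq. (26). All helper names are ours, `model(x)` is assumed to return both the logits and the activations of the regularized layer, and each class is assumed to appear in the batch.

```python
import torch
import torch.nn.functional as F

def shuffle_columns(t):
    """Torch version of the product-of-marginals resampling described above."""
    idx = torch.stack([torch.randperm(t.size(0)) for _ in range(t.size(1))], dim=1)
    return torch.gather(t, 0, idx)

def js_est(disc, t_joint, t_marg):
    """Discriminator objective approximating the JS-divergence (Eq. 24)."""
    return F.logsigmoid(disc(t_joint)).mean() + F.logsigmoid(-disc(t_marg)).mean()

def ldiv_est(d1, d2, t, y):
    """LDiversity estimate of Eq. (18): the unconditional JS term minus the
    class-proportion-weighted conditional JS term (Eqs. 23-25)."""
    cond = sum((y == c).float().mean()
               * js_est(d2, t[y == c], shuffle_columns(t[y == c]))
               for c in torch.unique(y))
    return js_est(d1, t, shuffle_columns(t)) - cond

def ldm_step(model, d1, d2, opt_model, opt_disc, x, y, lam=0.7, d_updates=4):
    """One iteration of the min-max process of Eq. (26)."""
    for _ in range(d_updates):              # max step: tighten both JS estimates
        with torch.no_grad():
            _, t = model(x)
        loss_d = -(js_est(d1, t, shuffle_columns(t))
                   + sum((y == c).float().mean()
                         * js_est(d2, t[y == c], shuffle_columns(t[y == c]))
                         for c in torch.unique(y)))
        opt_disc.zero_grad()
        loss_d.backward()
        opt_disc.step()
    logits, t = model(x)                    # min step: premier loss + regularizer
    loss = F.cross_entropy(logits, y) + lam * ldiv_est(d1, d2, t, y)
    opt_model.zero_grad()
    loss.backward()
    opt_model.step()
    return loss.item()
```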
6 Experiments
We apply LDM to fully connected neural networks and convolutional neural networks. We compare it with the method without a regularizer (NONE), dropout with a dropout rate of 0.5 Srivastava et al. (2014), the method with a decorrelation regularizer (Decov), for which the hyperparameter was set to 0.1 Cogswell et al. (2016), and the method with the unsupervised diversity term of LDiversity as the regularizer (UDM), for which the balance parameter was set to 0.1 as proposed by Brakel & Bengio (2018). All the methods involved were implemented in TensorFlow Abadi et al. (2016).
6.1 Experiments on Fully Connected Neural Networks
Dataset.
The experiments were conducted on the MNIST dataset LeCun et al. (2010), which contains a training set of 60000 samples and a test set of 10000 samples with pixel values normalized to [0, 1]. Moreover, Gaussian noise with zero mean and unit variance was added to the original dataset to increase the performance differentiation.
Method Settings.
Since the goal is to check the role of the inductive-bias term in LDiversity and to evaluate the performance of LDM against other regularization methods, we used a simple 3-layer fully connected network in this work, with 32 ReLUs in the hidden layer and ten units in the output layer. The batch size was set to 64. The main network was trained using the Adam algorithm Kingma & Ba (2014) with a learning rate of 0.001 until the number of iterations reached 1000. Moreover, the architectures of the two auxiliary networks $D_1$ and $D_2$ were both set to 32-200-1. Their training settings were the same as those of the main network, except that the number of their updates for each update of the main network was set to 4 and their learning rate was set to 0.0001. The balance parameter $\lambda$ was set to 0.7.
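In PyTorch-style pseudocode (the original implementation used TensorFlow), this configuration reads as follows; the MNIST input dimension of 784 is implied rather than stated, and the ReLU inside the auxiliary 32-200-1 networks is our assumption.

```python
import torch.nn as nn
import torch.optim as optim

main_net = nn.Sequential(nn.Linear(784, 32), nn.ReLU(), nn.Linear(32, 10))
d1 = nn.Sequential(nn.Linear(32, 200), nn.ReLU(), nn.Linear(200, 1))  # 32-200-1
d2 = nn.Sequential(nn.Linear(32, 200), nn.ReLU(), nn.Linear(200, 1))  # 32-200-1

opt_main = optim.Adam(main_net.parameters(), lr=1e-3)
opt_disc = optim.Adam(list(d1.parameters()) + list(d2.parameters()), lr=1e-4)
BATCH_SIZE, DISC_UPDATES, LAMBDA = 64, 4, 0.7
```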



Table 1: Average train/test accuracy and train−test accuracy gap on MNIST, CIFAR-10 and CIFAR-100 (†: smallest gap).

| Methods | MNIST Train | MNIST Test | MNIST Train−Test | CIFAR-10 Train | CIFAR-10 Test | CIFAR-10 Train−Test | CIFAR-100 Train | CIFAR-100 Test | CIFAR-100 Train−Test |
|---|---|---|---|---|---|---|---|---|---|
| NONE | 0.702 | 0.679 | 0.023 | 0.978 | 0.740 | 0.238 | 0.949 | 0.376 | 0.573 |
| Dropout | 0.670 | 0.650 | 0.020 | 0.980 | 0.755 | 0.226 | 0.925 | 0.436 | 0.489 |
| Decov | 0.699 | 0.674 | 0.016 | 0.982 | 0.736 | 0.246 | 0.923 | 0.379 | 0.544 |
| UDM | 0.703 | 0.675 | 0.028 | 0.978 | 0.736 | 0.242 | 0.934 | 0.381 | 0.553 |
| LDM | 0.691 | 0.680 | 0.011† | 0.851 | 0.765 | 0.086† | 0.610 | 0.440 | 0.170† |
Regularizer Comparisons.
We first checked whether LDM can reduce LDiversity. For a fair comparison, we did not investigate the value of LDiversity directly but checked the difference between the class-independent correlation and the class-conditional correlation, named the correlation gap, whose definition is given below. Given any pair of hidden units $T_j$ and $T_{j'}$, the covariance between them is

$$\mathrm{cov}(T_j, T_{j'}) = \frac{1}{n} \sum_{i=1}^{n} \big( t_{ij} - \bar{t}_j \big)\big( t_{ij'} - \bar{t}_{j'} \big), \qquad (27)$$
where $\bar{t}_j$ is the sample mean of the activations of hidden unit $T_j$ over all the samples. Then the class-independent correlation is defined as

$$\rho = \frac{2}{k(k-1)} \sum_{1 \le j < j' \le k} \frac{\big| \mathrm{cov}(T_j, T_{j'}) \big|}{\sqrt{\mathrm{cov}(T_j, T_j)\, \mathrm{cov}(T_{j'}, T_{j'})}}. \qquad (28)$$
Correspondingly, the class-conditional correlation is the expectation of the correlation over the class labels, which is estimated by using Eq. (28) to first obtain the respective correlations for each given label and then combining them with the proportions of samples of each class as weights. Moreover, we did not record the correlation gap of the dropout or NONE methods since they were not designed to enforce diversity.
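Under our reading of Eqs. (27) and (28) as an average absolute pairwise correlation, the correlation gap can be computed as below; the per-class weighting follows the description above.

```python
import numpy as np

def mean_abs_corr(t):
    """Mean absolute pairwise correlation of activations t of shape (n, k)."""
    c = np.corrcoef(t, rowvar=False)
    return np.abs(c[~np.eye(c.shape[0], dtype=bool)]).mean()

def correlation_gap(t, y):
    """Class-independent correlation minus the class-proportion-weighted
    class-conditional correlation."""
    conditional = sum(np.mean(y == c) * mean_abs_corr(t[y == c])
                      for c in np.unique(y))
    return mean_abs_corr(t) - conditional
```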
Every experiment was repeated five times. The average results are shown in Fig. 3(a), from which we can see that, as the number of iterations increases, the correlation gap of LDM stays smaller than that of the other methods, indicating that LDM encourages the reduction of LDiversity.
Further, we examined the classification performance of these methods, where the generalization capacity is evaluated by the difference between the training accuracy and the test accuracy. The results are shown in the left part of Table 1, from which we can observe that LDM has the best test accuracy as well as the minimal accuracy gap. We note that the other regularization methods differ from LDM mainly in the absence of the inductive-bias term. Particularly for UDM, its regularizer is exactly the same as the first term of LDiversity. It is therefore reasonable to attribute the good performance of LDM to the effect of the additional inductive-bias term in LDiversity. Additionally, it is noteworthy that UDM achieves the worst accuracy gap, even compared with the NONE method. The reason may be that over-disentanglement of the features destroys valuable local clustering information and weakens the learning ability of the networks. However, using the full LDiversity term, which retains the label-dependent part, as the regularizer avoids this problem.
6.2 Experiments on convolutional neural networks
Dataset.
The experiments were conducted on the CIFAR-10/100 datasets. The CIFAR-10 dataset consists of 60000 32x32 color images in 10 classes, with 50000 training images and 10000 test images. CIFAR-100 is similar to CIFAR-10: it has 100 classes, with 600 images per class.
Method Settings.
We compared these methods on the CIFAR10-quick architecture, which contains 3 convolutional layers followed by a fully connected layer with 64 hidden units and a softmax layer. Since different features in the representation obtained by a CNN may come from the same weights, i.e., they share the same mappings, while LDM measures the diversity among different mappings, we applied LDM to the last pooling layer and regarded the resampled filters in this layer as mappings (see Section 5). For a fair comparison, we also applied Decov and UDM to this layer, because they likewise use a diversity measure as the regularizer; dropout was applied to the fully connected layer. With the exception of the number of iterations, which was set to 20000, the training settings, including the architectures of the two auxiliary networks, were the same as those used for the fully connected neural networks.
Regularizer Comparisons.
We also investigated the changes in the values of the correlation gap with increasing numbers of iterations. The average results over 5 trials are shown in Figs. 3(b) and (c), where only the results obtained by LDM, Decov and UDM are recorded. We can see that on both the CIFAR-10 and CIFAR-100 datasets, LDM achieves smaller correlation gaps than the other methods, which further confirms that LDM is an effective approach to enforcing the diversity among the hidden units while strengthening the local clustering feature of different classes.
The average classification accuracies of the examined methods on the CIFAR-10 and CIFAR-100 datasets are presented in the right two parts of Table 1. On the CIFAR-10 dataset, we observe that LDM outperforms the other methods in test accuracy and has the minimal train-test accuracy gap. In particular, LDM shows an approximately 0.03 improvement in test accuracy and an approximately 0.16 reduction of the accuracy gap compared to UDM, which may be due to the use of the inductive-bias term in LDM.
We also observe similar results on the CIFAR-100 dataset, justifying our hypothesis that the diversity measure should reflect the inductive-bias information.
Finally, we tested the influence of the balance parameter $\lambda$ on the experimental performance and recorded the results in Tables 2 and 3. From these tables, we find that LDM achieves the best test accuracy when $\lambda = 0.5$ or $\lambda = 0.7$; moreover, as the value of $\lambda$ increases, the regularizer plays a larger role in LDM and keeps the accuracy gap small, which justifies our point that a decrease in LDiversity generally improves the generalization capacity.
Table 2: Performance of LDM on CIFAR-10 under different values of the balance parameter $\lambda$.

| $\lambda$ | 0.1 | 0.3 | 0.5 | 0.7 | 0.9 |
|---|---|---|---|---|---|
| Train | 0.982 | 0.973 | 0.923 | 0.851 | 0.826 |
| Test | 0.743 | 0.755 | 0.767 | 0.765 | 0.756 |
| Train − Test | 0.217 | 0.239 | 0.156 | 0.086 | 0.07 |
Table 3: Performance of LDM on CIFAR-100 under different values of the balance parameter $\lambda$.

| $\lambda$ | 0.1 | 0.3 | 0.5 | 0.7 |
|---|---|---|---|---|
| Train | 0.946 | 0.901 | 0.813 | 0.610 |
| Test | 0.391 | 0.401 | 0.432 | 0.440 |
| Train − Test | 0.555 | 0.5 | 0.381 | 0.170 |
7 Conclusion
In this paper, by investigating the upper bound of the generalization error from an information perspective, we found that we can naturally derive a measure of the diversity among the hidden units of DNNs, which differs from the other measures because it contains an inductive-bias term. Based on this insight, we designed a regularization method using the diversity measure as the regularizer. Our experiments verified the effectiveness of the proposed method and provided empirical evidence for the validity of our approach.
References
- Abadi et al. [2016] Abadi, M., Barham, P., et al. Tensorflow: a system for large-scale machine learning. In OSDI’16 Proceedings of the 12th USENIX conference on Operating Systems Design and Implementation, pp. 265–283, 2016. URL https://academic.microsoft.com/paper/2402144811.
- Achille & Soatto [2018] Achille, A. and Soatto, S. Emergence of invariance and disentanglement in deep representations. The Journal of Machine Learning Research, 19(1):1947–1980, 2018.
- Belghazi et al. [2018] Belghazi, M. I., Rajeswar, S., Baratin, A., Hjelm, D., and Courville, A. Mine: Mutual information neural estimation. arXiv preprint arXiv:1801.04062, 2018. URL https://academic.microsoft.com/paper/2783047733.
- Bengio et al. [2017] Bengio, E., Thomas, V., Pineau, J., Precup, D., and Bengio, Y. Independently controllable features. arXiv preprint arXiv:1703.07718, 2017. URL https://academic.microsoft.com/paper/2604626881.
- Boucheron et al. [2005] Boucheron, S., Bousquet, O., and Lugosi, G. Theory of classification : a survey of some recent advances. Esaim: Probability and Statistics, 9:323–375, 2005. URL https://academic.microsoft.com/paper/2014902932.
- Bousquet & Elisseeff [2002] Bousquet, O. and Elisseeff, A. Stability and generalization. Journal of Machine Learning Research, 2(3):499–526, 2002. URL https://academic.microsoft.com/paper/2139338362.
- Brakel & Bengio [2018] Brakel, P. and Bengio, Y. Learning independent features with adversarial nets for non-linear ica. arXiv preprint arXiv:1710.05050, 2018. URL https://academic.microsoft.com/paper/2766527109.
- Brown [2009] Brown, G. An information theoretic perspective on multiple classifier systems. In MCS ’09 Proceedings of the 8th International Workshop on Multiple Classifier Systems, pp. 344–353, 2009. URL https://academic.microsoft.com/paper/1763872900.
- Cogswell et al. [2016] Cogswell, M., Ahmed, F., Girshick, R., Zitnick, L., and Batra, D. Reducing overfitting in deep networks by decorrelating representations. In ICLR 2016 : International Conference on Learning Representations 2016, 2016. URL https://academic.microsoft.com/paper/2962684187.
- Cover & Thomas [1991] Cover, T. M. and Thomas, J. A. Elements of Information Theory. Wiley-Interscience, 1991.
- Kingma & Ba [2014] Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
- Donsker & Varadhan [1975] Donsker, M. D. and Varadhan, S. Asymptotic evaluation of certain markov process expectations for large time-iii. Communications on Pure and Applied Mathematics, 28(2):389–461, 1975. URL https://academic.microsoft.com/paper/2136144249.
- Goodfellow et al. [2014] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. In Advances in Neural Information Processing Systems 27, pp. 2672–2680, 2014. URL https://academic.microsoft.com/paper/2099471712.
- Grover & Ermon [2019] Grover, A. and Ermon, S. Uncertainty autoencoders: Learning compressed representations via variational information maximization. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 2514–2524, 2019. URL https://academic.microsoft.com/paper/2963037669.
- Gu et al. [2018] Gu, S., Hou, Y., Zhang, L., and Zhang, Y. Regularizing deep neural networks with an ensemble-based decorrelation method. In IJCAI 2018: 27th International Joint Conference on Artificial Intelligence, pp. 2177–2183, 2018. URL https://academic.microsoft.com/paper/2808014987.
- Hellman & Raviv [1970] Hellman, M. and Raviv, J. Probability of error, equivocation, and the chernoff bound. IEEE Transactions on Information Theory, 16(4):368–372, 1970.
- Hinton & Van Camp [1993] Hinton, G. E. and Van Camp, D. Keeping the neural networks simple by minimizing the description length of the weights. In Proceedings of the sixth annual conference on Computational learning theory, pp. 5–13, 1993.
- Hjelm et al. [2019] Hjelm, R. D., Fedorov, A., Lavoie-Marchildon, S., Grewal, K., Bachman, P., Trischler, A., and Bengio, Y. Learning deep representations by mutual information estimation and maximization. In ICLR 2019 : 7th International Conference on Learning Representations, 2019. URL https://academic.microsoft.com/paper/2887997457.
- Krogh & Hertz [1991] Krogh, A. and Hertz, J. A. A simple weight decay can improve generalization. In Advances in Neural Information Processing Systems 4, volume 4, pp. 950–957, 1991. URL https://academic.microsoft.com/paper/2144513243.
- LeCun et al. [2010] LeCun, Y., Cortes, C., and Burges, C. MNIST handwritten digit database. AT&T Labs [Online]. Available: http://yann.lecun.com/exdb/mnist, 2010.
- Locatello et al. [2019] Locatello, F., Bauer, S., Lucic, M., Raetsch, G., Gelly, S., Schölkopf, B., and Bachem, O. Challenging common assumptions in the unsupervised learning of disentangled representations. In ICML 2019 : Thirty-sixth International Conference on Machine Learning, pp. 4114–4124, 2019. URL https://academic.microsoft.com/paper/2903538854.
- Massart [2007] Massart, P. Concentration Inequalities and Model Selection. École d'Été de Probabilités de Saint-Flour XXXIII, Lecture Notes in Mathematics. Springer, 2007.
- Russo & Zou [2020] Russo, D. and Zou, J. How much does your data exploration overfit? controlling bias via information usage. IEEE Transactions on Information Theory, 66(1):302–323, 2020.
- Srivastava et al. [2014] Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014. URL https://academic.microsoft.com/paper/2095705004.
- Xu & Raginsky [2017] Xu, A. and Raginsky, M. Information-theoretic analysis of generalization capability of learning algorithms. In 31st Annual Conference on Neural Information Processing Systems, NIPS 2017, pp. 2524–2533, 2017. URL https://academic.microsoft.com/paper/2963862692.
- Zhao et al. [2018] Zhao, X., Hou, Y., Song, D., and Li, W. A confident information first principle for parameter reduction and model selection of boltzmann machines. IEEE Transactions on Neural Networks, 29(5):1608–1621, 2018. URL https://academic.microsoft.com/paper/2963767314.
- Zhou & Li [2010] Zhou, Z.-H. and Li, N. Multi-information ensemble diversity. In MCS’10 Proceedings of the 9th international conference on Multiple Classifier Systems, pp. 134–144, 2010. URL https://academic.microsoft.com/paper/1539376383.