Long-Tailed Learning Requires Feature Learning
Abstract
We propose a simple data model inspired by natural data such as text or images, and use it to study the importance of learning features in order to achieve good generalization. Our data model follows a long-tailed distribution in the sense that some rare subcategories have few representatives in the training set. In this context we provide evidence that a learner succeeds if and only if it identifies the correct features, and we moreover derive non-asymptotic generalization error bounds that precisely quantify the penalty one must pay for not learning features.
1 Introduction
Part of the motivation for deploying a neural network arises from the belief that algorithms that learn features/representations generalize better than algorithms that do not. We try to give some mathematical ballast to this notion by studying a data model where, at an intuitive level, a learner succeeds if and only if it manages to learn the correct features. The data model itself attempts to capture two key structures observed in natural data such as text or images. First, it is endowed with a latent structure at the patch or word level that is directly tied to a classification task. Second, the data distribution has a long tail, in the sense that rare and uncommon instances collectively form a significant fraction of the data. We derive non-asymptotic generalization error bounds that quantify, within our framework, the penalty that one must pay for not learning features.
We first prove a two-part result that precisely quantifies the necessity of learning features within the context of our data model. The first part shows that a trivial nearest neighbor classifier performs perfectly when given knowledge of the correct features. The second part shows that it is impossible to craft, a priori, a feature map that generalizes well when using a nearest neighbor classification rule. In other words, success or failure depends only on the ability to identify the correct features and not on the underlying classification rule. Since this cannot be done a priori, the features must be learned.
Our theoretical results therefore support the idea that algorithms cannot generalize on long-tailed data if they do not learn features. By contrast, an algorithm that does learn features can generalize well. Specifically, the most direct neural network architecture for our data model generalizes almost perfectly when using either a linear classifier or a nearest neighbor classifier on top of the learned features. Crucially, designing the architecture requires knowing only the meta structure of the problem, but no a priori knowledge of the correct features. This illustrates the built-in advantage of neural networks: their ability to learn features significantly eases the design burden placed on the practitioner.
Subcategories in commonly used visual recognition datasets tend to follow a long-tailed distribution [25, 28, 11]. Some common subcategories have a wealth of representatives in the training set, whereas many rare subcategories only have a few representatives. At an intuitive level, learning features seems especially important on a long-tailed dataset since features learned from the common subcategories help to properly classify test points from a rare subcategory. Our theoretical results help support this intuition.
We note that when considering complex visual recognition tasks, datasets are almost unavoidably long-tailed [19] — even if the dataset contains millions of images, it is to be expected that many subcategories will have few samples. In this setting, the classical approach of deriving asymptotic performance guarantees based on a large-sample limit is not a fruitful avenue. Generalization must be approached from a different point of view (cf. [10] for very interesting work in this direction). In particular, the analysis must be non-asymptotic. One of our main contributions is to derive, within the context of our data model, generalization error bounds that are non-asymptotic and relatively tight — by this we mean that our results hold for small numbers of data samples and track reasonably well with empirically evaluated generalization error.
In Section 2 we introduce our data model and in Section 3 we discuss our theoretical results. For simplicity of exposition, both sections focus on the case where each rare subcategory has a single representative in the training set. Section 4 is concerned with the general case in which each rare subcategory has a few representatives. Section 5 provides an overview of our proof techniques. Finally, in Section 6, we investigate empirically a few questions that we could not resolve analytically. In particular, our error bounds are restricted to the case in which a nearest neighbor classification rule is applied on top of the features — we provide empirical evidence in this last section that replacing the nearest neighbor classifier by a linear classifier leads to very minimal improvement. This further supports the notion that, on our data model, it is the ability to learn features that drives success, not the specific classification rule used on top of the features.
Related work. By now, a rich literature has developed that studies the generalization abilities of neural networks. A major theme in this line of work is the use of the PAC learning framework to derive generalization bounds for neural networks (e.g. [6, 21, 14, 4, 22]), usually by proving a bound on the difference between the finite-sample empirical loss and the true loss. While powerful in their generality, such approaches are usually task independent and asymptotic; that is, they are mostly agnostic to any idiosyncrasies in the data generating process and need a statistically meaningful number of samples in the training set. As such, the PAC learning framework is not well-tailored to our specific aim of studying generalization on long-tailed data distributions; indeed, in such a setting, a rare subcategory might have only a handful of representatives in the training set.
After breakthrough results (e.g. [15, 9, 3, 16]) showed that vastly over-parametrized neural networks become kernel methods (the so-called Neural Tangent Kernel or NTK) in an appropriate limit, much effort has gone toward analyzing the extent to which neural networks outperform kernel methods [27, 26, 24, 12, 13, 17, 1, 2, 18, 20]. Our interest lies not in proving such a gap for its own sake, but rather in using the comparison to gain some understanding of the importance of learning features in computer vision and NLP contexts.
Analyses that shed theoretical light onto learning with long-tailed distributions [10, 7] or onto specific learning mechanisms [17] are perhaps closest to our own. The former analyses [10, 7] investigate the necessity of memorizing rare training examples in order to obtain near-optimal generalization error when the data distribution is long-tailed. Our analysis differs to the extent that we focus on the necessity of learning features and sharing representations in order to properly classify rare instances. Like us, the latter analysis [17] also considers a computer vision inspired task and uses it to compare a neural network to a kernel method, with the ultimate aim of studying the learning mechanism involved. Their object of study (finding a sparse signal in the presence of noise), however, markedly differs from our own (learning with long-tailed distributions).

2 The Data Model
We begin with a simple example to explain our data model and to illustrate, at an intuitive level, the importance of learning features when faced with a long-tailed data distribution. For the sake of exposition we adopt NLP terminology such as ‘words’ and ‘sentences,’ but the image-based terminology of ‘patches’ and ‘images’ would do as well.
The starting point is a very standard mechanism for generating observed data from some underlying collection of latent variables. Consider the data model depicted in Figure 1. We have a vocabulary of words and a set of concepts:
The words are partitioned into the concepts as shown on the left of Figure 1. We also have sequences of concepts of length . They are denoted by and . Sequences of concepts are latent variables that generate sequences of words. For example
The sequence of words on the right was obtained by sampling each word uniformly at random from the corresponding concept. For example, the first word was randomly chosen out of the dairy concept (butter, cheese, cream, yogurt), and the last word was randomly chosen out of the vegetable concept (potato, carrot, leek, lettuce). Sequences of words will be referred to as sentences.
The non-standard aspect of our model comes from how we use the ‘latent-variable observed-datum’ process to form a training distribution. The training set in Figure 1 is made of sentences split into categories. The latent variables each generate a single sentence, whereas the latent variables each generate sentences. We will refer to as the familiar sequences of concepts since they generate most of the sentences encountered in the training set. On the other hand will be called unfamiliar. Similarly, a sentence generated by a familiar (resp. unfamiliar) sequence of concepts will be called a familiar (resp. unfamiliar) sentence. The former represents a datum sampled from the head of a distribution while the latter represents a datum sampled from its tail. We denote by the sentence of the category, indexed so that the first sentence of each category is an unfamiliar sentence and the remaining ones are familiar.
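To make the generative process concrete, the following is a minimal sketch of a long-tailed data generator in the spirit of Figure 1. All names and parameter values are illustrative assumptions, not the notation or settings used in the paper.

```python
# Illustrative sketch of the data model: a vocabulary partitioned into equal-sized
# concepts, sequences of concepts as latent variables, and a long-tailed training
# set in which each category has many familiar sentences and one unfamiliar one.
import random

random.seed(0)
L = 5              # words per sentence
n_words = 12       # vocabulary size
n_concepts = 3     # number of concepts (equal-sized partition of the vocabulary)
n_categories = 4   # number of categories
n_familiar = 4     # familiar sentences per category (head of the distribution)
n_unfamiliar = 1   # unfamiliar sentences per category (tail of the distribution)

vocab = list(range(n_words))
random.shuffle(vocab)
size = n_words // n_concepts
concept = {w: i // size for i, w in enumerate(vocab)}          # an equipartition
words_in_concept = [[w for w in vocab if concept[w] == c] for c in range(n_concepts)]

def sample_sentence(concept_seq):
    """Draw each word uniformly at random from its concept."""
    return tuple(random.choice(words_in_concept[c]) for c in concept_seq)

train_set = []
for label in range(n_categories):
    familiar_seq = [random.randrange(n_concepts) for _ in range(L)]
    unfamiliar_seq = [random.randrange(n_concepts) for _ in range(L)]
    train_set += [(sample_sentence(familiar_seq), label) for _ in range(n_familiar)]
    train_set += [(sample_sentence(unfamiliar_seq), label) for _ in range(n_unfamiliar)]

print(len(train_set), "training sentences; first one:", train_set[0])
```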
Suppose now that we have trained a learning algorithm on the training set described above and that at inference time we are presented with a previously unseen sentence generated by the unfamiliar sequence of concepts . To fix ideas, let’s say that sentence is:
(1) |
This sentence is hard to classify since there is a single sentence in the training set that has been generated by the same sequence of concepts, namely
(2) |
Moreover these two sentences do not overlap at all (i.e. the word of is different from the word of for all ). To properly classify , the algorithm must have learned the equivalences between butter and cheese, yogurt and butter, carrot and lettuce, and so forth. In other words, the algorithm must have learned the underlying concepts.
Nevertheless, a neural network with a well-chosen architecture can easily succeed at such a classification task. Consider, for example, the network depicted in Figure 2. Each word of the input sentence, after being encoded into a one-hot vector, goes through a multi-layer perceptron (MLP 1 on the figure) shared across words.

The output is then normalized using LayerNorm [5] to produce a representation of the word. The word representations are then concatenated into a single vector that goes through a second multi-layer perceptron (MLP 2 on the figure). This network, if properly trained, will learn to give similar representations to words that belong to the same concept. Therefore, if it correctly classifies the training point given by (2), it will necessarily correctly classify the test point given by (1). So the neural network is able to classify the previously unseen sentence despite the fact that the training set contains a single example with the same underlying sequence of concepts. This comes from the fact that the neural network learns features and representations from the familiar part of the training set (generated by the head of the distribution), and uses these, at test time, to correctly classify the unfamiliar sentences (generated by the tail of the distribution). In other words, because it learns features, the neural network has no difficulty handling the long-tailed nature of the distribution.
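For concreteness, here is a minimal PyTorch sketch of an architecture in the spirit of Figure 2: a shared MLP applied to each one-hot encoded word, LayerNorm on the word representations, concatenation, and a second MLP. Layer widths, names, and all other details are illustrative assumptions rather than the authors' exact choices.

```python
# Sketch of a Figure-2-style network: MLP 1 shared across word positions,
# LayerNorm, concatenation of word representations, then MLP 2.
import torch
import torch.nn as nn

class ConceptNet(nn.Module):
    def __init__(self, n_words, L, n_categories, d_word=32, d_hidden=64):
        super().__init__()
        self.mlp1 = nn.Sequential(nn.Linear(n_words, d_hidden), nn.ReLU(),
                                  nn.Linear(d_hidden, d_word))     # shared across words
        self.norm = nn.LayerNorm(d_word)                           # normalize each word representation
        self.mlp2 = nn.Sequential(nn.Linear(L * d_word, d_hidden), nn.ReLU(),
                                  nn.Linear(d_hidden, n_categories))

    def word_features(self, x):
        # x: (batch, L) integer word indices
        onehot = nn.functional.one_hot(x, num_classes=self.mlp1[0].in_features).float()
        h = self.norm(self.mlp1(onehot))                           # (batch, L, d_word)
        return h.flatten(start_dim=1)                              # concatenated word representations

    def forward(self, x):
        return self.mlp2(self.word_features(x))

net = ConceptNet(n_words=12, L=5, n_categories=4)
logits = net(torch.randint(0, 12, (8, 5)))                         # batch of 8 random sentences
print(logits.shape)                                                # torch.Size([8, 4])
```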
To summarize, the variables and parametrize instances of our data model. They denote, respectively, the length of the sentences, the number of words in the vocabulary, the number of concepts, the number of categories, and the number of samples per category. So in the example presented in Figure 1 we have , , , and (four familiar sentences and one unfamiliar sentence per category). The vocabulary and set of concepts are discrete sets with and , rendered as and for concreteness. A partition of the vocabulary into concepts, like the one depicted at the top of Figure 1, is encoded by a function that assigns words to concepts. We require that each concept contains the same number of words, so that satisfies
(3) |
and we refer to such a function satisfying (3) as an equipartition of the vocabulary. The set
denotes the collection of all such equipartitions, while the data space and latent space are denoted
respectively. Elements of are sentences of words and they take the form while elements of take the form and correspond to sequences of concepts.
In the context of this work, a feature map refers to any function from data space to feature space. The feature space can be any Hilbert space (possibly infinite dimensional) and we denote by the associated inner product. Our analysis applies to the case in which a nearest neighbor classification rule is applied on top of the extracted features. Such a rule works as follows: given a test point , the inner products are evaluated for all in the training set; the test point is then given the label of the training point that led to the highest inner product.
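The nearest neighbor rule on top of an arbitrary feature map takes only a few lines; the sketch below is illustrative (the toy feature map and all names are ours, and ties, which the analysis counts as failures, are not handled).

```python
# Nearest neighbor classification rule: return the label of the training point
# whose features have the largest inner product with the features of the test point.
import numpy as np

def nearest_neighbor_label(phi, test_x, train_xs, train_ys):
    feats = np.stack([phi(x) for x in train_xs])       # (n_train, d)
    scores = feats @ phi(test_x)                       # inner products with the test point
    return train_ys[int(np.argmax(scores))]

# toy feature map: concatenation of one-hot word encodings
def phi_onehot(sentence, n_words=12):
    out = np.zeros((len(sentence), n_words))
    out[np.arange(len(sentence)), list(sentence)] = 1.0
    return out.ravel()

train_xs, train_ys = [(0, 1, 2), (3, 4, 5)], [0, 1]
print(nearest_neighbor_label(phi_onehot, (0, 1, 5), train_xs, train_ys))   # -> 0
```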
3 Statement and Discussion of Main Results
Our main result states that, in the context of our data model, features must be tailored (i.e. learned) to each specific task. Specifically, it is not possible to find a universal feature map that performs well on a collection of tasks like the one depicted in Figure 1. In the context of this work, a task refers to a tuple
(4) |
that prescribes a partition of the vocabulary into concepts, familiar sequences of concepts, and unfamiliar sequences of concepts. Given such a task we generate a training set as described in the previous section. This training set contains sentences split over categories, and each category contains a single unfamiliar sentence. Randomly generating the training set from the task corresponds to sampling from a distribution defined on the space and parametrized by the variables in (4) (the appendix provides an explicit formula for this distribution). We measure the performance of an algorithm by its ability to generalize on previously unseen unfamiliar sentences. Generating an unfamiliar sentence amounts to drawing a sample from a distribution on the space parametrized by the variables in (4) that determine unfamiliar sequences of concepts. Finally, associated with every task we have a labelling function that assigns the label to sentences generated by either or (this function is ill-defined if two sequences of concepts from different categories are identical, but this issue is easily resolved by the formal statements in the appendix). Summarizing our notation, for every task we have a distribution on the space , a distribution on the space , and a labelling function
Given a feature space , a feature map , and a task , the expected generalization error of the nearest neighbor classification rule on unfamiliar sentences is given by:
(5) |
For simplicity, if the test point has multiple nearest neighbors with inconsistent labels in the training set (and so the nearest neighbor search returns multiple training points ), we will count the classification as a failure for the nearest neighbor classification rule. We therefore replace (5) by the more formal (but more cumbersome) formula
(6) |
to make this explicit. Our main theoretical results concern the performance of a learner not on a single task but on a collection of tasks, and so we define
(7) |
as the expected generalization error on such a collection of tasks. As a task refers to an element of the discrete set , any subset defines a collection of tasks. Our main result concerns the case where the collection of tasks consists of all possible tasks that one might encounter. For concreteness, we choose specific values for the model parameters and state the following special case of our main theorem (Theorem 3 at the end of this section) —
Theorem 1.
Let , , , and . Let . Then
for all feature spaces , and all feature maps .
In other words, for the model parameters specified above, it is not possible to design a ‘task-agnostic’ feature map that works well if we are uniformly uncertain about which specific task we will face. Indeed, the best possible feature map will fail at least of the time at classifying unfamiliar sentences (with a nearest-neighbor classification rule), where the probability is with respect to the random choices of the task, of the training set, and of the unfamiliar test sentence.
Interpretation: Our desire to understand learning demands that we consider a collection of tasks rather than a single one, for if we consider only a single task then the problem, in our setting, becomes trivial. Assume that the collection consists of a single task. With knowledge of this task we can easily construct a feature map that performs perfectly. Indeed, the map
(8) |
that simply ‘replaces’ each word of the input sentence by the one-hot encoding of its corresponding concept will do. A bit of thinking reveals that the nearest neighbor classification rule associated with feature map (8) perfectly solves the task . This is due to the fact that sentences generated by the same sequence of concepts are mapped by to the exact same location in feature space. As a consequence, the nearest neighbor classification rule will match the unfamiliar test sentence to the unique training sentence that occupies the same location in feature space, and this training sentence has the correct label by construction (assuming that sequences of concepts from different categories are distinct). To put it formally:
Theorem 2.
Given a task satisfying and for all , there exists a feature space and a feature map such that .
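To illustrate the construction behind Theorem 2, here is a minimal sketch of the oracle feature map of (8): with the equipartition in hand, each word is replaced by the one-hot encoding of its concept, so two sentences generated by the same sequence of concepts receive identical features. Names are illustrative.

```python
# Oracle feature map: concatenate one-hot encodings of the concepts of the words.
import numpy as np

def phi_oracle(sentence, concept, n_concepts):
    """concept[w] gives the concept of word w."""
    out = np.zeros((len(sentence), n_concepts))
    out[np.arange(len(sentence)), [concept[w] for w in sentence]] = 1.0
    return out.ravel()

concept = {0: 0, 1: 0, 2: 1, 3: 1}        # vocabulary of 4 words, 2 concepts
s1, s2 = (0, 2, 1), (1, 3, 0)             # same underlying concept sequence (0, 1, 0)
assert np.allclose(phi_oracle(s1, concept, 2), phi_oracle(s2, concept, 2))
```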
Consider now the case where consists of two tasks. According to Theorem 2 there exists a feature map that perfectly solves the first task, but this map might perform poorly on the second, and vice versa. So, it might not be possible to design good features if we do not know a priori which of these tasks we will face. Theorem 1 states that, in the extreme case where contains all possible tasks, this is indeed the case — the best possible ‘task-agnostic’ features will perform catastrophically on average. In other words, features must be task-dependent in order to succeed.
To draw a very approximate analogy, imagine once again that the collection consists of a single task and that this task represents, say, a hand-written digit classification task. A practitioner, after years of experience, could hand-craft a very good feature map that performs almost perfectly for this task. If we then imagine the case where one task represents a hand-written digit classification task and another represents, say, an animal classification task, then it becomes more difficult for a practitioner to hand-craft a feature map that works well for both tasks. In this analogy, the size of the set encodes the amount of knowledge the practitioner has about the specific tasks she will face. The extreme choice of all possible tasks corresponds to the practitioner knowing nothing beyond the fact that natural images are made of patches. Theorem 1 quantifies, in this extreme case, the impossibility of hand-crafting a feature map knowing only the range of possible tasks and not the specific task itself. In a realistic setting the collection of tasks is smaller, of course, and the data generative process itself is more coherent than in our simplified setup. Nonetheless, we hope our analysis sheds some light on some of the essential limitations of algorithms that do not learn features.
Finally, our empirical results (see Section 6) show that a simple algorithm that learns features does not face this obstacle. We do not need knowledge of the specific task in order to design a good neural network architecture, but only of the family of tasks that we will face. Indeed, the architecture in Figure 2 succeeds at classifying unfamiliar test sentences more than of the time. This probability, which we empirically evaluate, is with respect to the choice of the task, the choice of the training set, and the choice of the unfamiliar test sentence (we use the values of and from Theorem 1, and for this experiment). Continuing with our approximate analogy, this means our hypothetical practitioner needs no domain-specific knowledge beyond the patch structure of natural images when designing a successful architecture. In sum, successful feature design requires task-specific knowledge while successful architecture design requires only knowledge of the task family.
Main Theorem: Our main theoretical result extends Theorem 1 to arbitrary values of , , , and . The resulting formula involves various combinatorial quantities. We denote by the binomial coefficients and by the Stirling numbers of the second kind. Let and let be the functions defined by and respectively. We then define, for , the sets
We let , and we note that the inclusion always holds. Given we denote by
the set of -admissible matrices. Finally, we let be the functions defined by
respectively. With these definitions at hand, we may now state our main theorem.
Theorem 3 (Main Theorem).
Let . Then
(9) |
for all feature spaces , all feature maps , and all .
The combinatorial quantities involved appear a bit daunting at first glance, but, within the context of the proof, they all take on a quite intuitive meaning. The heart of the proof involves the analysis of a measure of concentration that we call the permuted moment, and of an associated graph-cut problem. The combinatorial quantities arise quite naturally in the course of analyzing the graph-cut problem. We provide a quick overview of the proof in Section 5, and refer to the appendix for full details. For now, it suffices to note that we have a formula (i.e. the right-hand side of (9)) that can be exactly evaluated with a few lines of code. This formula provides a relatively tight lower bound for the generalization error. Theorem 1 is then a direct consequence — plugging , , , and into the right-hand side of (9) gives the claimed lower bound.
4 Multiple Unfamiliar Sentences per Category
The two previous sections were concerned with the case in which each unfamiliar sequence of concepts has a single representative in the training set. In this section we consider the more general case in which each unfamiliar sequence of concepts has representatives in the training set, see Figure 3. Using a simple union bound, inequality (9) easily extends to this situation — the resulting formula is a bit cumbersome so we present it in the appendix (see Theorem 7). In the concrete case where , , , this formula simplifies to
(10) |
therefore exhibiting an affine relationship between the error rate and the number of unfamiliar sentences per category.

Note that choosing in (10) leads to a lower bound on the error rate, therefore recovering the result from Theorem 1. This lower bound then decreases by with each additional unfamiliar sentence per category in the training set.
We would like to emphasize once more the importance of a non-asymptotic analysis in the long-tailed learning setting. For example, in inequality (10), the difficulty lies in obtaining as small a value as possible for the coefficient in front of . We accomplish this via a careful analysis of the graph-cut problem associated with our data model.
5 Proof Outline — Permuted Moment and Optimal Feature Map
The proof involves two main ingredients. First, the key insight of our analysis is the realization that generalization in our data model is closely tied to the permuted moment of a probability distribution. To state this central concept, it will prove convenient to think of probability distributions on as vectors with , together with indices given by some arbitrary (but fixed) indexing of the elements of data space. Then denotes the probability of the element of in this indexing. We use to denote the set of permutations of and to refer to a particular permutation. The permuted moment of the probability vector is
(11) |
Since (11) involves a maximum over all possible permutations, the definition clearly does not depend on the way the set was indexed. In order to maximize the sum, the permutation must match the largest values of with the largest values of , so the maximizing permutation simply orders the entries from smallest to largest. A very peaked distribution that gives large probability to only a handful of elements of will have a large permuted moment. Because of this, the permuted moment is akin to the negative entropy; it has large values for delta-like distributions and small values for uniform ones. From definition (11) it is clear that for all probability vectors , and it is easily verified that the permuted moment is convex. These properties, as well as various useful bounds for the permuted moment, are presented and proven in the appendix.
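The sorting characterization of the permuted moment is straightforward to implement. Since the exact weight sequence appearing in (11) is not reproduced here, the sketch below uses a generic increasing weight profile; it only illustrates how peaked distributions obtain larger values than spread-out ones.

```python
# Permuted moment via sorting: pair the sorted entries of p with increasing weights.
import numpy as np

def permuted_moment(p, weights):
    return float(np.dot(np.sort(np.asarray(p, float)), np.sort(np.asarray(weights, float))))

n = 8
weights = np.arange(1, n + 1)                 # a generic increasing weight profile
uniform = np.full(n, 1.0 / n)                 # very spread distribution
peaked = np.zeros(n); peaked[0] = 1.0         # delta-like distribution
print(permuted_moment(uniform, weights))      # small value
print(permuted_moment(peaked, weights))       # large value (all mass on the top weight)
```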
Second, we identify a specific feature map, , which is optimal for a collection of tasks closely related to the ones considered in our data model. Leveraging the optimality of on these related tasks allows us to derive an error bound that holds for the tasks of interest. The feature map is better understood through its associated kernel, which is given by the formula
(12) |
Up to normalization, simply counts the number of equipartitions of the vocabulary for which sentences and have the same underlying sequence of concepts. Intuitively this makes sense, for the best possible kernel must leverage the only information we have at hand. We know the general structure of the problem (words are partitioned into concepts) but not the partition itself. So to try and determine whether two sentences were generated by the same sequence of concepts, the best we can do is to simply try all possible equipartitions of the vocabulary and count how many of them wind up generating the two sentences from the same underlying sequence of concepts. A high count makes it more likely that the two sentences were generated by the same sequence of concepts. The optimal kernel does exactly this, and provides a good (actually optimal, see the appendix) measure of similarity between pairs of sentences.
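For a tiny vocabulary, this counting interpretation can be checked by brute force: enumerate every equipartition and count those under which the two sentences share a sequence of concepts. The sketch below ignores the normalization in (12), is exponential in the vocabulary size, and is purely illustrative.

```python
# Brute-force count of equipartitions under which two sentences have the same
# underlying sequence of concepts (unnormalized version of the kernel in (12)).
from itertools import permutations

def equipartitions(n_words, n_concepts):
    """Yield all maps word -> concept with equal-sized concepts."""
    size = n_words // n_concepts
    base = [c for c in range(n_concepts) for _ in range(size)]
    for assignment in set(permutations(base)):
        yield {w: assignment[w] for w in range(n_words)}

def kernel_count(x, y, n_words, n_concepts):
    return sum(all(pi[a] == pi[b] for a, b in zip(x, y))
               for pi in equipartitions(n_words, n_concepts))

# 4 words, 2 concepts of size 2
print(kernel_count((0, 1), (0, 1), 4, 2))   # 6: identical sentences match under every equipartition
print(kernel_count((0, 1), (2, 3), 4, 2))   # 2: matches only when {0,2} and {1,3} share concepts
print(kernel_count((0, 1), (1, 2), 4, 2))   # 0: would require three words in one concept of size 2
```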
For fixed , the function defines a probability distribution on data space. The connection between generalization error, permuted moment, and optimal feature map comes from the fact that
(13) |
and so, up to a small error , it is the permuted moments of that determine the success rate. We then obtain the lower bound (9) by studying these moments in great detail. A simple union bound is then used to obtain inequalities such as (10).
6 Empirical Results
We conclude by presenting empirical results that complement our theoretical findings. The full details of these experiments (training procedure, hyperparameter choices, number of experiments run to estimate the success rates, and standard deviations of these success rates), as well as additional experiments, can be found in Appendix E. Code is available at https://anonymous.4open.science/r/Long_Tailed_Learning_Requires_Feature_Learning-17C4.
Parameter Settings. We consider five parameter settings for the data model depicted in Figure 3. Each setting corresponds to a column in Table 1. In all five settings, we set the parameters , , and to the values for which the error bound (10) holds. We choose values for the parameters and so that the column of the table corresponds to a setting in which the training set contains familiar and unfamiliar sentences per category. Recall that is the total number of samples per category in the training set. So the first column of the table corresponds to a setting in which each category contains familiar sentences and unfamiliar sentence, whereas the last column corresponds to a setting in which each category contains familiar sentences and unfamiliar sentences.
Algorithms. We evaluate empirically seven different algorithms. The first two rows of the table correspond to experiments in which the neural network in Figure 2 is trained with SGD and a constant learning rate. At test time, we consider two different strategies to classify test sentences. The first row of the table considers the usual situation in which the trained neural network is used to classify test points. The second row considers the situation in which the trained neural network is only used to extract features (i.e. the concatenated word representations right before MLP 2). The classification of test points is then accomplished by running a nearest neighbor classifier on these learned features. The third (resp. sixth) row of the table shows the results obtained when running a nearest neighbor algorithm (resp. SVM) on the features of the optimal feature map. By the kernel trick, these algorithms only require the values of the optimal kernel , computed via (12), and not the features themselves. The fourth (resp. seventh) row shows results obtained when running a nearest neighbor algorithm (resp. SVM) on features extracted by the simplest possible feature map, that is
where denotes the one-hot encoding of the word of the input sentence. Finally, the last row considers an SVM with a Gaussian kernel (also called an RBF kernel).
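As a concrete illustration of the kernel trick mentioned above, an SVM can be trained from kernel values alone via scikit-learn's precomputed-kernel interface. The Gram matrices below are placeholders built from random stand-in features; in the experiments they would hold the values of the optimal kernel computed via (12) or the inner products of the simple one-hot features.

```python
# SVM on precomputed kernel values: no explicit feature vectors are needed,
# only kernel values between test and training points.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_train, n_test = 20, 5
X = rng.normal(size=(n_train, 8))             # stand-in training features
X_test = rng.normal(size=(n_test, 8))         # stand-in test features
y = np.arange(n_train) % 2                    # stand-in binary labels

gram_train = X @ X.T                          # (n_train, n_train) kernel values
gram_test = X_test @ X.T                      # (n_test, n_train) kernel values

clf = SVC(kernel="precomputed").fit(gram_train, y)
print(clf.predict(gram_test))
```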
Table 1: success rates on unfamiliar test sentences, with one column per parameter setting (numeric entries omitted). The rows are, in order:
- Neural network in Figure 2
- Nearest neighbor on features learned by the neural net
- Nearest neighbor on features extracted by the optimal feature map
- Nearest neighbor on features extracted by the simple concatenated one-hot feature map
- Theoretical upper bound from inequality (10)
- SVM on features extracted by the optimal feature map
- SVM on features extracted by the simple concatenated one-hot feature map
- SVM with Gaussian kernel
Results. The first two rows of the table correspond to algorithms that learn features from the data; the remaining rows correspond to algorithms that use a pre-determined (not learned) feature map. Table 1 reports the success rate of each algorithm on unfamiliar test sentences. A crystal-clear pattern emerges. Algorithms that learn features generalize almost perfectly, while algorithms that do not learn features fail catastrophically. Moreover, the specific classification rule matters little. For example, replacing MLP 2 by a nearest neighbor classifier on top of the features learned by the neural network leads to equally accurate results. Similarly, replacing the nearest neighbor classifier by an SVM on top of features extracted by or leads to almost equally poor results. The only thing that matters is whether or not the features are learned. Finally, inequality (10) gives an upper bound of on the success rate of the nearest neighbor classification rule applied on top of any possible feature map (including and ). The fifth row of Table 1 compares this bound against the empirical accuracy obtained with and , and the comparison shows that our theoretical upper bound is relatively tight.
In this setting, our main theorem states that no feature map can succeed more than of the time on unfamiliar test sentences (fifth row of the table). At first glance this appears to contradict the empirical performance of the feature map extracted by the neural network, which succeeds of the time (second row of the table). The resolution of this apparent contradiction lies in the order of operations, which is precisely what separates hand-crafted or fixed features from learned features. If we choose the feature map before the random selection of the task, then the algorithm performs poorly since it uses unlearned, task-independent features. By contrast, the neural network learns a feature map from the training set, and since the training set is generated by the task, this process takes place after the random selection of the task. It therefore uses task-dependent features, and the network performs almost perfectly for the specific task that generated its training set. But by our main theorem, it too must fail if the task changes but the features do not.
Acknowledgements
Xavier Bresson is supported by NRF Fellowship NRFF2017-10 and NUS-R-252-000-B97-133.
References
- [1] Zeyuan Allen-Zhu and Yuanzhi Li. What can resnet learn efficiently, going beyond kernels? Advances in Neural Information Processing Systems, 32, 2019.
- [2] Zeyuan Allen-Zhu and Yuanzhi Li. Backward feature correction: How deep learning performs deep learning. arXiv preprint arXiv:2001.04413, 2020.
- [3] Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via over-parameterization. In International Conference on Machine Learning, pages 242–252. PMLR, 2019.
- [4] Sanjeev Arora, Rong Ge, Behnam Neyshabur, and Yi Zhang. Stronger generalization bounds for deep nets via a compression approach. In International Conference on Machine Learning, pages 254–263. PMLR, 2018.
- [5] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
- [6] Peter L Bartlett, Dylan J Foster, and Matus J Telgarsky. Spectrally-normalized margin bounds for neural networks. Advances in neural information processing systems, 30, 2017.
- [7] Gavin Brown, Mark Bun, Vitaly Feldman, Adam Smith, and Kunal Talwar. When is memorization of irrelevant training data necessary for high-accuracy learning? In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, pages 123–132, 2021.
- [8] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3):1–27, 2011.
- [9] Simon S Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably optimizes over-parameterized neural networks. arXiv preprint arXiv:1810.02054, 2018.
- [10] Vitaly Feldman. Does learning require memorization? a short tale about a long tail. In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, pages 954–959, 2020.
- [11] Vitaly Feldman and Chiyuan Zhang. What neural networks memorize and why: Discovering the long tail via influence estimation. Advances in Neural Information Processing Systems, 33:2881–2891, 2020.
- [12] Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, and Andrea Montanari. Limitations of lazy training of two-layers neural network. Advances in Neural Information Processing Systems, 32, 2019.
- [13] Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, and Andrea Montanari. When do neural networks outperform kernel methods? Advances in Neural Information Processing Systems, 33:14820–14830, 2020.
- [14] Noah Golowich, Alexander Rakhlin, and Ohad Shamir. Size-independent sample complexity of neural networks. In Conference On Learning Theory, pages 297–299. PMLR, 2018.
- [15] Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization in neural networks. Advances in neural information processing systems, 31, 2018.
- [16] Ziwei Ji and Matus Telgarsky. Polylogarithmic width suffices for gradient descent to achieve arbitrarily small test error with shallow relu networks. arXiv preprint arXiv:1909.12292, 2019.
- [17] Stefani Karp, Ezra Winston, Yuanzhi Li, and Aarti Singh. Local signal adaptivity: Provable feature learning in neural networks beyond kernels. Advances in Neural Information Processing Systems, 34, 2021.
- [18] Yuanzhi Li, Tengyu Ma, and Hongyang R Zhang. Learning over-parametrized two-layer neural networks beyond ntk. In Conference on learning theory, pages 2613–2682. PMLR, 2020.
- [19] Ziwei Liu, Zhongqi Miao, Xiaohang Zhan, Jiayun Wang, Boqing Gong, and Stella X Yu. Large-scale long-tailed recognition in an open world. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2537–2546, 2019.
- [20] Eran Malach, Pritish Kamath, Emmanuel Abbe, and Nathan Srebro. Quantifying the benefit of using differentiable learning over tangent kernels. In International Conference on Machine Learning, pages 7379–7389. PMLR, 2021.
- [21] Behnam Neyshabur, Srinadh Bhojanapalli, and Nathan Srebro. A PAC-Bayesian approach to spectrally-normalized margin bounds for neural networks. arXiv preprint arXiv:1707.09564, 2017.
- [22] Behnam Neyshabur, Zhiyuan Li, Srinadh Bhojanapalli, Yann LeCun, and Nathan Srebro. Towards understanding the role of over-parametrization in generalization of neural networks. arXiv preprint arXiv:1805.12076, 2018.
- [23] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
- [24] Maria Refinetti, Sebastian Goldt, Florent Krzakala, and Lenka Zdeborová. Classifying high-dimensional Gaussian mixtures: Where kernel methods fail and neural networks succeed. In International Conference on Machine Learning, pages 8936–8947. PMLR, 2021.
- [25] Ruslan Salakhutdinov, Antonio Torralba, and Josh Tenenbaum. Learning to share visual appearance for multiclass object detection. In CVPR 2011, pages 1481–1488. IEEE, 2011.
- [26] Colin Wei, Jason D Lee, Qiang Liu, and Tengyu Ma. Regularization matters: Generalization and optimization of neural nets vs their induced kernel. Advances in Neural Information Processing Systems, 32, 2019.
- [27] Gilad Yehudai and Ohad Shamir. On the power and limitations of random features for understanding neural networks. Advances in Neural Information Processing Systems, 32, 2019.
- [28] Xiangxin Zhu, Dragomir Anguelov, and Deva Ramanan. Capturing long-tail distributions of object subcategories. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 915–922, 2014.
Appendix
In Section A we prove a few elementary properties of the permuted moment (11). Section B is devoted to the proof of inequality (13), which we restate here for convenience:
(14) |
where the collection of tasks consists of all possible tasks that one might encounter. Inequality (14) plays a central role in our work as it establishes the connection between the generalization error, the permuted moment, and the optimal kernel defined by (12). The proof is non-technical and easily accessible. In Section C we provide the following upper bound on the permuted moment of the optimal kernel:
(15) |
for all . The proof is combinatorial in nature and involves the analysis of a graph-cut problem. Combining (14) and (15) establishes Theorem 3. In Section D we consider the case in which each unfamiliar sequence of concepts has representatives in the training set. A simple union bound shows that, in this situation, inequality (14) becomes
(16) |
Combining (16) and (15) then provides our most general error bound, see Theorem 7. Inequality (10) in the main body of the paper is just a special case of Theorem 7. Finally, in Section E, we provide the full details of the experiments.
Appendix A Properties of the Permuted Moment
In Section 5, the permuted moment was defined for probability vectors only. It will prove convenient to consider the permuted moment of nonnegative vectors as well. We denote by the nonnegative real numbers, and by the vectors with nonnegative real entries indexed from to . The permuted moment of is then given by
(17) |
where denotes the set of permutations of . The concept of an ordering permutation will prove useful in the next lemma.
Definition 1.
is said to be an ordering permutation of if
(18) |
The lemma below shows that the permutation maximizing (17) is the one that sorts the entries from smallest to largest.
Lemma 1.
Let and let be an ordering permutation of . Then
(19) |
Proof.
The optimization problem (19) can be formulated as finding a pairing between the ’s and the ’s that maximizes the sum of the products of the pairs. An ordering permutation of corresponds to pairing the smallest entry of to , the second smallest entry to , the third smallest entry to , and so forth. This pairing is optimal by the rearrangement inequality: the sum of pairwise products is maximized when both sequences are sorted in the same order. ∎
In light of the previous lemma, we see that computing the permuted moment of a vector can be accomplished as follows: 1) sort the entries of from smallest to largest; 2) compute the dot product between this sorted vector and the vector
(20) |
Let us now focus on the case where is a probability distribution. If is very peaked, it must have a large permuted moment since, after sorting, most of the mass concentrates on the high values of (20) located on the right. On the contrary, if is very spread out, it must have a small permuted moment since it ‘wastes’ its mass on small values of (20). Because of this, the permuted moment is akin to the negative entropy; it has large values for delta-like distributions and small values for uniform ones.
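Lemma 1 is easy to check numerically on small examples by comparing a brute-force maximum over all permutations with the sort-and-dot-product computation described above; sizes and weights below are arbitrary.

```python
# Numerical check of Lemma 1: the maximum over all permutations equals the value
# obtained by sorting the entries from smallest to largest.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 5
weights = np.sort(rng.random(n))              # an increasing weight vector
p = rng.random(n)

brute = max(np.dot(weights, p[list(sigma)]) for sigma in itertools.permutations(range(n)))
sorted_value = np.dot(weights, np.sort(p))
assert np.isclose(brute, sorted_value)
print(brute, sorted_value)
```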
We now show that the permuted moment is subadditive and one-homogeneous on (as a consequence it is convex on the set of probability vectors) and we derive some elementary and bounds. We denote by the -norm of a vector . In particular, if , we have
With this notation in hand, we can now state our lemma:
Lemma 2.
- (i) for all .
- (ii) for all and all .
- (iii) for all .
- (iv) for all .
Proof.
Properties (i) and (ii) are obvious. To prove (iii) and (iv), define and note that
Then (iii) comes from whereas (iv) comes from ∎
We conclude this section with a slightly more sophisticated bound that holds for probability vectors — this bound will play a central role in Section C.
Lemma 3.
Suppose , and suppose . Then
Proof.
Fix a and define the vectors and as follows:
Note that these two vectors are non-negative and sum to . We can therefore use Lemma 2 to obtain
To conclude, we note that , and ∎
Appendix B Permuted Moment of and Generalization Error
This section is devoted to the proof of inequality (14). We start by recalling a few definitions. The vocabulary, set of concepts, data space, and latent space are
respectively. Elements of are sentences of words and they take the form while elements of take the form and correspond to sequences of concepts. We also recall that the collection of all equipartitions of the vocabulary is denoted by
where denotes the size of the concepts. Given , we denote by the function
that operates on sentences element-wise. The informal statement “the sentence is randomly generated by the sequence of concepts ” means that is sampled uniformly at random from the set
(21) |
We will often abuse notation and write instead of . We now formally define the sampling process associated with our main data model.
Sampling Process DM:
(i) Sample uniformly at random in .
(ii) For :
• Sample uniformly at random in .
• Sample uniformly at random in .
(iii) Sample uniformly at random in .
Step (i) of the above sampling process consists in selecting at random a task among all possible tasks. Step (ii) consists in generating a training set (we refer to as the ‘training set’; in our formalism, however, it is not a set but an element of ) exactly as depicted in Figure 1: each unfamiliar sequence of concepts generates sentences, whereas each familiar sequence of concepts generates sentences (recall that the number of samples per category is ). Finally, step (iii) consists in randomly generating an unfamiliar test sentence . Without loss of generality we assume that this test sentence is generated by the unfamiliar sequence of concepts .
We denote by the p.d.f. of the sampling process DM. This function is defined on the sample space
A sample from takes the form
and we have the following formula for
(22) |
where and denote the indicator functions of the set and respectively. Let us compute a few marginals of in order to verify that it is indeed the p.d.f. of the sampling process DM. Writing , summing over the variables and , and using the fact that we obtain
This shows that each task is equiprobable. Summing over the variable gives
This shows that, given a task , the test sentence is obtained by sampling uniformly at random from . A similar calculation shows that, given a task , the train sentence is obtained by sampling uniformly at random from if , and from if .
Given a feature space and a feature map , we define the events as follow:
(23) |
Note that this event consists of all the outcomes for which the feature map associates the test sentence with a training point from the first category. Since, by construction, belongs to the first category, consists of all the outcomes for which the nearest neighbor classification rule is ‘successful’. As a consequence, when , the generalization error can be expressed as
(24) |
where denotes the probability measure on induced by . Equation (24) should be viewed as our ‘fully formal’ definition of the quantity , as opposed to the more informal definition given earlier by equations (6) and (7).
The goal of this section is to prove inequality (14), which, in light of (24) is equivalent to
(25) |
which is in turn equivalent to
(26) |
where the event is defined by
(27) |
and where the supremum is taken over all kernels which are symmetric positive semidefinite. We will actually prove a slightly stronger result, namely
(28) |
where the supremum is taken over all functions that satisfy for all . The rest of the section is devoted to proving (28).
In Subsection B.1 we start by considering a simpler data model — for this simpler data model we are able to show that the function implicitly defined by (12) is the best possible feature map (we actually only work with the associated kernel , and never need itself). We also show that the success rate is exactly equal to the permuted moment of — see Theorem 4, which is the central result of this section. In the remaining subsections, namely Subsection B.2 and Subsection B.3, we leverage the bound obtained for the simpler data model in order to obtain bound (28) for the main data model. These two subsections are mostly notational. The core of the analysis takes place in Subsection B.1.
B.1 A Simpler Data Model
We start by presenting the sampling process associated with our simpler data model.
Sampling Process SDM:
(i) Sample uniformly at random in . Sample uniformly at random in .
(ii) For : Sample uniformly at random in .
(iii) Sample uniformly at random in .
The function
(29) |
on is the p.d.f. of the above sampling process. We use to denote the probability measure on induced by this function. The identity in the next theorem is the central result of this section.
Theorem 4.
Let denote the set of all symmetric functions from to . Then
(30) |
In (30), stands for the function . Theorem 4 establishes an intimate connection between the permuted moment and the ability of any fixed feature map (or equivalently, any fixed kernel) to generalize well in our framework. The sampling process considered in this theorem involves two points, and , generated by the same sequence of concepts , and ‘distractor’ points generated by different sequences of concepts. Success for the kernel means correctly recognizing that is more ‘similar’ to than any of the distractors, and the success rate in (30) precisely quantifies its ability to do so as a function of the number of distractors. The theorem shows that the probability of success for the best possible kernel at this task is exactly equal to the averaged -permuted moment of , so it elegantly quantifies the generalization ability of the best possible fixed feature map in terms of the permuted moment. We also provide an explicit construction for a kernel that achieves the supremum in (30) — first, choose a kernel that satisfies
- (i) for all with .
- (ii) for all .
and then define the following perturbation
(31) |
of . Any such kernel is a maximizer of the optimization problem in (30), so we may think of perturbations of as bona-fide optimal.
The rest of this subsection is devoted to the proof of Theorem 4, and we also show, in the course of the proof, that (31) is a maximizer of the optimization problem in (30). We use to denote the set of all symmetric functions from to . We will refer to such functions as ‘kernels’ despite the fact that these functions are not necessarily positive semi-definite.
Proving Theorem 4 requires that we study the following optimization problem:
(32)
over all kernels . (33)
We recall the definition of the optimal kernel,
(34) |
where denotes the size of a concept. We start with the following simple lemma:
Lemma 4.
The function is a probability distribution on .
Proof.
First note that can be written as
(35) |
Since maps exactly words to each concept , we have that
(36) |
Therefore
∎
We now show that the marginal of the p.d.f. is related to .
Lemma 5.
For all and in we have
Proof.
Identity (36) can be expressed as for all . As a consequence, definition (29) of simplifies to
(37) |
where the constant is given by
In the above we have used the fact that and . We then note that the identity implies
(38) | |||
(39) |
for all . Summing (37) over the variables we obtain
where we have used (38) and (39) to obtain the last equality. Summing the above over the variable gives . ∎
The next lemma provides a purely algebraic (as opposed to probabilistic) formulation for the functional defined in (32).
Lemma 6.
The functional can be expressed as
(40) |
Proof.
Let be the indicator function defined by
Let denote the sample . Since only depends on the last variables of , we have
(41) | |||
(42) | |||
(43) | |||
(44) | |||
(45) |
where we have used Lemma 5 to go from (43) to (44). Writing the indicator function as a product of indicator functions,
we obtain the following expression for the term appearing between parentheses in (45):
Changing the name of variables to gives (40). ∎
We now use expression (40) for and reformulate optimization problem (32)-(33) into an equivalent optimization problem over symmetric matrices. Putting an arbitrary ordering on the set (starting with ) and denoting by the value of the kernel on the pair that consists of the and element of , we see that optimization problem (32)-(33) can be written as
(46) | |||
(47) |
In the above we have used the letter to denote the cardinality of , that is , and we have used the notation . Before solving the matrix optimization problem (46)-(47), we start with a simpler vector optimization problem. Let be a probability vector, that is with , and consider the optimization problem:
(48) | |||
(49) |
Recall from Definition 1 that an ordering permutation of a vector is a permutation that sorts its entries from smallest to largest. We will say that two vectors have the same ordering if there exists a permutation which is an ordering permutation for both and . The following lemma is key — it shows that the optimization problem (48)–(49) has a simple solution.
Lemma 7.
The following identity
holds. Moreover, the supremum is achieved by any vector that has mutually distinct entries (that is, for all ) and that has the same ordering as .
Proof.
Let denote the vectors of N that have mutually distinct entries. We will first show that
(50) |
To do this we show that for any , there exists such that
(51) |
There are many ways to construct such a . One way is to simply set for some permutation that orders . Indeed, note that provides the position of in the sequence of inequality (18). Therefore if we must have that . This implies
which in turn implies (51).
Because of (50) we can now restrict our attention to . Note that if , then it has a unique ordering permutation ,
and, recalling that provide the position of in the above ordering, we clearly have that
Therefore, if and if denotes its unique ordering permutation, can be expressed as
(52) |
Looking at definition (11) of the permuted moment, it is then clear that for all . We then note that if has the same ordering as , then its unique ordering permutation must also be an ordering permutation of . Then (52) combined with Lemma 1 implies that . This concludes the proof. ∎
Relaxing the symmetry constraint in the optimization problem (46)-(47) gives the following unconstrained problem over all -by- matrices:
(53) | |||
(54) |
Let us denote by the row of the matrix and remark that is a probability vector (because is a probability distribution on , see Lemma 4). We then note that the above unconstrained problem decouples into separate optimization problems of the type (48)-(49) in which the probability vector must be replaced by the probability vector . Using Lemma 7 we therefore have that any that satisfies, for each ,
- (a) The entries of are mutually distinct,
- (b) and have the same ordering,
must be a solution of (53)-(54). Lemma 7 also gives:
Now let be a symmetric matrix that satisfies:
- (i) for all with ,
- (ii) for all ,
and define the following perturbation of the matrix :
(55) |
Recalling definition (35) of the kernel , it is clear that for each , we have
(56) |
As a consequence, perturbing by adding to its entries quantities smaller than cannot change the ordering of its rows. Therefore the kernel defined by (55) satisfies (b). It also satisfies (a). Indeed, if and , then we clearly have that due to (i). On the other hand, if , then due to (ii) and (56).
We have therefore constructed a symmetric matrix that is a solution of the optimization problem (53)-(54). As a consequence we have
where should now be interpreted as the set of -by- symmetric matrices. The above equality proves Theorem 4, and we have also shown that the perturbed kernel (55) achieves the supremum.
B.2 Connection Between the Two Sampling Processes
In this subsection we show that the p.d.f. of Sampling Process SDM can be obtained by marginalizing the p.d.f. of Sampling Process DM over a subset of the variables. We also compute another marginal of that will prove useful in the next subsection. Recall that
(57) |
on is the p.d.f. of the sampling process for our main data model. Samples from take the form
We separate these variables into two groups, , where
(58) | ||||
(59) |
The variable belongs to , and the variable belongs to . Note that the variables in contain, among others, sequences of concepts and training points (the first and second training points of each category). Each of these training points is generated by one of the sequences of concepts. So the variables involved in are generated by a process similar to the one involved in the simpler data model. The following lemma shows that the marginal of , after marginalizing , is indeed .
Lemma 8.
For all we have
Proof of Lemma 8.
We start by reorganizing the terms involved in the product defining so that the variables in and are clearly separated:
To demonstrate the process, let us start by summing the above formula over the first variable of , namely . Since this variable only occurs in the last term of the above product, we have:
Since , the last term of the above product is equal to and can therefore be omitted. Repeating this process for all the that constitute leads to the desired result. ∎
In the next subsection we will need the marginal of with respect to another set of variables. To this aim we write where
(61) | |||
(62) |
Note that all the unfamiliar training points are contained in . The test point and the familiar training points are in . We also let and .
Lemma 9.
For all we have
Proof.
We reorganize the terms involved in the product defining so that the variables in and are separated:
Summing the above formula over the last variable involved in , namely , gives
The last term in the above product is equal to and can therefore be omitted. Iterating this process gives
We then use the fact that see (38), together with , see (36), to obtain
We then sum over . Since these variables are not involved in the above formula we get
where we have used to obtain the last equality. Summing over gives
To obtain the last equality we have used (39) but for a product of indicator functions instead of just two. Iterating this process we obtain
Using one more time that and gives the desired result. ∎
B.3 Conclusion of Proof
We now establish the desired upper bound (28), which we restate below for convenience:
(63) |
where
(64) |

We recall that the test point is generated by the unfamiliar sequence of concepts and that it belongs to category 1, see Figure 4. The event describes all the outcomes in which the training point most similar to (where similarity is measured with respect to the kernel ) belongs to the first category. There are two very distinct cases within the event : the training point most similar to can be — this corresponds to a ‘meaningful success’ in which the learner recognizes that is generated by the same sequence of concepts as , see Figure 4. Or the training point most similar to can be one of the points — this corresponds to a ‘lucky success’ because are not related to (they are generated by a different sequence of concepts, see Figure 4). To make this discussion formal, we fix a kernel , and we partition the event as follows
(65) |
where
The next two lemmas provide upper bounds for the probability of the events and
Lemma 10.
Proof.
Define the event
This event involves only the first two training points of each category. In the example depicted in Figure 4, that would be the points and , the points and , and finally the points and . The event consists of all the outcomes in which, among these training points, is most similar to . We then make two key remarks. First, these points are generated by distinct sequences of concepts — so if we restrict our attention to these points, we are in a situation very similar to the simpler data model (i.e. we first generate sequences of concepts, then from each sequence of concepts we generate a single training point, and finally we generate a test point from the first sequence of concepts). We will make this intuition precise by appealing to the fact that is the marginal of and this will allow us to obtain a bound for in terms of the permuted moment of . The second remark is that is clearly contained in , and therefore we have
(66) |
so an upper bound for is also an upper bound for .
Let us rename some of the variables. We define , and as follows:
In the example depicted in Figure 4, that would be:
In other words, the ’s are the first two training points of each category and the ’s are their corresponding sequences of concepts. With these notations it is clear that the training points are generated by distinct sequences of concepts, and that the test point is generated by the same sequence of concepts as . Moreover the event can now be conveniently written as
Let be the indicator function defined by
We now recall the splitting described in (58)-(59) and note that can be written as
Since only depends on the variables involved in , and since, according to Lemma 8, , we obtain
where we have used Theorem 4 in order to get the last inequality (with the understanding that .) Combining the above bound with (66) concludes the proof. ∎
We now estimate the probability of the event
Lemma 11.
Proof.
For , we define the events
Note that the events involve only the training points with an index : these are the familiar training points. In the example depicted in Figure 4, these are the training points generated by and . Let us pursue this example. The event consists of all the outcomes in which one of the points generated by is more similar to than any of the points generated by and . The event consists of all the outcomes in which one of the points generated by is more similar to than any of the points generated by and . And finally the event consists of all the outcomes in which one of the points generated by is more similar to than any of the points generated by and . Importantly, the test point is generated by the unfamiliar sequence of concepts , and this sequence of concepts is unrelated to the sequences and . So one would expect, by simple symmetry, that
(67) |
We will prove (67) rigorously, for general , using Lemma 9 from the previous subsection. But before doing so, let us show that (67) implies the desired upper bound on the probability of First, note that and therefore
(68) |
Then, note that , and are mutually disjoint, and therefore, continuing with the same example,
We now provide a formal proof of (67). As in the proof of the previous lemma, it is convenient to rename some of the variables. Let us denote by the variable that consists of the familiar training points from category :
With this notation we have We now recall the splitting described in (61)-(62), and note that can be written as
(69) |
Using Lemma 9 we have
(70)
(71)
(72)
where is the constant appearing in front of the product in Lemma 9 and is the indicator function implicitly defined in equalities (71)-(72). With the slight abuse of notation of viewing as a set instead of a tuple, let us rewrite the event as
We also define the corresponding indicator function
We now compute (the formulas for the other are obtained in a similar manner.) Recall from (69) that the variables involved in only appear in . Therefore
(73)
(74)
(75)
where we have used (72) to obtain the last equality. Let us now compare and . Letting , and recalling that , we obtain:
From the above it is clear that (as can be seen by exchanging the names of the variables and ). Similar reasoning shows that the events are all equiprobable, which concludes the proof. ∎
Appendix C Upper bound for the Permuted Moment of
This section is devoted to the proof of inequality (15), which we state below as a theorem for convenience.
Theorem 5.
For all , we have the upper bound
(76) |
The rather intricate formulas for the functions and can be found in the main body of the paper, but we will recall them as we go through the proof.
We also recall that the optimal kernel is given by the formula:
(77) |
The key insight to derive the upper bound (76) is to note that each pair of sentences induces a graph on the vocabulary , and that the quantity
can be interpreted as the number of equipartitions of the vocabulary that do not sever any of the edges of the graph. This graph-cut interpretation of the optimal kernel is presented in detail in Subsection C.1. In Subsection C.2 we derive a formula for which is more tractable than (77). To do this we partition into subsets on which is constant, then provide a formula for the value of on each of these subsets (c.f. Lemma 16). With this formula at hand, we then appeal to Lemma 3 to derive a first bound for the permuted moment of (c.f. Lemma 17). This first bound is not fully explicit because it involves the size of the subsets on which is constant. In Subsection C.3 we appeal to Cayley’s formula, a classical result from graph theory, to estimate the size of these subsets (c.f. Lemma 18) and therefore conclude the proof of Theorem 5.
We now introduce the combinatorial notations that will be used in this section, and we recall a few basic combinatorial facts. We denote by the nonnegative integers. We use the standard notations
for the binomial and multinomial coefficients, with the understanding that . We recall that multinomial coefficients can be interpreted as the number of ways of placing distinct objects into distinct bins, with the constraint that objects must go in the first bin, objects must go in the second bin, and so forth.
The Stirling numbers of the second kind are close relatives of the binomial coefficients. stands for the number of ways to partition a set of objects into nonempty subsets. To give a simple example, because there are 7 ways to partition the set into two non-empty subsets, namely:
Stirling numbers are easily computed via the following variant of Pascal’s recurrence formula (alternatively, Stirling numbers can be defined through the formula ):
The above formula is easily derived from the definition of the Stirling numbers as providing the number of ways to partition a set of objects into nonempty subsets (see for example chapter 6 of [graham1989concrete]). Table 2 shows the first few Stirling numbers.
n \ k | 1 | 2 | 3 | 4 | 5
---|---|---|---|---|---
1 | 1 | | | |
2 | 1 | 1 | | |
3 | 1 | 3 | 1 | |
4 | 1 | 7 | 6 | 1 |
5 | 1 | 15 | 25 | 10 | 1
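As a sanity check, the recurrence above is straightforward to implement; the snippet below is a minimal Python sketch (the function name stirling2 and the memoization are our own choices) that reproduces the rows of Table 2.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n: int, k: int) -> int:
    """Stirling number of the second kind, computed with the Pascal-like
    recurrence S(n, k) = k * S(n - 1, k) + S(n - 1, k - 1)."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

# Reproduces the rows of Table 2.
for n in range(1, 6):
    print([stirling2(n, k) for k in range(1, n + 1)])
```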
We recall that an undirected graph is an ordered pair , where is the vertex set and
is the edge set. Edges are unordered pairs of distinct vertices (so loops are not allowed.) A tree is a connected graph with no cycles. A tree on vertices has exactly edges. Cayley’s formula states that there are ways to put edges on labeled vertices in order to make a tree. We formally state this classical result below:
Lemma 12 (Cayley’s formula).
There are trees on labeled vertices.
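Cayley’s formula is easy to check numerically for small vertex counts; the brute-force sketch below (our own code, not part of the paper) enumerates all edge subsets of the right size and keeps the connected ones.

```python
from itertools import combinations

def is_connected(n, edges):
    """Check connectivity of a graph on vertices 0..n-1 with a small union-find."""
    parent = list(range(n))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(u) for u in range(n)}) == 1

def count_labeled_trees(n):
    """A graph on n vertices with n-1 edges is a tree iff it is connected,
    so counting connected (n-1)-edge subsets counts the labeled trees."""
    all_edges = list(combinations(range(n), 2))
    return sum(is_connected(n, es) for es in combinations(all_edges, n - 1))

for n in range(2, 7):
    assert count_labeled_trees(n) == n ** (n - 2)  # Cayley's formula
```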
C.1 Graph-Cut Formulation of the Optimal Kernel
In this section we consider undirected graphs on the vertex set
Since the data space consists of sentences of length , graphs that have at most edges will be of particular importance. We therefore define:
In other words, consists of all the graphs whose edge set has cardinality less than or equal to . Since these graphs all have the same vertex set, we will often identify them with their edge set. We now introduce a mapping between pairs of sentences containing words, and graphs containing at most edges.
Definition 2.
The function is defined by
(78) |
The right hand side of (78) is a set of at most edges. Since graphs in are identified with their edge set, indeed defines a mapping from to Let us give a few examples illustrating how the map works. Suppose we have a vocabulary of words and sentences of length . Consider the pair of sentences where
(79) |
Then is the set of 3 edges
which indeed define a graph on . Note that positions and of ‘code’ for the same edge , position codes for the edge , and position codes for the edge . On the other hand, positions and do not code for any edge: indeed, since and , these two positions do not contribute any edges to the edge set defined by (78). We will say that positions and are silent. We make this terminology formal in the definition below:
Definition 3.
Let . If for some , we say that position of the pair is silent. If for some , we say that position of the pair codes for the edge .
Note that if has some silent positions, or if multiple positions code for the same edge, then the graph will have strictly fewer than edges. On the other hand, if none of these take place, then will have exactly edges. For example the pair of sentences
(80) |
does not have silent positions, and all positions code for different edges. The corresponding graph
has the maximal possible number of edges, namely edges. From the above discussion, it is clear that any graph with or fewer edges can be expressed as for some pair of sentences Therefore is surjective. On the other hand, different pairs of sentences can be mapped to the same graph. Therefore is not injective. We now introduce the following function.
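To make the map of Definition 2 concrete, here is a minimal sketch in which sentences are tuples of word indices; the toy vocabulary and sentences are our own and are not the examples (79)-(80).

```python
def pair_to_graph(x, y):
    """Map a pair of sentences (tuples of words) to the edge set of the induced
    graph: position i contributes the edge {x_i, y_i} whenever x_i != y_i,
    while positions with x_i == y_i are 'silent' and contribute nothing."""
    return {frozenset((xi, yi)) for xi, yi in zip(x, y) if xi != yi}

# Toy illustration: two positions code for the same edge, two are silent.
x = (1, 2, 3, 2, 5)
y = (2, 1, 3, 4, 5)
print(pair_to_graph(x, y))  # {frozenset({1, 2}), frozenset({2, 4})}
```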
Definition 4 (Number of cut-free equipartitions of a graph).
The function is defined by :
(81) |
Recall that is the set of maps that satisfy for all Given a graph , the quantity can therefore be interpreted as the number of ways to partition the vertices into labelled subsets of equal size so that no edges are severed (i.e. two connected vertices must be in the same subset.) In other words, is the number of “cut-free” equipartitions of the graph , see Figure 5 for an illustration. Note that if the graph is connected, then since any equipartition of the graph will sever some edges. On the other hand, if the graph has no edges, then since there are no edges to be cut (and therefore any equipartition is acceptable.)
[Figure 5]
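The quantity defined in (81) can be checked by brute force on tiny examples; the sketch below (our own code, exponential in the vocabulary size and meant only for illustration) enumerates all labeled equipartitions and keeps those that sever no edge.

```python
from itertools import product
from collections import Counter

def cut_free_equipartitions(n_words, n_concepts, edges):
    """Count labeled equipartitions of the vocabulary {0, ..., n_words-1} into
    n_concepts blocks of equal size such that both endpoints of every edge
    receive the same label (i.e. no edge is severed)."""
    block_size = n_words // n_concepts
    count = 0
    for labels in product(range(n_concepts), repeat=n_words):
        if any(c != block_size for c in Counter(labels).values()):
            continue  # not an equipartition
        if all(labels[u] == labels[v] for u, v in edges):
            count += 1  # no edge is cut
    return count

# No edges: every equipartition is cut-free (here binomial(4, 2) = 6 of them).
print(cut_free_equipartitions(4, 2, []))
# A connected graph on all vertices: every equipartition severs some edge.
print(cut_free_equipartitions(4, 2, [(0, 1), (1, 2), (2, 3)]))
```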
The optimal kernel can be expressed as a composition of the function and . Indeed:
(82)
(83)
(84)
where we have simply used that to go from (82) to (83). We will refer to (84) as the graph-cut formulation of the optimal kernel.
We have discussed earlier that the function is surjective but not injective. We conclude this subsection with a lemma that provides an exact count of how many distinct are mapped by to the same graph
Lemma 13.
Suppose has edges. Then
Proof.
Fix once and for all a graph with edge set where . Given , define the set
We start by noting that the set is empty for all : indeed, since has edges, at least positions of a pair must be coding for edges (i.e., must be non-silent) in order to have . We therefore have the following partition:
To conclude the proof, we need to show that
(85) |
Consider the following process to generate an ordered pair that belongs to : we start by deciding which positions of (x,y) are going to be silent, and which positions are going to code for which edge of the graph . This is equivalent to choosing a map where denotes the positions, denote the edges of the graph , and is the silent symbol. Choosing a map corresponds to assigning a “role” to each position: means that position is given the role of coding for edge , and means that position is given the role of being silent. The map must satisfy:
(86) |
because position must be silent and each edge must be represented by at least one position. The number of maps that satisfy (86) is equal to
Indeed, we need to choose the positions that will be mapped to the silent symbol : there are ways of accomplishing this. We then partition the remaining positions into non-empty subsets: there are ways of accomplishing this. We finally map each of these non-empty subsets to a different edge: there are ways of accomplishing this.
We have shown that there are ways to assign roles to the positions. Let us say that position is assigned the role of coding for edge Then we have two choices to generate entries and : either and , or and Since positions are coding for edges, this leads to the factor in equation (85). Finally, if the position is silent, then we have choices to generate entries and (because we need to choose such that ), hence the factor appearing in (85). ∎
C.2 Level Sets of the Optimal Kernel
We recall that a connected component (or simply a component) of a graph is a connected subgraph that is not part of any larger connected subgraph. Since graphs in have at most edges, their components contain at most vertices. This comes from the fact that the largest component that can be made with edges contains vertices. It is therefore natural, given a vector , to define
(87) |
where the size of a component refers to the number of vertices it contains. We recall that , and therefore some of the entries of the vector can be equal to zero. Note that components of size are simply isolated vertices. The following lemma identifies which lead to a non-empty
Lemma 14.
The set is not empty if and only if satisfies
(88) |
Proof.
Suppose is not empty. Then there exists a graph that has exactly components of size , for A component of size contains vertices (by definition) and at least edges (otherwise it would not be connected.) Since it must have vertices and at most edges. Therefore (88) must hold.
Suppose that satisfies (88). Then we can easily construct a graph on that has a number of edges less than or equal to , and that has exactly components of size , for To do this we first partition the vertices into subsets so that there are subsets of size , for We then put edges on each subset of size so that they form connected components. The resulting graph has components of size , for , and edges. ∎
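The vector of component counts defined in (87) is easy to compute in practice; the sketch below uses a small union-find (the function name and argument names are our own).

```python
from collections import Counter

def component_size_profile(n_vertices, edges, max_size):
    """Return the vector whose s-th entry is the number of connected
    components of size s in the graph on vertices {0, ..., n_vertices-1}."""
    parent = list(range(n_vertices))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    for u, v in edges:
        parent[find(u)] = find(v)
    comp_sizes = Counter(Counter(find(u) for u in range(n_vertices)).values())
    return [comp_sizes.get(s, 0) for s in range(1, max_size + 1)]

# Two edges on 6 vertices: one component of size 3 and three isolated vertices.
print(component_size_profile(6, [(0, 1), (1, 2)], 3))  # [3, 0, 1]
```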
The previous lemma allows us to partition into non-empty subsets as follows:
(89)
(90)
Recall that counts the number of equipartitions that do not sever edges of The next lemma shows that two graphs that belong to the same subset have the same number of cut-free equipartitions, and it provides a formula for this number in terms of the index of the subset.
Lemma 15.
Suppose and define the set of admissible assignment matrices
(91) |
Then for all , we have that
(92) |
Let us remark that, since , the multinomial coefficient appearing in (92) is equal to when is equal to
Proof of Lemma 15.
Let and fix once and for all a graph . Define the set
so that Note that a map that belongs to must map all vertices that are in a connected component to the same concept (otherwise some edges of would be severed.) So a map can be viewed as assigning connected components to concepts. Given a matrix we define the set:
We then note that the set is empty unless the matrix satisfies:
The first constraint states that the total number of connected components of size is equal to (because ). The second constraint states that concept must receive a total of vertices (because .) The matrices that satisfy these two constraints constitute the set defined in (91). We therefore have the following partition of the set :
To conclude the proof, we need to show that
(93) |
To see this, consider the components of size . The number of ways to assign them to the concepts so that concept receives of them is equal to the multinomial coefficient Repeating this reasoning for the components of each size gives (93). ∎
We now leverage the previous lemma to obtain a formula for For we define
Since is surjective, partition (89) of induces the following partition of :
(94) |
Using the graph-cut formulation of the optimal kernel together with Lemma 15 we therefore have
(95) |
The above formula is key to our analysis. We restate it in the lemma below, but in a slightly different format that will better suit the rest of the analysis. Let be the function defined by
(96) |
We then have:
Lemma 16 (Level set decomposition of ).
The kernel is constant on each subset of the partition (94). Moreover we have
Proof.
The quantity appearing in (95) can be interpreted as the number of ways to assign the words to the concepts so that each concept receives words. Therefore
Combined with the fact that , this leads to the desired formula for ∎
The above lemma provides us with the level sets of the optimal kernel. Together with Lemma 3, this allows us to derive the following upper bound for the permuted moment of .
Lemma 17.
Let The inequality
holds for all .
Proof.
Let and define:
where we have used the fact that is equal to on We then appeal to Lemma 3 to obtain:
(97)
(98)
(99)
(100)
(101)
(102)
where we have used the fact that to go from (98) to (99), and the fact that on to go from (100) to (101). To conclude the proof, we simply note that according to our definition of ∎
The bound provided by the above lemma is not fully explicit because it involves the sizes of the level sets . In the next subsection, we appeal to Cayley’s formula to obtain a lower bound for
C.3 Forest Lower Bound for the Size of the Level Sets
We recall that a forest is a graph whose connected components are trees (equivalently, a forest is a graph with no cycles.) Let us define:
In other words, is the set of forests on that have at most edges. We obviously have the following lower bound on the size of the level sets:
(103) |
In this subsection, we use Cayley’s formula to derive an explicit formula for We start with the following lemma:
Lemma 18.
Let , then
(104) |
Proof.
First we note that (104) can be written as
We now explain the above formula. The set consists of all the forests that have exactly trees of size , trees of size , …, trees of size . In order to construct a forest with this specific structure, we start by assigning the vertices to bins, with bin 1 receiving vertices, bin 2 receiving vertices, …, bin receiving vertices. The number of ways of accomplishing this is
Let us now consider the vertices in bin for some . We claim that there are
ways of putting edges on these vertices in order to make trees of size . Indeed, there are ways of partitioning the vertices into disjoint subsets of size , and then, according to Cayley’s formula, there are ways of putting edges on each of these subsets so that they form a tree. To conclude, we remark that there is obviously only one way to make trees of size 1 out of the vertices in the first bin. ∎
Recall that a tree on vertices always has edges. So a graph that belongs to has
edges since it is made of trees of size , trees of size , …, trees of size . The fact that all graphs in have the same number of edges allows us to obtain an explicit formula for by combining Lemmas 13 and 18, namely
This leads us to define the function by
Recalling (103) we therefore have that
Combining the above inequality with Lemma 17 we obtain:
Theorem 6.
The inequality
holds for all .
The above theorem is more general than Theorem 5 — indeed, in Theorem 5, the choice of the subset is restricted to the candidates:
When , , and (these are the parameters used in Theorem 1), choosing leads to a relatively tight upper bound. When , , and (these are the parameters corresponding to the second experiment of the experimental section), choosing gives a good upper bound.
Appendix D Multiple Unfamiliar Sentences per Category
[Figure 6]
In the data model depicted in Figure 1, each unfamiliar sequence of concepts has a single representative in the training set. In this section we consider the more general case in which each unfamiliar sequence of concepts has representatives in the training set, where
An example with is depicted in Figure 6. The variables and parametrize instances of this more general data model, and the associated sampling process is:
Sampling Process DM2:
(i) Sample uniformly at random in .
(ii) For :
• Sample uniformly at random in .
• Sample uniformly at random in .
(iii) Sample uniformly at random in .
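The sketch below illustrates the word-level part of this sampling process in Python; the representation of concepts as lists of word indices, the parameter names, and the toy values are our own assumptions about the data model, not code from the paper.

```python
import random

def sentence_from_concepts(concept_seq, words_of_concept, rng=random):
    """Generate one sentence from a sequence of concepts by drawing, for each
    position, a word uniformly at random from the corresponding concept's
    word set (the sentence-generation mechanism assumed here)."""
    return tuple(rng.choice(words_of_concept[c]) for c in concept_seq)

# Toy illustration of step (ii): r representatives of one unfamiliar sequence.
words_of_concept = {0: [0, 1, 2], 1: [3, 4, 5], 2: [6, 7, 8]}
unfamiliar_seq = (0, 2, 1, 1)
r = 3
print([sentence_from_concepts(unfamiliar_seq, words_of_concept) for _ in range(r)])
```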
Our analysis easily adapts to this more general case and gives:
Theorem 7.
Let . Then
(105) |
for all feature space , all feature map , and all .
Theorem 3 in the main body of the paper can be viewed as a special case of the above theorem — indeed, setting in inequality (105) we exactly recover (9).
In order to prove Theorem 7, we simply need to revisit subsection B.3. We denote by and the sample space and probability measure associated with the sampling process DM2. As in subsection B.3, given a kernel , we define the event
This event describes all the outcomes corresponding to successful classification of the test point . For simplicity, let us assume that (thereby matching the scenario depicted in Figure 6). We further partition the event according to which training point from the first category is most similar to the test point:
(106) |
The event consists of all the outcomes where, among the points from the first category, is the most similar to ; consists of all the outcomes where, among the points from the first category, is the most similar to ; and consists of all the remaining cases. Formally we have:
Exactly as in subsection B.3, we then prove that
(107)
(108)
The proof of inequality (107) is essentially identical to the proof of Lemma 10. We define the event
(109) |
and the event
The ’s involved in the definition of the event are highlighted in yellow in Figure 6. Crucially they are all generated by different sequences of concepts, except for and . We can therefore appeal to Theorem 4 to obtain
since there is a total of ‘distractors’ (the ‘distractors’ in Figure 6 are , , , and ). We then use the fact to obtain (107) with . The case is exactly similar.
We now prove (108). The proof is similar to the proof of Lemma 11. For , we define the events
By symmetry, these events are equiprobable. They are also mutually disjoint, and therefore . Inequality (108) then comes from the fact that .
Combining (106), (107), (108) then gives
(110) |
and, going back to the general case where denotes the number of representatives that each sequence of unfamiliar concepts has in the training set,
(111) |
which in turn implies (16). Combining inequalities (15) and (16) then concludes the proof of Theorem 7.
Appendix E Details of the Experiments
In this section we provide the details of the experiments described in Section 6, as well as additional experiments. Table 3 provides the results of experiments in which the parameters , , and are set to
The parameters and are chosen so that the training set contains familiar sentences per category, and between and unfamiliar sentences per category. Table 3 is identical to Table 1 in Section 6, with the exception that it contains additional information (i.e. the standard deviations of the obtained accuracies). The abbreviation NN appearing in Table 3 stands for ‘Nearest Neighbor’.
Table 4 provides the results of additional experiments in which the parameters , , and are set to
The parameters and are chosen, as in the previous set of experiments, so that the training set contains familiar sentences per category, and between and unfamiliar sentences per category. The tasks considered in this set of experiments are easier due to the fact that the vocabulary is smaller ( instead of ).
[Table 3: results for the rows: Neural network; NN on feat. extracted by neural net; NN on feat. extracted by ; NN on feat. extracted by ; Upper bound (); SVM on feat. extracted by ; SVM on feat. extracted by ; SVM with Gaussian kernel]
E.1 Neural Network Experiments
We consider the neural network in Figure 2. The number of neurons in the input, hidden and output layers of the MLPs constituting the neural network are set to:
For each of the 10 possible parameter settings in Table 3 and Table 4, we do experiments. For each experiment we generate:
• A training set containing sentences.
• A test set containing unfamiliar sentences (10 sentences per category).
We then train the neural network with stochastic gradient descent until the training loss reaches (we use a cross-entropy loss). The learning rate is set to (constant learning rate), and the batch size to . At test time, we either use the neural network to classify the test points (first row of the tables) or we use a nearest neighbor classification rule on top of the features extracted by the neural network (second row of the tables). The mean and standard deviation of the test accuracies, for each of the 10 settings and for each of the two evaluation strategies, are reported in the first two rows of Table 3 and Table 4.
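For reference, the sketch below shows one plausible PyTorch instantiation of such a training setup; the architecture (a shared word-level MLP followed by mean pooling and a linear classifier), the layer sizes, and the hyperparameter values are placeholders of our own and are not necessarily those of Figure 2.

```python
import torch
import torch.nn as nn

# Placeholder sizes and hyperparameters (not the values used in the paper).
n_words, sentence_len, n_categories = 30, 9, 5
hidden_dim, feature_dim = 128, 64
lr, loss_threshold = 0.1, 1e-4

class SimpleNet(nn.Module):
    """Shared word-level MLP, mean pooling over positions, linear classifier."""
    def __init__(self):
        super().__init__()
        self.word_mlp = nn.Sequential(nn.Linear(n_words, hidden_dim), nn.ReLU(),
                                      nn.Linear(hidden_dim, feature_dim))
        self.classifier = nn.Linear(feature_dim, n_categories)

    def features(self, x):            # x: (batch, sentence_len, n_words), one-hot
        return self.word_mlp(x).mean(dim=1)

    def forward(self, x):
        return self.classifier(self.features(x))

def train(model, loader, max_epochs=1000):
    """SGD with a constant learning rate and cross-entropy loss, stopping once
    the average training loss falls below a threshold."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(max_epochs):
        total, count = 0.0, 0
        for xb, yb in loader:
            opt.zero_grad()
            loss = loss_fn(model(xb), yb)
            loss.backward()
            opt.step()
            total, count = total + loss.item() * len(yb), count + len(yb)
        if total / count < loss_threshold:
            break
    return model
```

At test time one would either take the argmax of model(x) or run a nearest neighbor rule on the features returned by model.features(x), mirroring the two evaluation strategies described above.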
E.2 Nearest-Neighbor Experiments
In these experiments we use a nearest neighbor classification rule on top of features extracted by (third row of Tables 3 and 4) or (fourth row). For each of the 10 possible parameter settings in Table 3 and Table 4, we do experiments. For each experiment we generate:
• A training set containing sentences.
• A test set containing unfamiliar sentences (one sentence per category).
In order to perform the nearest neighbor classification rule on the features extracted by , one needs to evaluate the kernel for each pair of sentences. Computing requires an expensive combinatorial calculation, which is why we perform fewer experiments and use a smaller test set than in E.1. In order to break ties, the values of are perturbed according to (31).
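The nearest neighbor rule used throughout these experiments only needs pairwise kernel evaluations; a minimal NumPy sketch is given below, with the argument names and the tie-breaking noise scale being our own stand-ins for the perturbation in (31).

```python
import numpy as np

def nearest_neighbor_predict(kernel, train_x, train_y, test_x,
                             tie_noise=1e-9, rng=None):
    """1-NN classification with similarity measured by `kernel`: each test
    point receives the label of the training point with the largest kernel
    value; a tiny random perturbation breaks ties."""
    rng = rng or np.random.default_rng(0)
    predictions = []
    for x in test_x:
        sims = np.array([kernel(x, xtr) for xtr in train_x], dtype=float)
        sims += tie_noise * rng.random(len(sims))
        predictions.append(train_y[int(np.argmax(sims))])
    return predictions
```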
With the parameter setting , , and , our theoretical lower bound for the generalization error is
(112) |
which is obtained by choosing in inequality (105). This leads to an upper bound of on the success rate. This upper bound is evaluated for ranging from to in the fifth row of Table 4.
[Table 4: results for the rows: Neural network; NN on feat. extracted by neural net; NN on feat. extracted by ; NN on feat. extracted by ; Upper bound (); SVM on feat. extracted by ; SVM on feat. extracted by ; SVM with Gaussian kernel]
E.3 SVM on Features Extracted by and SVM with Gaussian Kernel
For each of the 10 possible parameter settings in Table 3 and Table 4, we do experiments. For each experiment we generate:
• A training set containing sentences.
• A test set containing unfamiliar sentences (10 sentences per category).
We use the feature map (which simply concatenates the one-hot-encodings of the words composing a sentence) to extract features from each sentence. These features are further normalized according to the formula
(113) |
so that they are centered around and are . We then use the SVC function of Scikit-learn [23], which itself relies on the LIBSVM library [8], in order to run a soft multiclass SVM algorithm on these features. We tried various values for the parameter controlling the regularization in the soft-SVM formulation, and found that the algorithm, on this task, is not sensitive to this choice — so we chose . The results are reported in the seventh row of both tables.
We also tried a soft SVM with Gaussian Kernel
applied on top of features extracted by and normalized according to (113). We use the SVC function of Scikit-learn with regularization parameter set to . For the experiments in Table 3 (), the parameter involved in the definition of the kernel was set to when and to when . For the experiments in Table 4 (), it was set to when , to when , and finally to when .
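The sketch below shows, on synthetic stand-in data, how these two SVM baselines can be run with Scikit-learn's SVC; the normalization routine is only a placeholder for (113), the linear kernel for the feature-based SVM is an assumption, and the gamma value stands in for the tuned kernel hyperparameter.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy stand-ins for the concatenated one-hot sentence features and labels.
n_train, n_test, dim, n_categories = 200, 50, 60, 5
X_train = rng.integers(0, 2, size=(n_train, dim)).astype(float)
y_train = rng.integers(0, n_categories, size=n_train)
X_test = rng.integers(0, 2, size=(n_test, dim)).astype(float)
y_test = rng.integers(0, n_categories, size=n_test)

def normalize(features):
    """Placeholder for the normalization in (113): center the features and
    rescale each row to unit norm (the exact constants are given in the paper)."""
    f = features - features.mean(axis=0, keepdims=True)
    return f / (np.linalg.norm(f, axis=1, keepdims=True) + 1e-12)

# Soft multiclass SVM on the extracted features (a linear kernel is assumed
# here); C is the regularization parameter discussed in the text.
linear_svm = SVC(C=1.0, kernel="linear").fit(normalize(X_train), y_train)
print("linear SVM accuracy:", linear_svm.score(normalize(X_test), y_test))

# Soft SVM with Gaussian (RBF) kernel on the same normalized features; gamma
# stands in for the kernel hyperparameter tuned in the experiments.
rbf_svm = SVC(C=1.0, kernel="rbf", gamma=0.5).fit(normalize(X_train), y_train)
print("RBF SVM accuracy:", rbf_svm.score(normalize(X_test), y_test))
```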
E.4 SVM on Features Extracted by
[Table 5: test accuracies of 6.1%, 6.1%, and 6.6% for three choices of the kernel hyperparameter]
Applying an SVM on the features extracted by is equivalent to running a kernelized SVM with kernel . A naive implementation of such an algorithm leads to very poor results on our data model. For such an algorithm not to fail completely, it is important to carefully “rescale” so that the eigenvalues of the corresponding Gram matrix are well behaved. Recall that
(114) |
and let be a strictly increasing function. Since the nearest neighbor classification rule works by comparing the values of on various pairs of points, it is clear that using the kernel is equivalent to using the kernel In particular, choosing gives the following family of optimal kernels:
(115) |
To be clear, all these kernels are exactly equivalent to the kernel when using a nearest neighbor classification rule. However, they lead to different algorithms when used for kernelized SVM. We have experimented with various choices of the function and found that this logarithmic scaling works well for kernelized SVM.
For each of the 10 possible parameter settings in Table 3 and Table 4, we do experiments. For each experiment we generate:
• A training set containing sentences.
• A test set containing unfamiliar sentences (one sentence per category).
Let us denote by , , the data points in one of these training sets, and by , , the data points in the corresponding test set. In order to run the kernelized SVM algorithm we need to form the Gram matrices
(116) |
Constructing each of these Gram matrices takes a few days on CPU. We then use the SVC function of Scikit-learn to run a soft multiclass kernelized-SVM algorithm. We tried various values for the parameter controlling the regularization and found that the algorithm is not sensitive to this choice — so we chose . The algorithm, on the other hand, is quite sensitive to the choice of the hyperparameter defining the kernel . We experimented with various choices of and found that choosing the smallest that makes the Gram matrix positive definite works well (note that the Gram matrix should be positive semidefinite for the kernelized SVM method to make sense). In Table 5 we show an example, on a specific pair of training and test sets (the training set and test set used in this experiment were generated by our data model with parameters , , , , , and ), of how the eigenvalues of and the test accuracy depend on .
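For completeness, here is a sketch of how such a precomputed-kernel SVM can be run with Scikit-learn; the logarithmic rescaling and the `shift` parameter are our own stand-ins for the rescaled kernel (115) and its hyperparameter, and the toy Gram matrix is random rather than computed from (114).

```python
import numpy as np
from sklearn.svm import SVC

def kernelized_svm_predict(gram_train, y_train, gram_test_train, shift, C=1.0):
    """Kernelized soft SVM with a precomputed Gram matrix (Scikit-learn's
    'precomputed' kernel mode). `shift` is a stand-in for the hyperparameter
    of the rescaled optimal kernel; the logarithmic rescaling below is only a
    sketch of the scaling discussed above, not the paper's exact formula."""
    k_train = np.log(shift + gram_train)        # train-vs-train kernel values
    k_test = np.log(shift + gram_test_train)    # test-vs-train kernel values
    # The smallest eigenvalue indicates whether k_train is positive definite.
    print("smallest eigenvalue:", np.linalg.eigvalsh(k_train).min())
    model = SVC(C=C, kernel="precomputed").fit(k_train, y_train)
    return model.predict(k_test)

# Toy usage with a random positive semidefinite Gram matrix.
rng = np.random.default_rng(0)
feats = rng.random((30, 8))
gram = feats @ feats.T
labels = rng.integers(0, 3, size=30)
print(kernelized_svm_predict(gram[:20, :20], labels[:20], gram[20:, :20], shift=1.0))
```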