
Long-Tailed Learning Requires Feature Learning

Thomas Laurent (Loyola Marymount University, [email protected]), James H. von Brecht, Xavier Bresson (National University of Singapore, [email protected])
Abstract

We propose a simple data model inspired by natural data such as text or images, and use it to study the importance of learning features in order to achieve good generalization. Our data model follows a long-tailed distribution in the sense that some rare subcategories have few representatives in the training set. In this context we provide evidence that a learner succeeds if and only if it identifies the correct features, and moreover derive non-asymptotic generalization error bounds that precisely quantify the penalty that one must pay for not learning features.

1 Introduction

Part of the motivation for deploying a neural network arises from the belief that algorithms that learn features/representations generalize better than algorithms that do not. We try to give some mathematical ballast to this notion by studying a data model where, at an intuitive level, a learner succeeds if and only if it manages to learn the correct features. The data model itself attempts to capture two key structures observed in natural data such as text or images. First, it is endowed with a latent structure at the patch or word level that is directly tied to a classification task. Second, the data distribution has a long tail, in the sense that rare and uncommon instances collectively form a significant fraction of the data. We derive non-asymptotic generalization error bounds that quantify, within our framework, the penalty that one must pay for not learning features.

We first prove a two-part result that precisely quantifies the necessity of learning features within the context of our data model. The first part shows that a trivial nearest neighbor classifier performs perfectly when given knowledge of the correct features. The second part shows that it is impossible to a priori craft a feature map that generalizes well when using a nearest neighbor classification rule. In other words, success or failure depends only on the ability to identify the correct features and not on the underlying classification rule. Since this cannot be done a priori, the features must be learned.

Our theoretical results therefore support the idea that algorithms cannot generalize on long-tailed data if they do not learn features. By contrast, an algorithm that does learn features can generalize well. Specifically, the most direct neural network architecture for our data model generalizes almost perfectly when using either a linear classifier or a nearest neighbor classifier on top of the learned features. Crucially, designing the architecture requires knowing only the meta structure of the problem, but no a priori knowledge of the correct features. This illustrates the built-in advantage of neural networks: their ability to learn features significantly eases the design burden placed on the practitioner.

Subcategories in commonly used visual recognition datasets tend to follow a long-tailed distribution [25, 28, 11]. Some common subcategories have a wealth of representatives in the training set, whereas many rare subcategories only have a few representatives. At an intuitive level, learning features seems especially important on a long-tailed dataset since features learned from the common subcategories help to properly classify test points from a rare subcategory. Our theoretical results help support this intuition.

We note that when considering complex visual recognition tasks, datasets are almost unavoidably long-tailed [19]: even if the dataset contains millions of images, it is to be expected that many subcategories will have few samples. In this setting, the classical approach of deriving asymptotic performance guarantees based on a large-sample limit is not a fruitful avenue. Generalization must be approached from a different point of view (cf. [10] for very interesting work in this direction). In particular, the analysis must be non-asymptotic. One of our main contributions is to derive, within the context of our data model, generalization error bounds that are non-asymptotic and relatively tight; by this we mean that our results hold for small numbers of data samples and track reasonably well with empirically evaluated generalization error.

In Section 2 we introduce our data model and in Section 3 we discuss our theoretical results. For simplicity of exposition, both sections focus on the case where each rare subcategory has a single representative in the training set. Section 4 is concerned with the general case in which each rare subcategory has a few representatives. Section 5 provides an overview of our proof techniques. Finally, in Section 6, we investigate empirically a few questions that we could not resolve analytically. In particular, our error bounds are restricted to the case in which a nearest neighbor classification rule is applied on top of the features; we provide empirical evidence in this last section that replacing the nearest neighbor classifier by a linear classifier leads to very minimal improvement. This further supports the notion that, on our data model, it is the ability to learn features that drives success, not the specific classification rule used on top of the features.

Related work. By now, a rich literature has developed that studies the generalization abilities of neural networks. A major theme in this line of work is the use of the PAC learning framework to derive generalization bounds for neural networks (e.g., [6, 21, 14, 4, 22]), usually by proving a bound on the difference between the finite-sample empirical loss and the true loss. While powerful in their generality, such approaches are usually task independent and asymptotic; that is, they are mostly agnostic to any idiosyncrasies in the data generating process and need a statistically meaningful number of samples in the training set. As such, the PAC learning framework is not well-tailored to our specific aim of studying generalization on long-tailed data distributions; indeed, in such a setting, a rare subcategory might have only a handful of representatives in the training set.

After breakthrough results (e.g., [15, 9, 3, 16]) showed that vastly over-parametrized neural networks become kernel methods (the so-called Neural Tangent Kernel or NTK) in an appropriate limit, much effort has gone toward analyzing the extent to which neural networks outperform kernel methods [27, 26, 24, 12, 13, 17, 1, 2, 18, 20]. Our interest lies not in proving such a gap for its own sake, but rather in using the comparison to gain some understanding of the importance of learning features in computer vision and NLP contexts.

Analyses that shed theoretical light onto learning with long-tailed distributions [10, 7] or onto specific learning mechanisms [17] are perhaps closest to our own. The former analyses [10, 7] investigate the necessity of memorizing rare training examples in order to obtain near-optimal generalization error when the data distribution is long-tailed. Our analysis differs to the extent that we focus on the necessity of learning features and sharing representations in order to properly classify rare instances. Like us, the latter analysis [17] also considers a computer vision inspired task and uses it to compare a neural network to a kernel method, with the ultimate aim of studying the learning mechanism involved. Their object of study (finding a sparse signal in the presence of noise), however, markedly differs from our own (learning with long-tailed distributions).

Figure 1: Data model with parameters set to $L=5$, $n_w=12$, $n_c=3$, $R=3$, and $n_{\rm spl}=5$.

2 The Data Model

We begin with a simple example to explain our data model and to illustrate, at an intuitive level, the importance of learning features when faced with a long-tailed data distribution. For the sake of exposition we adopt NLP terminology such as ‘words’ and ‘sentences,’ but the image-based terminology of ‘patches’ and ‘images’ would do as well.

The starting point is a very standard mechanism for generating observed data from some underlying collection of latent variables. Consider the data model depicted in Figure 1. We have a vocabulary of $n_w=12$ words and a set of $n_c=3$ concepts:

$$\mathcal{V}=\{\text{potato, cheese, carrots, chicken},\ldots\}\qquad\text{and}\qquad\mathcal{C}=\{\text{vegetable, dairy, meat}\}.$$

The 12 words are partitioned into the 3 concepts as shown on the left of Figure 1. We also have 6 sequences of concepts of length $L=5$. They are denoted by ${\bf c}_1,{\bf c}_2,{\bf c}_3$ and ${\bf c}'_1,{\bf c}'_2,{\bf c}'_3$. Sequences of concepts are latent variables that generate sequences of words. For example

$$[\text{dairy},\text{dairy},\text{veggie},\text{meat},\text{veggie}]\;\;\overset{\text{generates}}{\longrightarrow}\;\;[\text{cheese},\text{butter},\text{lettuce},\text{chicken},\text{leek}]$$

The sequence of words on the right was obtained by sampling each word uniformly at random from the corresponding concept. For example, the first word was randomly chosen from the dairy concept (butter, cheese, cream, yogurt), and the last word was randomly chosen from the vegetable concept (potato, carrot, leek, lettuce). Sequences of words will be referred to as sentences.
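To make this generative step concrete, here is a minimal Python sketch of "concept sequence → sentence" sampling. The dairy and vegetable lists mirror the example above; the words filling out the meat concept are placeholders of our own, since the full vocabulary of Figure 1 is not reproduced here.

```python
import random

# Each concept is a set of interchangeable words (the meat entries beyond
# "chicken" and "beef" are placeholders; the full vocabulary is in Figure 1).
concepts = {
    "dairy":     ["butter", "cheese", "cream", "yogurt"],
    "vegetable": ["potato", "carrot", "leek", "lettuce"],
    "meat":      ["chicken", "beef", "pork", "lamb"],
}

def generate_sentence(concept_sequence):
    """Replace every concept in the latent sequence by a uniformly random word."""
    return [random.choice(concepts[c]) for c in concept_sequence]

c1_prime = ["dairy", "dairy", "vegetable", "meat", "vegetable"]
print(generate_sentence(c1_prime))
# e.g. ['cheese', 'butter', 'lettuce', 'chicken', 'leek']
```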

The non-standard aspect of our model comes from how we use the 'latent-variable $\to$ observed-datum' process to form a training distribution. The training set in Figure 1 is made of 15 sentences split into $R=3$ categories. The latent variables ${\bf c}'_1,{\bf c}'_2,{\bf c}'_3$ each generate a single sentence, whereas the latent variables ${\bf c}_1,{\bf c}_2,{\bf c}_3$ each generate 4 sentences. We will refer to ${\bf c}_1,{\bf c}_2,{\bf c}_3$ as the familiar sequences of concepts since they generate most of the sentences encountered in the training set. On the other hand ${\bf c}'_1,{\bf c}'_2,{\bf c}'_3$ will be called unfamiliar. Similarly, a sentence generated by a familiar (resp. unfamiliar) sequence of concepts will be called a familiar (resp. unfamiliar) sentence. The former represents a datum sampled from the head of a distribution while the latter represents a datum sampled from its tail. We denote by ${\bf x}_{r,s}$ the $s^{th}$ sentence of the $r^{th}$ category, indexed so that the first sentence of each category is an unfamiliar sentence and the remaining ones are familiar.

Suppose now that we have trained a learning algorithm on the training set described above and that at inference time we are presented with a previously unseen sentence generated by the unfamiliar sequence of concepts ${\bf c}'_1=[\text{dairy},\text{dairy},\text{veggie},\text{meat},\text{veggie}]$. To fix ideas, let's say that sentence is:

$${\bf x}^{\text{test}}=[\text{butter},\text{yogurt},\text{carrot},\text{beef},\text{lettuce}] \qquad (1)$$

This sentence is hard to classify since there is a single sentence in the training set that has been generated by the same sequence of concepts, namely

$${\bf x}_{1,1}=[\text{cheese},\text{butter},\text{lettuce},\text{chicken},\text{leek}]. \qquad (2)$$

Moreover these two sentences do not overlap at all (i.e., the $i^{th}$ word of ${\bf x}^{\text{test}}$ is different from the $i^{th}$ word of ${\bf x}_{1,1}$ for all $i$). To properly classify ${\bf x}^{\text{test}}$, the algorithm must have learned the equivalences butter $\leftrightarrow$ cheese, yogurt $\leftrightarrow$ butter, carrot $\leftrightarrow$ lettuce, and so forth. In other words, the algorithm must have learned the underlying concepts.

Nevertheless, a neural network with a well-chosen architecture can easily succeed at such a classification task. Consider, for example, the network depicted in Figure 2. Each word of the input sentence, after being encoded as a one-hot vector, goes through a multi-layer perceptron (MLP 1 on the figure) shared across words.

Figure 2: A simple neural net.

The output is then normalized using LayerNorm [5] to produce a representation of the word. The word representations are then concatenated into a single vector that goes through a second multi-layer perceptron (MLP 2 on the figure). This network, if properly trained, will learn to give similar representations to words that belong to the same concept. Therefore, if it correctly classifies the training point ${\bf x}_{1,1}$ given by (2), it will necessarily correctly classify the test point ${\bf x}^{\text{test}}$ given by (1). So the neural network is able to classify the previously unseen sentence ${\bf x}^{\text{test}}$ despite the fact that the training set contains a single example with the same underlying sequence of concepts. This comes from the fact that the neural network learns features and representations from the familiar part of the training set (generated by the head of the distribution), and uses these, at test time, to correctly classify the unfamiliar sentences (generated by the tail of the distribution). In other words, because it learns features, the neural network has no difficulty handling the long-tailed nature of the distribution.
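For concreteness, the following PyTorch sketch shows one way to realize the architecture of Figure 2: a shared word-level MLP, LayerNorm on each word representation, concatenation, and a sentence-level MLP. The hidden widths and activation choices are our own illustrative assumptions, not necessarily the configuration used in the experiments.

```python
import torch
import torch.nn as nn

class ConceptNet(nn.Module):
    """Word-level MLP shared across the L positions, LayerNorm on each word
    representation, concatenation, then a sentence-level MLP producing R logits."""
    def __init__(self, n_w, L, R, d_word=64, d_hidden=256):
        super().__init__()
        self.n_w = n_w
        self.mlp1 = nn.Sequential(                      # MLP 1, shared across words
            nn.Linear(n_w, d_word), nn.ReLU(), nn.Linear(d_word, d_word))
        self.norm = nn.LayerNorm(d_word)                # normalizes each word representation
        self.mlp2 = nn.Sequential(                      # MLP 2, acts on the concatenation
            nn.Linear(L * d_word, d_hidden), nn.ReLU(), nn.Linear(d_hidden, R))

    def forward(self, x):                               # x: (batch, L) integer word ids
        one_hot = nn.functional.one_hot(x, num_classes=self.n_w).float()
        words = self.norm(self.mlp1(one_hot))           # (batch, L, d_word) word representations
        return self.mlp2(words.flatten(start_dim=1))    # (batch, R) category scores

net = ConceptNet(n_w=12, L=5, R=3)
logits = net(torch.randint(0, 12, (8, 5)))              # 8 random length-5 sentences
```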

To summarize, the variables $L$, $n_w$, $n_c$, $R$ and $n_{\rm spl}$ parametrize instances of our data model. They denote, respectively, the length of the sentences, the number of words in the vocabulary, the number of concepts, the number of categories, and the number of samples per category. So in the example presented in Figure 1 we have $L=5$, $n_w=12$, $n_c=3$, $R=3$ and $n_{\rm spl}=5$ (four familiar sentences and one unfamiliar sentence per category). The vocabulary $\mathcal{V}$ and set of concepts $\mathcal{C}$ are discrete sets with $|\mathcal{V}|=n_w$ and $|\mathcal{C}|=n_c$, rendered as $\mathcal{V}=\{1,\ldots,n_w\}$ and $\mathcal{C}=\{1,\ldots,n_c\}$ for concreteness. A partition of the vocabulary into concepts, like the one depicted at the top of Figure 1, is encoded by a function $\varphi:\mathcal{V}\to\mathcal{C}$ that assigns words to concepts. We require that each concept contains the same number of words, so that $\varphi$ satisfies

$$|\varphi^{-1}(\{c\})|=|\{w\in\mathcal{V}:\varphi(w)=c\}|=n_w/n_c\qquad\text{for all }c\in\mathcal{C}, \qquad (3)$$

and we refer to such a function $\varphi:\mathcal{V}\to\mathcal{C}$ satisfying (3) as an equipartition of the vocabulary. The set

$$\Phi=\{\text{all functions }\varphi\text{ from }\mathcal{V}\text{ to }\mathcal{C}\text{ that satisfy (3)}\}$$

denotes the collection of all such equipartitions, while the data space and latent space are denoted

$$\mathcal{X}=\mathcal{V}^L\qquad\text{ and }\qquad\mathcal{Z}=\mathcal{C}^L,$$

respectively. Elements of $\mathcal{X}$ are sentences of $L$ words and they take the form ${\bf x}=[x_1,x_2,\ldots,x_L]$, while elements of $\mathcal{Z}$ take the form ${\bf c}=[c_1,c_2,\ldots,c_L]$ and correspond to sequences of concepts.

In the context of this work, a feature map refers to any function $\psi:\mathcal{X}\to\mathcal{F}$ from data space to feature space. The feature space $\mathcal{F}$ can be any Hilbert space (possibly infinite dimensional) and we denote by $\langle\cdot,\cdot\rangle_{\mathcal{F}}$ the associated inner product. Our analysis applies to the case in which a nearest neighbor classification rule is applied on top of the extracted features. Such a rule works as follows: given a test point ${\bf x}$, the inner products $\langle\psi({\bf x}),\psi({\bf y})\rangle_{\mathcal{F}}$ are evaluated for all ${\bf y}$ in the training set; the test point ${\bf x}$ is then given the label of the training point ${\bf y}$ that led to the highest inner product.
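The rule is a one-liner once the features are materialized. The sketch below is our own illustration and assumes, for simplicity, that the feature space is finite dimensional and that `psi` returns a NumPy vector.

```python
import numpy as np

def nearest_neighbor_predict(psi, x_test, train_sentences, train_labels):
    """Give the test point the label of the training point whose feature vector
    has the largest inner product with psi(x_test)."""
    feats = np.stack([psi(y) for y in train_sentences])   # (n_train, d)
    scores = feats @ psi(x_test)                           # inner products in feature space
    return train_labels[int(np.argmax(scores))]
```

Ties are broken arbitrarily by `argmax` here; the formal error definition in the next section instead counts ambiguous ties as failures.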

3 Statement and Discussion of Main Results

Our main result states that, in the context of our data model, features must be tailored (i.e., learned) to each specific task. Specifically, it is not possible to find a universal feature map $\psi:\mathcal{X}\to\mathcal{F}$ that performs well on a collection of tasks like the one depicted in Figure 1. In the context of this work, a task refers to a tuple

$$\mathcal{T}=(\;\varphi\;;\;{\bf c}_1,\ldots,{\bf c}_R\;;\;{\bf c}'_1,\ldots,{\bf c}'_R\;)\;\in\;\Phi\times\mathcal{Z}^{2R} \qquad (4)$$

that prescribes a partition of the vocabulary into concepts, $R$ familiar sequences of concepts, and $R$ unfamiliar sequences of concepts. Given such a task $\mathcal{T}$ we generate a training set $S$ as described in the previous section. This training set contains $R\times n_{\rm spl}$ sentences split over $R$ categories, and each category contains a single unfamiliar sentence. Randomly generating the training set $S$ from the task $\mathcal{T}$ corresponds to sampling $S\sim\mathcal{D}_{\mathcal{T}}^{\text{train}}$ from a distribution $\mathcal{D}_{\mathcal{T}}^{\text{train}}$ defined on the space $\mathcal{X}^{R\times n_{\rm spl}}$ and parametrized by the variables in (4) (the appendix provides an explicit formula for this distribution). We measure the performance of an algorithm by its ability to generalize on previously unseen unfamiliar sentences. Generating an unfamiliar sentence amounts to drawing a sample ${\bf x}\sim\mathcal{D}_{\mathcal{T}}^{\text{test}}$ from a distribution $\mathcal{D}_{\mathcal{T}}^{\text{test}}$ on the space $\mathcal{X}$ parametrized by the variables $\varphi,{\bf c}'_1,\ldots,{\bf c}'_R$ in (4) that determine unfamiliar sequences of concepts. Finally, associated with every task $\mathcal{T}$ we have a labelling function $f_{\mathcal{T}}:\mathcal{X}\to\{1,\ldots,R\}$ that assigns the label $r$ to sentences generated by either ${\bf c}_r$ or ${\bf c}'_r$ (this function is ill-defined if two sequences of concepts from different categories are identical, but this issue is easily resolved in the formal statements of the appendix). Summarizing our notations, for every task $\mathcal{T}\in\Phi\times\mathcal{Z}^{2R}$ we have a distribution $\mathcal{D}_{\mathcal{T}}^{\text{train}}$ on the space $\mathcal{X}^{R\times n_{\rm spl}}$, a distribution $\mathcal{D}_{\mathcal{T}}^{\text{test}}$ on the space $\mathcal{X}$, and a labelling function $f_{\mathcal{T}}$.

Given a feature space $\mathcal{F}$, a feature map $\psi:\mathcal{X}\to\mathcal{F}$, and a task $\mathcal{T}\in\Phi\times\mathcal{Z}^{2R}$, the expected generalization error of the nearest neighbor classification rule on unfamiliar sentences is given by:

$$\text{err}(\mathcal{F},\psi,\mathcal{T})=\underset{S\sim\mathcal{D}_{\mathcal{T}}^{\text{train}}}{\mathbb{E}}\left[\underset{{\bf x}\sim\mathcal{D}_{\mathcal{T}}^{\text{test}}}{\mathbb{P}}\left[f_{\mathcal{T}}\Big(\arg\max_{{\bf y}\in S}\langle\psi({\bf x}),\psi({\bf y})\rangle_{\mathcal{F}}\Big)\neq f_{\mathcal{T}}({\bf x})\right]\right]. \qquad (5)$$

For simplicity, if the test point has multiple nearest neighbors with inconsistent labels in the training set (and so the $\arg\max$ returns multiple training points ${\bf y}$), we will count the classification as a failure for the nearest neighbor classification rule. We therefore replace (5) by the more formal (but more cumbersome) formula

$$\text{err}(\mathcal{F},\psi,\mathcal{T})=\underset{S\sim\mathcal{D}_{\mathcal{T}}^{\text{train}}}{\mathbb{E}}\left[\underset{{\bf x}\sim\mathcal{D}_{\mathcal{T}}^{\text{test}}}{\mathbb{P}}\left[\exists\,{\bf y}\in\arg\max_{{\bf y}'\in S}\langle\psi({\bf x}),\psi({\bf y}')\rangle_{\mathcal{F}}\text{ such that }f_{\mathcal{T}}({\bf y})\neq f_{\mathcal{T}}({\bf x})\right]\right] \qquad (6)$$

to make this explicit. Our main theoretical results concern the performance of a learner not on a single task $\mathcal{T}$ but on a collection of tasks $\mathfrak{T}=\{\mathcal{T}_1,\mathcal{T}_2,\ldots,\mathcal{T}_{N_{\text{tasks}}}\}$, and so we define

$$\overline{\text{err}}(\mathcal{F},\psi,\mathfrak{T})=\frac{1}{|\mathfrak{T}|}\sum_{\mathcal{T}\in\mathfrak{T}}\text{err}(\mathcal{F},\psi,\mathcal{T}) \qquad (7)$$

as the expected generalization error on such a collection $\mathfrak{T}$ of tasks. As a task refers to an element of the discrete set $\Phi\times\mathcal{Z}^{2R}$, any subset $\mathfrak{T}\subset\Phi\times\mathcal{Z}^{2R}$ defines a collection of tasks. Our main result concerns the case where the collection of tasks $\mathfrak{T}=\Phi\times\mathcal{Z}^{2R}$ consists of all possible tasks that one might encounter. For concreteness, we choose specific values for the model parameters and state the following special case of our main theorem (Theorem 3 at the end of this section).

Theorem 1.

Let $L=9$, $n_w=150$, $n_c=5$, $R=1000$ and $n_{\rm spl}\geq 2$. Let $\mathfrak{T}=\Phi\times\mathcal{Z}^{2R}$. Then

$$\overline{\text{err}}(\mathcal{F},\psi,\mathfrak{T})>98.4\%$$

for all feature spaces $\mathcal{F}$, and all feature maps $\psi:\mathcal{X}\to\mathcal{F}$.

In other words, for the model parameters specified above, it is not possible to design a 'task-agnostic' feature map $\psi$ that works well if we are uniformly uncertain about which specific task we will face. Indeed, the best possible feature map will fail at least $98.4\%$ of the time at classifying unfamiliar sentences (with a nearest-neighbor classification rule), where the probability is with respect to the random choices of the task, of the training set, and of the unfamiliar test sentence.

Interpretation: Our desire to understand learning demands that we consider a collection of tasks rather than a single one, for if we consider only a single task then the problem, in our setting, becomes trivial. Indeed, assume $\mathfrak{T}=\{\mathcal{T}_1\}$ with $\mathcal{T}_1=(\varphi;{\bf c}_1,\ldots,{\bf c}_R;{\bf c}'_1,\ldots,{\bf c}'_R)$ consists only of a single task. With knowledge of this task we can easily construct a feature map $\psi:\mathcal{X}\to\mathbb{R}^{Ln_c}$ that performs perfectly. Indeed, the map

$$\psi([x_1,\ldots,x_L])=[{\bf e}_{\varphi(x_1)},\ldots,{\bf e}_{\varphi(x_L)}] \qquad (8)$$

that simply 'replaces' each word $x_\ell$ of the input sentence by the one-hot encoding ${\bf e}_{\varphi(x_\ell)}$ of its corresponding concept will do. A bit of thinking reveals that the nearest neighbor classification rule associated with feature map (8) perfectly solves the task $\mathcal{T}_1$. This is due to the fact that sentences generated by the same sequence of concepts are mapped by $\psi$ to the exact same location in feature space. As a consequence, the nearest neighbor classification rule will match the unfamiliar test sentence ${\bf x}$ to the unique training sentence ${\bf y}$ that occupies the same location in feature space, and this training sentence has the correct label by construction (assuming that sequences of concepts from different categories are distinct). To put it formally:

Theorem 2.

Given a task $\mathcal{T}\in\Phi\times\mathcal{Z}^{2R}$ satisfying ${\bf c}'_r\neq{\bf c}'_s$ and ${\bf c}'_r\neq{\bf c}_s$ for all $r\neq s$, there exists a feature space $\mathcal{F}$ and a feature map $\psi:\mathcal{X}\to\mathcal{F}$ such that $\text{err}(\mathcal{F},\psi,\mathcal{T})=0$.
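The feature map (8) is straightforward to realize in code once the partition $\varphi$ is known. The following NumPy sketch is our own illustration, with $\varphi$ represented as a plain dictionary.

```python
import numpy as np

def psi_known_task(sentence, phi, n_c):
    """Feature map (8): replace each word by the one-hot encoding of its concept.
    `phi` is the known partition, represented here as a dict word -> concept id."""
    out = np.zeros((len(sentence), n_c))
    for ell, word in enumerate(sentence):
        out[ell, phi[word]] = 1.0
    return out.flatten()                      # a vector of dimension L * n_c
```

Two sentences generated by the same sequence of concepts are mapped to exactly the same vector, so the nearest neighbor rule of Section 2 recovers the label of the matching training sentence, which is the content of Theorem 2.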

Consider now the case where $\mathfrak{T}=\{\mathcal{T}_1,\mathcal{T}_2\}$ consists of two tasks. According to Theorem 2 there exists a $\psi$ that perfectly solves $\mathcal{T}_1$, but this $\psi$ might perform poorly on $\mathcal{T}_2$, and vice versa. So, it might not be possible to design good features if we do not know a priori which of these tasks we will face. Theorem 1 states that, in the extreme case where $\mathfrak{T}$ contains all possible tasks, this is indeed the case: the best possible 'task-agnostic' features $\psi$ will perform catastrophically on average. In other words, features must be task-dependent in order to succeed.

To draw a very approximate analogy, imagine once again that $\mathfrak{T}=\{\mathcal{T}_1\}$ and that $\mathcal{T}_1$ represents, say, a hand-written digit classification task. A practitioner, after years of experience, could hand-craft a very good feature map $\psi$ that performs almost perfectly for this task. If we then imagine the case $\mathfrak{T}=\{\mathcal{T}_1,\mathcal{T}_2\}$ where $\mathcal{T}_1$ represents a hand-written digit classification task and $\mathcal{T}_2$ represents, say, an animal classification task, then it becomes more difficult for a practitioner to hand-craft a feature map $\psi$ that works well for both tasks. In this analogy, the size of the set $\mathfrak{T}$ encodes the amount of knowledge the practitioner has about the specific tasks she will face. The extreme choice $\mathfrak{T}=\Phi\times\mathcal{Z}^{2R}$ corresponds to the practitioner knowing nothing beyond the fact that natural images are made of patches. Theorem 1 quantifies, in this extreme case, the impossibility of hand-crafting a feature map $\psi$ knowing only the range of possible tasks and not the specific task itself. In a realistic setting the collection of tasks $\mathfrak{T}$ is smaller, of course, and the data generative process itself is more coherent than in our simplified setup. Nonetheless, we hope our analysis sheds some light on some of the essential limitations of algorithms that do not learn features.

Finally, our empirical results (see Section 6) show that a simple algorithm that learns features does not face this obstacle. We do not need knowledge of the specific task $\mathcal{T}$ in order to design a good neural network architecture, but only of the family of tasks $\mathfrak{T}=\Phi\times\mathcal{Z}^{2R}$ that we will face. Indeed, the architecture in Figure 2 succeeds at classifying unfamiliar test sentences more than $99\%$ of the time. This probability, which we empirically evaluate, is with respect to the choice of the task, the choice of the training set, and the choice of the unfamiliar test sentence (we use the values of $L$, $n_w$, $n_c$ and $R$ from Theorem 1, and $n_{\rm spl}=6$, for this experiment). Continuing with our approximate analogy, this means our hypothetical practitioner needs no domain specific knowledge beyond the patch structure of natural images when designing a successful architecture. In sum, successful feature design requires task-specific knowledge while successful architecture design requires only knowledge of the task family.

Main Theorem: Our main theoretical result extends Theorem 1 to arbitrary values of $L$, $n_w$, $n_c$, $n_{\rm spl}$ and $R$. The resulting formula involves various combinatorial quantities. We denote by ${n\choose k}$ the binomial coefficients and by $\genfrac{\{}{\}}{0pt}{}{n}{k}$ the Stirling numbers of the second kind. Let $\mathbb{N}=\{0,1,2,\ldots\}$ and let $\gamma,\hat{\gamma}:\mathbb{N}^{L+1}\to\mathbb{N}$ be the functions defined by $\gamma({\bf k}):=\sum_{i=1}^{L+1}(i-1)k_i$ and $\hat{\gamma}({\bf k}):=\sum_{i=1}^{L+1}ik_i$, respectively. We then define, for $0\leq\ell\leq L$, the sets

$$\mathcal{S}_\ell:=\left\{{\bf k}\in\mathbb{N}^{L+1}:\;\;\hat{\gamma}({\bf k})=n_w\quad\text{ and }\quad\ell\leq\gamma({\bf k})\leq L\right\}.$$

We let $\mathcal{S}=\mathcal{S}_0$, and we note that the inclusion $\mathcal{S}_\ell\subset\mathcal{S}$ always holds. Given ${\bf k}\in\mathbb{N}^{L+1}$ we denote by

$$\mathcal{A}_{\bf k}:=\Big\{A\in\mathbb{N}^{(L+1)\times n_c}:\;\;\sum_{i=1}^{L+1}iA_{ij}=n_w/n_c\;\;\text{for all }j\;\;\text{ and }\;\;\sum_{j=1}^{n_c}A_{ij}=k_i\;\;\text{for all }i\Big\}$$

the set of ${\bf k}$-admissible matrices. Finally, we let $\mathfrak{f},\mathfrak{g}:\mathcal{S}\to\mathbb{R}$ be the functions defined by

$$\mathfrak{f}({\bf k}):=\Big((n_w/n_c)!\Big)^{n_c}\frac{n_c^L}{n_w!}\sum_{A\in\mathcal{A}_{\bf k}}\left(\prod_{i=1}^{L+1}\frac{k_i!}{A_{i,1}!\;A_{i,2}!\;\cdots\;A_{i,n_c}!}\right)\qquad\text{and}$$
$$\mathfrak{g}({\bf k}):=\frac{\gamma({\bf k})!}{n_w^{2L}}\left(\frac{n_w!}{k_1!k_2!\cdots k_{L+1}!}\prod_{i=2}^{L+1}\left(\frac{i^{(i-2)}}{i!}\right)^{k_i}\right)\left(\sum_{i=\gamma({\bf k})}^{L}{L\choose i}\genfrac{\{}{\}}{0pt}{}{i}{\gamma({\bf k})}\;2^i\,n_w^{L-i}\right),$$

respectively. With these definitions at hand, we may now state our main theorem.

Theorem 3 (Main Theorem).

Let $\mathfrak{T}=\Phi\times\mathcal{Z}^{2R}$. Then

$$\overline{\text{err}}(\mathcal{F},\psi,\mathfrak{T})\geq\left(\sum_{{\bf k}\in\mathcal{S}_\ell}\mathfrak{f}({\bf k})\,\mathfrak{g}({\bf k})\right)-\frac{1}{R}\left(1+\frac{1}{2}\max_{{\bf k}\in\mathcal{S}_\ell}\;\mathfrak{f}({\bf k})\right) \qquad (9)$$

for all feature spaces $\mathcal{F}$, all feature maps $\psi:\mathcal{X}\to\mathcal{F}$, and all $0\leq\ell\leq L$.

The combinatorial quantities involved appear a bit daunting at first glance, but, within the context of the proof, they all take on a quite intuitive meaning. The heart of the proof involves the analysis of a measure of concentration that we call the permuted moment, and of an associated graph-cut problem. The combinatorial quantities arise quite naturally in the course of analyzing the graph-cut problem. We provide a quick overview of the proof in Section 5, and refer to the appendix for full details. For now, it suffices to note that we have a formula (i.e., the right hand side of (9)) that can be exactly evaluated with a few lines of code. This formula provides a relatively tight lower bound for the generalization error. Theorem 1 is then a direct consequence: plugging $L=9$, $n_w=150$, $n_c=5$, $R=1000$ and $\ell=7$ into the right hand side of (9) gives the claimed $98.4\%$ lower bound.
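To illustrate how the right hand side of (9) can be evaluated by direct enumeration, here is one possible Python transcription. It enumerates $\mathcal{S}_\ell$ and the admissible matrices explicitly, which is feasible because the constraint $\gamma({\bf k})\leq L$ keeps both sets small. This is our own unverified transcription of the formulas above, not the authors' code, and should be read as a sketch.

```python
import math
from itertools import product
from functools import lru_cache

L, n_w, n_c, R, ell = 9, 150, 5, 1000, 7      # parameters of Theorem 1, with ell = 7
s_c = n_w // n_c                              # number of words per concept

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling numbers of the second kind."""
    if n == 0 and k == 0: return 1
    if n == 0 or k == 0: return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def gamma(k):     return sum((i - 1) * k[i - 1] for i in range(1, L + 2))
def gamma_hat(k): return sum(i * k[i - 1] for i in range(1, L + 2))

def S(ell):
    """Enumerate S_ell: since gamma(k) <= L, only k_2, ..., k_{L+1} vary and k_1 is forced."""
    out, tails = [], []
    def rec(i, tail, g):
        if i == L + 2:
            tails.append(tuple(tail)); return
        for ki in range((L - g) // (i - 1) + 1):
            rec(i + 1, tail + [ki], g + (i - 1) * ki)
    rec(2, [], 0)
    for tail in tails:
        k1 = n_w - sum(i * tail[i - 2] for i in range(2, L + 2))
        k = (k1,) + tail
        if k1 >= 0 and ell <= gamma(k) <= L:
            out.append(k)
    return out

def compositions(n, parts):
    """All ways of writing n as an ordered sum of `parts` nonnegative integers."""
    if parts == 1:
        yield (n,); return
    for first in range(n + 1):
        for rest in compositions(n - first, parts - 1):
            yield (first,) + rest

def f(k):
    """f(k): sum over k-admissible matrices A; row 1 of A is determined by rows 2..L+1."""
    total = 0
    for rows in product(*[list(compositions(k[i - 1], n_c)) for i in range(2, L + 2)]):
        A1 = [s_c - sum(i * rows[i - 2][j] for i in range(2, L + 2)) for j in range(n_c)]
        if min(A1) < 0:
            continue
        term = math.factorial(k[0]) // math.prod(math.factorial(a) for a in A1)
        for i in range(2, L + 2):
            term *= math.factorial(k[i - 1]) // math.prod(math.factorial(a) for a in rows[i - 2])
        total += term
    return math.factorial(s_c) ** n_c * n_c ** L / math.factorial(n_w) * total

def g(k):
    gk = gamma(k)
    multinom = math.factorial(n_w) / math.prod(math.factorial(ki) for ki in k)
    trees = math.prod((i ** (i - 2) / math.factorial(i)) ** k[i - 1] for i in range(2, L + 2))
    tail = sum(math.comb(L, i) * stirling2(i, gk) * 2 ** i * n_w ** (L - i) for i in range(gk, L + 1))
    return math.factorial(gk) / n_w ** (2 * L) * multinom * trees * tail

ks = S(ell)
assert all(gamma_hat(k) == n_w for k in ks)
bound = sum(f(k) * g(k) for k in ks) - (1 + 0.5 * max(f(k) for k in ks)) / R
print(bound)   # intended to reproduce the 98.4% bound of Theorem 1 (not verified here)
```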

4 Multiple Unfamiliar Sentences per Category

The two previous sections were concerned with the case in which each unfamiliar sequence of concepts has a single representative in the training set. In this section we consider the more general case in which each unfamiliar sequence of concepts has $n^*$ representatives in the training set (see Figure 3). Using a simple union bound, inequality (9) easily extends to this situation; the resulting formula is a bit cumbersome so we present it in the appendix (see Theorem 7). In the concrete case where $L=9$, $n_w=150$, $n_c=5$, $R=1000$ this formula simplifies to

$$\overline{\text{err}}(\mathcal{F},\psi,\mathfrak{T})\geq 1-0.015\,n^*-1/R\qquad\text{for all }\mathcal{F}\text{ and all }\psi, \qquad (10)$$

therefore exhibiting an affine relationship between the error rate and the number $n^*$ of unfamiliar sentences per category.

Figure 3: More general version of our data model with multiple unfamiliar sentences per category. The model parameters in this example are $L=5$, $n_w=12$, $n_c=3$, $R=3$, $n_{\rm spl}=6$ and $n^*=2$.

Note that choosing $n^*=1$ in (10) leads to a $98.4\%$ lower bound on the error rate, therefore recovering the result from Theorem 1. This lower bound then decreases by $1.5\%$ with each additional unfamiliar sentence per category in the training set.

We would like to emphasize once more the importance of non-asymptotic analysis in the long-tailed learning setting. For example, in inequality (10), the difficulty lies in obtaining a value as small as possible for the coefficient in front of $n^*$. We accomplish this via a careful analysis of the graph-cut problem associated with our data model.

5 Proof Outline — Permuted Moment and Optimal Feature Map

The proof involves two main ingredients. First, the key insight of our analysis is the realization that generalization in our data model is closely tied to the permuted moment of a probability distribution. To state this central concept, it will prove convenient to think of probability distributions on $\mathcal{X}$ as vectors ${\bf p}\in\mathbb{R}^N$ with $N=|\mathcal{X}|$, together with indices $0\leq i\leq N-1$ given by some arbitrary (but fixed) indexing of the elements of data space. Then $p_i$ denotes the probability of the $i^{\rm th}$ element of $\mathcal{X}$ in this indexing. We use $S_N$ to denote the set of permutations of $\{0,1,\ldots,N-1\}$ and $\sigma\in S_N$ to refer to a particular permutation. The $t^{th}$ permuted moment of the probability vector ${\bf p}\in\mathbb{R}^N$ is

$$\mathcal{H}_t({\bf p})\;:=\;\max_{\sigma\in S_N}\;\sum_{i=0}^{N-1}\left(i/N\right)^t\,p_{\sigma(i)} \qquad (11)$$

Since (11) involves a maximum over all possible permutations, the definition clearly does not depend on the way the set $\mathcal{X}$ was indexed. In order to maximize the sum, the permutation $\sigma$ must match the largest values of $p_i$ with the largest values of $(i/N)^t$, so the maximizing permutation simply orders the entries $p_i$ from smallest to largest. A very peaked distribution that gives large probability to only a handful of elements of $\mathcal{X}$ will have a large permuted moment. Because of this, the permuted moment is akin to the negative entropy; it has large values for delta-like distributions and small values for uniform ones. From definition (11) it is clear that $0\leq\mathcal{H}_t({\bf p})\leq 1$ for all probability vectors ${\bf p}$, and it is easily verified that the permuted moment is convex. These properties, as well as various useful bounds for the permuted moment, are presented and proven in the appendix.
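Since the maximizing permutation simply sorts the entries, the permuted moment amounts to a sort followed by a dot product. A minimal NumPy sketch (our own illustration, not code from the paper):

```python
import numpy as np

def permuted_moment(p, t):
    """t-th permuted moment (11): sort the entries of p in increasing order and
    take the dot product with the weights (i/N)^t, i = 0, ..., N-1."""
    p = np.sort(np.asarray(p, dtype=float))       # ascending order
    N = len(p)
    return float(np.dot((np.arange(N) / N) ** t, p))

uniform = np.full(100, 1 / 100)
peaked = np.zeros(100); peaked[0] = 1.0
print(permuted_moment(uniform, 3), permuted_moment(peaked, 3))   # ~0.245 vs ~0.970
```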

Second, we identify a specific feature map, $\psi^\star:\mathcal{X}\to\mathcal{F}^\star$, which is optimal for a collection of tasks closely related to the ones considered in our data model. Leveraging the optimality of $\psi^\star$ on these related tasks allows us to derive an error bound that holds for the tasks of interest. The feature map $\psi^\star$ is better understood through its associated kernel, which is given by the formula

$$K^\star({\bf x},{\bf y})=\langle\psi^\star({\bf x}),\psi^\star({\bf y})\rangle_{\mathcal{F}^\star}=\frac{n_c^L}{n_w^L}\;\;\frac{\big|\{\varphi\in\Phi:\varphi(x_\ell)=\varphi(y_\ell)\text{ for all }1\leq\ell\leq L\}\big|}{|\Phi|}. \qquad (12)$$

Up to normalization, $K^\star({\bf x},{\bf y})$ simply counts the number of equipartitions of the vocabulary for which sentences ${\bf x}$ and ${\bf y}$ have the same underlying sequence of concepts. Intuitively this makes sense, for the best possible kernel must leverage the only information we have at hand. We know the general structure of the problem (words are partitioned into concepts) but not the partition itself. So to try and determine if sentences $({\bf x},{\bf y})$ were generated by the same sequence of concepts, the best we can do is to simply try all possible equipartitions of the vocabulary and count how many of them wind up generating $({\bf x},{\bf y})$ from the same underlying sequence of concepts. A high count makes it more likely that $({\bf x},{\bf y})$ were generated by the same sequence of concepts. The optimal kernel $K^\star$ does exactly this, and provides a good (actually optimal, see the appendix) measure of similarity between pairs of sentences.
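Enumerating all of $\Phi$ is expensive for realistic vocabulary sizes, but the counting interpretation of (12) suggests a simple Monte Carlo approximation over random equipartitions. The sketch below is our own illustration (sentences are sequences of integer word ids); it estimates, rather than exactly computes, $K^\star$.

```python
import random

def random_equipartition(n_w, n_c):
    """A uniformly random equipartition: words 0..n_w-1 -> concepts 0..n_c-1,
    with exactly n_w/n_c words per concept."""
    words = list(range(n_w))
    random.shuffle(words)
    size = n_w // n_c
    return {w: c for c in range(n_c) for w in words[c * size:(c + 1) * size]}

def kernel_star_mc(x, y, n_w, n_c, n_samples=20_000):
    """Monte Carlo estimate of K*(x, y) in (12): the fraction of sampled
    equipartitions under which x and y share the same sequence of concepts,
    rescaled by (n_c / n_w)^L."""
    hits = 0
    for _ in range(n_samples):
        phi = random_equipartition(n_w, n_c)
        hits += all(phi[a] == phi[b] for a, b in zip(x, y))
    return (n_c / n_w) ** len(x) * hits / n_samples
```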

For fixed ${\bf x}\in\mathcal{X}$, the function ${\bf y}\mapsto K^\star({\bf x},{\bf y})$ defines a probability distribution on data space. The connection between generalization error, permuted moment, and optimal feature map comes from the fact that

$$\sup_{\mathcal{F},\psi}\left[1-\overline{\text{err}}(\mathcal{F},\psi,\mathfrak{T})\right]\leq\frac{1}{|\mathcal{X}|}\sum_{{\bf x}\in\mathcal{X}}\mathcal{H}_{2R-1}\left(K^\star({\bf x},\,\cdot)\right)+\frac{1}{R}, \qquad (13)$$

and so, up to a small error $1/R$, it is the permuted moments of $K^\star$ that determine the success rate. We then obtain the lower bound (9) by studying these moments in great detail. A simple union bound is then used to obtain inequalities such as (10).

6 Empirical Results

We conclude by presenting empirical results that complement our theoretical findings. The full details of these experiments (training procedure, hyperparameter choices, number of experiments run to estimate the success rates, and standard deviations of these success rates), as well as additional experiments, can be found in Appendix E. Code is available at https://anonymous.4open.science/r/Long_Tailed_Learning_Requires_Feature_Learning-17C4.

Parameter Settings. We consider five parameter settings for the data model depicted in Figure 3. Each setting corresponds to a column in Table 1. In all five settings, we set the parameters $L=9$, $n_w=150$, $n_c=5$ and $R=1000$ to the values for which the error bound (10) holds. We choose values for the parameters $n_{\rm spl}$ and $n^*$ so that the $i^{th}$ column of the table corresponds to a setting in which the training set contains 5 familiar and $i$ unfamiliar sentences per category. Recall that $n_{\rm spl}$ is the total number of samples per category in the training set. So the first column of the table corresponds to a setting in which each category contains 5 familiar sentences and 1 unfamiliar sentence, whereas the last column corresponds to a setting in which each category contains 5 familiar sentences and 5 unfamiliar sentences.

Algorithms. We evaluate empirically seven different algorithms. The first two rows of the table correspond to experiments in which the neural network in Figure 2 is trained with SGD and a constant learning rate. At test time, we consider two different strategies to classify test sentences. The first row of the table considers the usual situation in which the trained neural network is used to classify test points. The second row considers the situation in which the trained neural network is only used to extract features (i.e., the concatenated word representations right before MLP 2). The classification of test points is then accomplished by running a nearest neighbor classifier on these learned features. The third (resp. sixth) row of the table shows the results obtained when running a nearest neighbor algorithm (resp. SVM) on the features $\psi^\star$ of the optimal feature map. By the kernel trick, these algorithms only require the values of the optimal kernel $\langle\psi^\star({\bf x}),\psi^\star({\bf y})\rangle_{\mathcal{F}^\star}$, computed via (12), and not the features $\psi^\star$ themselves. The fourth (resp. seventh) row shows results obtained when running a nearest neighbor algorithm (resp. SVM) on features extracted by the simplest possible feature map, that is

$$\psi_{\rm one\text{-}hot}([x_1,\ldots,x_L])=[{\bf e}_{x_1},\ldots,{\bf e}_{x_L}]$$

where ${\bf e}_{x_\ell}$ denotes the one-hot encoding of the $\ell^{th}$ word of the input sentence. Finally, the last row considers an SVM with Gaussian kernel (also called RBF kernel).

Table 1: Success rate on unfamiliar test sentences.
| Method | $n^*\!=\!1$, $n_{\rm spl}\!=\!6$ | $n^*\!=\!2$, $n_{\rm spl}\!=\!7$ | $n^*\!=\!3$, $n_{\rm spl}\!=\!8$ | $n^*\!=\!4$, $n_{\rm spl}\!=\!9$ | $n^*\!=\!5$, $n_{\rm spl}\!=\!10$ |
|---|---|---|---|---|---|
| Neural network in Figure 2 | 99.8% | 99.9% | 99.9% | 99.9% | 100% |
| Nearest neighb. on features learned by neural net | 99.9% | 99.9% | 99.9% | 99.9% | 99.9% |
| Nearest neighb. on features extracted by $\psi^\star$ | 0.7% | 1.1% | 1.5% | 1.8% | 2.2% |
| Nearest neighb. on features extracted by $\psi_{\rm one\text{-}hot}$ | 0.6% | 1.1% | 1.4% | 1.7% | 2.1% |
| Theoretical upper bound ($0.015\,n^*+1/1000$) | 1.6% | 3.1% | 4.6% | 6.1% | 7.6% |
| SVM on features extracted by $\psi^\star$ | 0.6% | 1.5% | 2.2% | 3.2% | 4.2% |
| SVM on features extracted by $\psi_{\rm one\text{-}hot}$ | 0.5% | 1.1% | 1.9% | 2.8% | 3.8% |
| SVM with Gaussian kernel | 0.6% | 1.1% | 2.0% | 2.8% | 3.6% |

Results. The first two rows of the table correspond to algorithms that learn features from the data; the remaining rows correspond to algorithms that use a pre-determined (not learned) feature map. Table 1 reports the success rate of each algorithm on unfamiliar test sentences. A crystal-clear pattern emerges. Algorithms that learn features generalize almost perfectly, while algorithms that do not learn features fail catastrophically. Moreover, the specific classification rule matters little. For example, replacing MLP 2 by a nearest neighbor classifier on top of the features learned by the neural network leads to equally accurate results. Similarly, replacing the nearest neighbor classifier by an SVM on top of the features extracted by $\psi^\star$ or $\psi_{\rm one\text{-}hot}$ leads to almost equally poor results. The only thing that matters is whether or not the features are learned. Finally, inequality (10) gives an upper bound of $0.015\,n^*+1/1000$ on the success rate of the nearest neighbor classification rule applied on top of any possible feature map (including $\psi^\star$ and $\psi_{\rm one\text{-}hot}$). The fifth row of Table 1 compares this bound against the empirical accuracy obtained with $\psi^\star$ and $\psi_{\rm one\text{-}hot}$, and the comparison shows that our theoretical upper bound is relatively tight.

When $n^*=1$ our main theorem states that no feature map can succeed more than $1.6\%$ of the time on unfamiliar test sentences (fifth row of the table). At first glance this appears to contradict the empirical performance of the feature map extracted by the neural network, which succeeds $99\%$ of the time (second row of the table). The resolution of this apparent contradiction lies in the order of operations. The point here is to separate hand-crafted or fixed features from learned features via the order of operations. If we choose the feature map before the random selection of the task then the algorithm performs poorly since it uses unlearned, task-independent features. By contrast, the neural network learns a feature map from the training set, and since the training set is generated by the task, this process takes place after the random selection of the task. It therefore uses task-dependent features, and the network performs almost perfectly for the specific task that generated its training set. But by our main theorem, it too must fail if the task changes but the features do not.

Acknowledgements

Xavier Bresson is supported by NRF Fellowship NRFF2017-10 and NUS-R-252-000-B97-133.

References

  • [1] Zeyuan Allen-Zhu and Yuanzhi Li. What can ResNet learn efficiently, going beyond kernels? Advances in Neural Information Processing Systems, 32, 2019.
  • [2] Zeyuan Allen-Zhu and Yuanzhi Li. Backward feature correction: How deep learning performs deep learning. arXiv preprint arXiv:2001.04413, 2020.
  • [3] Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via over-parameterization. In International Conference on Machine Learning, pages 242–252. PMLR, 2019.
  • [4] Sanjeev Arora, Rong Ge, Behnam Neyshabur, and Yi Zhang. Stronger generalization bounds for deep nets via a compression approach. In International Conference on Machine Learning, pages 254–263. PMLR, 2018.
  • [5] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
  • [6] Peter L Bartlett, Dylan J Foster, and Matus J Telgarsky. Spectrally-normalized margin bounds for neural networks. Advances in neural information processing systems, 30, 2017.
  • [7] Gavin Brown, Mark Bun, Vitaly Feldman, Adam Smith, and Kunal Talwar. When is memorization of irrelevant training data necessary for high-accuracy learning? In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, pages 123–132, 2021.
  • [8] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3):1–27, 2011.
  • [9] Simon S Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably optimizes over-parameterized neural networks. arXiv preprint arXiv:1810.02054, 2018.
  • [10] Vitaly Feldman. Does learning require memorization? a short tale about a long tail. In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, pages 954–959, 2020.
  • [11] Vitaly Feldman and Chiyuan Zhang. What neural networks memorize and why: Discovering the long tail via influence estimation. Advances in Neural Information Processing Systems, 33:2881–2891, 2020.
  • [12] Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, and Andrea Montanari. Limitations of lazy training of two-layers neural network. Advances in Neural Information Processing Systems, 32, 2019.
  • [13] Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, and Andrea Montanari. When do neural networks outperform kernel methods? Advances in Neural Information Processing Systems, 33:14820–14830, 2020.
  • [14] Noah Golowich, Alexander Rakhlin, and Ohad Shamir. Size-independent sample complexity of neural networks. In Conference On Learning Theory, pages 297–299. PMLR, 2018.
  • [15] Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization in neural networks. Advances in neural information processing systems, 31, 2018.
  • [16] Ziwei Ji and Matus Telgarsky. Polylogarithmic width suffices for gradient descent to achieve arbitrarily small test error with shallow ReLU networks. arXiv preprint arXiv:1909.12292, 2019.
  • [17] Stefani Karp, Ezra Winston, Yuanzhi Li, and Aarti Singh. Local signal adaptivity: Provable feature learning in neural networks beyond kernels. Advances in Neural Information Processing Systems, 34, 2021.
  • [18] Yuanzhi Li, Tengyu Ma, and Hongyang R Zhang. Learning over-parametrized two-layer neural networks beyond NTK. In Conference on Learning Theory, pages 2613–2682. PMLR, 2020.
  • [19] Ziwei Liu, Zhongqi Miao, Xiaohang Zhan, Jiayun Wang, Boqing Gong, and Stella X Yu. Large-scale long-tailed recognition in an open world. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2537–2546, 2019.
  • [20] Eran Malach, Pritish Kamath, Emmanuel Abbe, and Nathan Srebro. Quantifying the benefit of using differentiable learning over tangent kernels. In International Conference on Machine Learning, pages 7379–7389. PMLR, 2021.
  • [21] Behnam Neyshabur, Srinadh Bhojanapalli, and Nathan Srebro. A PAC-Bayesian approach to spectrally-normalized margin bounds for neural networks. arXiv preprint arXiv:1707.09564, 2017.
  • [22] Behnam Neyshabur, Zhiyuan Li, Srinadh Bhojanapalli, Yann LeCun, and Nathan Srebro. Towards understanding the role of over-parametrization in generalization of neural networks. arXiv preprint arXiv:1805.12076, 2018.
  • [23] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
  • [24] Maria Refinetti, Sebastian Goldt, Florent Krzakala, and Lenka Zdeborová. Classifying high-dimensional Gaussian mixtures: Where kernel methods fail and neural networks succeed. In International Conference on Machine Learning, pages 8936–8947. PMLR, 2021.
  • [25] Ruslan Salakhutdinov, Antonio Torralba, and Josh Tenenbaum. Learning to share visual appearance for multiclass object detection. In CVPR 2011, pages 1481–1488. IEEE, 2011.
  • [26] Colin Wei, Jason D Lee, Qiang Liu, and Tengyu Ma. Regularization matters: Generalization and optimization of neural nets vs their induced kernel. Advances in Neural Information Processing Systems, 32, 2019.
  • [27] Gilad Yehudai and Ohad Shamir. On the power and limitations of random features for understanding neural networks. Advances in Neural Information Processing Systems, 32, 2019.
  • [28] Xiangxin Zhu, Dragomir Anguelov, and Deva Ramanan. Capturing long-tail distributions of object subcategories. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 915–922, 2014.

Appendix

In Section A we prove a few elementary properties of the permuted moment (11). Section B is devoted to the proof of inequality (13), which we restate here for convenience:

$$\sup_{\mathcal{F},\psi}\left[1-\overline{\text{err}}(\mathcal{F},\psi,\mathfrak{T})\right]\leq\frac{1}{|\mathcal{X}|}\sum_{{\bf x}\in\mathcal{X}}\mathcal{H}_{2R-1}\left(K^\star({\bf x},\,\cdot)\right)+\frac{1}{R} \qquad (14)$$

where the collection of tasks $\mathfrak{T}=\Phi\times\mathcal{Z}^{2R}$ consists of all possible tasks that one might encounter. Inequality (14) plays a central role in our work as it establishes the connection between the generalization error, the permuted moment, and the optimal kernel $K^\star$ defined by (12). The proof is non-technical and easily accessible. In Section C we provide the following upper bound on the permuted moment of the optimal kernel:

$$\frac{1}{|\mathcal{X}|}\sum_{{\bf x}\in\mathcal{X}}\mathcal{H}_{2R-1}\left(K^\star({\bf x},\,\cdot)\right)\leq\left(1-\sum_{{\bf k}\in\mathcal{S}_\ell}\mathfrak{f}({\bf k})\,\mathfrak{g}({\bf k})\right)+\frac{1}{2R}\left(\max_{{\bf k}\in\mathcal{S}_\ell}\;\mathfrak{f}({\bf k})\right) \qquad (15)$$

for all $0\leq\ell\leq L$. The proof is combinatorial in nature, and involves the analysis of a graph-cut problem. Combining (14) and (15) establishes Theorem 3. In Section D we consider the case in which each unfamiliar sequence of concepts has $n^*$ representatives in the training set. A simple union bound shows that, in this situation, inequality (14) becomes

$$\sup_{\mathcal{F},\psi}\left[1-\overline{\text{err}}(\mathcal{F},\psi,\mathfrak{T})\right]\leq\frac{n^*}{|\mathcal{X}|}\sum_{{\bf x}\in\mathcal{X}}\mathcal{H}_{2R-1}\left(K^\star({\bf x},\,\cdot)\right)+\frac{1}{R} \qquad (16)$$

Combining (16) and (15) then provides our most general error bound, see Theorem 7. Inequality (10) in the main body of the paper is just a special case of Theorem 7. Finally, in Section E, we provide the full details of the experiments.

Appendix A Properties of the Permuted Moment

The permuted moment, in Section 5, was defined for probability vectors only. It will prove convenient to consider the permuted moment of nonnegative vectors as well. We denote by $\mathbb{R}_+=[0,+\infty)$ the nonnegative real numbers, and by $\mathbb{R}_+^N$ the vectors with $N$ nonnegative real entries indexed from $i=0$ to $i=N-1$. The permuted moment of ${\bf u}\in\mathbb{R}_+^N$ is then given by

$$\mathcal{H}_t({\bf u})\;:=\;\max_{\sigma\in S_N}\sum_{i=0}^{N-1}\left(i/N\right)^t\,u_{\sigma(i)}. \qquad (17)$$

where $S_N$ denotes the set of permutations of $\{0,1,\ldots,N-1\}$. The concept of an ordering permutation will prove useful in the next lemma.

Definition 1.

$\sigma\in S_N$ is said to be an ordering permutation of ${\bf u}\in\mathbb{R}^N$ if

$$u_{\sigma(0)}\leq u_{\sigma(1)}\leq\ldots\leq u_{\sigma(N-1)}. \qquad (18)$$

The lemma below shows that the permutation maximizing (17) is the one that sorts the entries $u_i$ from smallest to largest.

Lemma 1.

Let ${\bf u}\in\mathbb{R}_+^N$ and let $\sigma^*$ be an ordering permutation of ${\bf u}$. Then

$$\sigma^*\;\in\;\arg\max_{\sigma\in S_N}\;\sum_{i=0}^{N-1}\left(i/N\right)^t\,u_{\sigma(i)}. \qquad (19)$$
Proof.

The optimization problem (19) can be formulated as finding a pairing between the $u_i$'s and the $(i/N)^t$'s that maximizes the sum of the product of the pairs. An ordering permutation of ${\bf u}$ corresponds to pairing the smallest entry of ${\bf u}$ to $(0/N)^t$, the second smallest entry to $(1/N)^t$, the third smallest entry to $(2/N)^t$, and so forth. This pairing is clearly optimal. ∎

In light of the previous lemma, we see that computing the permuted moment of a vector ${\bf u}$ can be accomplished as follows: 1) sort the entries of ${\bf u}$ from smallest to largest; 2) compute the dot product between this sorted vector and the vector

$$\begin{bmatrix}\left(\frac{0}{N}\right)^t&\left(\frac{1}{N}\right)^t&\left(\frac{2}{N}\right)^t&\ldots&\left(\frac{N-1}{N}\right)^t\end{bmatrix}. \qquad (20)$$

Let us now focus on the case where ${\bf u}$ is a probability distribution. If ${\bf u}$ is very peaked, it must have a large permuted moment since, after sorting, most of the mass concentrates on the high values of (20) located on the right. On the contrary, if ${\bf u}$ is very spread out, it must have a small permuted moment since it 'wastes' its mass on small values of (20). Because of this, the permuted moment is akin to the negative entropy; it has large values for delta-like distributions and small values for uniform ones.

We now show that the permuted moment is subadditive and one-homogeneous on $\mathbb{R}_+^N$ (as a consequence it is convex on the set of probability vectors) and we derive some elementary $\ell_1$ and $\ell_\infty$ bounds. We denote by $\|{\bf u}\|_p$ the $\ell_p$-norm of a vector ${\bf u}$. In particular, if ${\bf u}\in\mathbb{R}_+^N$, we have

$$\|{\bf u}\|_1:=\sum_{i=0}^{N-1}u_i\qquad\text{ and }\qquad\|{\bf u}\|_\infty:=\max_{0\leq i\leq N-1}u_i.$$

With this notation in hand, we can now state our lemma:

Lemma 2.

  (i) $\mathcal{H}_t({\bf u}+{\bf v})\leq\mathcal{H}_t({\bf u})+\mathcal{H}_t({\bf v})$  for all ${\bf u},{\bf v}\in\mathbb{R}_+^N$.

  (ii) $\mathcal{H}_t(c\,{\bf u})=c\,\mathcal{H}_t({\bf u})$  for all ${\bf u}\in\mathbb{R}_+^N$ and all $c\geq 0$.

  (iii) $\mathcal{H}_t({\bf u})\leq\|{\bf u}\|_1$  for all ${\bf u}\in\mathbb{R}_+^N$.

  (iv) $\mathcal{H}_t({\bf u})\leq\frac{N}{t+1}\|{\bf u}\|_\infty$  for all ${\bf u}\in\mathbb{R}_+^N$.

Proof.

Properties (i) and (ii) are obvious. To prove (iii) and (iv), define $w_i=(i/N)^t$ and note that

$$\|{\bf w}\|_\infty\leq 1\qquad\text{and}\qquad\|{\bf w}\|_1=N\left(\frac{1}{N}\sum_{i=0}^{N-1}\left(i/N\right)^t\right)\leq N\int_0^1 x^t\,dx=\frac{N}{t+1}$$

Then (iii) comes from $\mathcal{H}_t({\bf u})\leq\|{\bf w}\|_\infty\|{\bf u}\|_1$ whereas (iv) comes from $\mathcal{H}_t({\bf u})\leq\|{\bf w}\|_1\|{\bf u}\|_\infty$. ∎

We conclude this section with a slightly more sophisticated bound that holds for probability vectors — this bound will play a central role in Section C.

Lemma 3.

Suppose ${\bf p}\in\mathbb{R}_+^N$, and suppose $\sum_{i=0}^{N-1}p_i=1$. Then

$$\mathcal{H}_t({\bf p})\leq\left(1-\sum_{i=0}^{N-1}\min\{p_i,\lambda\}\right)+\frac{\lambda N}{t+1}\qquad\text{ for all }\lambda\geq 0.$$
Proof.

Fix a $\lambda\geq 0$ and define the vectors ${\bf u}$ and ${\bf v}$ as follows:

$$u_i=\min\{p_i,\lambda\}\qquad\text{ and}\qquad v_i=p_i-\min\{p_i,\lambda\}\qquad\text{ for all }0\leq i\leq N-1.$$

Note that these two vectors are nonnegative and sum to ${\bf p}$. We can therefore use Lemma 2 to obtain

t(𝐩)=t(𝐮+𝐯)t(𝐮)+t(𝐯)Nt+1𝐮+𝐯1{\mathcal{H}}_{t}({\bf p})={\mathcal{H}}_{t}({\bf u}+{\bf v})\leq{\mathcal{H}}_{t}({\bf u})+{\mathcal{H}}_{t}({\bf v})\leq\frac{N}{t+1}\|{\bf u}\|_{\infty}+\|{\bf v}\|_{1}

To conclude, we note that 𝐮λ\|{\bf u}\|_{\infty}\leq\lambda, and 𝐯1=1i=0N1min{pi,λ}.\|{\bf v}\|_{1}=1-\sum_{i=0}^{N-1}\min\{p_{i},\lambda\}. ∎
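The bounds in Lemma 2 (iii)–(iv) and Lemma 3 are easy to sanity-check numerically; below is a minimal sketch (ours, with arbitrarily chosen NN, tt and random probability vectors).

```python
import numpy as np

def permuted_moment(u, t):
    u = np.asarray(u, dtype=float)
    N = len(u)
    return np.dot(np.sort(u), (np.arange(N) / N) ** t)

rng = np.random.default_rng(1)
N, t = 50, 4
for _ in range(100):
    p = rng.random(N); p /= p.sum()          # random probability vector
    H = permuted_moment(p, t)
    # Lemma 2: H_t(p) <= ||p||_1  and  H_t(p) <= N/(t+1) ||p||_inf
    assert H <= p.sum() + 1e-12
    assert H <= N / (t + 1) * p.max() + 1e-12
    # Lemma 3: H_t(p) <= (1 - sum_i min(p_i, lam)) + lam * N / (t+1), for every lam >= 0
    for lam in rng.random(5):
        bound = (1.0 - np.minimum(p, lam).sum()) + lam * N / (t + 1)
        assert H <= bound + 1e-12
print("all bounds verified")
```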

Appendix B Permuted Moment of KK^{\star} and Generalization Error

This section is devoted to the proof of inequality (14). We start by recalling a few definitions. The vocabulary, set of concepts, data space, and latent space are

𝒱={1,,nw},𝒞={1,,nc},𝒳=𝒱L and 𝒵=𝒞L\mathcal{V}=\{1,\ldots,n_{w}\},\qquad\mathcal{C}=\{1,\ldots,n_{c}\},\qquad{\mathcal{X}}=\mathcal{V}^{L}\qquad\text{ and }\qquad\mathcal{Z}=\mathcal{C}^{L}

respectively. Elements of 𝒳\mathcal{X} are sentences of LL words and they take the form 𝐱=[x1,x2,,xL],{\bf x}=[x_{1},x_{2},\ldots,x_{L}], while elements of 𝒵\mathcal{Z} take the form 𝐜=[c1,c2,,cL]{\bf c}=[c_{1},c_{2},\ldots,c_{L}] and correspond to sequences of concepts. We also recall that the collection of all equipartitions of the vocabulary is denoted by

Φ={All functions φ from 𝒱 to 𝒞 that satisfy |φ1({c})|=sc for all c }\Phi=\big{\{}\text{All functions $\varphi$ from $\mathcal{V}$ to $\mathcal{C}$ that satisfy $|\varphi^{-1}(\{c\})|=s_{c}$ for all $c$ }\big{\}}

where sc:=nw/ncs_{c}:=n_{w}/n_{c} denotes the size of the concepts. Given φΦ\varphi\in\Phi, we denote by \mathring{\varphi}:{\mathcal{X}}\to\mathcal{Z} the function

φ̊([x1,x2,,xL]):=[φ(x1),φ(x2),,φ(xL)]\mathring{\varphi}\big{(}[x_{1},\,x_{2},\,\ldots,\,x_{L}]\big{)}:=\big{[}\varphi(x_{1}),\,\varphi(x_{2}),\,\ldots,\,\varphi(x_{L})\big{]}

that operates on sentences element-wise. The informal statement “the sentence 𝐱{\bf x} is randomly generated by the sequence of concepts 𝐜{\bf c}” means that 𝐱{\bf x} is sampled uniformly at random from the set

φ̊1({𝐜})={𝐱𝒳:φ̊(𝐱)=𝐜}.\mathring{\varphi}^{-1}(\{{\bf c}\})=\{{\bf x}\in{\mathcal{X}}:\mathring{\varphi}({\bf x})={\bf c}\}. (21)

We will often abuse notation and write φ̊1(𝐜)\mathring{\varphi}^{-1}({\bf c}) instead of φ̊1({𝐜})\mathring{\varphi}^{-1}(\{{\bf c}\}). We now formally define the sampling process associated with our main data model.

Sampling Process DM:

(i) Sample 𝒯=(φ;𝐜1,,𝐜R;𝐜1,,𝐜R)\mathcal{T}=(\;\varphi\;;\;{\bf c}_{1},\ldots,{\bf c}_{R}\;;\;{\bf c}^{\prime}_{1},\ldots,{\bf c}^{\prime}_{R}\;) uniformly at random in 𝔗=Φ×𝒵2R\mathfrak{T}=\Phi\times\mathcal{Z}^{2R}.

(ii) For r=1,,Rr=1,\ldots,R: sample (𝐱r,1,,𝐱r,nunf)({\bf x}_{r,1},\ldots,{\bf x}_{r,n_{\rm unf}}) uniformly at random in φ̊1(𝐜r)××φ̊1(𝐜r)\mathring{\varphi}^{-1}({\bf c}^{\prime}_{r})\times\ldots\times\mathring{\varphi}^{-1}({\bf c}^{\prime}_{r}), then sample (𝐱r,nunf+1,,𝐱r,nspl)({\bf x}_{r,n_{\rm unf}+1},\ldots,{\bf x}_{r,n_{\rm spl}}) uniformly at random in φ̊1(𝐜r)××φ̊1(𝐜r)\mathring{\varphi}^{-1}({\bf c}_{r})\times\ldots\times\mathring{\varphi}^{-1}({\bf c}_{r}).

(iii) Sample 𝐱test{\bf x}^{\text{test}} uniformly at random in φ̊1(𝐜1)\mathring{\varphi}^{-1}({\bf c}^{\prime}_{1}).

Step (i) of the above sampling process consists in selecting at random a task 𝒯\mathcal{T} among all possible tasks. Step (ii) consists in generating a training set S𝒳R×nsplS\in{\mathcal{X}}^{R\times n_{spl}} (we refer to SS as the ‘training set’; in our formalism, however, it is not a set but an element of 𝒳R×nspl{\mathcal{X}}^{R\times n_{spl}}) exactly as depicted in Figure 1: each unfamiliar sequence of concepts 𝐜r{\bf c}_{r}^{\prime} generates nunfn_{\rm unf} sentences, whereas each familiar sequence of concepts 𝐜r{\bf c}_{r} generates nfamn_{\rm fam} sentences (recall that the number of samples per category is nspl=nunf+nfamn_{\rm spl}=n_{\rm unf}+n_{\rm fam}). Finally, step (iii) consists in randomly generating an unfamiliar test sentence 𝐱test𝒳{\bf x}^{\text{test}}\in{\mathcal{X}}. Without loss of generality we assume that this test sentence is generated by the unfamiliar sequence of concepts 𝐜1{\bf c}_{1}^{\prime}.
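To make the sampling process concrete, here is a minimal Python sketch of steps (i)–(iii) (a sketch under our own conventions: the sizes n_w, n_c, L, R, n_unf, n_spl are illustrative, words and concepts are 0-indexed, and the helper names are hypothetical). Following step (ii), the first n_unf sentences of each category are drawn from the unfamiliar sequence and the remaining ones from the familiar sequence.

```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative sizes (assumed for this sketch)
n_w, n_c, L = 6, 3, 4          # vocabulary size, number of concepts, sentence length
R, n_unf, n_spl = 5, 1, 8      # categories, unfamiliar and total samples per category
s_c = n_w // n_c               # concept size

def sample_equipartition():
    """A uniformly random phi in Phi: each concept receives exactly s_c words."""
    words = rng.permutation(n_w)
    phi = np.empty(n_w, dtype=int)
    for c in range(n_c):
        phi[words[c * s_c:(c + 1) * s_c]] = c
    return phi

def sample_sentence(phi, concepts):
    """A uniform sample from phi_ring^{-1}(c): pick each word among the words of its concept."""
    return np.array([rng.choice(np.flatnonzero(phi == c)) for c in concepts])

# (i) sample the task
phi = sample_equipartition()
fam = rng.integers(n_c, size=(R, L))     # familiar sequences   c_1, ..., c_R
unf = rng.integers(n_c, size=(R, L))     # unfamiliar sequences c'_1, ..., c'_R

# (ii) sample the training sentences: the first n_unf from c'_r, the rest from c_r
S = np.array([[sample_sentence(phi, unf[r] if s < n_unf else fam[r])
               for s in range(n_spl)] for r in range(R)])

# (iii) sample the unfamiliar test sentence from c'_1
x_test = sample_sentence(phi, unf[0])
print(S.shape, x_test)                   # (R, n_spl, L) and a length-L sentence
```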

We denote by ϱDM\varrho_{{\rm DM}} the p.d.f. of the sampling process DM. This function is defined on the sample space

ΩDM:=(Φ×𝒞2R)×𝒳R×nspl×𝒳.\Omega_{\rm DM}:=\left(\Phi\times\mathcal{C}^{2R}\right)\times{\mathcal{X}}^{R\times n_{spl}}\times{\mathcal{X}}\;.

A sample from ΩDM\Omega_{\rm DM} takes the form

ω=(φ;𝐜1,,𝐜R;𝐜1,,𝐜RThe task;𝐱1,1,,𝐱1,nspl;;𝐱R,1,,𝐱R,nsplThe training sentences;𝐱testThe test sentence)\omega=\Big{(}\underbrace{\varphi\;;\;{\bf c}_{1},\ldots,{\bf c}_{R}\;;\;{\bf c}^{\prime}_{1},\ldots,{\bf c}^{\prime}_{R}}_{\text{The task}}\;;\;\underbrace{{\bf x}_{1,1},\ldots,{\bf x}_{1,n_{\text{spl}}}\;;\;\ldots\;;\;{\bf x}_{R,1},\ldots,{\bf x}_{R,n_{\text{spl}}}}_{\text{The training sentences}}\;;\;\underbrace{{\bf x^{\text{test}}}}_{\text{The test sentence}}\Big{)}

and we have the following formula for ϱDM\varrho_{{\rm DM}}

ϱDM(ω):=1|Φ||𝒞|2Rr=1R(𝟏φ̊1(𝐜r)(𝐱r, 1)|φ̊1(𝐜r)|s=2nspl𝟏φ̊1(𝐜r)(𝐱r,s)|φ̊1(𝐜r)|)𝟏φ̊1(𝐜1)(𝐱test)|φ̊1(𝐜1)|\varrho_{{\rm DM}}(\omega):=\frac{1}{|\Phi||\mathcal{C}|^{2R}}\prod^{R}_{r=1}\left(\frac{{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}^{\prime}_{r})}({\bf x}_{r,\,1})}{\left|\mathring{\varphi}^{-1}({\bf c}^{\prime}_{r})\right|}\prod^{n_{\text{spl}}}_{s=2}\frac{{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}_{r})}({\bf x}_{r,\,s})}{\left|\mathring{\varphi}^{-1}({\bf c}_{r})\right|}\right)\frac{{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}^{\prime}_{1})}({\bf x}^{\text{test}})}{\left|\mathring{\varphi}^{-1}({\bf c}^{\prime}_{1})\right|} (22)

where 𝟏φ̊1(𝐜r){\bf 1}_{\mathring{\varphi}^{-1}({\bf c}_{r})} and 𝟏φ̊1(𝐜r){\bf 1}_{\mathring{\varphi}^{-1}({\bf c}_{r}^{\prime})} denote the indicator functions of the sets \mathring{\varphi}^{-1}({\bf c}_{r}) and \mathring{\varphi}^{-1}({\bf c}_{r}^{\prime}) respectively. Let us compute a few marginals of ϱDM\varrho_{{\rm DM}} in order to verify that it is indeed the p.d.f. of the sampling process DM. Writing ω=(𝒯,S,𝐱test)\omega=(\mathcal{T},S,{\bf x^{\text{test}}}), summing over the variables SS and 𝐱test{\bf x^{\text{test}}}, and using the fact that 𝐱𝒳𝟏φ̊1(𝐜)(𝐱)=|φ̊1(𝐜)|,\sum_{{\bf x}\in{\mathcal{X}}}{\bf 1}_{\mathring{\varphi}^{-1}({\bf c})}({\bf x})=|\mathring{\varphi}^{-1}({\bf c})|, we obtain

\sum_{S\in{\mathcal{X}}^{R\times n_{\rm spl}}}\;\sum_{{\bf x^{\text{test}}}\in{\mathcal{X}}}\varrho_{{\rm DM}}(\mathcal{T},S,{\bf x}^{\text{test}})=\frac{1}{|\Phi||\mathcal{C}|^{2R}}.

This shows that each task 𝒯\mathcal{T} is equiprobable. Summing over the variable SS gives

\sum_{S\in{\mathcal{X}}^{R\times n_{\rm spl}}}\varrho_{{\rm DM}}(\mathcal{T},S,{\bf x}^{\text{test}})=\frac{1}{|\Phi||\mathcal{C}|^{2R}}\frac{{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}^{\prime}_{1})}({\bf x}^{\text{test}})}{\left|\mathring{\varphi}^{-1}({\bf c}^{\prime}_{1})\right|}.

This shows that, given a task 𝒯\mathcal{T}, the test sentence 𝐱test{\bf x^{\text{test}}} is obtained by sampling uniformly at random from φ̊1(𝐜1)\mathring{\varphi}^{-1}({\bf c}^{\prime}_{1}). A similar calculation shows that, given a task 𝒯\mathcal{T}, the train sentence 𝐱r,s{\bf x}_{r,s} is obtained by sampling uniformly at random from φ̊1(𝐜r)\mathring{\varphi}^{-1}({\bf c}_{r}) if s2s\geq 2, and from φ̊1(𝐜r)\mathring{\varphi}^{-1}({\bf c}_{r}^{\prime}) if s=1s=1.

Given a feature space \mathcal{F} and a feature map ψ:𝒳\psi:{\mathcal{X}}\to\mathcal{F}, we define the event E,ψΩDME_{\mathcal{F},\psi}\subset\Omega_{\rm DM} as follows:

E,ψ={ωΩDM:There exists 1snspl such that ψ(𝐱test),ψ(𝐱1,s)>ψ(𝐱test),ψ(𝐱r,s) for all 2rR and all 1snspl}.E_{\mathcal{F},\psi}=\Big{\{}\omega\in\Omega_{\rm DM}:\;\;\text{There exists }1\leq s^{*}\leq n_{\text{spl}}\text{ such that }\\ \left\langle\psi({\bf x}^{\text{test}}),\psi({\bf x}_{1,s^{*}})\right\rangle_{\mathcal{F}}>\left\langle\psi({\bf x}^{\text{test}}),\psi({\bf x}_{r,s})\right\rangle_{\mathcal{F}}\text{ for all }2\leq r\leq R\text{ and all }1\leq s\leq n_{\text{spl}}\Big{\}}. (23)

Note that this event consists of all the outcomes ω=(𝒯,S,𝐱test)\omega=(\mathcal{T},S,{\bf x^{\text{test}}}) for which the feature map ψ\psi associates the test sentence 𝐱test{\bf x^{\text{test}}} with a train point 𝐱1,s{\bf x}_{1,s} from the first category. Since, by construction, 𝐱test{\bf x^{\text{test}}} belongs to the first category, E,ψE_{\mathcal{F},\psi} consists of all the outcomes for which the nearest neighbor classification rule is ‘successful’. As a consequence, when 𝔗=Φ×𝒵2R\mathfrak{T}=\Phi\times\mathcal{Z}^{2R}, the generalization error can be expressed as

err¯(,ψ,𝔗)=1DM[E,ψ]\overline{\text{err}}(\mathcal{F},\psi,\mathfrak{T})=1-\mathbb{P}_{{\rm DM}}\Big{[}E_{\mathcal{F},\psi}\Big{]} (24)

where DM\mathbb{P}_{\rm DM} denotes the probability measure on ΩDM\Omega_{\rm DM} induced by ϱDM\varrho_{\rm DM}. Equation (24) should be viewed as our ‘fully formal’ definition of the quantity err¯(,ψ,𝔗)\overline{\text{err}}(\mathcal{F},\psi,\mathfrak{T}), as opposed to the more informal definition given earlier by equations (6) and (7).

The goal of this section is to prove inequality (14), which, in light of (24), is equivalent to

sup,ψDM[E,ψ]1|𝒳|𝐱𝒳2R1(K(𝐱,))+1R.\sup_{\mathcal{F},\psi}\;\mathbb{P}_{{\rm DM}}\Big{[}E_{\mathcal{F},\psi}\Big{]}\leq\frac{1}{|{\mathcal{X}}|}\sum_{{\bf x}\in{\mathcal{X}}}\mathcal{H}_{2R-1}\left(K^{\star}({\bf x},\,\cdot)\right)+\frac{1}{R}. (25)

which is in turn equivalent to

supK:𝒳×𝒳K pos. semi-def.DM[EK]1|𝒳|𝐱𝒳2R1(K(𝐱,))+1R\sup_{\begin{subarray}{c}K:{\mathcal{X}}\times{\mathcal{X}}\to\real\\ K\text{ pos. semi-def.}\end{subarray}}\mathbb{P}_{{\rm DM}}\Big{[}E_{K}\Big{]}\leq\frac{1}{|{\mathcal{X}}|}\sum_{{\bf x}\in{\mathcal{X}}}\mathcal{H}_{2R-1}\left(K^{\star}({\bf x},\,\cdot)\right)+\frac{1}{R} (26)

where the event EKE_{K} is defined by

EK={ωΩDM:There exists 1snspl such that K(𝐱test,𝐱1,s)>K(𝐱test,𝐱r,s) for all 2rR and all 1snspl}E_{K}=\Big{\{}\omega\in\Omega_{\rm DM}:\;\;\text{There exists }1\leq s^{*}\leq n_{\text{spl}}\text{ such that }\\ K({\bf x}^{\text{test}},{\bf x}_{1,s^{*}})>K({\bf x}^{\text{test}},{\bf x}_{r,s})\text{ for all }2\leq r\leq R\text{ and all }1\leq s\leq n_{\text{spl}}\Big{\}} (27)

and where the supremum is taken over all kernels K:𝒳×𝒳K:{\mathcal{X}}\times{\mathcal{X}}\to\real which are symmetric positive semidefinite. We will actually prove a slightly stronger result, namely

\sup_{\begin{subarray}{c}K:{\mathcal{X}}\times{\mathcal{X}}\to\real\\ K\text{ is symmetric}\end{subarray}}\mathbb{P}_{{\rm DM}}\Big{[}E_{K}\Big{]}\leq\frac{1}{|{\mathcal{X}}|}\sum_{{\bf x}\in{\mathcal{X}}}\mathcal{H}_{2R-1}\left(K^{\star}({\bf x},\,\cdot)\right)+\frac{1}{R} (28)

where the supremum is taken over all functions K:𝒳×𝒳K:{\mathcal{X}}\times{\mathcal{X}}\to\real that satisfy K(𝐱,𝐲)=K(𝐲,𝐱)K({\bf x},{\bf y})=K({\bf y},{\bf x}) for all (𝐱,𝐲)𝒳×𝒳.({\bf x},{\bf y})\in{\mathcal{X}}\times{\mathcal{X}}. The rest of the section is devoted to proving (28).

In Subsection B.1 we start by considering a simpler data model — for this simpler data model we are able to show that the function ψ\psi^{\star} implicitly defined by (12) is the best possible feature map (we actually only work with the associated kernel KK^{\star}, and never need ψ\psi^{\star} itself). We also show that the success rate is exactly equal to the permuted moment of KK^{\star}; see Theorem 4, which is the central result of this section. In the remaining subsections, namely Subsection B.2 and Subsection B.3, we leverage the bound obtained for the simpler data model in order to obtain bound (28) for the main data model. These two subsections are mostly notational. The core of the analysis takes place in Subsection B.1.

B.1 A Simpler Data Model

We start by presenting the sampling process associated with our simpler data model.

Sampling Process SDM:

(i) Sample φ\varphi uniformly at random in Φ\Phi. Sample 𝐜1,𝐜2,,𝐜t+1{\bf c}_{1},{\bf c}_{2},\ldots,{\bf c}_{t+1} uniformly at random in 𝒵\mathcal{Z}.

(ii) For 1rt+11\leq r\leq t+1: Sample 𝐱r{\bf x}_{r} uniformly at random in φ̊1(𝐜r)\mathring{\varphi}^{-1}({\bf c}_{r}).

(iii) Sample 𝐱test{\bf x^{\text{test}}} uniformly at random in φ̊1(𝐜1)\mathring{\varphi}^{-1}({\bf c}_{1}).

The function

ϱSDM(φ;𝐜1,,𝐜t+1;𝐱1,,𝐱t+1;𝐱test):=1|Φ||𝒞|t+1(r=1t+1𝟏φ̊1(𝐜r)(𝐱r)|φ̊1(𝐜r)|)𝟏φ̊1(𝐜1)(𝐱test)|φ̊1(𝐜1)|\varrho_{\rm SDM}(\varphi;{\bf c}_{1},\ldots,{\bf c}_{t+1};{\bf x}_{1},\ldots,{\bf x}_{t+1};{\bf x}^{\text{test}}):=\frac{1}{|\Phi||\mathcal{C}|^{t+1}}\left(\prod^{t+1}_{r=1}\frac{{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}_{r})}({\bf x}_{r})}{\left|\mathring{\varphi}^{-1}({\bf c}_{r})\right|}\right)\frac{{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}_{1})}({\bf x}^{\text{test}})}{\left|\mathring{\varphi}^{-1}({\bf c}_{1})\right|} (29)

on ΩSDM:=Φ×𝒞t+1×𝒳t+2\Omega_{\rm SDM}:=\Phi\times\mathcal{C}^{t+1}\times{\mathcal{X}}^{t+2} is the p.d.f. of the above sampling process. We use SDM\mathbb{P}_{\rm SDM} to denote the probability measure on ΩSDM\Omega_{\rm SDM} induced by this function. The identity in the next theorem is the central result of this section.

Theorem 4.

Let 𝒦\mathcal{K} denote the set of all symmetric functions from 𝒳×𝒳{\mathcal{X}}\times{\mathcal{X}} to \real. Then

supK𝒦SDM[K(𝐱test,𝐱1)>K(𝐱test,𝐱r) for all 2rt+1]=1|𝒳|𝐱𝒳t(K𝐱)\sup_{K\in\mathcal{K}}\;\;\mathbb{P}_{\rm SDM}\Big{[}K({\bf x^{\text{test}}},{\bf x}_{1})>K({\bf x^{\text{test}}},{\bf x}_{r})\text{ for all }2\leq r\leq t+1\Big{]}=\frac{1}{|{\mathcal{X}}|}\sum_{{\bf x}\in{\mathcal{X}}}\mathcal{H}_{t}(K^{\star}_{\bf x}) (30)

In (30), K𝐱K_{\bf x}^{\star} stands for the function K(𝐱,)K^{\star}({\bf x},\cdot). Theorem 4 establishes an intimate connection between the permuted moment and the ability of any fixed feature map (or equivalently, any fixed kernel) to generalize well in our framework. The sampling process considered in this theorem involves two points, 𝐱test{\bf x}^{\text{test}} and 𝐱1{\bf x}_{1}, generated by the same sequence of concepts 𝐜1{\bf c}_{1}, and tt ‘distractor’ points 𝐱2,,𝐱t+1{\bf x}_{2},\,\ldots,{\bf x}_{t+1} generated by different sequences of concepts. Success for the kernel KK means correctly recognizing that 𝐱test{\bf x}^{\text{test}} is more ‘similar’ to 𝐱1{\bf x}_{1} than to any of the distractors, and the success rate in (30) precisely quantifies its ability to do so as a function of the number tt of distractors. The theorem shows that the probability of success for the best possible kernel at this task is exactly equal to the averaged ttht^{th}-permuted moment of K𝐱K^{\star}_{\bf x}, so it elegantly quantifies the generalization ability of the best possible fixed feature map in terms of the permuted moment. We also provide an explicit construction for a kernel K(𝐱,𝐲)K({\bf x},{\bf y}) that achieves the supremum in (30). First, choose a kernel ε(𝐱,𝐲)\varepsilon({\bf x},{\bf y}) that satisfies

  1. (i)

    ε(𝐱,𝐲)ε(𝐱,𝐳)\varepsilon({\bf x},{\bf y})\neq\varepsilon({\bf x},{\bf z}) for all 𝐱,𝐲,𝐳𝒳{\bf x},{\bf y},{\bf z}\in{\mathcal{X}} with 𝐲𝐳{\bf y}\neq{\bf z}.

  2. (ii)

    0ε(𝐱,𝐲)10\leq\varepsilon({\bf x},{\bf y})\leq 1 for all 𝐱,𝐲𝒳{\bf x},{\bf y}\in{\mathcal{X}}.

and then define the following perturbation

K(𝐱,𝐲)=K(𝐱,𝐲)+ε(𝐱,𝐲)/(2scL|Φ|)K({\bf x},{\bf y})=K^{\star}({\bf x},{\bf y})+\varepsilon({\bf x},{\bf y})/(2s_{c}^{L}|\Phi|) (31)

of KK^{\star}. Any such kernel is a maximizer of the optimization problem in (30), so we may think of perturbations of KK^{\star} as bona-fide optimal.

The rest of this subsection is devoted to the proof of Theorem 4, and we also show, in the course of the proof, that (31) is a maximizer of the optimization problem in (30). We use 𝒦\mathcal{K} to denote the set of all symmetric functions from 𝒳×𝒳{\mathcal{X}}\times{\mathcal{X}} to \real. We will refer to such functions as ‘kernels’ despite the fact that they are not necessarily positive semi-definite.

Proving Theorem 4 requires that we study the following optimization problem:

Maximize 𝔈(K):=SDM[K(𝐱test,𝐱1)>K(𝐱test,𝐱r) for all 2rt+1]\displaystyle\text{Maximize }\;\;\mathfrak{E}(K):=\mathbb{P}_{SDM}\Big{[}K({\bf x}^{\text{test}},{\bf x}_{1})>K({\bf x}^{\text{test}},{\bf x}_{r})\text{ for all }2\leq r\leq t+1\Big{]} (32)
over all kernels K𝒦K\in\mathcal{K}. (33)

We recall the definition of the optimal kernel,

K(𝐱,𝐲)=1scL|{φΦ:φ(x)=φ(y) for all 1L}||Φ|K^{\star}({\bf x},{\bf y})=\;\frac{1}{s_{c}^{L}}\;\;\frac{\big{|}\{\varphi\in\Phi:\varphi(x_{\ell})=\varphi(y_{\ell})\text{ for all }1\leq\ell\leq L\}\big{|}}{|\Phi|} (34)

where sc=nw/ncs_{c}=n_{w}/n_{c} denotes the size of a concept. We start with the following simple lemma:

Lemma 4.

The function K𝐱()=K(𝐱,)K^{\star}_{\bf x}(\cdot)=K^{\star}({\bf x},\cdot) is a probability distribution on 𝒳{\mathcal{X}}.

Proof.

First note that KK^{\star} can be written as

K(𝐱,𝐲)=1scL|{φΦ:φ̊(𝐱)=φ̊(𝐲)}||Φ|=1scL|Φ|φΦ𝟏{φ̊(𝐱)=φ̊(𝐲)}K^{\star}({\bf x},{\bf y})=\frac{1}{s_{c}^{L}}\frac{|\{\varphi\in\Phi:\mathring{\varphi}({\bf x})=\mathring{\varphi}({\bf y})\}|}{|\Phi|}=\frac{1}{s_{c}^{L}|\Phi|}\sum_{\varphi\in\Phi}{\bf 1}_{\{\mathring{\varphi}({\bf x})=\mathring{\varphi}({\bf y})\}} (35)

Since φ\varphi maps exactly scs_{c} words to each concept c{1,,nc}c\in\{1,\ldots,n_{c}\}, we have that

|{𝐱𝒳:φ̊(𝐱)=𝐜}|=scL for all 𝐜𝒞.|\{{\bf x}\in\mathcal{X}:\mathring{\varphi}({\bf x})={\bf c}\}|=s_{c}^{L}\qquad\text{ for all ${\bf c}\in\mathcal{C}$.} (36)

Therefore

𝐲𝒳K(𝐱,𝐲)=1scL|Φ|φΦ𝐲𝒳𝟏{φ̊(𝐱)=φ̊(𝐲)}=1scL|Φ|φΦ|{𝐲𝒳:φ̊(𝐲)=φ̊(𝐱)}|=1\sum_{{\bf y}\in{\mathcal{X}}}K^{\star}({\bf x},{\bf y})=\frac{1}{s_{c}^{L}|\Phi|}\sum_{\varphi\in\Phi}\sum_{{\bf y}\in{\mathcal{X}}}{\bf 1}_{\{\mathring{\varphi}({\bf x})=\mathring{\varphi}({\bf y})\}}=\frac{1}{s_{c}^{L}|\Phi|}\sum_{\varphi\in\Phi}|\{{\bf y}\in\mathcal{X}:\mathring{\varphi}({\bf y})=\mathring{\varphi}({\bf x})\}|=1
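For a tiny vocabulary the kernel KK^{\star} can be computed exactly by enumerating Φ\Phi, which makes the normalization property of Lemma 4 easy to verify numerically. Below is a minimal sketch (ours; the parameters nw=4n_{w}=4, nc=2n_{c}=2, L=2L=2 and the function name K_star are chosen only for illustration).

```python
import itertools

n_w, n_c, L = 4, 2, 2
s_c = n_w // n_c

# Enumerate Phi: all equipartitions phi: {0,...,n_w-1} -> {0,...,n_c-1}
# with exactly s_c words mapped to each concept.
Phi = [phi for phi in itertools.product(range(n_c), repeat=n_w)
       if all(phi.count(c) == s_c for c in range(n_c))]

# Enumerate the data space X = V^L.
X = list(itertools.product(range(n_w), repeat=L))

def K_star(x, y):
    """K*(x, y) = |{phi : phi(x_l) = phi(y_l) for all l}| / (s_c^L |Phi|), as in (35)."""
    matches = sum(all(phi[a] == phi[b] for a, b in zip(x, y)) for phi in Phi)
    return matches / (s_c ** L * len(Phi))

# Lemma 4: each row K*(x, .) is a probability distribution on X.
for x in X:
    row_sum = sum(K_star(x, y) for y in X)
    assert abs(row_sum - 1.0) < 1e-12
print(f"|Phi| = {len(Phi)}, |X| = {len(X)}, all rows of K* sum to 1")
```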

We now show that the marginal of the p.d.f. ϱSDM\varrho_{\rm SDM} is related to KK^{\star}.

Lemma 5.

For all 𝐱1,,𝐱t+1{\bf x}_{1},\ldots,{\bf x}_{t+1} and 𝐱test{\bf x^{\text{test}}} in 𝒳{\mathcal{X}} we have

φΦ𝐜1𝒞𝐜t+1𝒞ϱSDM(φ;𝐜1,,𝐜t+1;𝐱1,,𝐱t+1;𝐱test)=1|𝒳|t+1K(𝐱1,𝐱test).\sum_{\varphi\in\Phi}\;\sum_{{\bf c}_{1}\in\mathcal{C}}\cdots\sum_{{\bf c}_{t+1}\in\mathcal{C}}\varrho_{\rm SDM}(\varphi;{\bf c}_{1},\ldots,{\bf c}_{t+1};{\bf x}_{1},\ldots,{\bf x}_{t+1};{\bf x}^{\text{test}})=\frac{1}{|{\mathcal{X}}|^{t+1}}K^{\star}({\bf x}_{1},{\bf x^{\text{test}}}).
Proof.

Identity (36) can be expressed as |φ̊1(𝐜)|=scL\left|\mathring{\varphi}^{-1}({\bf c})\right|=s_{c}^{L} for all 𝐜𝒞{\bf c}\in\mathcal{C}. As a consequence, definition (29) of ϱSDM(ω)\varrho_{\rm SDM}(\omega) simplifies to

ϱSDM(ω)=α(r=1t+1𝟏φ̊1(𝐜r)(𝐱r))𝟏φ̊1(𝐜1)(𝐱test)\varrho_{\rm SDM}(\omega)=\alpha\left(\prod^{t+1}_{r=1}{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}_{r})}({\bf x}_{r})\right){\bf 1}_{\mathring{\varphi}^{-1}({\bf c}_{1})}({\bf x}^{\text{test}}) (37)

where the constant α\alpha is given by

α=1|Φ||𝒞|t+1scL(t+2)=1|Φ|ncL(t+1)scL(t+2)=1|Φ||𝒳|t+1scL\alpha=\frac{1}{|\Phi||\mathcal{C}|^{t+1}s_{c}^{L(t+2)}}=\frac{1}{|\Phi|n_{c}^{L(t+1)}s_{c}^{L(t+2)}}=\frac{1}{|\Phi||{\mathcal{X}}|^{t+1}s_{c}^{L}}

In the above we have used the fact that |𝒞|=ncL|\mathcal{C}|=n_{c}^{L} and |𝒳|=nwL|{\mathcal{X}}|=n_{w}^{L}. We then note that the identity 𝟏φ̊1(𝐜)(𝐱)=𝟏{φ̊(𝐱)=𝐜}{\bf 1}_{\mathring{\varphi}^{-1}({\bf c})}({\bf x})={\bf 1}_{\{\mathring{\varphi}({\bf x})={\bf c}\}} implies

𝐜𝒞𝟏φ̊1(𝐜)(𝐱)=𝐜𝒞𝟏{φ̊(𝐱)=𝐜}=1\displaystyle\sum_{{\bf c}\in\mathcal{C}}{\bf 1}_{\mathring{\varphi}^{-1}({\bf c})}({\bf x})=\sum_{{\bf c}\in\mathcal{C}}{\bf 1}_{\{\mathring{\varphi}({\bf x})={\bf c}\}}=1 (38)
𝐜𝒞(𝟏φ̊1(𝐜)(𝐱) 1φ̊1(𝐜)(𝐲))=𝐜𝒞(𝟏{φ̊(𝐱)=𝐜} 1{φ̊(𝐲)=𝐜})=  1{φ̊(𝐱)=φ̊(𝐲)}\displaystyle\sum_{{\bf c}\in\mathcal{C}}\Big{(}{\bf 1}_{\mathring{\varphi}^{-1}({\bf c})}({\bf x})\;{\bf 1}_{\mathring{\varphi}^{-1}({\bf c})}({\bf y})\Big{)}\;\;=\;\;\sum_{{\bf c}\in\mathcal{C}}\Big{(}{\bf 1}_{\{\mathring{\varphi}({\bf x})={\bf c}\}}\;{\bf 1}_{\{\mathring{\varphi}({\bf y})={\bf c}\}}\Big{)}\;\;=\;\;{\bf 1}_{\{\mathring{\varphi}({\bf x})=\mathring{\varphi}({\bf y})\}} (39)

for all 𝐱,𝐲𝒳{\bf x},{\bf y}\in{\mathcal{X}}. Summing (37) over the variables 𝐜1,,𝐜t+1{\bf c}_{1},\ldots,{\bf c}_{t+1} we obtain

𝐜1𝒞\displaystyle\sum_{{\bf c}_{1}\in\mathcal{C}}\cdots 𝐜t+1𝒞ϱSDM(ω)=α𝐜1𝒞𝐜t+1𝒞(𝟏φ̊1(𝐜1)(𝐱1)  1φ̊1(𝐜1)(𝐱test)r=2t+1𝟏φ̊1(𝐜r)(𝐱r))\displaystyle\sum_{{\bf c}_{t+1}\in\mathcal{C}}\varrho_{\rm SDM}(\omega)=\alpha\sum_{{\bf c}_{1}\in\mathcal{C}}\cdots\sum_{{\bf c}_{t+1}\in\mathcal{C}}\left({\bf 1}_{\mathring{\varphi}^{-1}({\bf c}_{1})}({\bf x}_{1})\;\;{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}_{1})}({\bf x}^{\text{test}})\;\;\prod^{t+1}_{r=2}{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}_{r})}({\bf x}_{r})\right)
=α𝐜1𝒞(𝟏φ̊1(𝐜1)(𝐱1) 1φ̊1(𝐜1)(𝐱test))𝐜2𝒞𝐜t+1𝒞(r=2t+1𝟏φ̊1(𝐜r)(𝐱r))\displaystyle\quad\quad=\alpha\sum_{{\bf c}_{1}\in\mathcal{C}}\Bigg{(}{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}_{1})}({\bf x}_{1})\;{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}_{1})}({\bf x}^{\text{test}})\Bigg{)}\;\;\;\;\sum_{{\bf c}_{2}\in\mathcal{C}}\cdots\sum_{{\bf c}_{t+1}\in\mathcal{C}}\left(\prod^{t+1}_{r=2}{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}_{r})}({\bf x}_{r})\right)
=α 1{φ̊(𝐱1)=φ̊(𝐱test)}\displaystyle\quad\quad=\alpha\;{\bf 1}_{\{\mathring{\varphi}({\bf x}_{1})=\mathring{\varphi}({\bf x^{\text{test}}})\}}

where we have used (38) and (39) to obtain the last equality. Summing the above over the variable φ\varphi gives K(𝐱1,𝐱test)/|𝒳|t+1K^{\star}({\bf x}_{1},{\bf x^{\text{test}}})/|{\mathcal{X}}|^{t+1}. ∎

The next lemma provides a purely algebraic (as opposed to probabilistic) formulation for the functional 𝔈(K)\mathfrak{E}(K) defined in (32).

Lemma 6.

The functional 𝔈:𝒦\mathfrak{E}:\mathcal{K}\to\real can be expressed as

𝔈(K)=1|𝒳|𝐱𝒳𝐲𝒳K(𝐱,𝐲)(|{𝐳𝒳:K(𝐱,𝐳)<K(𝐱,𝐲)}||𝒳|)t.\mathfrak{E}(K)=\frac{1}{|{\mathcal{X}}|}\sum_{{\bf x}\in{\mathcal{X}}}\sum_{{\bf y}\in{\mathcal{X}}}K^{\star}({\bf x},{\bf y})\left(\frac{|\{{\bf z}\in{\mathcal{X}}:K({\bf x},{\bf z})<K({\bf x},{\bf y})\}|}{|{\mathcal{X}}|}\right)^{t}. (40)
Proof.

Let g:𝒳t+2×𝒦{0,1}g:\mathcal{{\mathcal{X}}}^{t+2}\times\mathcal{K}\to\{0,1\} be the indicator function defined by

g(𝐱1,,𝐱t+1,𝐱test,K)={1 if K(𝐱test,𝐱1)>K(𝐱test,𝐱r) for all 2rt+10otherwiseg({\bf x}_{1},\ldots,{\bf x}_{t+1},{\bf x}^{\text{test}},K)=\begin{cases}1&\text{ if }K({\bf x^{\text{test}}},{\bf x}_{1})>K({\bf x^{\text{test}}},{\bf x}_{r})\;\text{ for all }2\leq r\leq t+1\\ 0&\text{otherwise}\end{cases}

Let ω\omega denote the sample (φ;𝐜1,,𝐜t+1;𝐱1,,𝐱t+1;𝐱test)(\varphi;{\bf c}_{1},\ldots,{\bf c}_{t+1};{\bf x}_{1},\ldots,{\bf x}_{t+1};{\bf x}^{\text{test}}). Since gg only depends on the last t+2t+2 variables of ω\omega, we have

𝔈(K)=SDM[K(𝐱test,𝐱1)>K(𝐱test,𝐱r) for all 2rt+1]\displaystyle\mathfrak{E}(K)=\mathbb{P}_{SDM}\Big{[}K({\bf x}^{\text{test}},{\bf x}_{1})>K({\bf x}^{\text{test}},{\bf x}_{r})\text{ for all }2\leq r\leq t+1\Big{]} (41)
=φΦ𝐜1𝒞𝐜t+1𝒞𝐱1𝒳𝐱t+1𝒳𝐱test𝒳g(𝐱1,,𝐱t+1,𝐱test,K)ϱSDM(ω)\displaystyle=\sum_{\varphi\in\Phi}\;\sum_{{\bf c}_{1}\in\mathcal{C}}\cdots\sum_{{\bf c}_{t+1}\in\mathcal{C}}\;\sum_{{\bf x}_{1}\in{\mathcal{X}}}\cdots\sum_{{\bf x}_{t+1}\in{\mathcal{X}}}\;\sum_{{\bf x^{\text{test}}}\in{\mathcal{X}}}g({\bf x}_{1},\ldots,{\bf x}_{t+1},{\bf x}^{\text{test}},K)\;\varrho_{\rm SDM}(\omega) (42)
=𝐱1𝒳𝐱t+1𝒳𝐱test𝒳g(𝐱1,,𝐱t+1,𝐱test,K)(φΦ𝐜1𝒞𝐜t+1𝒞ϱSDM(ω))\displaystyle=\sum_{{\bf x}_{1}\in{\mathcal{X}}}\cdots\sum_{{\bf x}_{t+1}\in{\mathcal{X}}}\;\sum_{{\bf x^{\text{test}}}\in{\mathcal{X}}}g({\bf x}_{1},\ldots,{\bf x}_{t+1},{\bf x}^{\text{test}},K)\left(\sum_{\varphi\in\Phi}\;\sum_{{\bf c}_{1}\in\mathcal{C}}\cdots\sum_{{\bf c}_{t+1}\in\mathcal{C}}\varrho_{\rm SDM}(\omega)\right) (43)
=𝐱1𝒳𝐱t+1𝒳𝐱test𝒳g(𝐱1,,𝐱t+1,𝐱test,K)1|𝒳|t+1K(𝐱1,𝐱test)\displaystyle=\sum_{{\bf x}_{1}\in{\mathcal{X}}}\cdots\sum_{{\bf x}_{t+1}\in{\mathcal{X}}}\;\sum_{{\bf x^{\text{test}}}\in{\mathcal{X}}}g({\bf x}_{1},\ldots,{\bf x}_{t+1},{\bf x}^{\text{test}},K)\;\;\frac{1}{|{\mathcal{X}}|^{t+1}}\;\;K^{\star}({\bf x}_{1},{\bf x^{\text{test}}}) (44)
=1|𝒳|𝐱1𝒳𝐱test𝒳K(𝐱1,𝐱test)(1|𝒳|t𝐱2𝒳𝐱t+1𝒳g(𝐱1,,𝐱t+1,𝐱test,K))\displaystyle=\frac{1}{|{\mathcal{X}}|}\sum_{{\bf x}_{1}\in{\mathcal{X}}}\sum_{{\bf x^{\text{test}}}\in{\mathcal{X}}}K^{\star}({\bf x}_{1},{\bf x^{\text{test}}})\left(\frac{1}{|{\mathcal{X}}|^{t}}\sum_{{\bf x}_{2}\in{\mathcal{X}}}\cdots\sum_{{\bf x}_{t+1}\in{\mathcal{X}}}g({\bf x}_{1},\ldots,{\bf x}_{t+1},{\bf x}^{\text{test}},K)\right) (45)

where we have used Lemma 5 to go from (43) to (44). Writing the indicator function gg as a product of indicator functions,

g(𝐱1,,𝐱t+1,𝐱test,K)\displaystyle g({\bf x}_{1},\ldots,{\bf x}_{t+1},{\bf x}^{\text{test}},K) =r=2t+1𝟏{K(𝐱test,𝐱1)>K(𝐱test,𝐱r)}\displaystyle=\prod_{r=2}^{t+1}{\bf 1}_{\{K({\bf x^{\text{test}}},{\bf x}_{1})>K({\bf x^{\text{test}}},{\bf x}_{r})\}}

we obtain the following expression for the term appearing between parentheses in (45):

1|𝒳|t𝐱2𝒳𝐱t+1𝒳g(𝐱1,,𝐱t+1,𝐱test,K)\displaystyle\frac{1}{|{\mathcal{X}}|^{t}}\sum_{{\bf x}_{2}\in{\mathcal{X}}}\cdots\sum_{{\bf x}_{t+1}\in{\mathcal{X}}}g({\bf x}_{1},\ldots,{\bf x}_{t+1},{\bf x}^{\text{test}},K) =1|𝒳|tr=2t+1(𝐱r𝒳𝟏{K(𝐱test,𝐱1)>K(𝐱test,𝐱r)})\displaystyle=\frac{1}{|{\mathcal{X}}|^{t}}\prod_{r=2}^{t+1}\left(\sum_{{\bf x}_{r}\in{\mathcal{X}}}{\bf 1}_{\{K({\bf x^{\text{test}}},{\bf x}_{1})>K({\bf x^{\text{test}}},{\bf x}_{r})\}}\right)
=1|𝒳|t(𝐳𝒳𝟏{K(𝐱test,𝐱1)>K(𝐱test,𝐳)})t\displaystyle=\frac{1}{|{\mathcal{X}}|^{t}}\left(\sum_{{\bf z}\in{\mathcal{X}}}{\bf 1}_{\{K({\bf x^{\text{test}}},{\bf x}_{1})>K({\bf x^{\text{test}}},{\bf z})\}}\right)^{t}
=(|{𝐳𝒳:K(𝐱test,𝐱1)>K(𝐱test,𝐳)}||𝒳|)t\displaystyle=\left(\frac{|\{{\bf z}\in{\mathcal{X}}:K({\bf x^{\text{test}}},{\bf x}_{1})>K({\bf x^{\text{test}}},{\bf z})\}|}{|{\mathcal{X}}|}\right)^{t}

Changing the name of variables 𝐱test,𝐱1{\bf x^{\text{test}}},{\bf x}_{1} to 𝐱,𝐲{\bf x},{\bf y} gives (40). ∎
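Identity (40) also lends itself to a quick numerical sanity check: for tiny parameters one can compare the right-hand side of (40), computed exactly, with an empirical estimate of the success probability under the process SDM. The sketch below (ours; the kernel KK is an arbitrary symmetric matrix, and the parameters and helper names are illustrative) does exactly that.

```python
import itertools
import numpy as np

rng = np.random.default_rng(4)
n_w, n_c, L, t = 4, 2, 2, 3
s_c = n_w // n_c

Phi = [phi for phi in itertools.product(range(n_c), repeat=n_w)
       if all(phi.count(c) == s_c for c in range(n_c))]
X = list(itertools.product(range(n_w), repeat=L))
N = len(X)

def K_star(x, y):
    return sum(all(phi[a] == phi[b] for a, b in zip(x, y)) for phi in Phi) / (s_c ** L * len(Phi))

# An arbitrary symmetric kernel K on X (indexed by positions in the list X).
M = rng.random((N, N)); M = (M + M.T) / 2
idx = {x: i for i, x in enumerate(X)}
def K(x, y): return M[idx[x], idx[y]]

# Right-hand side of (40), computed exactly.
rhs = np.mean([sum(K_star(x, y) * (sum(K(x, z) < K(x, y) for z in X) / N) ** t for y in X)
               for x in X])

# Empirical estimate of E(K) = P_SDM[ K(x_test, x_1) > K(x_test, x_r) for all 2 <= r <= t+1 ].
def sample_sentence(phi, concepts):
    return tuple(rng.choice([w for w in range(n_w) if phi[w] == c]) for c in concepts)

wins, n_trials = 0, 20000
for _ in range(n_trials):
    phi = Phi[rng.integers(len(Phi))]
    cs = [tuple(rng.integers(n_c, size=L)) for _ in range(t + 1)]
    xs = [sample_sentence(phi, c) for c in cs]
    x_test = sample_sentence(phi, cs[0])
    wins += all(K(x_test, xs[0]) > K(x_test, xs[r]) for r in range(1, t + 1))
print(rhs, wins / n_trials)   # the two numbers should be close
```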

We now use expression (40) for 𝔈(K)\mathfrak{E}(K) and reformulate optimization problem (32)-(33) into an equivalent optimization problem over symmetric matrices. Putting an arbitrary ordering on the set 𝒳\mathcal{X} (starting with i=0i=0) and denoting by KijK^{\star}_{ij} the value of the kernel KK^{\star} on the pair that consists of the ithi^{th} and jthj^{th} element of 𝒳{\mathcal{X}}, we see that optimization problem (32)-(33) can be written as

Maximize 𝔈(K):=1Ni=0N1j=0N1Kij(|{j[N]:Kij<Kij}|N)t\displaystyle\text{Maximize }\quad\mathfrak{E}(K):=\frac{1}{N}\sum_{i=0}^{N-1}\sum_{j=0}^{N-1}K^{\star}_{ij}\left(\frac{|\{j^{\prime}\in[N]:K_{ij^{\prime}}<K_{ij}\}|}{N}\right)^{t} (46)
over all symmetric matrices KN×N\displaystyle\text{over all symmetric matrices }K\in{}^{N\times N} (47)

In the above we have used the letter NN to denote the cardinality of 𝒳{\mathcal{X}}, that is N=nwLN=n_{w}^{L}, and we have used the notation [N]={0,1,,N1}.[N]=\{0,1,\ldots,N-1\}. Before solving the matrix optimization problem (46)-(47), we start with a simpler vector optimization problem. Let 𝐩{\bf p}^{\star} be a probability vector, that is {\bf p}^{\star}\in\real_{+}^{N} with \sum_{i=0}^{N-1}p^{\star}_{i}=1, and consider the optimization problem:

Maximize 𝔢(𝐯):=j=0N1pj(|{j[N]:vj<vj}|N)t\displaystyle\text{Maximize }\quad\mathfrak{e}({\bf v}):=\sum_{j=0}^{N-1}p^{\star}_{j}\;\left(\frac{|\{j^{\prime}\in[N]:v_{j^{\prime}}<v_{j}\}|}{N}\right)^{t} (48)
\displaystyle\text{over all vectors }{\bf v}\in\real^{N}. (49)

Recall from Definition 1 that an ordering permutation of a vector 𝐯{\bf v} is a permutation that sorts its entries from smallest to largest. We will say that two vectors {\bf v},{\bf w}\in\real^{N} have the same ordering if there exists σSN\sigma\in S_{N} which is an ordering permutation for both 𝐯{\bf v} and 𝐰{\bf w}. The following lemma is key: it shows that the optimization problem (48)–(49) has a simple solution.

Lemma 7.

The following identity

sup𝐯N𝔢(𝐯)=t(𝐩)\sup_{{\bf v}\in{}^{N}}\mathfrak{e}({\bf v})={\mathcal{H}}_{t}({\bf p}^{\star})

holds. Moreover, the supremum is achieved by any vector 𝐯N{\bf v}\in{}^{N} that has mutually distinct entries222That is, vivjv_{i}\neq v_{j} for all iji\neq j. and that has the same ordering than 𝐩{\bf p}^{\star}.

Proof.

Let \text{Distinct}(\real^{N}) denote the set of vectors of \real^{N} that have mutually distinct entries. We will first show that

sup𝐯N𝔢(𝐯)=sup𝐯Distinct()N𝔢(𝐯).\sup_{{\bf v}\in{}^{N}}\mathfrak{e}({\bf v})=\sup_{{\bf v}\in\text{Distinct}({}^{N})}\mathfrak{e}({\bf v}). (50)

To do this we show that for any 𝐯N{\bf v}\in{}^{N}, there exists 𝐰Distinct()N{\bf w}\in\text{Distinct}({}^{N}) such that

|{j[N]:vj<vj}||{j[N]:wj<wj}|for all 0jN1.|\{j^{\prime}\in[N]:v_{j^{\prime}}<v_{j}\}|\leq|\{j^{\prime}\in[N]:w_{j^{\prime}}<w_{j}\}|\qquad\text{for all }0\leq j\leq N-1. (51)

There are many ways to construct such a 𝐰{\bf w}. One way is to simply set wj=σ1(j)w_{j}=\sigma^{-1}(j) for some permutation σ\sigma that orders 𝐯{\bf v}. Indeed, note that σ1(j)\sigma^{-1}(j) provides the position of vjv_{j} in the sequence of inequalities (18). Therefore if vj<vjv_{j^{\prime}}<v_{j} we must have that σ1(j)<σ1(j)\sigma^{-1}(j^{\prime})<\sigma^{-1}(j). This implies

{j[N]:vj<vj}{j[N]:σ1(j)<σ1(j)}for all j[N]\{j^{\prime}\in[N]:v_{j^{\prime}}<v_{j}\}\subset\{j^{\prime}\in[N]:\sigma^{-1}(j^{\prime})<\sigma^{-1}(j)\}\qquad\text{for all }j\in[N]

which in turn implies (51).

Because of (50) we can now restrict our attention to 𝐯Distinct()N{\bf v}\in\text{Distinct}({}^{N}). Note that if 𝐯Distinct()N{\bf v}\in\text{Distinct}({}^{N}), then it has a unique ordering permutation σ\sigma,

vσ(0)<vσ(1)<vσ(2)<vσ(3)<<vσ(N1)v_{\sigma(0)}<v_{\sigma(1)}<v_{\sigma(2)}<v_{\sigma(3)}<\ldots<v_{\sigma(N-1)}

and, recalling that σ1(j)\sigma^{-1}(j) provides the position of vjv_{j} in the above ordering, we clearly have that

|\{j^{\prime}\in[N]:v_{j^{\prime}}<v_{j}\}|=\sigma^{-1}(j).

Therefore, if 𝐯Distinct()N{\bf v}\in\text{Distinct}({}^{N}) and if σ\sigma denotes its unique ordering permutation, 𝔢(𝐯)\mathfrak{e}({\bf v}) can be expressed as

𝔢(𝐯)=j=0N1pj(|{j[N]:vj<vj}|N)t=j=0N1pj(σ1(j)N)t=j=0N1pσ(j)(j/N)t\displaystyle\mathfrak{e}({\bf v})=\sum_{j=0}^{N-1}p^{\star}_{j}\;\left(\frac{|\{j^{\prime}\in[N]:v_{j^{\prime}}<v_{j}\}|}{N}\right)^{t}=\sum_{j=0}^{N-1}p^{\star}_{j}\;\left(\frac{\sigma^{-1}(j)}{N}\right)^{t}=\sum_{j=0}^{N-1}p^{\star}_{\sigma(j)}\;(j/N)^{t} (52)

Looking at definition (11) of the permuted moment, it is then clear that 𝔢(𝐯)t(𝐩)\mathfrak{e}({\bf v})\leq{\mathcal{H}}_{t}({\bf p}^{\star}) for all 𝐯Distinct()N{\bf v}\in\text{Distinct}({}^{N}). We then note that if 𝐯Distinct()N{\bf v}\in\text{Distinct}({}^{N}) has the same ordering as 𝐩{\bf p}^{\star}, then its unique ordering permutation σ\sigma must also be an ordering permutation of 𝐩{\bf p}^{\star}. Then (52) combined with Lemma 1 implies that 𝔢(𝐯)=t(𝐩)\mathfrak{e}({\bf v})={\mathcal{H}}_{t}({\bf p}^{\star}). This concludes the proof. ∎
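The content of Lemma 7 is easy to check numerically: any vector with mutually distinct entries and the same ordering as 𝐩{\bf p}^{\star} attains t(𝐩){\mathcal{H}}_{t}({\bf p}^{\star}), while no other vector exceeds it. A small sketch (ours, with illustrative parameters and helper names):

```python
import numpy as np

def permuted_moment(u, t):
    N = len(u)
    return np.dot(np.sort(u), (np.arange(N) / N) ** t)

def e(v, p_star, t):
    """The objective (48): sum_j p*_j (|{j' : v_{j'} < v_j}| / N)^t."""
    v = np.asarray(v, dtype=float)
    N = len(v)
    ranks = np.array([(v < v[j]).sum() for j in range(N)])
    return float(np.dot(p_star, (ranks / N) ** t))

rng = np.random.default_rng(2)
N, t = 8, 3
p_star = rng.random(N); p_star /= p_star.sum()
H = permuted_moment(p_star, t)

# A maximizer: any vector with mutually distinct entries and the same ordering as p*.
# Here we simply take v = p* itself (its entries are distinct with probability one).
assert np.isclose(e(p_star, p_star, t), H)

# Random vectors never do better.
assert all(e(rng.normal(size=N), p_star, t) <= H + 1e-12 for _ in range(2000))
print("Lemma 7 verified numerically")
```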

Relaxing the symmetry constraint in the optimization problem (46)-(47) gives the following unconstrained problem over all NN-by-NN matrices:

Maximize 𝔈(K):=1Ni=0N1j=0N1Kij(|{j[N]:Kij<Kij}|N)t\displaystyle\text{Maximize }\quad\mathfrak{E}(K):=\frac{1}{N}\sum_{i=0}^{N-1}\sum_{j=0}^{N-1}K^{\star}_{ij}\left(\frac{|\{j^{\prime}\in[N]:K_{ij^{\prime}}<K_{ij}\}|}{N}\right)^{t} (53)
over all matrices KN×N\displaystyle\text{over all matrices }K\in{}^{N\times N} (54)

Let us denote by Ki,:K^{\star}_{i,:} the ithi^{th} row of the matrix KK^{\star} and remark that Ki,:K^{\star}_{i,:} is a probability vector (because K(𝐱,)K^{\star}({\bf x},\cdot) is a probability distribution on 𝒳{\mathcal{X}}, see Lemma 4). We then note that the above unconstrained problem decouples into NN separate optimization problems of the type (48)-(49) in which the probability vector 𝐩{\bf p}^{\star} must be replaced by the probability vector Ki,:K^{\star}_{i,:}. Using Lemma 7 we therefore have that any KN×NK\in{}^{N\times N} that satisfies, for each 0iN10\leq i\leq N-1,

  1. (a)

    The entries of Ki,:K_{i,:} are mutually distinct,

  2. (b)

    Ki,:K_{i,:} and Ki,:K^{\star}_{i,:} have the same ordering,

must be a solution of (53)-(54). Lemma 7 also gives:

supKN×N𝔈(K)=1Ni=0N1t(Ki,:).\sup_{K\in{}^{N\times N}}\mathfrak{E}(K)=\frac{1}{N}\sum_{i=0}^{N-1}{\mathcal{H}}_{t}(K^{\star}_{i,:}).

Now let εN×N\varepsilon\in{}^{N\times N} be a symmetric matrix that satisfies:

  • (i)

    εijεij\varepsilon_{ij}\neq\varepsilon_{ij^{\prime}} for all i,j,j[N]i,j,j^{\prime}\in[N] with jjj\neq j^{\prime},

  • (ii)

    0εij10\leq\varepsilon_{ij}\leq 1 for all i,j[N]i,j\in[N],

and define the following perturbation of the matrix KK^{\star}:

K=K+0.5scL|Φ|εK=K^{\star}+\frac{0.5}{s_{c}^{L}|\Phi|}\;\;\varepsilon (55)

Recalling definition (35) of the kernel KK^{\star}, it is clear that for each i,j[N]i,j\in[N], we have

Kij=scL|Φ| for some integer .K^{\star}_{ij}=\frac{\ell}{s_{c}^{L}|\Phi|}\qquad\text{ for some integer }\ell. (56)

As a consequence perturbing KK^{\star} by adding to its entries quantities smaller than 1/(scL|Φ|)1/(s_{c}^{L}|\Phi|) can not change the ordering of its rows. Therefore the kernel KK defined by (55) satisfies (b). It also satisfies (a). Indeed, if Kij=KijK^{\star}_{ij}=K^{\star}_{ij^{\prime}} and jjj\neq j^{\prime}, then we clearly have that KijKijK_{ij}\neq K_{ij^{\prime}} due to (i). On the other hand if KijKijK^{\star}_{ij}\neq K^{\star}_{ij^{\prime}}, then KijKijK_{ij}\neq K_{ij^{\prime}} due to (ii) and (56).

We have therefore constructed a symmetric matrix that is a solution of the optimization problem (53)-(54). As a consequence we have

supK𝒦𝔈(K)=supKN×N𝔈(K)=1Ni=0N1t(Ki,:)\sup_{K\in\mathcal{K}}\mathfrak{E}(K)=\sup_{K\in{}^{N\times N}}\mathfrak{E}(K)=\frac{1}{N}\sum_{i=0}^{N-1}{\mathcal{H}}_{t}(K^{\star}_{i,:})

where 𝒦\mathcal{K} should now be interpreted as the set of NN-by-NN symmetric matrices. The above equality proves Theorem 4, and we have also shown that the perturbed kernel (55) achieves the supremum.
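The perturbation argument only uses the fact that the entries of KK^{\star} lie on the grid of integer multiples of 1/(scL|Φ|)1/(s_{c}^{L}|\Phi|). The sketch below (ours, run on an arbitrary symmetric matrix with entries on such a grid rather than on the true KK^{\star}, with an illustrative value of the grid step) checks that the perturbed matrix (55) indeed satisfies properties (a) and (b).

```python
import numpy as np

rng = np.random.default_rng(3)
N = 10
delta = 1.0 / 24.0                          # plays the role of 1 / (s_c^L |Phi|)

# A symmetric matrix whose entries are integer multiples of delta, as K* is by (56).
B = rng.integers(0, 5, size=(N, N))
K_star = delta * (np.triu(B) + np.triu(B, 1).T)

# A symmetric perturbation with entries in [0, 1], distinct within each row (conditions (i)-(ii)).
E = rng.random((N, N)); E = (E + E.T) / 2
K = K_star + 0.5 * delta * E                # the perturbation (55)

for i in range(N):
    # (a) the entries of row i of K are mutually distinct
    assert len(np.unique(K[i])) == N
    # (b) rows i of K and K* have the same ordering: sorting K[i] also sorts K_star[i]
    order = np.argsort(K[i])
    assert np.all(np.diff(K_star[i][order]) >= 0)
print("the perturbed matrix satisfies (a) and (b)")
```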

B.2 Connection Between the Two Sampling Processes

In this subsection we show that the p.d.f. of Sampling Process SDM can be obtained by marginalizing the p.d.f. of Sampling Process DM over a subset of the variables. We also compute another marginal of ϱDM\varrho_{\rm DM} that will prove useful in the next subsection. Recall that

ϱDM(ω)=1|Φ||𝒞|2Rr=1R(𝟏φ̊1(𝐜r)(𝐱r, 1)|φ̊1(𝐜r)|s=2nspl𝟏φ̊1(𝐜r)(𝐱r,s)|φ̊1(𝐜r)|)𝟏φ̊1(𝐜1)(𝐱test)|φ̊1(𝐜1)|\varrho_{{\rm DM}}(\omega)=\frac{1}{|\Phi||\mathcal{C}|^{2R}}\prod^{R}_{r=1}\left(\frac{{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}^{\prime}_{r})}({\bf x}_{r,\,1})}{\left|\mathring{\varphi}^{-1}({\bf c}^{\prime}_{r})\right|}\prod^{n_{\text{spl}}}_{s=2}\frac{{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}_{r})}({\bf x}_{r,\,s})}{\left|\mathring{\varphi}^{-1}({\bf c}_{r})\right|}\right)\frac{{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}^{\prime}_{1})}({\bf x^{\text{test}}})}{\left|\mathring{\varphi}^{-1}({\bf c}^{\prime}_{1})\right|} (57)

on ΩDM:=Φ×𝒞2R×𝒳R×nspl+1\Omega_{\rm DM}:=\Phi\times\mathcal{C}^{2R}\times{\mathcal{X}}^{R\times n_{\text{spl}}+1} is the p.d.f. of the sampling process for our main data model. Samples from ΩDM\Omega_{\rm DM} take the form

ω=(φ;𝐜1,𝐜2,𝐜3,,𝐜R;𝐜1,𝐜2,𝐜3,,𝐜R;𝐱1,1,𝐱1,2,𝐱1,3,,𝐱1,nspl;;𝐱R,1,𝐱R,2,𝐱R,3,,𝐱R,nspl;𝐱test)\omega=(\varphi\;\;\;;\;\;\;{\bf c}_{1},{\bf c}_{2},{\bf c}_{3},\ldots,{\bf c}_{R}\;\;\;;\;\;\;{\bf c}^{\prime}_{1},{\bf c}_{2}^{\prime},{\bf c}_{3}^{\prime},\ldots,{\bf c}^{\prime}_{R}\;\;;\\ {\bf x}_{1,1},{\bf x}_{1,2},{\bf x}_{1,3},\ldots,{\bf x}_{1,n_{\text{spl}}}\;\;\;;\;\;\;\ldots\;\;\;;\;\;\;{\bf x}_{R,1},{\bf x}_{R,2},{\bf x}_{R,3},\ldots,{\bf x}_{R,n_{\text{spl}}}\;\;\;;\;\;\;{\bf x^{\text{test}}})

We separate these variables into two groups, ω=(ωa,ωb)\omega=(\omega_{a},\omega_{b}), where

ωa\displaystyle\omega_{a} =(φ;𝐜1,𝐜2,𝐜3,,𝐜R;𝐜1,𝐜2,𝐜3,,𝐜R;𝐱1,1,𝐱1,2;;𝐱R,1,𝐱R,2;𝐱test)\displaystyle=(\varphi\;\;\;;\;\;\;{\bf c}_{1},{\bf c}_{2},{\bf c}_{3},\ldots,{\bf c}_{R}\;\;\;;\;\;\;{\bf c}^{\prime}_{1},{\bf c}_{2}^{\prime},{\bf c}_{3}^{\prime},\ldots,{\bf c}^{\prime}_{R}\;\;;\;\;{\bf x}_{1,1},{\bf x}_{1,2}\;\;;\;\;\;\ldots\;\;;\;\;{\bf x}_{R,1},{\bf x}_{R,2}\;\;\;;\;\;\;{\bf x^{\text{test}}}) (58)
ωb\displaystyle\omega_{b} =(𝐱1,3,𝐱1,4,,𝐱1,nspl;;𝐱R,3,𝐱R,4,,𝐱R,nspl)\displaystyle=({\bf x}_{1,3},{\bf x}_{1,4},\ldots,{\bf x}_{1,n_{\text{spl}}}\;\;\;;\;\;\;\ldots\;\;;\;\;{\bf x}_{R,3},{\bf x}_{R,4},\ldots,{\bf x}_{R,n_{\text{spl}}}) (59)

The variable ωa\omega_{a} belongs to Ωa=Φ×𝒞2R×𝒳2R+1\Omega_{a}=\Phi\times\mathcal{C}^{2R}\times{\mathcal{X}}^{2R+1}, and the variable ωb\omega_{b} belongs to Ωb=𝒳R(nspl2).\Omega_{b}={\mathcal{X}}^{R(n_{\text{spl}}-2)}. Note that ωa\omega_{a} contains, among other variables, 2R2R sequences of concepts and 2R2R training points (the first and second training points of each category). Each of these 2R2R training points is generated by one of the 2R2R sequences of concepts. So the variables involved in ωa\omega_{a} are generated by a process similar to the one involved in the simpler data model. The following lemma shows that the p.d.f. of ωa\omega_{a}, after marginalizing over ωb\omega_{b}, is indeed ϱSDM\varrho_{SDM}.

Lemma 8.

For all ωaΩa\omega_{a}\in\Omega_{a} we have

ωbΩbϱDM(ωa,ωb)=1|Φ||𝒞|2R(r=1R𝟏φ̊1(𝐜r)(𝐱r, 1)|φ̊1(𝐜r)|𝟏φ̊1(𝐜r)(𝐱r, 2)|φ̊1(𝐜r)|)(𝟏φ̊1(𝐜1)(𝐱test)|φ̊1(𝐜1)|)\sum_{\omega_{b}\in\Omega_{b}}\varrho_{\rm DM}(\omega_{a},\omega_{b})=\frac{1}{|\Phi||\mathcal{C}|^{2R}}\left(\prod^{R}_{r=1}\frac{{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}^{\prime}_{r})}({\bf x}_{r,\,1})}{\left|\mathring{\varphi}^{-1}({\bf c}^{\prime}_{r})\right|}\frac{{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}_{r})}({\bf x}_{r,\,2})}{\left|\mathring{\varphi}^{-1}({\bf c}_{r})\right|}\right)\Bigg{(}\frac{{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}^{\prime}_{1})}({\bf x}^{\text{test}})}{\left|\mathring{\varphi}^{-1}({\bf c}^{\prime}_{1})\right|}\Bigg{)}

Recalling the definition (29) of ϱSDM\varrho_{\rm SDM}, and letting t+1=2Rt+1=2R, we see that the above lemma states that

ωbΩbϱDM(ωa,ωb)=ϱSDM(ωa)\sum_{\omega_{b}\in\Omega_{b}}\varrho_{\rm DM}(\omega_{a},\omega_{b})=\varrho_{\rm SDM}(\omega_{a}) (60)

and Ωa=ΩSDM\Omega_{a}=\Omega_{\rm SDM}.

Proof of Lemma 8.

We start by reorganizing the terms involved in the product defining ϱDM\varrho_{\rm DM} so that the variables in ωa\omega_{a} and ωb\omega_{b} are clearly separated:

ϱDM(ω)=1|Φ||𝒞|2R(r=1R𝟏φ̊1(𝐜r)(𝐱r, 1)|φ̊1(𝐜r)|𝟏φ̊1(𝐜r)(𝐱r, 2)|φ̊1(𝐜r)|)(𝟏φ̊1(𝐜1)(𝐱test)|φ̊1(𝐜1)|)(r=1Rs=3nspl𝟏φ̊1(𝐜r)(𝐱r,s)|φ̊1(𝐜r)|)\varrho_{\rm DM}(\omega)=\\ \frac{1}{|\Phi||\mathcal{C}|^{2R}}\left(\prod^{R}_{r=1}\frac{{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}^{\prime}_{r})}({\bf x}_{r,\,1})}{\left|\mathring{\varphi}^{-1}({\bf c}^{\prime}_{r})\right|}\frac{{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}_{r})}({\bf x}_{r,\,2})}{\left|\mathring{\varphi}^{-1}({\bf c}_{r})\right|}\right)\Bigg{(}\frac{{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}^{\prime}_{1})}({\bf x}^{\text{test}})}{\left|\mathring{\varphi}^{-1}({\bf c}^{\prime}_{1})\right|}\Bigg{)}\left(\prod^{R}_{r=1}\prod^{n_{\text{spl}}}_{s=3}\frac{{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}_{r})}({\bf x}_{r,\,s})}{\left|\mathring{\varphi}^{-1}({\bf c}_{r})\right|}\right)

To demonstrate the process, let us start by summing the above formula over the first variable of ωb\omega_{b}, namely 𝐱1,3{\bf x}_{1,3}. Since this variable only occurs in the last term of the above product, we have:

𝐱1,3𝒳ϱDM(ω)=1|Φ||𝒞|2R(r=1R𝟏φ̊1(𝐜r)(𝐱r, 1)|φ̊1(𝐜r)|𝟏φ̊1(𝐜r)(𝐱r, 2)|φ̊1(𝐜r)|)(𝟏φ̊1(𝐜1)(𝐱test)|φ̊1(𝐜1)|)(1rR3snspl(r,s)(1,3)𝟏φ̊1(𝐜r)(𝐱r,s)|φ̊1(𝐜r)|)(𝐱1,3𝒳𝟏φ̊1(𝐜1)(𝐱1,3)|φ̊1(𝐜1)|)\sum_{{\bf x}_{1,3}\in{\mathcal{X}}}\varrho_{\rm DM}(\omega)=\frac{1}{|\Phi||\mathcal{C}|^{2R}}\left(\prod^{R}_{r=1}\frac{{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}^{\prime}_{r})}({\bf x}_{r,\,1})}{\left|\mathring{\varphi}^{-1}({\bf c}^{\prime}_{r})\right|}\frac{{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}_{r})}({\bf x}_{r,\,2})}{\left|\mathring{\varphi}^{-1}({\bf c}_{r})\right|}\right)\Bigg{(}\frac{{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}^{\prime}_{1})}({\bf x}^{\text{test}})}{\left|\mathring{\varphi}^{-1}({\bf c}^{\prime}_{1})\right|}\Bigg{)}\\ \left(\prod_{\begin{subarray}{c}1\leq r\leq R\\ 3\leq s\leq n_{\text{spl}}\\ (r,s)\neq(1,3)\end{subarray}}\frac{{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}_{r})}({\bf x}_{r,\,s})}{\left|\mathring{\varphi}^{-1}({\bf c}_{r})\right|}\right)\left(\sum_{{\bf x}_{1,3}\in{\mathcal{X}}}\frac{{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}_{1})}({\bf x}_{1,3})}{\left|\mathring{\varphi}^{-1}({\bf c}_{1})\right|}\right)

Since 𝐱𝒳𝟏φ̊1(𝐜1)(𝐱)=|φ̊1(𝐜1)|\sum_{{\bf x}\in{\mathcal{X}}}{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}_{1})}({\bf x})=|\mathring{\varphi}^{-1}({\bf c}_{1})|, the last term of the above product is equal to 11 and can therefore be omitted. Repeating this process for all the 𝐱r,s{\bf x}_{r,s} that constitute ωb\omega_{b} leads to the desired result. ∎

In the next subsection we will need the marginal of ϱDM\varrho_{\rm DM} with respect to another set of variables. To this end we write ω=(ωc,ωd)\omega=(\omega_{c},\omega_{d}) where

\displaystyle\omega_{c}=(\varphi\;\;\;;\;\;\;{\bf x}_{1,2},{\bf x}_{1,3},\ldots,{\bf x}_{1,n_{\text{spl}}}\;\;\;;\;\;\;\ldots\;\;\;;\;\;\;{\bf x}_{R,2},{\bf x}_{R,3},\ldots,{\bf x}_{R,n_{\text{spl}}}\;\;\;;\;\;\;{\bf x^{\text{test}}}) (61)
ωd=(𝐜1,,𝐜R;𝐜1,,𝐜R;𝐱1,1;;𝐱R,1)\displaystyle\omega_{d}=({\bf c}_{1},\ldots,{\bf c}_{R}\;\;;\;\;{\bf c}^{\prime}_{1},\ldots,{\bf c}^{\prime}_{R}\;\;;\;\;{\bf x}_{1,1}\;\;;\;\;\ldots\;\;;\;\;{\bf x}_{R,1}) (62)

Note that all the unfamiliar training points are contained in ωd\omega_{d}. The test point and the familiar training points are in ωc\omega_{c}. We also let Ωc=Φ×𝒳R(nspl1)+1\Omega_{c}=\Phi\times{\mathcal{X}}^{R(n_{\text{spl}}-1)+1} and Ωd=𝒞2R×𝒳R\Omega_{d}=\mathcal{C}^{2R}\times{\mathcal{X}}^{R}.

Lemma 9.

For all ωcΩc\omega_{c}\in\Omega_{c} we have

ωdΩdϱDM(ωc,ωd)=1|Φ||𝒳|R+1scLR(nspl2)r=1R𝟏{φ̊(𝐱r,2)=φ̊(𝐱r,3)==φ̊(𝐱r,nspl)}\sum_{\omega_{d}\in\Omega_{d}}\varrho_{\rm DM}(\omega_{c},\omega_{d})=\frac{1}{|\Phi||{\mathcal{X}}|^{R+1}s_{c}^{LR(n_{\text{spl}}-2)}}\prod^{R}_{r=1}{\bf 1}_{\{\mathring{\varphi}({\bf x}_{r,2})=\mathring{\varphi}({\bf x}_{r,3})=\ldots=\mathring{\varphi}({\bf x}_{r,n_{\text{spl}}})\}}
Proof.

We reorganize the terms involved in the product defining ϱDM\varrho_{\rm DM} so that the variables in ωc\omega_{c} and ωd\omega_{d} are separated:

ϱDM(ω)\displaystyle\varrho_{\rm DM}(\omega) =1|Φ||𝒞|2R(r=1Rs=2nspl𝟏φ̊1(𝐜r)(𝐱r,s)|φ̊1(𝐜r)|)(𝟏φ̊1(𝐜1)(𝐱test)|φ̊1(𝐜1)|)(r=1R𝟏φ̊1(𝐜r)(𝐱r, 1)|φ̊1(𝐜r)|)\displaystyle=\frac{1}{|\Phi||\mathcal{C}|^{2R}}\left(\prod^{R}_{r=1}\prod^{n_{\text{spl}}}_{s=2}\frac{{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}_{r})}({\bf x}_{r,\,s})}{\left|\mathring{\varphi}^{-1}({\bf c}_{r})\right|}\right)\Bigg{(}\frac{{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}^{\prime}_{1})}({\bf x}^{\text{test}})}{\left|\mathring{\varphi}^{-1}({\bf c}^{\prime}_{1})\right|}\Bigg{)}\left(\prod^{R}_{r=1}\frac{{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}^{\prime}_{r})}({\bf x}_{r,\,1})}{\left|\mathring{\varphi}^{-1}({\bf c}^{\prime}_{r})\right|}\right)

Summing the above formula over the last variable involved in ωd\omega_{d}, namely 𝐱R,1{\bf x}_{R,1}, gives

𝐱R,1𝒳ϱDM(ω)=1|Φ||𝒞|2R(r=1Rs=2nspl𝟏φ̊1(𝐜r)(𝐱r,s)|φ̊1(𝐜r)|)(𝟏φ̊1(𝐜1)(𝐱test)|φ̊1(𝐜1)|)(r=1R1𝟏φ̊1(𝐜r)(𝐱r, 1)|φ̊1(𝐜r)|)(𝐱R,1𝒳𝟏φ̊1(𝐜R)(𝐱R,1)|φ̊1(𝐜R)|)\sum_{{\bf x}_{R,1}\in{\mathcal{X}}}\varrho_{\rm DM}(\omega)=\frac{1}{|\Phi||\mathcal{C}|^{2R}}\left(\prod^{R}_{r=1}\prod^{n_{\text{spl}}}_{s=2}\frac{{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}_{r})}({\bf x}_{r,\,s})}{\left|\mathring{\varphi}^{-1}({\bf c}_{r})\right|}\right)\Bigg{(}\frac{{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}^{\prime}_{1})}({\bf x}^{\text{test}})}{\left|\mathring{\varphi}^{-1}({\bf c}^{\prime}_{1})\right|}\Bigg{)}\\ \left(\prod^{R-1}_{r=1}\frac{{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}^{\prime}_{r})}({\bf x}_{r,\,1})}{\left|\mathring{\varphi}^{-1}({\bf c}^{\prime}_{r})\right|}\right)\Bigg{(}\sum_{{\bf x}_{R,1}\in{\mathcal{X}}}\frac{{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}^{\prime}_{R})}({\bf x}_{R,1})}{\left|\mathring{\varphi}^{-1}({\bf c}^{\prime}_{R})\right|}\Bigg{)}

The last term in the above product is equal to 11 and can therefore be omitted. Iterating this process gives

𝐱1,1𝒳𝐱R,1𝒳ϱDM(ω)=1|Φ||𝒞|2R(r=1Rs=2nspl𝟏φ̊1(𝐜r)(𝐱r,s)|φ̊1(𝐜r)|)(𝟏φ̊1(𝐜1)(𝐱test)|φ̊1(𝐜1)|)\displaystyle\sum_{{\bf x}_{1,1}\in{\mathcal{X}}}\cdots\sum_{{\bf x}_{R,1}\in{\mathcal{X}}}\varrho_{\rm DM}(\omega)=\frac{1}{|\Phi||\mathcal{C}|^{2R}}\left(\prod^{R}_{r=1}\prod^{n_{\text{spl}}}_{s=2}\frac{{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}_{r})}({\bf x}_{r,\,s})}{\left|\mathring{\varphi}^{-1}({\bf c}_{r})\right|}\right)\Bigg{(}\frac{{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}^{\prime}_{1})}({\bf x}^{\text{test}})}{\left|\mathring{\varphi}^{-1}({\bf c}^{\prime}_{1})\right|}\Bigg{)}

We then use the fact that c1𝒞𝟏φ̊1(𝐜1)(𝐱test)=1,\sum_{c_{1}^{\prime}\in\mathcal{C}}{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}^{\prime}_{1})}({\bf x^{\text{test}}})=1, see (38), together with |φ̊1(𝐜1)|=scL\left|\mathring{\varphi}^{-1}({\bf c}_{1}^{\prime})\right|=s_{c}^{L}, see (36), to obtain

c1𝒞𝐱1,1𝒳𝐱R,1𝒳ϱDM(ω)=1|Φ||𝒞|2RscL(r=1Rs=2nspl𝟏φ̊1(𝐜r)(𝐱r,s)|φ̊1(𝐜r)|)\displaystyle\sum_{c_{1}^{\prime}\in\mathcal{C}}\sum_{{\bf x}_{1,1}\in{\mathcal{X}}}\cdots\sum_{{\bf x}_{R,1}\in{\mathcal{X}}}\varrho_{\rm DM}(\omega)=\frac{1}{|\Phi||\mathcal{C}|^{2R}s_{c}^{L}}\left(\prod^{R}_{r=1}\prod^{n_{\text{spl}}}_{s=2}\frac{{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}_{r})}({\bf x}_{r,\,s})}{\left|\mathring{\varphi}^{-1}({\bf c}_{r})\right|}\right)

We then sum over 𝐜2,,𝐜R{\bf c}^{\prime}_{2},\ldots,{\bf c}^{\prime}_{R}. Since these variables are not involved in the above formula we get

c1𝒞cR𝒞𝐱1,1𝒳𝐱R,1𝒳ϱDM(ω)\displaystyle\sum_{c_{1}^{\prime}\in\mathcal{C}}\cdots\sum_{c_{R}^{\prime}\in\mathcal{C}}\sum_{{\bf x}_{1,1}\in{\mathcal{X}}}\cdots\sum_{{\bf x}_{R,1}\in{\mathcal{X}}}\varrho_{\rm DM}(\omega) =1|Φ||𝒞|R+1scL(r=1Rs=2nspl𝟏φ̊1(𝐜r)(𝐱r,s)|φ̊1(𝐜r)|)\displaystyle=\frac{1}{|\Phi||\mathcal{C}|^{R+1}s_{c}^{L}}\left(\prod^{R}_{r=1}\prod^{n_{\text{spl}}}_{s=2}\frac{{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}_{r})}({\bf x}_{r,\,s})}{\left|\mathring{\varphi}^{-1}({\bf c}_{r})\right|}\right)
=1|Φ||𝒞|R|𝒳|(r=1Rs=2nspl𝟏φ̊1(𝐜r)(𝐱r,s)|φ̊1(𝐜r)|)\displaystyle=\frac{1}{|\Phi||\mathcal{C}|^{R}|{\mathcal{X}}|}\left(\prod^{R}_{r=1}\prod^{n_{\text{spl}}}_{s=2}\frac{{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}_{r})}({\bf x}_{r,\,s})}{\left|\mathring{\varphi}^{-1}({\bf c}_{r})\right|}\right)

where we have used |𝒞|scL=ncLscL=|𝒳||\mathcal{C}|s_{c}^{L}=n_{c}^{L}s_{c}^{L}=|{\mathcal{X}}| to obtain the last equality. Summing over 𝐜1{\bf c}_{1} gives

𝐜1𝒞c1𝒞\displaystyle\sum_{{\bf c}_{1}\in\mathcal{C}}\sum_{c_{1}^{\prime}\in\mathcal{C}} cR𝒞𝐱1,1𝐱R,1𝒳ϱDM(ω)=1|Φ||𝒞|R|𝒳|𝐜1𝒞(r=1Rs=2nspl𝟏φ̊1(𝐜r)(𝐱r,s)|φ̊1(𝐜r)|)\displaystyle\cdots\sum_{c_{R}^{\prime}\in\mathcal{C}}\sum_{{\bf x}_{1,1}}\cdots\sum_{{\bf x}_{R,1}\in{\mathcal{X}}}\varrho_{\rm DM}(\omega)=\frac{1}{|\Phi||\mathcal{C}|^{R}|{\mathcal{X}}|}\sum_{{\bf c}_{1}\in\mathcal{C}}\left(\prod^{R}_{r=1}\prod^{n_{\text{spl}}}_{s=2}\frac{{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}_{r})}({\bf x}_{r,\,s})}{\left|\mathring{\varphi}^{-1}({\bf c}_{r})\right|}\right)
=1|Φ||𝒞|R|𝒳|(r=2Rs=2nspl𝟏φ̊1(𝐜r)(𝐱r,s)|φ̊1(𝐜r)|)(𝐜1𝒞s=2nspl𝟏φ̊1(𝐜1)(𝐱1,s)|φ̊1(𝐜1)|)\displaystyle=\frac{1}{|\Phi||\mathcal{C}|^{R}|{\mathcal{X}}|}\left(\prod^{R}_{r=2}\prod^{n_{\text{spl}}}_{s=2}\frac{{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}_{r})}({\bf x}_{r,\,s})}{\left|\mathring{\varphi}^{-1}({\bf c}_{r})\right|}\right)\left(\sum_{{\bf c}_{1}\in\mathcal{C}}\prod^{n_{\text{spl}}}_{s=2}\frac{{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}_{1})}({\bf x}_{1,\,s})}{\left|\mathring{\varphi}^{-1}({\bf c}_{1})\right|}\right)
=1|Φ||𝒞|R|𝒳|(r=2Rs=2nspl𝟏φ̊1(𝐜r)(𝐱r,s)|φ̊1(𝐜r)|)(𝟏{φ̊(𝐱1,2)=φ̊(𝐱1,3)==φ̊(𝐱1,nspl)}|φ̊1(𝐜1)|nspl1)\displaystyle=\frac{1}{|\Phi||\mathcal{C}|^{R}|{\mathcal{X}}|}\left(\prod^{R}_{r=2}\prod^{n_{\text{spl}}}_{s=2}\frac{{\bf 1}_{\mathring{\varphi}^{-1}({\bf c}_{r})}({\bf x}_{r,\,s})}{\left|\mathring{\varphi}^{-1}({\bf c}_{r})\right|}\right)\left(\frac{{\bf 1}_{\{\mathring{\varphi}({\bf x}_{1,2})=\mathring{\varphi}({\bf x}_{1,3})=\ldots=\mathring{\varphi}({\bf x}_{1,n_{\text{spl}}})\}}}{\left|\mathring{\varphi}^{-1}({\bf c}_{1})\right|^{n_{\text{spl}}-1}}\right)

To obtain the last equality we have used (39) but for a product of nspl1n_{\text{spl}}-1 indicator functions instead of just two. Iterating this process we obtain

\displaystyle\sum_{{\bf c}_{1}\in\mathcal{C}}\cdots\sum_{{\bf c}_{R}\in\mathcal{C}}\sum_{c_{1}^{\prime}\in\mathcal{C}}\cdots\sum_{c_{R}^{\prime}\in\mathcal{C}}\sum_{{\bf x}_{1,1}\in{\mathcal{X}}}\cdots\sum_{{\bf x}_{R,1}\in{\mathcal{X}}}\varrho_{\rm DM}(\omega)=\frac{1}{|\Phi||\mathcal{C}|^{R}|{\mathcal{X}}|}\prod^{R}_{r=1}\frac{{\bf 1}_{\{\mathring{\varphi}({\bf x}_{r,2})=\mathring{\varphi}({\bf x}_{r,3})=\ldots=\mathring{\varphi}({\bf x}_{r,n_{\text{spl}}})\}}}{\left|\mathring{\varphi}^{-1}({\bf c}_{r})\right|^{n_{\text{spl}}-1}}

Using once more that |φ̊1(𝐜r)|=scL\left|\mathring{\varphi}^{-1}({\bf c}_{r})\right|=s_{c}^{L} and |𝒞|scL=|𝒳||\mathcal{C}|s_{c}^{L}=|{\mathcal{X}}| gives the desired result. ∎

B.3 Conclusion of Proof

We now establish the desired upper bound (28), which we restate below for convenience:

supK𝒦DM[EK]1|𝒳|𝐱𝒳2R1(K(𝐱,))+1R\sup_{K\in\mathcal{K}}\mathbb{P}_{{\rm DM}}\Big{[}E_{K}\Big{]}\leq\frac{1}{|{\mathcal{X}}|}\sum_{{\bf x}\in{\mathcal{X}}}\mathcal{H}_{2R-1}\left(K^{\star}({\bf x},\,\cdot)\right)+\frac{1}{R} (63)

where

EK={ωΩDM:There exists 1snspl such that K(𝐱test,𝐱1,s)>K(𝐱test,𝐱r,s) for all 2rR and all 1snspl}E_{K}=\Big{\{}\omega\in\Omega_{\rm DM}:\;\;\text{There exists }1\leq s^{*}\leq n_{\text{spl}}\text{ such that }\\ K\left({\bf x}^{\text{test}},{\bf x}_{1,s^{*}}\right)>K\left({\bf x}^{\text{test}},{\bf x}_{r,s}\right)\text{ for all }2\leq r\leq R\text{ and all }1\leq s\leq n_{\text{spl}}\Big{\}} (64)
Figure 4: The test point 𝐱test{\bf x}^{\text{test}} and the train point 𝐱1,1{\bf x}_{1,1} are generated by the same sequence of concepts.

We recall that the test point 𝐱test{\bf x}^{\text{test}} is generated by the unfamiliar sequence of concepts 𝐜1{\bf c}_{1}^{\prime} and that it belongs to category 1, see Figure 4. The event EKE_{K} describes all the outcomes in which the training point most similar to 𝐱test{\bf x}^{\text{test}} (where similarity is measured with respect to the kernel KK) belongs to the first category. There are two very distinct cases within the event EKE_{K}: the training point most similar to 𝐱test{\bf x}^{\text{test}} can be 𝐱1,1{\bf x}_{1,1}, which corresponds to a ‘meaningful success’ in which the learner recognizes that 𝐱1,1{\bf x}_{1,1} is generated by the same sequence of concepts as 𝐱test{\bf x}^{\text{test}}, see Figure 4. Or the training point most similar to 𝐱test{\bf x}^{\text{test}} can be one of the points 𝐱1,2,,𝐱1,nspl{\bf x}_{1,2},\ldots,{\bf x}_{1,n_{\text{spl}}}, which corresponds to a ‘lucky success’ because 𝐱1,2,,𝐱1,nspl{\bf x}_{1,2},\ldots,{\bf x}_{1,n_{\text{spl}}} are not related to 𝐱test{\bf x}^{\text{test}} (they are generated by a different sequence of concepts, see Figure 4). To make this discussion formal, we fix a kernel K𝒦K\in\mathcal{K}, and we partition the event EKE_{K} as follows

EK=EmeaningfulEluckE_{K}=E_{\text{meaningful}}\cup E_{\text{luck}} (65)

where

Emeaningful=EK{ωΩDM:K(𝐱test,𝐱1,1)>K(𝐱test,𝐱1,s) for all 2snspl}\displaystyle E_{\text{meaningful}}=E_{K}\;\cap\;\Big{\{}\omega\in\Omega_{\rm DM}:\;\;K\left({\bf x}^{\text{test}},{\bf x}_{1,1}\right)>K\left({\bf x}^{\text{test}},{\bf x}_{1,s}\right)\text{ for all }2\leq s\leq n_{\text{spl}}\Big{\}}
Eluck=EK{ωΩDM:K(𝐱test,𝐱1,1)K(𝐱test,𝐱1,s) for some 2snspl}\displaystyle E_{\text{luck}}=E_{K}\;\cap\;\Big{\{}\omega\in\Omega_{\rm DM}:\;\;K\left({\bf x}^{\text{test}},{\bf x}_{1,1}\right)\leq K\left({\bf x}^{\text{test}},{\bf x}_{1,s}\right)\text{ for some }2\leq s\leq n_{\text{spl}}\Big{\}}

The next two lemmas provide upper bounds for the probability of the events EmeaningfulE_{\text{meaningful}} and Eluck.E_{\text{luck}}.

Lemma 10.

DM[Emeaningful]1|𝒳|𝐱𝒳2R1(K𝐱).\displaystyle\mathbb{P}_{\rm DM}[E_{\text{meaningful}}]\leq\frac{1}{|{\mathcal{X}}|}\sum_{{\bf x}\in{\mathcal{X}}}\mathcal{H}_{2R-1}(K^{\star}_{\bf x}).

Proof.

Define the event

A:={ωΩDM:K(𝐱test,𝐱1,1)>K(𝐱test,𝐱r,1) for all 2rR}{ωΩDM:K(𝐱test,𝐱1,1)>K(𝐱test,𝐱r,2) for all 1rR}.A:=\Big{\{}\omega\in\Omega_{\rm DM}:\;\;K\left({\bf x}^{\text{test}},{\bf x}_{1,1}\right)>K\left({\bf x}^{\text{test}},{\bf x}_{r,1}\right)\text{ for all }2\leq r\leq R\Big{\}}\\ \cap\Big{\{}\omega\in\Omega_{\rm DM}:\;\;K\left({\bf x}^{\text{test}},{\bf x}_{1,1}\right)>K\left({\bf x}^{\text{test}},{\bf x}_{r,2}\right)\text{ for all }1\leq r\leq R\Big{\}}.

This event involves only the first two training points of each category. In the example depicted in Figure 4, these would be the points 𝐱1,1{\bf x}_{1,1} and 𝐱1,2{\bf x}_{1,2}, the points 𝐱2,1{\bf x}_{2,1} and 𝐱2,2{\bf x}_{2,2}, and finally the points 𝐱3,1{\bf x}_{3,1} and 𝐱3,2{\bf x}_{3,2}. The event AA consists of all the outcomes in which, among these 2R2R training points, 𝐱1,1{\bf x}_{1,1} is most similar to 𝐱test{\bf x}^{\text{test}}. We then make two key remarks. First, these 2R2R points are generated by 2R2R distinct sequences of concepts — so if we restrict our attention to these 2R2R points, we are in a situation very similar to the simpler data model ϱSDM\varrho_{SDM} (i.e. we first generate 2R2R sequences of concepts, then from each sequence of concepts we generate a single training point, and finally we generate a test point from the first sequence of concepts.) We will make this intuition precise by appealing to the fact that ϱSDM\varrho_{SDM} is the marginal of ϱDM,\varrho_{DM}, and this will allow us to obtain a bound for DM[A]\mathbb{P}_{\rm DM}[A] in terms of the permuted moment of K.K^{\star}. The second remark is that EmeaningfulE_{\text{meaningful}} is clearly contained in AA, and therefore we have

DM[Emeaningful]DM[A]\mathbb{P}_{\rm DM}[E_{\text{meaningful}}]\leq\mathbb{P}_{\rm DM}[A] (66)

so an upper bound for DM[A]\mathbb{P}_{\rm DM}[A] is also an upper bound for DM[Emeaningful]\mathbb{P}_{\rm DM}[E_{\text{meaningful}}].

Let us rename some of the variables. We define 𝐝1,,𝐝2R{\bf d}_{1},\ldots,{\bf d}_{2R}, and 𝐲1,,𝐲2R{\bf y}_{1},\ldots,{\bf y}_{2R} as follows:

𝐝2r1=𝐜r and 𝐝2r=𝐜r for r=1,,R\displaystyle{\bf d}_{2r-1}={\bf c}^{\prime}_{r}\quad\;\;\,\text{ and }\quad{\bf d}_{2r}={\bf c}_{r}\qquad\;\;\text{ for }r=1,\ldots,R
𝐲2r1=𝐱r,1 and 𝐲2r=𝐱r,2 for r=1,,R\displaystyle{\bf y}_{2r-1}={\bf x}_{r,1}\quad\text{ and }\quad{\bf y}_{2r}={\bf x}_{r,2}\qquad\text{ for }r=1,\ldots,R

In the example depicted in Figure 4, this would be:

\begin{matrix}{\bf y}_{1}={\bf x}_{1,1},&\;\;{\bf y}_{2}={\bf x}_{1,2},&\;\;{\bf y}_{3}={\bf x}_{2,1},&\;\;{\bf y}_{4}={\bf x}_{2,2},&\;\;{\bf y}_{5}={\bf x}_{3,1},&\;\;{\bf y}_{6}={\bf x}_{3,2}\\ {\bf d}_{1}={\bf c}^{\prime}_{1},&\;\;{\bf d}_{2}={\bf c}_{1},&\;\;{\bf d}_{3}={\bf c}^{\prime}_{2},&\;\;{\bf d}_{4}={\bf c}_{2},&\;\;{\bf d}_{5}={\bf c}^{\prime}_{3},&\;\;{\bf d}_{6}={\bf c}_{3}\end{matrix}

In other words, the 𝐲r{\bf y}_{r}’s are the first two training points of each category and the 𝐝r{\bf d}_{r}’s are their corresponding sequences of concepts. With these notations it is clear that the training points 𝐲r{\bf y}_{r} are generated by distinct sequences of concepts, and that the test point 𝐱test{\bf x}^{\text{test}} is generated by the same sequence of concepts as 𝐲1{\bf y}_{1}. Moreover the event AA can now be conveniently written as

A={ωΩDM:K(𝐱test,𝐲1)>K(𝐱test,𝐲r) for all 2r2R}.A=\{\omega\in\Omega_{\rm DM}:\;\;K\left({\bf x}^{\text{test}},{\bf y}_{1}\right)>K\left({\bf x}^{\text{test}},{\bf y}_{r}\right)\text{ for all }2\leq r\leq 2R\}.

Let h:𝒳2R+1h:{\mathcal{X}}^{2R+1}\to\real be the indicator function defined by

h({\bf y}_{1},\ldots,{\bf y}_{2R},{\bf x^{\text{test}}})=\begin{cases}1&\text{ if }K\left({\bf x}^{\text{test}},{\bf y}_{1}\right)>K\left({\bf x}^{\text{test}},{\bf y}_{r}\right)\text{ for all }2\leq r\leq 2R\\ 0&\text{otherwise}\end{cases}

We now recall the splitting ω=(ωa,ωb)\omega=(\omega_{a},\omega_{b}) described in (58)-(59) and note that ωa\omega_{a} can be written as

ωa=(φ;𝐝1,,𝐝2R;𝐲1,,𝐲2R;𝐱test)\omega_{a}=(\varphi\;\;;\;\;{\bf d}_{1},\ldots,{\bf d}_{2R}\;\;;\;\;{\bf y}_{1},\ldots,{\bf y}_{2R}\;\;;\;\;{\bf x^{\text{test}}})

Since hh only depends on the variables involved in ωa\omega_{a}, and since, according to Lemma 8, ωbϱDM(ωa,ωb)=ϱSDM(ωa)\sum_{\omega_{b}}\varrho_{\rm DM}(\omega_{a},\omega_{b})=\varrho_{\rm SDM}(\omega_{a}), we obtain

DM[A]\displaystyle\mathbb{P}_{\rm DM}[A] =ωaΩaωbΩbh(𝐲1,,𝐲2R,𝐱test)ϱDM(ωa,ωb)\displaystyle=\sum_{\omega_{a}\in\Omega_{a}}\sum_{\omega_{b}\in\Omega_{b}}h({\bf y}_{1},\ldots,{\bf y}_{2R},{\bf x^{\text{test}}})\;\varrho_{\rm DM}(\omega_{a},\omega_{b})
=ωaΩah(𝐲1,,𝐲2R,𝐱test)ϱSDM(ωa)\displaystyle=\sum_{\omega_{a}\in\Omega_{a}}h({\bf y}_{1},\ldots,{\bf y}_{2R},{\bf x^{\text{test}}})\;\varrho_{\rm SDM}(\omega_{a})
=SDM[ωaΩa:K(𝐱test,𝐲1)>K(𝐱test,𝐲r) for all 2r2R]\displaystyle=\mathbb{P}_{\rm SDM}[\omega_{a}\in\Omega_{a}:\;\;K\left({\bf x}^{\text{test}},{\bf y}_{1}\right)>K\left({\bf x}^{\text{test}},{\bf y}_{r}\right)\text{ for all }2\leq r\leq 2R]
1|𝒳|𝐱𝒳2R1(K𝐱)\displaystyle\leq\frac{1}{|{\mathcal{X}}|}\sum_{{\bf x}\in{\mathcal{X}}}\mathcal{H}_{2R-1}(K^{\star}_{\bf x})

where we have used Theorem 4 in order to get the last inequality (with the understanding that t+1=2Rt+1=2R.) Combining the above bound with (66) concludes the proof. ∎

We now estimate the probability of the event Eluck.E_{\text{luck}}.

Lemma 11.

DM[Eluck]1R.\displaystyle\mathbb{P}_{\rm DM}[E_{\text{luck}}]\leq\frac{1}{R}.

Proof.

For 1rR1\leq r\leq R, we define the events

Br=1rRrr{ωΩDM:max2snsplK(𝐱test,𝐱r,s)>max2snsplK(𝐱test,𝐱r,s)}\displaystyle B_{r}=\bigcap_{\begin{subarray}{c}1\leq r^{\prime}\leq R\\ r^{\prime}\neq r\end{subarray}}\left\{\omega\in\Omega_{\rm DM}:\;\;\max_{2\leq s\leq n_{\text{spl}}}K({\bf x^{\text{test}}},{\bf x}_{r,s})>\max_{2\leq s^{\prime}\leq n_{\text{spl}}}K({\bf x^{\text{test}}},{\bf x}_{r^{\prime},s^{\prime}})\right\}

Note that the events BrB_{r} involve only the training points with an index s2s\geq 2: these are the familiar training points. In the example depicted in Figure 4, these are the training points generated by 𝐜1,𝐜2{\bf c}_{1},{\bf c}_{2} and 𝐜3{\bf c}_{3}. Let us pursue with this example. The event B1B_{1} consists of all the outcomes in which one of the points generated by 𝐜1{\bf c}_{1} is more similar to 𝐱test{\bf x}^{\text{test}} than any of the points generated by 𝐜2{\bf c}_{2} and 𝐜3{\bf c}_{3}. The event B2B_{2} consists of all the outcomes in which one of the points generated by 𝐜2{\bf c}_{2} is more similar to 𝐱test{\bf x}^{\text{test}} than any of the points generated by 𝐜1{\bf c}_{1} and 𝐜3{\bf c}_{3}. And finally the event B3B_{3} consists of all the outcomes in which one of the points generated by 𝐜3{\bf c}_{3} is more similar to 𝐱test{\bf x}^{\text{test}} than any of the points generated by 𝐜1{\bf c}_{1} and 𝐜2{\bf c}_{2}. Importantly, the test point 𝐱test{\bf x}^{\text{test}} is generated by the unfamiliar sequence of concepts 𝐜1{\bf c}_{1}^{\prime}, and this sequence of concepts is unrelated to the sequences 𝐜1,𝐜2{\bf c}_{1},{\bf c}_{2} and 𝐜3{\bf c}_{3}. So one would expect, by simple symmetry, that

DM[B1]=DM[B2]=DM[B3].\mathbb{P}_{\rm DM}[B_{1}]=\mathbb{P}_{\rm DM}[B_{2}]=\mathbb{P}_{\rm DM}[B_{3}]. (67)

We will prove (67) rigorously, for general RR, using Lemma 9 from the previous subsection. But before doing so, let us show that (67) implies the desired upper bound on the probability of Eluck.E_{\text{luck}}. First, note that EluckB1E_{\text{luck}}\subset B_{1} and therefore

DM[Eluck]DM[B1].\mathbb{P}_{\rm DM}[E_{\text{luck}}]\leq\mathbb{P}_{\rm DM}[B_{1}]. (68)

Then, note that B1B_{1}, B2B_{2} and B3B_{3} are mutually disjoint, and therefore, continuing with the same example,

DM[B1]+DM[B2]+DM[B3]=DM[B1B2B3]1\mathbb{P}_{\rm DM}[B_{1}]+\mathbb{P}_{\rm DM}[B_{2}]+\mathbb{P}_{\rm DM}[B_{3}]=\mathbb{P}_{\rm DM}[B_{1}\cup B_{2}\cup B_{3}]\leq 1

which, combined with (67) and (68), gives DM[Eluck]1/3\mathbb{P}_{\rm DM}[E_{\text{luck}}]\leq 1/3 as desired.

We now provide a formal proof of (67). As in the proof of the previous lemma, it is convenient to rename some of the variables. Let us denote by 𝐟𝐚𝐦r{\bf fam}_{r} the variable consisting of the familiar training points from category rr:

𝐟𝐚𝐦r=(𝐱r,2,,𝐱r,nspl)𝒳nspl1{\bf fam}_{r}=({\bf x}_{r,2},\ldots,{\bf x}_{r,n_{\text{spl}}})\in{\mathcal{X}}^{n_{\text{spl}}-1}

With this notation we have 𝐟𝐚𝐦r,s=𝐱r,s+1.{\bf fam}_{r,s}={\bf x}_{r,s+1}. We now recall the splitting ω=(ωc,ωd)\omega=(\omega_{c},\omega_{d}) described in (61)-(62), and note that ωc\omega_{c} can be written as

ωc=(φ;𝐟𝐚𝐦1;;𝐟𝐚𝐦R;𝐱test).\omega_{c}=(\varphi;\;\;{\bf fam}_{1}\;\;;\;\;\ldots\;\;;\;\;{\bf fam}_{R}\;\;;\;\;{\bf x^{\text{test}}}). (69)

Using Lemma 9 we have

ωdΩdϱDM(ωc,ωd)\displaystyle\sum_{\omega_{d}\in\Omega_{d}}\varrho_{\rm DM}(\omega_{c},\omega_{d}) =αr=1R𝟏{φ̊(𝐱r,2)=φ̊(𝐱r,3)==φ̊(𝐱r,nspl)}\displaystyle=\alpha\prod^{R}_{r=1}{\bf 1}_{\{\mathring{\varphi}({\bf x}_{r,2})=\mathring{\varphi}({\bf x}_{r,3})=\ldots=\mathring{\varphi}({\bf x}_{r,n_{\text{spl}}})\}} (70)
=αr=1R𝟏{φ̊(𝐟𝐚𝐦r,1)=φ̊(𝐟𝐚𝐦r,2)==φ̊(𝐟𝐚𝐦r,nspl1)}\displaystyle=\alpha\prod^{R}_{r=1}{\bf 1}_{\{\mathring{\varphi}({\bf fam}_{r,1})=\mathring{\varphi}({\bf fam}_{r,2})=\ldots=\mathring{\varphi}({\bf fam}_{r,n_{\text{spl}}-1})\}} (71)
=αr=1Rh(φ,𝐟𝐚𝐦r)\displaystyle=\alpha\prod^{R}_{r=1}h(\varphi,{\bf fam}_{r}) (72)

where α\alpha is the constant appearing in front of the product in Lemma 9 and h:Φ×𝒳nspl1{0,1}h:\Phi\times\mathcal{{\mathcal{X}}}^{n_{\text{spl}}-1}\to\{0,1\} is the indicator function implicitly defined by equalities (71)-(72). With the slight abuse of notation of viewing 𝐟𝐚𝐦r{\bf fam}_{r} as a set instead of a tuple, let us rewrite the event BrB_{r} as

Br=1rRrr{ωΩDM:max𝐱𝐟𝐚𝐦rK(𝐱test,𝐱)>max𝐱𝐟𝐚𝐦rK(𝐱test,𝐱)}\displaystyle B_{r}=\bigcap_{\begin{subarray}{c}1\leq r^{\prime}\leq R\\ r^{\prime}\neq r\end{subarray}}\left\{\omega\in\Omega_{\rm DM}:\;\;\max_{{\bf x}\in{\bf fam}_{r}}K({\bf x^{\text{test}}},{\bf x})>\max_{{\bf x}\in{\bf fam}_{r^{\prime}}}K({\bf x^{\text{test}}},{\bf x})\right\}

We also define the corresponding indicator function

g(𝐟𝐚𝐦r,𝐟𝐚𝐦r,𝐱test)={1 if max𝐱𝐟𝐚𝐦rK(𝐱test,𝐱)>max𝐱𝐟𝐚𝐦rK(𝐱test,𝐱)0otherwiseg({\bf fam}_{r},{\bf fam}_{r^{\prime}},{\bf x^{\text{test}}})=\begin{cases}1&\text{ if }\displaystyle\max_{{\bf x}\in{\bf fam}_{r}}K({\bf x^{\text{test}}},{\bf x})>\max_{{\bf x}\in{\bf fam}_{r^{\prime}}}K({\bf x^{\text{test}}},{\bf x})\\ \\ 0&\text{otherwise}\end{cases}

We now compute DM(B1)\mathbb{P}_{\rm DM}(B_{1}) (the formulas for the other DM(Br)\mathbb{P}_{\rm DM}(B_{r}) are obtained in a similar manner). Recall from (69) that the variables involved in 𝐟𝐚𝐦r{\bf fam}_{r} only appear in ωc\omega_{c}. Therefore

DM[B1]\displaystyle\mathbb{P}_{\rm DM}[B_{1}] =ωcΩcωdΩd(r=2Rg(𝐟𝐚𝐦1,𝐟𝐚𝐦r,𝐱test))ϱDM(ωc,ωd)\displaystyle=\sum_{\omega_{c}\in\Omega_{c}}\sum_{\omega_{d}\in\Omega_{d}}\left(\prod_{r=2}^{R}g({\bf fam}_{1},{\bf fam}_{r},{\bf x^{\text{test}}})\right)\ \varrho_{\rm DM}(\omega_{c},\omega_{d}) (73)
=ωcΩc(r=2Rg(𝐟𝐚𝐦1,𝐟𝐚𝐦r,𝐱test))ωdΩdϱDM(ωc,ωd)\displaystyle=\sum_{\omega_{c}\in\Omega_{c}}\left(\prod_{r=2}^{R}g({\bf fam}_{1},{\bf fam}_{r},{\bf x^{\text{test}}})\right)\ \sum_{\omega_{d}\in\Omega_{d}}\varrho_{\rm DM}(\omega_{c},\omega_{d}) (74)
=αωcΩc(r=2Rg(𝐟𝐚𝐦1,𝐟𝐚𝐦r,𝐱test))(r=1Rh(φ,𝐟𝐚𝐦r))\displaystyle=\alpha\sum_{\omega_{c}\in\Omega_{c}}\left(\prod_{r=2}^{R}g({\bf fam}_{1},{\bf fam}_{r},{\bf x^{\text{test}}})\right)\ \Bigg{(}\prod^{R}_{r^{\prime}=1}h(\varphi,{\bf fam}_{r^{\prime}})\bigg{)} (75)

where we have used (72) to obtain the last equality. Let us now compare DM(B1)\mathbb{P}_{\rm DM}(B_{1}) and DM(B2)\mathbb{P}_{\rm DM}(B_{2}). Letting 𝒵:=𝒳nspl1\mathcal{Z}:={\mathcal{X}}^{n_{\text{spl}}-1}, and recalling that ωc=(φ;𝐟𝐚𝐦1,,𝐟𝐚𝐦R;𝐱test)\omega_{c}=(\varphi;\;\;{\bf fam}_{1}\;\;,\;\;\ldots\;\;,\;\;{\bf fam}_{R}\;\;;\;\;{\bf x^{\text{test}}}), we obtain:

DM[B1]=αφΦ𝐟𝐚𝐦1𝒵𝐟𝐚𝐦R𝒵𝐱test𝒳(1rRr1g(𝐟𝐚𝐦1,𝐟𝐚𝐦r,𝐱test))(r=1Rh(φ,𝐟𝐚𝐦r))\displaystyle\mathbb{P}_{\rm DM}[B_{1}]=\alpha\sum_{\varphi\in\Phi}\sum_{{\bf fam}_{1}\in\mathcal{Z}}\cdots\sum_{{\bf fam}_{R}\in\mathcal{Z}}\sum_{{\bf x^{\text{test}}}\in\mathcal{X}}\left(\prod_{\begin{subarray}{c}1\leq r\leq R\\ r\neq 1\end{subarray}}g({\bf fam}_{1},{\bf fam}_{r},{\bf x^{\text{test}}})\right)\left(\prod_{r^{\prime}=1}^{R}h(\varphi,{\bf fam}_{r^{\prime}})\right)
DM[B2]=αφΦ𝐟𝐚𝐦1𝒵𝐟𝐚𝐦R𝒵𝐱test𝒳(1rRr2g(𝐟𝐚𝐦2,𝐟𝐚𝐦r,𝐱test))(r=1Rh(φ,𝐟𝐚𝐦r))\displaystyle\mathbb{P}_{\rm DM}[B_{2}]=\alpha\sum_{\varphi\in\Phi}\sum_{{\bf fam}_{1}\in\mathcal{Z}}\cdots\sum_{{\bf fam}_{R}\in\mathcal{Z}}\sum_{{\bf x^{\text{test}}}\in\mathcal{X}}\left(\prod_{\begin{subarray}{c}1\leq r\leq R\\ r\neq 2\end{subarray}}g({\bf fam}_{2},{\bf fam}_{r},{\bf x^{\text{test}}})\right)\left(\prod_{r^{\prime}=1}^{R}h(\varphi,{\bf fam}_{r^{\prime}})\right)

From the above it is clear that DM[B1]=DM[B2]\mathbb{P}_{\rm DM}[B_{1}]=\mathbb{P}_{\rm DM}[B_{2}] (as can be seen by exchanging the names of the variables 𝐟𝐚𝐦1{\bf fam}_{1} and 𝐟𝐚𝐦2{\bf fam}_{2}). Similar reasoning shows that the events BrB_{r} are all equiprobable, which concludes the proof. ∎

Combining Lemmas 10 and 11, together with the fact that EK=EmeaningfulEluckE_{K}=E_{\text{meaningful}}\cup E_{\text{luck}}, concludes the proof of (63). Inequality (63) implies inequality (26), which itself is equivalent to inequality (14).

Appendix C Upper bound for the Permuted Moment of KK^{\star}

This section is devoted to the proof of inequality (15), which we state below as a theorem for convenience.

Theorem 5.

For all 0L0\leq\ell\leq L, we have the upper bound

1|𝒳|𝐱𝒳t(K𝐱)(1𝐤𝒮𝔣(𝐤)𝔤(𝐤))+1t+1(max𝐤𝒮𝔣(𝐤)).\frac{1}{|{\mathcal{X}}|}\sum_{{\bf x}\in{\mathcal{X}}}{\mathcal{H}}_{t}(K^{\star}_{\bf x})\leq\left(1-\sum_{{\bf k}\in\mathcal{S}_{\ell}}\mathfrak{f}({\bf k})\mathfrak{g}({\bf k})\right)+\frac{1}{t+1}\left(\max_{{\bf k}\in\mathcal{S}_{\ell}}\;\mathfrak{f}({\bf k})\right). (76)

The rather intricate formulas for the functions 𝔣\mathfrak{f} and 𝔤\mathfrak{g} can be found in the main body of the paper, but we will recall them as we go through the proof.

We also recall that the optimal kernel is given by the formula:

K(𝐱,𝐲)=1scL|{φΦ:φ(x)=φ(y) for all 1L}||Φ|K^{\star}({\bf x},{\bf y})=\;\frac{1}{s_{c}^{L}}\;\;\frac{\big{|}\{\varphi\in\Phi:\varphi(x_{\ell})=\varphi(y_{\ell})\text{ for all }1\leq\ell\leq L\}\big{|}}{|\Phi|} (77)

The key insight to derive the upper bound (76) is to note that each pair of sentences (𝐱,𝐲)({\bf x},{\bf y}) induces a graph on the vocabulary {1,2,,nw}\{1,2,\ldots,n_{w}\}, and that the quantity

|{φΦ:φ(x)=φ(y) for all 1L}|\big{|}\{\varphi\in\Phi:\varphi(x_{\ell})=\varphi(y_{\ell})\text{ for all }1\leq\ell\leq L\}\big{|}

can be interpreted as the number of equipartitions of the vocabulary that do not sever any of the edges of the graph. This graph-cut interpretation of the optimal kernel is presented in detail in Subsection C.1. In Subsection C.2 we derive a formula for KK^{\star} which is more tractable than (77). To do this we partition 𝒳×𝒳{\mathcal{X}}\times{\mathcal{X}} into subsets on which KK^{\star} is constant, then provide a formula for the value of KK^{\star} on each of these subsets (c.f. Lemma 16). With this formula at hand, we then appeal to Lemma 3 to derive a first bound for the permuted moment of KK^{\star} (c.f. Lemma 17). This first bound is not fully explicit because it involves the size of the subsets on which KK^{\star} is constant. In section C.3 we appeal to Cayley’s formula, a classical result from graph theory, to estimate the size of these subsets (c.f. Lemma 18) and therefore conclude the proof of Theorem 5.

We now introduce the combinatorial notations that will be used in this section, and we recall a few basic combinatorial facts. We denote by ={0,1,2,}\mathbb{N}=\{0,1,2,\ldots\} the nonnegative integers. We use the standard notations

(nk):=n!k!(nk)! and (nk1,k2,,km):=n!k1!k2!km!{n\choose k}:=\frac{n!}{k!(n-k)!}\qquad\text{ and }\qquad{n\choose k_{1},k_{2},\dots,k_{m}}:=\frac{n!}{k_{1}!k_{2}!\cdots k_{m}!}

for the binomial and multinomial coefficients, with the understanding that 0!=10!=1. We recall that multinomial coefficients can be interpreted as the number of ways of placing nn distinct objects into mm distinct bins, with the constraint that k1k_{1} objects must go in the first bin, k2k_{2} objects must go in the second bin, and so forth.

The Stirling numbers of the second kind {nk}\genfrac{\{}{\}}{0.0pt}{}{n}{k} are close relatives of the binomial coefficients. {nk}\genfrac{\{}{\}}{0.0pt}{}{n}{k} stands for the number of ways to partition a set of nn objects into kk nonempty subsets. To give a simple example, {42}=7\genfrac{\{}{\}}{0.0pt}{}{4}{2}=7 because there are 7 ways to partition the set {1,2,3,4}\{1,2,3,4\} into two non-empty subsets, namely:

{1}{2,3,4},{2}{1,3,4},{3}{1,2,4},{4}{1,2,3},\displaystyle\{1\}\cup\{2,3,4\},\qquad\{2\}\cup\{1,3,4\},\qquad\{3\}\cup\{1,2,4\},\qquad\{4\}\cup\{1,2,3\},
{1,2}{3,4},{1,3}{2,4},{1,4}{2,3}.\displaystyle\{1,2\}\cup\{3,4\},\qquad\{1,3\}\cup\{2,4\},\qquad\{1,4\}\cup\{2,3\}.

Stirling numbers are easily computed via the following variant of Pascal’s recurrence formula (alternatively, Stirling numbers can be defined through the explicit formula {nk}=1k!i=0k(1)i(kki)(ki)n\genfrac{\{}{\}}{0.0pt}{}{n}{k}=\frac{1}{k!}\sum_{i=0}^{k}(-1)^{i}{k\choose k-i}(k-i)^{n}):

{n1}=1,{nn}=1 for n1,\displaystyle\genfrac{\{}{\}}{0.0pt}{}{n}{1}=1,\qquad\genfrac{\{}{\}}{0.0pt}{}{n}{n}=1\qquad\text{ for }n\geq 1,
{nk}={n1k1}+k{n1k} for 2kn1.\displaystyle\genfrac{\{}{\}}{0.0pt}{}{n}{k}=\genfrac{\{}{\}}{0.0pt}{}{n-1}{k-1}+k\;\genfrac{\{}{\}}{0.0pt}{}{n-1}{k}\qquad\text{ for }2\leq k\leq n-1.

The above formula is easily derived from the definition of the Stirling numbers as providing the number of ways to partition a set of nn objects into kk nonempty subsets (see for example chapter 6 of [graham1989concrete]). Table 2 shows the first few Stirling numbers.

Table 2: First five rows of the Stirling triangle for the Stirling numbers {nk}\genfrac{\{}{\}}{0.0pt}{}{n}{k}.
n\kn\backslash k 1 2 3 4 5
1 1
2 1 1
3 1 3 1
4 1 7 6 1
5 1 15 25 10 1
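For readers who want to reproduce these values, here is a minimal sketch (ours, in Python; not part of the paper) that implements the recurrence above and regenerates the rows of Table 2.

def stirling2(n, k):
    # {n brace k}: number of ways to partition n objects into k nonempty subsets
    if k < 1 or k > n:
        return 0
    if k == 1 or k == n:
        return 1
    # Pascal-type recurrence stated above
    return stirling2(n - 1, k - 1) + k * stirling2(n - 1, k)

for n in range(1, 6):
    print([stirling2(n, k) for k in range(1, n + 1)])
# last printed row: [1, 15, 25, 10, 1], matching Table 2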

We recall that an undirected graph is an ordered pair 𝒢=(V,E)\mathcal{G}=(V,E), where VV is the vertex set and

E{{v,v}:v,vV and vv}E\subset\{\{v,v^{\prime}\}:v,v^{\prime}\in V\text{ and }v\neq v^{\prime}\}

is the edge set. Edges are unordered pairs of distinct vertices (so loops are not allowed.) A tree is a connected graph with no cycles. A tree on nn vertices has exactly n1n-1 edges. Cayley’s formula states that there are nn2n^{n-2} ways to put n1n-1 edges on nn labeled vertices in order to make a tree. We formally state this classical result below:

Lemma 12 (Cayley’s formula).

There are nn2n^{n-2} trees on nn labeled vertices.

C.1 Graph-Cut Formulation of the Optimal Kernel

In this section we consider undirected graphs on the vertex set

𝒱:={1,2,,nw}.{\mathcal{V}}:=\{1,2,\ldots,n_{w}\}.

Since the data space 𝒳{\mathcal{X}} consists of sentences of length LL, graphs that have at most LL edges will be of particular importance. We therefore define:

𝔊:={All graphs on 𝒱 that have at most L edges}.\mathfrak{G}:=\{\text{All graphs on ${\mathcal{V}}$ that have at most $L$ edges}\}.

In other words, 𝔊\mathfrak{G} consists of all the graphs 𝒢=(𝒱,E)\mathcal{G}=({\mathcal{V}},E) whose edge set EE has cardinality at most LL. Since these graphs all have the same vertex set, we will often identify them with their edge set. We now introduce a mapping between pairs of sentences containing LL words and graphs containing at most LL edges.

Definition 2.

The function ζ:𝒳×𝒳𝔊\zeta:{\mathcal{X}}\times{\mathcal{X}}\to\mathfrak{G} is defined by

ζ(𝐱,𝐲):=1Lxy{{x,y}}.\zeta({\bf x},{\bf y}):=\bigcup_{\begin{subarray}{c}1\leq\ell\leq L\\ x_{\ell}\neq y_{\ell}\end{subarray}}\{\{x_{\ell},y_{\ell}\}\big{\}}. (78)

The right hand side of (78) is a set of at most LL edges. Since graphs in 𝔊\mathfrak{G} are identified with their edge set, ζ\zeta indeed defines a mapping from 𝒳×𝒳{\mathcal{X}}\times{\mathcal{X}} to 𝔊.\mathfrak{G}. Let us give a few examples illustrating how the map ζ\zeta works. Suppose we have a vocabulary of nw=10n_{w}=10 words and sentences of length L=6L=6. Consider the pair of sentences (𝐱,𝐲)𝒳×𝒳({\bf x},{\bf y})\in{\mathcal{X}}\times{\mathcal{X}} where

𝐱=[2,2,8,5,9,7]𝐲=[2,5,8,2,2,1]\begin{matrix}&{\bf x}=[&2,&2,&8,&5,&9,&7&]\\ &{\bf y}=[&2,&5,&8,&2,&2,&1&]\end{matrix} (79)

Then ζ(𝐱,𝐲)\zeta({\bf x},{\bf y}) is the set of 3 edges

ζ(𝐱,𝐲)={{2,5},{9,2},{7,1}}.\zeta({\bf x},{\bf y})=\Big{\{}\quad\{2,5\},\quad\{9,2\},\quad\{7,1\}\quad\Big{\}}.

which indeed defines a graph on 𝒱{\mathcal{V}}. Note that positions =2\ell=2 and =4\ell=4 of (𝐱,𝐲)({\bf x},{\bf y}) ‘code’ for the same edge {2,5}\{2,5\}, position 55 codes for the edge {9,2}\{9,2\}, and position 66 codes for the edge {7,1}\{7,1\}. On the other hand, positions 11 and 33 do not code for any edge: indeed, since x1=y1x_{1}=y_{1} and x3=y3x_{3}=y_{3}, these two positions do not contribute any edges to the edge set defined by (78). We will say that positions 11 and 33 are silent. We make this terminology formal in the definition below:

Definition 3.

Let (𝐱,𝐲)𝒳×𝒳({\bf x},{\bf y})\in{\mathcal{X}}\times{\mathcal{X}}. If x=yx_{\ell}=y_{\ell} for some 1L1\leq\ell\leq L, we say that position \ell of the pair (𝐱,𝐲)({\bf x},{\bf y}) is silent. If xyx_{\ell}\neq y_{\ell} for some 1L1\leq\ell\leq L, we say that position \ell of the pair (𝐱,𝐲)({\bf x},{\bf y}) codes for the edge {x,y}\{x_{\ell},y_{\ell}\}.

Note that if (𝐱,𝐲)({\bf x},{\bf y}) has some silent positions, or if multiple positions code for the same edge, then the graph ζ(𝐱,𝐲)\zeta({\bf x},{\bf y}) will have strictly fewer than LL edges. On the other hand, if neither of these takes place, then ζ(𝐱,𝐲)\zeta({\bf x},{\bf y}) will have exactly LL edges. For example the pair of sentences

𝐱=[1,1,1,5,6,7]𝐲=[2,3,4,6,7,1]\begin{matrix}&{\bf x}=[&1,&1,&1,&5,&6,&7]\\ &{\bf y}=[&2,&3,&4,&6,&7,&1]\end{matrix} (80)

does not have silent positions, and all positions code for different edges. The corresponding graph

ζ(𝐱,𝐲)={{1,2},{1,3},{1,4},{5,6},{6,7},{7,1}}\zeta({\bf x},{\bf y})=\Big{\{}\quad\{1,2\},\quad\{1,3\},\quad\{1,4\},\quad\{5,6\},\quad\{6,7\},\quad\{7,1\}\quad\Big{\}}

has the maximal possible number of edges, namely L=6L=6 edges. From the above discussion, it is clear that any graph with LL or fewer edges can be expressed as ζ(𝐱,𝐲)\zeta({\bf x},{\bf y}) for some pair of sentences (𝐱,𝐲)𝒳×𝒳.({\bf x},{\bf y})\in{\mathcal{X}}\times{\mathcal{X}}. Therefore ζ:𝒳×𝒳𝔊\zeta:{\mathcal{X}}\times{\mathcal{X}}\to\mathfrak{G} is surjective. On the other hand, different pairs of sentences can be mapped to the same graph. Therefore ζ\zeta is not injective. We now introduce the following function.

Definition 4 (Number of cut-free equipartitions of a graph).

The function :𝔊\mathcal{I}:\mathfrak{G}\to\mathbb{N} is defined by :

(𝒢)=|{φΦ:φ(v)=φ(v) for all edge {v,v} of the graph 𝒢}|\mathcal{I}(\mathcal{G})=|\{\varphi\in\Phi:\varphi(v)=\varphi(v^{\prime})\text{ for all edge $\{v,v^{\prime}\}$ of the graph }\mathcal{G}\}| (81)

Recall that Φ\Phi is the set of maps φ:𝒱{1,,nc}\varphi:{\mathcal{V}}\to\{1,\ldots,n_{c}\} that satisfy |φ1(c)|=sc|\varphi^{-1}(c)|=s_{c} for all 1cnc.1\leq c\leq n_{c}. Given a graph 𝒢\mathcal{G}, the quantity (𝒢)\mathcal{I}(\mathcal{G}) can therefore be interpreted as the number of ways to partition the vertices into ncn_{c} labelled subsets of equal size so that no edges are severed (i.e. two connected vertices must be in the same subset.) In other words, (𝒢)\mathcal{I}(\mathcal{G}) is the number of “cut-free” equipartitions of the graph 𝒢\mathcal{G}, see Figure 5 for an illustration. Note that if the graph 𝒢\mathcal{G} is connected, then (𝒢)=0\mathcal{I}(\mathcal{G})=0 since any equipartition of the graph will sever some edges. On the other hand, if the graph 𝒢\mathcal{G} has no edges, then (𝒢)=|Φ|\mathcal{I}(\mathcal{G})=|\Phi| since there are no edges to be cut (and therefore any equipartition is acceptable.)

(a) Cut-free. (b) Not cut-free.
Figure 5: Two equipartitions of the same graph (each subset of the equipartitions contains 77 vertices). The equipartition on the left is cut-free (no edges are severed). The equipartition on the right is not cut-free (4 edges are severed). The optimal kernel K(𝐱,𝐲)K^{\star}({\bf x},{\bf y}) can be interpreted as the number of distinct cut-free equipartitions of the graph ζ(𝐱,𝐲)\zeta({\bf x},{\bf y}) (up to a scaling factor).

The optimal kernel KK^{\star} can be expressed as a composition of the functions ζ\zeta and \mathcal{I}. Indeed:

K(𝐱,𝐲)\displaystyle K^{\star}({\bf x},{\bf y}) =1|Φ|scL|{φΦ:φ(x)=φ(y) for all 1L}|\displaystyle=\frac{1}{|\Phi|s_{c}^{L}}\;\;|\{\varphi\in\Phi:\varphi(x_{\ell})=\varphi(y_{\ell})\text{ for all }1\leq\ell\leq L\}| (82)
=1|Φ|scL|{φΦ:φ(v)=φ(v) for all {v,v}ζ(𝐱,𝐲) }|\displaystyle=\frac{1}{|\Phi|s_{c}^{L}}\;\;|\{\;\varphi\in\Phi:\varphi(v)=\varphi(v^{\prime})\text{ for all $\{v,v^{\prime}\}\in\zeta({\bf x},{\bf y})$ }\}| (83)
=1|Φ|scL(ζ(𝐱,𝐲))\displaystyle=\frac{1}{|\Phi|s_{c}^{L}}\;\;\mathcal{I}(\zeta({\bf x},{\bf y})) (84)

where we have simply used that ζ(𝐱,𝐲):=1Lxy{{x,y}}\zeta({\bf x},{\bf y}):=\bigcup_{\begin{subarray}{c}1\leq\ell\leq L\\ x_{\ell}\neq y_{\ell}\end{subarray}}\{\{x_{\ell},y_{\ell}\}\big{\}} to go from (82) to (83). We will refer to (84) as the graph-cut formulation of the optimal kernel.
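To make the graph-cut formulation concrete, here is a small brute-force sketch (ours, not from the paper) that checks that (77) and (84) agree on a toy instance; the parameters n_w = 4, n_c = 2, s_c = 2 and L = 3 are arbitrary choices made for this illustration.

from itertools import combinations, product

n_w, n_c, s_c, L = 4, 2, 2, 3          # toy parameters (ours)
V = range(1, n_w + 1)

# Phi: maps assigning s_c words to each of the n_c = 2 labelled concepts
Phi = []
for words_of_concept_1 in combinations(V, s_c):
    Phi.append({v: (1 if v in words_of_concept_1 else 2) for v in V})

def K_star_direct(x, y):               # formula (77)
    good = sum(all(phi[x[l]] == phi[y[l]] for l in range(L)) for phi in Phi)
    return good / (s_c ** L * len(Phi))

def zeta(x, y):                        # the map of Definition 2
    return {frozenset({x[l], y[l]}) for l in range(L) if x[l] != y[l]}

def K_star_graphcut(x, y):             # formula (84): count cut-free equipartitions
    edges = zeta(x, y)
    cut_free = sum(all(phi[v] == phi[w] for v, w in edges) for phi in Phi)
    return cut_free / (s_c ** L * len(Phi))

for x in product(V, repeat=L):
    for y in product(V, repeat=L):
        assert abs(K_star_direct(x, y) - K_star_graphcut(x, y)) < 1e-12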

We have discussed earlier that the function ζ:𝒳×𝒳𝔊\zeta:{\mathcal{X}}\times{\mathcal{X}}\to\mathfrak{G} is surjective but not injective. We conclude this subsection with a lemma that provides an exact count of how many distinct (𝐱,𝐲)({\bf x},{\bf y}) are mapped by ζ\zeta to the same graph 𝒢.\mathcal{G}.

Lemma 13.

Suppose 𝒢𝔊\mathcal{G}\in\mathfrak{G} has mm edges. Then

|ζ1(𝒢)|=|{(𝐱,𝐲)𝒳×𝒳:ζ(𝐱,𝐲)=𝒢}|=m!α=mL(Lα){αm}2αnwLα.|\zeta^{-1}(\mathcal{G})|=|\{({\bf x},{\bf y})\in{\mathcal{X}}\times{\mathcal{X}}:\zeta({\bf x},{\bf y})=\mathcal{G}\}|=m!\sum_{\alpha=m}^{L}{L\choose\alpha}\genfrac{\{}{\}}{0.0pt}{}{\alpha}{m}2^{\alpha}n_{w}^{L-\alpha}.
Proof.

Fix once and for all a graph 𝒢=(𝒱,E)\mathcal{G}=({\mathcal{V}},E) with edge set E={e1,,em}E=\{e_{1},\ldots,e_{m}\} where mLm\leq L. Given 0αL0\leq\alpha\leq L, define the set

𝒪α={(𝐱,𝐲)𝒳×𝒳:ζ(𝐱,𝐲)=𝒢 and (𝐱,𝐲) has exactly α non-silent positions}.\mathcal{O}_{\alpha}=\{({\bf x},{\bf y})\in{\mathcal{X}}\times{\mathcal{X}}:\zeta({\bf x},{\bf y})=\mathcal{G}\text{ and }({\bf x},{\bf y})\text{ has exactly $\alpha$ non-silent positions}\}.

We start by noting that the set 𝒪α\mathcal{O}_{\alpha} is empty for all α<m\alpha<m: indeed, since 𝒢\mathcal{G} has mm edges, at least mm positions of a pair (𝐱,𝐲)({\bf x},{\bf y}) must be coding for edges (i.e., must be non-silent) in order to have ζ(𝐱,𝐲)=𝒢\zeta({\bf x},{\bf y})=\mathcal{G}. We therefore have the following partition:

ζ1(𝒢)=α=mL𝒪α and 𝒪α𝒪α= if αα.\zeta^{-1}(\mathcal{G})=\bigcup_{\alpha=m}^{L}\mathcal{O}_{\alpha}\qquad\text{ and }\qquad\mathcal{O}_{\alpha}\cap\mathcal{O}_{\alpha^{\prime}}=\emptyset\;\;\text{ if }\alpha\neq\alpha^{\prime}.

To conclude the proof, we need to show that

|𝒪α|=(Lα){αm}m! 2αnwLαfor all mαL.\left|\mathcal{O}_{\alpha}\right|={L\choose\alpha}\;\genfrac{\{}{\}}{0.0pt}{}{\alpha}{m}\;m!\;2^{\alpha}\;n_{w}^{L-\alpha}\qquad\text{for all }m\leq\alpha\leq L. (85)

Consider the following process to generate an ordered pair (𝐱,𝐲)({\bf x},{\bf y}) that belongs to 𝒪α\mathcal{O}_{\alpha}: we start by deciding which positions of (𝐱,𝐲)({\bf x},{\bf y}) are going to be silent, and which positions are going to code for which edge of the graph 𝒢\mathcal{G}. This is equivalent to choosing a map ρ:{1,,L}{e1,,em,s}\rho:\{1,\ldots,L\}\to\{e_{1},\ldots,e_{m},s\} where {1,,L}\{1,\ldots,L\} denotes the LL positions, e1,,eme_{1},\ldots,e_{m} denote the mm edges of the graph 𝒢\mathcal{G}, and ss is the silent symbol. Choosing a map ρ\rho corresponds to assigning a “role” to each position: ρ()=ei\rho(\ell)=e_{i} means that position \ell is given the role of coding for edge eie_{i}, and ρ()=s\rho(\ell)=s means that position \ell is given the role of being silent. The map ρ\rho must satisfy:

|ρ1(s)|=Lα and ρ1(ei) for 1im\displaystyle|\rho^{-1}(s)|=L-\alpha\qquad\text{ and }\qquad\rho^{-1}(e_{i})\neq\emptyset\;\;\;\text{ for $1\leq i\leq m$} (86)

because LαL-\alpha positions must be silent and each edge must be represented by at least one position. The number of maps ρ:{1,,L}{e1,,em,s}\rho:\{1,\ldots,L\}\to\{e_{1},\ldots,e_{m},s\} that satisfy (86) is equal to

(LLα){αm}m!{L\choose L-\alpha}\;\;\genfrac{\{}{\}}{0.0pt}{}{\alpha}{m}\;\;m!

Indeed, we need to choose the LαL-\alpha positions that will be mapped to the silent symbol ss: there are (LLα){L\choose L-\alpha} ways of accomplishing this. We then partition the α\alpha remaining positions into mm non-empty subsets: there are {αm}\genfrac{\{}{\}}{0.0pt}{}{\alpha}{m} ways of accomplishing this. We finally map each of these non-empty subsets to a different edge: there are m!m! ways of accomplishing this.

We have shown that there are (Lα){αm}m!{L\choose\alpha}\genfrac{\{}{\}}{0.0pt}{}{\alpha}{m}m! ways to assign roles to the positions. Let us say that position \ell is assigned the role of coding for edge {v,v}.\{v,v^{\prime}\}. Then we have two choices to generate entries xx_{\ell} and yy_{\ell}: either x=vx_{\ell}=v and y=vy_{\ell}=v^{\prime}, or x=vx_{\ell}=v^{\prime} and y=v.y_{\ell}=v. Since α\alpha positions are coding for edges, this leads to the factor 2α2^{\alpha} in equation (85). Finally, if the position \ell is silent, then we have nwn_{w} choices to generate entries xx_{\ell} and yy_{\ell} (because we need to choose v𝒱v\in{\mathcal{V}} such that x=y=vx_{\ell}=y_{\ell}=v), hence the factor nwLαn_{w}^{L-\alpha} appearing in (85). ∎
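The counting argument above can be sanity-checked by brute force. The sketch below (ours) compares the formula of Lemma 13 with a direct enumeration on a toy instance; the graph 𝒢, the vocabulary size n_w = 4 and the sentence length L = 3 are arbitrary choices made for this illustration, and the Stirling numbers are computed from the explicit formula recalled before Table 2.

from itertools import product
from math import comb, factorial

def stirling2(n, m):
    # explicit formula for the Stirling numbers of the second kind
    return sum((-1)**i * comb(m, m - i) * (m - i)**n for i in range(m + 1)) // factorial(m)

n_w, L = 4, 3
V = range(1, n_w + 1)
G = {frozenset({1, 2}), frozenset({3, 4})}        # a fixed graph with m = 2 edges
m = len(G)

def zeta(x, y):
    return {frozenset({x[l], y[l]}) for l in range(L) if x[l] != y[l]}

brute = sum(zeta(x, y) == G
            for x in product(V, repeat=L) for y in product(V, repeat=L))

formula = factorial(m) * sum(comb(L, a) * stirling2(a, m) * 2**a * n_w**(L - a)
                             for a in range(m, L + 1))

assert brute == formula      # both equal 144 for this toy instance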

C.2 Level Sets of the Optimal Kernel

We recall that a connected component (or simply a component) of a graph is a connected subgraph that is not part of any larger connected subgraph. Since graphs in 𝔊\mathfrak{G} have at most LL edges, their components contain at most L+1L+1 vertices. This comes from the fact that the largest component that can be made with LL edges contains L+1L+1 vertices. It is therefore natural, given a vector 𝐤=[k1,,kL+1]L+1{\bf k}=[k_{1},\ldots,k_{L+1}]\in\mathbb{N}^{L+1}, to define

𝔊𝐤:={𝒢𝔊:𝒢 has exactly k1 components of size 1, exactly k2 components of size 2,…, exactly kL+1 components of size L+1 }\mathfrak{G}_{{\bf k}}:=\{\mathcal{G}\in\mathfrak{G}:\mathcal{G}\text{ has exactly $k_{1}$ components of size $1$, exactly $k_{2}$ components of size $2$,}\\ \text{\ldots, exactly $k_{L+1}$ components of size $L+1$ }\} (87)

where the size of a component refers to the number of vertices it contains. We recall that ={0,1,2,}\mathbb{N}=\{0,1,2,\ldots\}; therefore some of the entries of the vector 𝐤{\bf k} can be equal to zero. Note that components of size 11 are simply isolated vertices. The following lemma identifies which 𝐤L+1{\bf k}\in\mathbb{N}^{L+1} lead to a non-empty 𝔊𝐤.\mathfrak{G}_{\bf k}.

Lemma 14.

The set 𝔊𝐤\mathfrak{G}_{\bf k} is not empty if and only if 𝐤{\bf k} satisfies

i=1L+1iki=nw and i=1L+1(i1)kiL.\sum_{i=1}^{L+1}ik_{i}=n_{w}\quad\text{ and }\quad\sum_{i=1}^{L+1}(i-1)k_{i}\leq L. (88)
Proof.

Suppose 𝔊𝐤\mathfrak{G}_{\bf k} is not empty. Then there exists a graph 𝒢𝔊\mathcal{G}\in\mathfrak{G} that has exactly kik_{i} components of size ii, for 1iL+1.1\leq i\leq L+1. A component of size ii contains ii vertices (by definition) and at least i1i-1 edges (otherwise it would not be connected.) Since 𝒢𝔊\mathcal{G}\in\mathfrak{G} it must have nwn_{w} vertices and at most LL edges. Therefore (88) must hold.

Suppose that 𝐤L+1{\bf k}\in\mathbb{N}^{L+1} satisfies (88). Then we can easily construct a graph 𝒢\mathcal{G} on 𝒱{\mathcal{V}} that has at most LL edges, and that has exactly kik_{i} components of size ii, for 1iL+1.1\leq i\leq L+1. To do this we first partition the vertices into subsets so that there are kik_{i} subsets of size ii, for 1iL+1.1\leq i\leq L+1. We then put i1i-1 edges on each subset of size ii so that they form connected components. The resulting graph has kik_{i} components of size ii, for 1iL+11\leq i\leq L+1, and i=1L+1(i1)ki\sum_{i=1}^{L+1}(i-1)k_{i} edges. ∎

The previous lemma allows us to partition 𝔊\mathfrak{G} into non-empty subsets as follows:

𝔊=𝐤𝒮𝔊𝐤,𝔊𝐤 for all 𝐤𝒮,and𝔊𝐤𝔊𝐤= if 𝐤𝐤\displaystyle\mathfrak{G}=\bigcup_{{\bf k}\in\mathcal{S}}\mathfrak{G}_{\bf k},\qquad\mathfrak{G}_{\bf k}\neq\emptyset\text{ for all }{\bf k}\in\mathcal{S},\qquad\text{and}\qquad\mathfrak{G}_{\bf k}\cap\mathfrak{G}_{{\bf k}^{\prime}}=\emptyset\text{ if }{\bf k}\neq{\bf k}^{\prime} (89)
where 𝒮:={𝐤L+1:i=1L+1iki=nw and i=1L+1(i1)kiL}.\displaystyle\text{where }\mathcal{S}:=\left\{{\bf k}\in\mathbb{N}^{L+1}:\sum_{i=1}^{L+1}ik_{i}=n_{w}\quad\text{ and }\quad\sum_{i=1}^{L+1}(i-1)k_{i}\leq L\right\}. (90)

Recall that (𝒢)\mathcal{I}(\mathcal{G}) counts the number of equipartitions that do not sever edges of 𝒢.\mathcal{G}. The next lemma shows that two graphs that belong to the same subset 𝔊𝐤\mathfrak{G}_{\bf k} have the same number of cut-free equipartitions, and it provides a formula for this number in terms of the index 𝐤{\bf k} of the subset.

Lemma 15.

Suppose 𝐤𝒮{\bf k}\in\mathcal{S} and define the set of admissible assignment matrices

𝒜𝐤:={A(L+1)×nc:j=1ncAij=ki for all i and i=1L+1iAij=sc for all j }.\mathcal{A}_{{\bf k}}:=\left\{A\in\mathbb{N}^{(L+1)\times n_{c}}\;\;:\;\;\sum_{j=1}^{n_{c}}A_{ij}=k_{i}\text{ for all $i$ \;\;\; and }\;\;\;\sum_{i=1}^{L+1}iA_{ij}=s_{c}\text{ for all $j$ }\right\}. (91)

Then for all 𝒢𝔊𝐤\mathcal{G}\in\mathfrak{G}_{\bf k}, we have that

(𝒢)=A𝒜𝐤i=1L+1(kiAi,1,Ai,2,,Ai,nc).\mathcal{I}(\mathcal{G})=\sum_{A\in\mathcal{A}_{{\bf k}}}\prod_{i=1}^{L+1}{k_{i}\choose A_{i,1},A_{i,2},\ldots,A_{i,n_{c}}}. (92)

Let us remark that, since 0!=10!=1, the multinomial coefficient (kiAi,1,Ai,2,,Ai,nc){k_{i}\choose A_{i,1},A_{i,2},\ldots,A_{i,n_{c}}} appearing in (92) is equal to 11 when kik_{i} is equal to 0.0.

Proof of Lemma 15.

Let 𝐤𝒮{\bf k}\in\mathcal{S} and fix once and for all a graph 𝒢𝔊𝐤\mathcal{G}\in\mathfrak{G}_{\bf k}. Define the set

Ψ={φΦ:φ(v)=φ(v) for all edge {v,v} of the graph 𝒢}\Psi=\{\varphi\in\Phi:\varphi(v)=\varphi(v^{\prime})\text{ for all edge $\{v,v^{\prime}\}$ of the graph }\mathcal{G}\}

so that (𝒢)=|Ψ|.\mathcal{I}(\mathcal{G})=|\Psi|. Note that a map φ\varphi that belongs to Ψ\Psi must map all vertices that are in a connected component to the same concept (otherwise some edges of 𝒢\mathcal{G} would be severed.) So a map φΨ\varphi\in\Psi can be viewed as assigning connected components to concepts. Given a matrix A(L+1)×ncA\in\mathbb{N}^{(L+1)\times n_{c}} we define the set:

ΨA={φΨ: φ assigns Aij components of size i to concept j, for all 1iL+1 and 1jnc}.\Psi_{A}=\{\varphi\in\Psi:\text{ $\varphi$ assigns $A_{ij}$ components of size $i$ to concept $j$, for all }1\leq i\leq L+1\text{ and }1\leq j\leq n_{c}\}.

We then note that the set ΨA\Psi_{A} is empty unless the matrix AA satisfies:

Ai,1+Ai,2+Ai,3++Ai,nc=ki for all 1iL+1\displaystyle A_{i,1}+A_{i,2}+A_{i,3}+\ldots+A_{i,n_{c}}=k_{i}\qquad\text{ for all }1\leq i\leq L+1
A1,j+2A2,j+3A3,j++(L+1)AL+1,j=sc for all 1jnc.\displaystyle A_{1,j}+2A_{2,j}+3A_{3,j}+\ldots+(L+1)A_{L+1,j}=s_{c}\qquad\text{ for all }1\leq j\leq n_{c}.

The first constraint states that the total number of connected components of size ii is equal to kik_{i} (because 𝒢𝔊𝐤\mathcal{G}\in\mathfrak{G}_{\bf k}). The second constraint states that concept jj must receive a total of scs_{c} vertices (because φΦ\varphi\in\Phi.) The matrices that satisfy these two constraints constitute the set 𝒜𝐤\mathcal{A}_{{\bf k}} defined in (91). We therefore have the following partition of the set Ψ\Psi:

Ψ=A𝒜𝐤ΨA,ΨA if A𝒜𝐤 ,ΨAΨB= if AB.\Psi=\bigcup_{A\in\mathcal{A}_{\bf k}}\Psi_{A},\qquad\Psi_{A}\neq\emptyset\text{ if $A\in\mathcal{A}_{\bf k}$ },\qquad\Psi_{A}\cap\Psi_{B}=\emptyset\text{ if $A\neq B$}.

To conclude the proof, we need to show that

|ΨA|=i=1L+1(kiAi,1,Ai,2,,Ai,nc) for all A𝒜𝐤.|\Psi_{A}|=\prod_{i=1}^{L+1}{k_{i}\choose A_{i,1},A_{i,2},\ldots,A_{i,n_{c}}}\qquad\text{ for all }A\in\mathcal{A}_{\bf k}. (93)

To see this, consider the kik_{i} components of size ii. The number of ways to assign them to the ncn_{c} concepts so that concept jj receives AijA_{ij} of them is equal to the multinomial coefficient (kiAi,1,Ai,2,,Ai,nc).{k_{i}\choose A_{i,1},A_{i,2},\ldots,A_{i,n_{c}}}. Repeating this reasoning for the components of each size gives (93). ∎

We now leverage the previous lemma to obtain a formula for K.K^{\star}. For 𝐤𝒮{\bf k}\in\mathcal{S} we define

Ω𝐤:=ζ1(𝔊𝐤).\Omega_{\bf k}:=\zeta^{-1}(\mathfrak{G}_{\bf k}).

Since ζ:𝒳×𝒳𝔊\zeta:{\mathcal{X}}\times{\mathcal{X}}\to\mathfrak{G} is surjective, partition (89) of 𝔊\mathfrak{G} induces the following partition of 𝒳×𝒳{\mathcal{X}}\times{\mathcal{X}}:

𝒳×𝒳=𝐤𝒮Ω𝐤,Ω𝐤 if 𝐤𝒮andΩ𝐤Ω𝐤= if 𝐤𝐤.{\mathcal{X}}\times{\mathcal{X}}=\bigcup_{{\bf k}\in\mathcal{S}}\Omega_{\bf k},\qquad\Omega_{\bf k}\neq\emptyset\text{ if }{\bf k}\in\mathcal{S}\qquad\text{and}\qquad\Omega_{\bf k}\cap\Omega_{{\bf k}^{\prime}}=\emptyset\text{ if }{\bf k}\neq{\bf k}^{\prime}. (94)

Using the graph-cut formulation of the optimal kernel together with Lemma 15 we therefore have

K(𝐱,𝐲)=1|Φ|scL(ζ(𝐱,𝐲))=1|Φ|scLA𝒜𝐤i=1L+1(kiAi,1,Ai,2,,Ai,nc)for all (𝐱,𝐲)Ω𝐤.K^{\star}({\bf x},{\bf y})=\frac{1}{|\Phi|s_{c}^{L}}\;\;\mathcal{I}(\zeta({\bf x},{\bf y}))=\frac{1}{|\Phi|s_{c}^{L}}\;\;\sum_{A\in\mathcal{A}_{{\bf k}}}\prod_{i=1}^{L+1}{k_{i}\choose A_{i,1},A_{i,2},\ldots,A_{i,n_{c}}}\quad\text{for all $({\bf x},{\bf y})\in\Omega_{\bf k}.$} (95)

The above formula is key to our analysis. We restate it in the lemma below, but in a slightly different format that will better suit the rest of the analysis. Let 𝔣:𝒮\;\mathfrak{f}:\mathcal{S}\to\real be the function defined by

𝔣(𝐤):=ncL(sc!)ncnw!A𝒜𝐤i=1L+1(kiAi,1,Ai,2,,Ai,nc)\;\mathfrak{f}({\bf k}):=\frac{n_{c}^{L}\;(s_{c}!)^{n_{c}}}{n_{w}!}\sum_{A\in\mathcal{A}_{{\bf k}}}\prod_{i=1}^{L+1}{k_{i}\choose A_{i,1},A_{i,2},\ldots,A_{i,n_{c}}} (96)

We then have:

Lemma 16 (Level set decomposition of KK^{\star}).

The kernel KK^{\star} is constant on each subset Ω𝐤\Omega_{\bf k} of the partition (94). Moreover we have

K(𝐱,𝐲)=𝔣(𝐤)/|𝒳|for all (𝐱,𝐲)Ω𝐤 and for all 𝐤𝒮.K^{\star}({\bf x},{\bf y})=\;\mathfrak{f}({\bf k})/|{\mathcal{X}}|\qquad\text{for all }({\bf x},{\bf y})\in\Omega_{\bf k}\text{ and for all }{\bf k}\in\mathcal{S}.
Proof.

The quantity |Φ||\Phi| appearing in (95) can be interpreted as the number of ways to assign the nwn_{w} words to the ncn_{c} concepts so that each concept receives scs_{c} words. Therefore

|Φ|=(nwsc,sc,,sc)=nw!(sc!)nc.|\Phi|={n_{w}\choose s_{c},s_{c},\ldots,s_{c}}=\frac{n_{w}!}{(s_{c}!)^{n_{c}}}.

Combined with the fact that |𝒳|=nwL|{\mathcal{X}}|=n_{w}^{L}, this leads to the desired formula for K.K^{\star}. ∎
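The formula of Lemma 16 can be evaluated numerically by enumerating the admissible assignment matrices. The sketch below (ours, not part of the paper) does this naively and is only meant for small toy parameters; the vector 𝐤 is stored 0-indexed, so that k[i] is the number of components of size i+1.

from itertools import product
from math import factorial

def multinomial(n, parts):
    out = factorial(n)
    for p in parts:
        out //= factorial(p)
    return out

def compositions(total, parts):
    # ordered ways of writing `total` as a sum of `parts` nonnegative integers
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def f(k, L, n_w, n_c, s_c):
    # equation (96): sum over admissible assignment matrices A in A_k
    total = 0
    rows = [list(compositions(k[i], n_c)) for i in range(L + 1)]    # row constraint of (91)
    for A in product(*rows):
        if all(sum((i + 1) * A[i][j] for i in range(L + 1)) == s_c  # column constraint of (91)
               for j in range(n_c)):
            term = 1
            for i in range(L + 1):
                term *= multinomial(k[i], A[i])
            total += term
    return n_c**L * factorial(s_c)**n_c / factorial(n_w) * total

As a quick check, for 𝐤 = (n_w, 0, …, 0) (the graph with no edges) the sum reduces to a single admissible matrix and 𝔣(𝐤) = n_c^L; since |𝒳| = n_w^L and n_w = n_c s_c, Lemma 16 then gives K⋆(𝐱,𝐱) = 1/s_c^L, in agreement with (77).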

The above lemma provides us with the level sets of the optimal kernel. Together with Lemma 3, this allows us to derive the following upper bound for the permuted moment of KK^{\star}.

Lemma 17.

Let Ω=𝒳×𝒳.\Omega={\mathcal{X}}\times{\mathcal{X}}. The inequality

1|𝒳|𝐱𝒳t(K𝐱)(1𝐤𝒮|Ω𝐤||Ω|𝔣(𝐤))+1t+1(max𝐤𝒮𝔣(𝐤))\frac{1}{|{\mathcal{X}}|}\sum_{{\bf x}\in{\mathcal{X}}}{\mathcal{H}}_{t}(K^{\star}_{\bf x})\leq\left(1-\sum_{{\bf k}\in\mathcal{S}^{\prime}}\frac{|\Omega_{{\bf k}}|}{|\Omega|}\;\mathfrak{f}({\bf k})\right)+\frac{1}{t+1}\left(\max_{{\bf k}\in\mathcal{S}^{\prime}}\;\mathfrak{f}({\bf k})\right)

holds for all 𝒮𝒮\mathcal{S}^{\prime}\subset\mathcal{S}.

Proof.

Let 𝒮𝒮\mathcal{S}^{\prime}\subset\mathcal{S} and define:

λ=max𝐤𝒮max(𝐱,𝐲)Ω𝐤K(𝐱,𝐲)=1|𝒳|max𝐤𝒮𝔣(𝐤)\lambda=\max_{{\bf k}\in\mathcal{S}^{\prime}}\max_{({\bf x},{\bf y})\in\Omega_{\bf k}}K^{\star}({\bf x},{\bf y})=\frac{1}{|{\mathcal{X}}|}\max_{{\bf k}\in\mathcal{S}^{\prime}}\;\mathfrak{f}({\bf k})

where we have used the fact that KK^{\star} is equal to 𝔣(𝐤)/|𝒳|\;\mathfrak{f}({\bf k})/|{\mathcal{X}}| on Ω𝐤.\Omega_{\bf k}. We then appeal to Lemma 3 to obtain:

1|𝒳|𝐱𝒳t(K(𝐱,))\displaystyle\frac{1}{|{\mathcal{X}}|}\sum_{{\bf x}\in{\mathcal{X}}}{\mathcal{H}}_{t}\left(K^{\star}({\bf x},\cdot)\right) 1|𝒳|𝐱𝒳(λ|𝒳|t+1+1𝐲𝒳min{K(𝐱,𝐲),λ})\displaystyle\leq\frac{1}{|{\mathcal{X}}|}\sum_{{\bf x}\in{\mathcal{X}}}\left(\frac{\lambda|{\mathcal{X}}|}{t+1}+1-\sum_{{\bf y}\in{\mathcal{X}}}\min\{K^{\star}({\bf x},{\bf y}),\lambda\}\right) (97)
=λ|𝒳|t+1+11|𝒳|𝐱𝒳𝐲𝒳min{K(𝐱,𝐲),λ}\displaystyle=\frac{\lambda|{\mathcal{X}}|}{t+1}+1-\frac{1}{|{\mathcal{X}}|}\sum_{{\bf x}\in{\mathcal{X}}}\sum_{{\bf y}\in{\mathcal{X}}}\min\{K^{\star}({\bf x},{\bf y}),\lambda\} (98)
=λ|𝒳|t+1+11|𝒳|𝐤𝒮(𝐱,𝐲)Ω𝐤min{K(𝐱,𝐲),λ}\displaystyle=\frac{\lambda|{\mathcal{X}}|}{t+1}+1-\frac{1}{|{\mathcal{X}}|}\sum_{{\bf k}\in\mathcal{S}}\sum_{({\bf x},{\bf y})\in\Omega_{\bf k}}\min\{K^{\star}({\bf x},{\bf y}),\lambda\} (99)
λ|𝒳|t+1+11|𝒳|𝐤𝒮(𝐱,𝐲)Ω𝐤min{K(𝐱,𝐲),λ}\displaystyle\leq\frac{\lambda|{\mathcal{X}}|}{t+1}+1-\frac{1}{|{\mathcal{X}}|}\sum_{{\bf k}\in\mathcal{S}^{\prime}}\sum_{({\bf x},{\bf y})\in\Omega_{\bf k}}\min\{K^{\star}({\bf x},{\bf y}),\lambda\} (100)
=λ|𝒳|t+1+11|𝒳|𝐤𝒮(𝐱,𝐲)Ω𝐤K(𝐱,𝐲)\displaystyle=\frac{\lambda|{\mathcal{X}}|}{t+1}+1-\frac{1}{|{\mathcal{X}}|}\sum_{{\bf k}\in\mathcal{S}^{\prime}}\sum_{({\bf x},{\bf y})\in\Omega_{\bf k}}K^{\star}({\bf x},{\bf y}) (101)
=λ|𝒳|t+1+11|𝒳|𝐤𝒮|Ω𝐤|𝔣(𝐤)|𝒳|\displaystyle=\frac{\lambda|{\mathcal{X}}|}{t+1}+1-\frac{1}{|{\mathcal{X}}|}\sum_{{\bf k}\in\mathcal{S}^{\prime}}|\Omega_{\bf k}|\frac{\mathfrak{f}({\bf k})}{|{\mathcal{X}}|} (102)

where we have used the fact that 𝒳×𝒳=𝐤𝒮Ω𝐤{\mathcal{X}}\times{\mathcal{X}}=\bigcup_{{\bf k}\in\mathcal{S}}\Omega_{\bf k} to go from (98) to (99), and the fact that K(𝐱,𝐲)λK^{\star}({\bf x},{\bf y})\leq\lambda on 𝐤𝒮Ω𝐤\bigcup_{{\bf k}\in\mathcal{S}^{\prime}}\Omega_{\bf k} to go from (100) to (101). To conclude the proof, we simply note that λ|𝒳|=max𝐤𝒮𝔣(𝐤)\lambda|{\mathcal{X}}|=\max_{{\bf k}\in\mathcal{S}^{\prime}}\;\mathfrak{f}({\bf k}) according to our definition of λ.\lambda. ∎

The bound provided by the above lemma is not fully explicit because it involves the sizes of the level sets Ω𝐤\Omega_{\bf k}. In the next subsection, we appeal to Cayley’s formula to obtain a lower bound for |Ω𝐤|.|\Omega_{\bf k}|.

C.3 Forest Lower Bound for the Size of the Level Sets

We recall that a forest is a graph whose connected components are trees (equivalently, a forest is a graph with no cycles.) Let us define:

𝔉:={𝒢𝔊:𝒢 is a forest}.\mathfrak{F}:=\{\mathcal{G}\in\mathfrak{G}:\mathcal{G}\text{ is a forest}\}.

In other words, 𝔉\mathfrak{F} is the set of forests on 𝒱={1,2,,nw}\mathcal{V}=\{1,2,\ldots,n_{w}\} that have at most LL edges. We obviously have the following lower bound on the size of the level sets:

|Ω𝐤|=|ζ1(𝔊𝐤)||ζ1(𝔊𝐤𝔉)|.|\Omega_{\bf k}|=\left|\zeta^{-1}(\mathfrak{G}_{\bf k})\right|\geq\left|\zeta^{-1}(\mathfrak{G}_{\bf k}\cap\mathfrak{F})\right|. (103)

In this subsection, we use Cayley’s formula to derive an explicit formula for |ζ1(𝔊𝐤𝔉)|.\left|\zeta^{-1}(\mathfrak{G}_{\bf k}\cap\mathfrak{F})\right|. We start with the following lemma:

Lemma 18.

Let 𝐤𝒮{\bf k}\in\mathcal{S}, then

|𝔊𝐤𝔉|=nw!k1!k2!kL+1!i=2L+1(ii2i!)ki|\mathfrak{G}_{\bf k}\cap\mathfrak{F}|=\frac{n_{w}!}{k_{1}!k_{2}!\cdots k_{L+1}!}\prod_{i=2}^{L+1}\left(\frac{i^{i-2}}{i!}\right)^{k_{i}} (104)
Proof.

First we note that (104) can be written as

|𝔊𝐤𝔉|=(nwk1,2k2,,(L+1)kL+1)i=2L+1iki(i2)ki!(ikii,i,,i)|\mathfrak{G}_{\bf k}\cap\mathfrak{F}|={n_{w}\choose k_{1},2k_{2},\ldots,(L+1)k_{L+1}}\;\;\prod_{i=2}^{L+1}\frac{i^{k_{i}(i-2)}}{k_{i}!}{ik_{i}\choose i,i,\ldots,i}

We now explain the above formula. The set 𝔊𝐤𝔉\mathfrak{G}_{\bf k}\cap\mathfrak{F} consists of all the forests that have exactly k1k_{1} trees of size 11, k2k_{2} trees of size 22, …, kL+1k_{L+1} trees of size L+1L+1. In order to construct a forest with this specific structure, we start by assigning the nwn_{w} vertices to L+1L+1 bins, with bin 1 receiving k1k_{1} vertices, bin 2 receiving 2k22k_{2} vertices, …, bin L+1L+1 receiving (L+1)kL+1(L+1)k_{L+1} vertices. The number of ways of accomplishing this is

(nwk1,2k2,,(L+1)kL+1).{n_{w}\choose k_{1},2k_{2},\ldots,(L+1)k_{L+1}}.

Let us now consider the vertices in bin ii for some i2i\geq 2. We claim that there are

1ki!(ikii,i,,i)iki(i2)\frac{1}{k_{i}!}{ik_{i}\choose i,i,\ldots,i}i^{k_{i}(i-2)}

ways of putting edges on these ikiik_{i} vertices in order to make kik_{i} trees of size ii. Indeed, there are 1ki!(ikii,i,,i)\frac{1}{k_{i}!}{ik_{i}\choose i,i,\ldots,i} ways of partitioning the vertices into kik_{i} disjoint subsets of size ii, and then, according to Cayley’s formula, there are ii2i^{i-2} ways of putting edges on each of these subsets so that they form a tree. To conclude, we remark that there is obviously only one way to make k1k_{1} trees of size 1 out of the k1k_{1} vertices in the first bin. ∎

Recall that a tree on nn vertices always has n1n-1 edges. So a graph that belongs to 𝔊𝐤𝔉\mathfrak{G}_{\bf k}\cap\mathfrak{F} has

m=i=1L+1(i1)kim=\sum_{i=1}^{L+1}(i-1)k_{i}

edges since it is made of k1k_{1} trees of size 11, k2k_{2} trees of size 22, …, kL+1k_{L+1} trees of size (L+1)(L+1). The fact that all graphs in 𝔊𝐤𝔉\mathfrak{G}_{\bf k}\cap\mathfrak{F} have the same number of edges allows us to obtain an explicit formula for |ζ1(𝔊𝐤𝔉)||\zeta^{-1}(\mathfrak{G}_{\bf k}\cap\mathfrak{F})| by combining Lemmas 13 and 18, namely

|ζ1(𝔊𝐤𝔉)|=(nw!k1!k2!kL+1!i=2L+1(ii2i!)ki)(m!α=mL(Lα){αm}2αnwLα).|\zeta^{-1}(\mathfrak{G}_{\bf k}\cap\mathfrak{F})|=\left(\frac{n_{w}!}{k_{1}!k_{2}!\cdots k_{L+1}!}\prod_{i=2}^{L+1}\left(\frac{i^{i-2}}{i!}\right)^{k_{i}}\right)\left(m!\sum_{\alpha=m}^{L}{L\choose\alpha}\genfrac{\{}{\}}{0.0pt}{}{\alpha}{m}2^{\alpha}n_{w}^{L-\alpha}\right).

This leads us to define the function 𝔤:𝒮\mathfrak{g}:\mathcal{S}\to\real by

𝔤(𝐤)=1nw2L(nw!k1!k2!kL+1!i=2L+1(ii2i!)ki)(γ(𝐤)!α=γ(𝐤)L(Lα){αγ(𝐤)}2αnwLα)\displaystyle\mathfrak{g}({\bf k})=\frac{1}{n_{w}^{2L}}\left(\frac{n_{w}!}{k_{1}!k_{2}!\cdots k_{L+1}!}\prod_{i=2}^{L+1}\left(\frac{i^{i-2}}{i!}\right)^{k_{i}}\right)\left(\gamma({\bf k})!\sum_{\alpha=\gamma({\bf k})}^{L}{L\choose\alpha}\genfrac{\{}{\}}{0.0pt}{}{\alpha}{\gamma({{\bf k}})}2^{\alpha}n_{w}^{L-\alpha}\right)
where γ(𝐤)=i=1L+1(i1)ki.\displaystyle\quad\text{where }\gamma({\bf k})=\sum_{i=1}^{L+1}(i-1)k_{i}.
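A direct transcription of this definition (ours, not part of the paper) is sketched below; as before, 𝐤 is stored 0-indexed so that k[i] counts the components (here, trees) of size i+1, and the Stirling numbers are computed from the explicit formula recalled before Table 2.

from math import comb, factorial

def stirling2(n, m):
    return sum((-1)**i * comb(m, m - i) * (m - i)**n for i in range(m + 1)) // factorial(m)

def g(k, L, n_w):
    gamma = sum(i * k[i] for i in range(L + 1))       # gamma(k): number of edges of such a forest
    forests = factorial(n_w)                          # |G_k intersected with F|, by Lemma 18
    for i in range(L + 1):
        forests /= factorial(k[i])
        size = i + 1
        if size >= 2:
            forests *= (size**(size - 2) / factorial(size))**k[i]
    pairs_per_graph = factorial(gamma) * sum(         # |zeta^{-1}(G)| for G with gamma(k) edges, by Lemma 13
        comb(L, a) * stirling2(a, gamma) * 2**a * n_w**(L - a)
        for a in range(gamma, L + 1))
    return forests * pairs_per_graph / n_w**(2 * L)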

Recalling (103) we therefore have that

|Ω𝐤||Ω|𝔤(𝐤) for all 𝐤𝒮.\frac{|\Omega_{\bf k}|}{|\Omega|}\geq\mathfrak{g}({\bf k})\qquad\text{ for all }{\bf k}\in\mathcal{S}.

Combining the above inequality with Lemma 17 we obtain:

Theorem 6.

The inequality

1|𝒳|𝐱𝒳t(K𝐱)(1𝐤𝒮𝔤(𝐤)𝔣(𝐤))+1t+1(max𝐤𝒮𝔣(𝐤))\frac{1}{|{\mathcal{X}}|}\sum_{{\bf x}\in{\mathcal{X}}}{\mathcal{H}}_{t}(K^{\star}_{\bf x})\leq\left(1-\sum_{{\bf k}\in\mathcal{S}^{\prime}}\mathfrak{g}({\bf k})\;\mathfrak{f}({\bf k})\right)+\frac{1}{t+1}\left(\max_{{\bf k}\in\mathcal{S}^{\prime}}\;\mathfrak{f}({\bf k})\right)

holds for all 𝒮𝒮\mathcal{S}^{\prime}\subset\mathcal{S}.

The above theorem is more general than Theorem 5 — indeed, in Theorem 5, the choice of the subset 𝒮\mathcal{S}^{\prime} is restricted to the L+1L+1 candidates:

𝒮:={𝐤L+1:i=1L+1iki=nw and i=1L+1(i1)kiL} where =0,1,,L.\mathcal{S}_{\ell}:=\left\{{\bf k}\in\mathbb{N}^{L+1}:\;\;\sum_{i=1}^{L+1}ik_{i}=n_{w}\quad\text{ and }\quad\ell\leq\sum_{i=1}^{L+1}(i-1)k_{i}\leq L\right\}\qquad\text{ where }\ell=0,1,\ldots,L.

When L=9L=9, nw=150n_{w}=150, nc=5n_{c}=5 and t=1999t=1999 (these are the parameters used in Theorem 1), choosing 𝒮=𝒮7\mathcal{S}^{\prime}=\mathcal{S}_{7} leads to a relatively tight upper bound. When L=15L=15, nw=30n_{w}=30, nc=5n_{c}=5 and t=5999t=5999 (these are the parameters corresponding to the second experiment of the experimental section), choosing 𝒮=𝒮11\mathcal{S}^{\prime}=\mathcal{S}_{11} gives a good upper bound.
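To make the role of 𝒮_ℓ concrete, the sketch below (ours) enumerates 𝒮_ℓ and assembles the right-hand side of Theorem 6 with 𝒮′ = 𝒮_ℓ. The callables f and g are assumed to implement 𝔣 and 𝔤 (for instance the naive sketches given earlier in this appendix); those naive enumerations only scale to small toy parameters, not to the settings quoted in the previous paragraph.

from itertools import product

def S_ell(ell, L, n_w):
    # k = (k_1, ..., k_{L+1}); brute force over k_2, ..., k_{L+1} (component
    # sizes 2..L+1), then k_1 is forced by sum_i i*k_i = n_w
    out = []
    ranges = [range(L // (i - 1) + 1) for i in range(2, L + 2)]
    for tail in product(*ranges):
        edges = sum((i - 1) * tail[i - 2] for i in range(2, L + 2))
        k1 = n_w - sum(i * tail[i - 2] for i in range(2, L + 2))
        if k1 >= 0 and ell <= edges <= L:
            out.append((k1,) + tail)
    return out

def theorem6_bound(ell, t, L, n_w, n_c, s_c, f, g):
    ks = S_ell(ell, L, n_w)
    fs = [f(k, L, n_w, n_c, s_c) for k in ks]
    gs = [g(k, L, n_w) for k in ks]
    return (1.0 - sum(fk * gk for fk, gk in zip(fs, gs))) + max(fs) / (t + 1)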

Appendix D Multiple Unfamiliar Sentences per Category

Figure 6: Data Model with n=2n^{*}=2 unfamiliar sentences per category. The other parameters in this example are set to L=5L=5, nw=12n_{w}=12, nc=3n_{c}=3, R=3R=3 and nspl=6n_{\text{spl}}=6. The points highlighted in yellow are the ones involved in the definition of the event A(1)A^{(1)}, see equation (109).

In the data model depicted in Figure 1, each unfamiliar sequence of concepts has a single representative in the training set. In this section we consider the more general case in which each unfamiliar sequence of concepts has nn^{*} representatives in the training set, where

1nnspl.1\leq n^{*}\leq n_{\rm spl}.

An example with n=2n^{*}=2 is depicted in Figure 6. The variables L,nw,nc,R,nsplL,n_{w},n_{c},R,n_{\text{spl}} and nn^{*} parametrize instances of this more general data model, and the associated sampling process is:

Sampling Process DM2:

(i) Sample 𝒯=(φ;𝐜1,,𝐜R;𝐜1,,𝐜R)\mathcal{T}=(\;\varphi\;;\;{\bf c}_{1},\ldots,{\bf c}_{R}\;;\;{\bf c}^{\prime}_{1},\ldots,{\bf c}^{\prime}_{R}\;) uniformly at random in 𝔗=Φ×𝒵2R\mathfrak{T}=\Phi\times\mathcal{Z}^{2R}.
(ii) For r=1,,Rr=1,\ldots,R:
   Sample (𝐱r,1,,𝐱r,n)({\bf x}_{r,1},\ldots,{\bf x}_{r,n^{*}}) uniformly at random in φ̊1(𝐜r)××φ̊1(𝐜r)\mathring{\varphi}^{-1}({\bf c}^{\prime}_{r})\times\ldots\times\mathring{\varphi}^{-1}({\bf c}^{\prime}_{r}).
   Sample (𝐱r,n+1,,𝐱r,nspl)({\bf x}_{r,n^{*}+1},\ldots,{\bf x}_{r,n_{\text{spl}}}) uniformly at random in φ̊1(𝐜r)××φ̊1(𝐜r)\mathring{\varphi}^{-1}({\bf c}_{r})\times\ldots\times\mathring{\varphi}^{-1}({\bf c}_{r}).
(iii) Sample 𝐱test{\bf x}^{\text{test}} uniformly at random in φ̊1(𝐜1)\mathring{\varphi}^{-1}({\bf c}^{\prime}_{1}).
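For concreteness, here is a sketch (ours, not the authors' code) of this sampling process for small parameters. It assumes that 𝒵 is the set of all length-L sequences of concepts and uses 0-based indices, so the first n* training points of each category are the unfamiliar ones; each word x_ℓ is drawn uniformly from the words that φ assigns to concept c_ℓ, which amounts to a uniform draw from φ̊^{-1}(𝐜).

import random

def sample_DM2(L, n_w, n_c, R, n_spl, n_star):
    s_c = n_w // n_c
    words = list(range(1, n_w + 1))
    random.shuffle(words)
    phi = {w: 1 + i // s_c for i, w in enumerate(words)}       # uniform equipartition phi in Phi
    words_of = {c: [w for w in words if phi[w] == c] for c in range(1, n_c + 1)}

    def sentence(concepts):
        # a uniform sample from the sentences generated by this concept sequence
        return [random.choice(words_of[c]) for c in concepts]

    c  = [[random.randint(1, n_c) for _ in range(L)] for _ in range(R)]   # familiar sequences c_r
    cp = [[random.randint(1, n_c) for _ in range(L)] for _ in range(R)]   # unfamiliar sequences c'_r
    train = {(r, s): sentence(cp[r] if s < n_star else c[r])
             for r in range(R) for s in range(n_spl)}
    x_test = sentence(cp[0])                                              # generated by c'_1
    return phi, train, x_test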

Our analysis easily adapts to this more general case and gives:

Theorem 7.

Let 𝔗=Φ×𝒵2R\mathfrak{T}=\Phi\times\mathcal{Z}^{2R}. Then

1err¯(,ψ,𝔗)n[(1𝐤𝒮𝔣(𝐤)𝔤(𝐤))+12R(max𝐤𝒮𝔣(𝐤))]+1R1-\overline{\text{err}}(\mathcal{F},\psi,\mathfrak{T})\leq n^{*}\left[\left(1-\sum_{{\bf k}\in\mathcal{S}_{\ell}}\;\mathfrak{f}({\bf k})\mathfrak{g}({\bf k})\right)+\frac{1}{2R}\left(\max_{{\bf k}\in\mathcal{S}_{\ell}}\;\mathfrak{f}({\bf k})\right)\right]+\frac{1}{R} (105)

for all feature space \mathcal{F}, all feature map ψ:𝒳\psi:{\mathcal{X}}\mapsto\mathcal{F}, and all 0L0\leq\ell\leq L.

Theorem 3 in the main body of the paper can be viewed as a special case of the above theorem — indeed, setting n=1n^{*}=1 in inequality (105) we exactly recover (9).

In order to prove Theorem 7, we simply need to revisit subsection B.3. We denote by ΩDM2\Omega_{\rm DM2} and DM2\mathbb{P}_{{\rm DM2}} the sample space and probability measure associated with the sampling process DM2. As in subsection B.3, given a kernel K𝒦K\in\mathcal{K}, we define the event

EK={ωΩDM2:There exists 1snspl such that K(𝐱test,𝐱1,s)>K(𝐱test,𝐱r,s) for all 2rR and all 1snspl}.E_{K}=\Big{\{}\omega\in\Omega_{\rm DM2}:\;\;\text{There exists }1\leq s^{*}\leq n_{\text{spl}}\text{ such that }\\ K\left({\bf x}^{\text{test}},{\bf x}_{1,s^{*}}\right)>K\left({\bf x}^{\text{test}},{\bf x}_{r,s}\right)\text{ for all }2\leq r\leq R\text{ and all }1\leq s\leq n_{\text{spl}}\Big{\}}.

This event describes all the outcomes corresponding to a successful classification of the test point 𝐱test{\bf x}^{\rm test}. For simplicity let us assume that n=2n^{*}=2 (thereby matching the scenario depicted in Figure 6). We further partition the event EKE_{K} according to which training point from the first category is most similar to the test point:

EK=Emeaningful(1)Emeaningful(2)EluckE_{K}=E^{(1)}_{\text{meaningful}}\cup E^{(2)}_{\text{meaningful}}\cup E_{\text{luck}} (106)

The event Emeaningful(1)E^{(1)}_{\text{meaningful}} consists of all the outcomes where, among the points from the first category, 𝐱1,1{\bf x}_{1,1} is the most similar to 𝐱test{\bf x}^{\rm test}, Emeaningful(2)E^{(2)}_{\text{meaningful}} consists of all the outcomes where, among the points from the first category, 𝐱1,2{\bf x}_{1,2} is the most similar to 𝐱test{\bf x}^{\rm test}, and EluckE_{\text{luck}} consists of all the remaining cases. Formally we have:

Emeaningful(1)=EK\displaystyle E^{(1)}_{\text{meaningful}}=E_{K} {ωΩDM2:K(𝐱test,𝐱1,1)>K(𝐱test,𝐱1,2)}\displaystyle\;\cap\;\Big{\{}\omega\in\Omega_{\rm DM2}:\;\;K\left({\bf x}^{\text{test}},{\bf x}_{1,1}\right)>K\left({\bf x}^{\text{test}},{\bf x}_{1,2}\right)\Big{\}}
{ωΩDM2:K(𝐱test,𝐱1,1)>K(𝐱test,𝐱1,s) for all 3snspl}\displaystyle\;\cap\;\Big{\{}\omega\in\Omega_{\rm DM2}:\;\;K\left({\bf x}^{\text{test}},{\bf x}_{1,1}\right)>K\left({\bf x}^{\text{test}},{\bf x}_{1,s}\right)\text{ for all }3\leq s\leq n_{\text{spl}}\Big{\}}
Emeaningful(2)=EK\displaystyle E^{(2)}_{\text{meaningful}}=E_{K} {ωΩDM2:K(𝐱test,𝐱1,2)K(𝐱test,𝐱1,1)}\displaystyle\;\cap\;\Big{\{}\omega\in\Omega_{\rm DM2}:\;\;K\left({\bf x}^{\text{test}},{\bf x}_{1,2}\right)\geq K\left({\bf x}^{\text{test}},{\bf x}_{1,1}\right)\Big{\}}
{ωΩDM2:K(𝐱test,𝐱1,2)>K(𝐱test,𝐱1,s) for all 3snspl}\displaystyle\;\cap\;\Big{\{}\omega\in\Omega_{\rm DM2}:\;\;K\left({\bf x}^{\text{test}},{\bf x}_{1,2}\right)>K\left({\bf x}^{\text{test}},{\bf x}_{1,s}\right)\text{ for all }3\leq s\leq n_{\text{spl}}\Big{\}}
Eluck=EK\displaystyle E_{\text{luck}}=E_{K} {ωΩDM2:there exists 3snspl such that\displaystyle\;\cap\;\Big{\{}\omega\in\Omega_{\rm DM2}:\;\;\text{there exists }3\leq s^{*}\leq n_{\text{spl}}\text{ such that }
K(𝐱test,𝐱1,s)K(𝐱test,𝐱1,s) for all 1snspl}\displaystyle\hskip 113.81102ptK\left({\bf x}^{\text{test}},{\bf x}_{1,s^{*}}\right)\geq K\left({\bf x}^{\text{test}},{\bf x}_{1,s}\right)\text{ for all }1\leq s\leq n_{\text{spl}}\Big{\}}

Exactly as in subsection B.3, we then prove that

DM2[Emeaningful(i)]1|𝒳|𝐱𝒳2R1(K𝐱)for i=1,2\displaystyle\mathbb{P}_{\rm DM2}[E^{(i)}_{\text{meaningful}}]\leq\frac{1}{|{\mathcal{X}}|}\sum_{{\bf x}\in{\mathcal{X}}}\mathcal{H}_{2R-1}(K^{\star}_{\bf x})\quad\text{for }i=1,2 (107)
DM2[Eluck]1R.\displaystyle\mathbb{P}_{\rm DM2}[E_{\text{luck}}]\leq\frac{1}{R}. (108)

The proof of inequality (107) is essentially identical to the proof of Lemma 10. We define the event

A(1):={ωΩDM2:K(𝐱test,𝐱1,1)>K(𝐱test,𝐱r,1) for all 2rR}{ωΩDM2:K(𝐱test,𝐱1,1)>K(𝐱test,𝐱r,3) for all 1rR}A^{(1)}:=\Big{\{}\omega\in\Omega_{\rm DM2}:\;\;K\left({\bf x}^{\text{test}},{\bf x}_{1,1}\right)>K\left({\bf x}^{\text{test}},{\bf x}_{r,1}\right)\text{ for all }2\leq r\leq R\Big{\}}\\ \cap\Big{\{}\omega\in\Omega_{\rm DM2}:\;\;K\left({\bf x}^{\text{test}},{\bf x}_{1,1}\right)>K\left({\bf x}^{\text{test}},{\bf x}_{r,3}\right)\text{ for all }1\leq r\leq R\Big{\}} (109)

and the event

A(2):={ωΩDM2:K(𝐱test,𝐱1,2)>K(𝐱test,𝐱r,2) for all 2rR}{ωΩDM2:K(𝐱test,𝐱1,2)>K(𝐱test,𝐱r,3) for all 1rR}A^{(2)}:=\Big{\{}\omega\in\Omega_{\rm DM2}:\;\;K\left({\bf x}^{\text{test}},{\bf x}_{1,2}\right)>K\left({\bf x}^{\text{test}},{\bf x}_{r,2}\right)\text{ for all }2\leq r\leq R\Big{\}}\\ \cap\Big{\{}\omega\in\Omega_{\rm DM2}:\;\;K\left({\bf x}^{\text{test}},{\bf x}_{1,2}\right)>K\left({\bf x}^{\text{test}},{\bf x}_{r,3}\right)\text{ for all }1\leq r\leq R\Big{\}}

The 𝐱{\bf x}’s involved in the definition of the event A(1)A^{(1)} are highlighted in yellow in Figure 6. Crucially, they are all generated by different sequences of concepts, with the exception of 𝐱1,1{\bf x}_{1,1}, which is generated by the same sequence as 𝐱test{\bf x}^{\rm test}. We can therefore appeal to Theorem 4 to obtain

DM2[A(1)]1|𝒳|𝐱𝒳2R1(K𝐱)\mathbb{P}_{\rm DM2}[A^{(1)}]\leq\frac{1}{|{\mathcal{X}}|}\sum_{{\bf x}\in{\mathcal{X}}}\mathcal{H}_{2R-1}(K^{\star}_{\bf x})

since there is a total of t=2R1t=2R-1 ‘distractors’ (the ‘distractors’ in Figure 6 are 𝐱1,3{\bf x}_{1,3}, 𝐱2,1{\bf x}_{2,1}, 𝐱2,3{\bf x}_{2,3}, 𝐱3,1{\bf x}_{3,1} and 𝐱3,3{\bf x}_{3,3}). We then use the fact that Emeaningful(1)A(1)E^{(1)}_{\text{meaningful}}\subset A^{(1)} to obtain (107) with i=1i=1. The case i=2i=2 is entirely similar.

We now prove (108). The proof is similar to the proof of Lemma 11. For 1rR1\leq r\leq R, we define the events

Br=1rRrr{ωΩDM2:max3snsplK(𝐱test,𝐱r,s)>max3snsplK(𝐱test,𝐱r,s)}\displaystyle B_{r}=\bigcap_{\begin{subarray}{c}1\leq r^{\prime}\leq R\\ r^{\prime}\neq r\end{subarray}}\left\{\omega\in\Omega_{\rm DM2}:\;\;\max_{3\leq s\leq n_{\text{spl}}}K({\bf x^{\text{test}}},{\bf x}_{r,s})>\max_{3\leq s^{\prime}\leq n_{\text{spl}}}K({\bf x^{\text{test}}},{\bf x}_{r^{\prime},s^{\prime}})\right\}

By symmetry, these events are equiprobable. They are also mutually disjoint, and therefore DM2[Br]1/R\mathbb{P}_{\rm DM2}[B_{r}]\leq 1/R. Inequality (108) then comes from the fact that EluckB1E_{\rm luck}\subset B_{1}.

Combining (106), (107), (108) then gives

supK𝒦DM2[EK]2|𝒳|𝐱𝒳2R1(K𝐱)+1R.\sup_{K\in\mathcal{K}}\mathbb{P}_{{\rm DM2}}\Big{[}E_{K}\Big{]}\leq\frac{2}{|{\mathcal{X}}|}\sum_{{\bf x}\in{\mathcal{X}}}\mathcal{H}_{2R-1}\left(K^{\star}_{\bf x}\right)+\frac{1}{R}. (110)

and, going back to the general case where nn^{*} denotes the number of representatives that each sequence of unfamiliar concepts has in the training set,

supK𝒦DM2[EK]n|𝒳|𝐱𝒳2R1(K𝐱)+1R\sup_{K\in\mathcal{K}}\mathbb{P}_{{\rm DM2}}\Big{[}E_{K}\Big{]}\leq\frac{n^{*}}{|{\mathcal{X}}|}\sum_{{\bf x}\in{\mathcal{X}}}\mathcal{H}_{2R-1}\left(K^{\star}_{\bf x}\right)+\frac{1}{R} (111)

which in turn implies (16). Combining inequalities (15) and (16) then concludes the proof of Theorem 7.

Appendix E Details of the Experiments

In this section we provide the details of the experiments described in Section 6, as well as additional experiments. Table 3 provides the results of experiments in which the parameters LL, nwn_{w}, ncn_{c} and RR are set to

L=9,nw=150,nc=5,R=1000.L=9,\quad n_{w}=150,\quad n_{c}=5,\quad R=1000.

The parameters nspln_{\rm spl} and nn^{*} are chosen so that the training set contains 5 familiar sentences per category, and between 1 and 5 unfamiliar sentences per category. Table 3 is identical to Table 1 in Section 6, with the exception that it contains additional information (i.e. the standard deviations of the obtained accuracies). The abbreviation NN appearing in Table 3 stands for ‘Nearest Neighbor’.

Table 4 provides the results of additional experiments in which the parameters LL, nwn_{w}, ncn_{c} and RR are set to

L=9,nw=50,nc=5,R=1000.L=9,\quad n_{w}=50,\quad n_{c}=5,\quad R=1000.

The parameters nspln_{\rm spl} and nn^{*} are chosen, as in the previous set of experiments, so that the training set contains 5 familiar sentences per category, and between 1 and 5 unfamiliar sentences per category. The tasks considered in this set of experiments are easier due to the fact that the vocabulary is smaller (nw=50n_{w}=50 instead of nw=150n_{w}=150).

Table 3: Accuracy in %\% on unfamiliar test points (L=9L=9, nw=150n_{w}=150, nc=5n_{c}=5, R=1000R=1000).
n=1n^{*}\!=\!1 n=2n^{*}\!=\!2 n=3n^{*}\!=\!3 n=4n^{*}\!=\!4 n=5n^{*}\!=\!5
nspl=6n_{\rm spl}\!=\!6 nspl=7n_{\rm spl}\!=\!7 nspl=8n_{\rm spl}\!=\!8 nspl=9n_{\rm spl}\!=\!9 nspl=10n_{\rm spl}\!=\!10
Neural network 99.8±0.399.8\pm 0.3 99.9±0.199.9\pm 0.1 99.9±0.199.9\pm 0.1 99.9±0.199.9\pm 0.1 100±0.1100\pm 0.1
NN on feat. extracted by neural net 99.9±0.199.9\pm 0.1 99.9±0.199.9\pm 0.1 99.9±0.199.9\pm 0.1 99.9±0.199.9\pm 0.1 99.9±0.199.9\pm 0.1
NN on feat. extracted by ψ\psi^{\star} 0.7±0.20.7\pm 0.2 1.1±0.31.1\pm 0.3 1.5±0.31.5\pm 0.3 1.8±0.31.8\pm 0.3 2.2±0.32.2\pm 0.3
NN on feat. extracted by ψonehot\psi_{\rm one-hot} 0.6±0.20.6\pm 0.2 1.1±0.31.1\pm 0.3 1.4±0.21.4\pm 0.2 1.7±0.31.7\pm 0.3 2.1±0.32.1\pm 0.3
Upper bound (0.015n+1/10000.015n^{*}+1/1000) 1.61.6 3.13.1 4.64.6 6.16.1 7.67.6
SVM on feat. extracted by ψ\psi^{\star} 0.6±0.30.6\pm 0.3 1.5±0.41.5\pm 0.4 2.2±0.42.2\pm 0.4 3.2±0.63.2\pm 0.6 4.2±1.04.2\pm 1.0
SVM on feat. extracted by ψonehot\psi_{\rm one-hot} 0.5±0.10.5\pm 0.1 1.1±0.11.1\pm 0.1 1.9±0.11.9\pm 0.1 2.8±0.22.8\pm 0.2 3.8±0.23.8\pm 0.2
SVM with Gaussian kernel 0.6±0.10.6\pm 0.1 1.1±0.11.1\pm 0.1 2.0±0.12.0\pm 0.1 2.8±0.22.8\pm 0.2 3.6±0.23.6\pm 0.2

E.1 Neural Network Experiments

We consider the neural network in Figure 2. The number of neurons in the input, hidden and output layers of the MLPs constituting the neural network are set to:

MLP 1:din=150,dhidden=500,dout=10,MLP 2:din=90,dhidden=2000,dout=1000.\text{MLP 1:}\;\;d_{\text{in}}=150,d_{\text{hidden}}=500,d_{out}=10,\qquad\text{MLP 2:}\;\;d_{in}=90,d_{\text{hidden}}=2000,d_{\text{out}}=1000.

For each of the 10 possible parameter settings in Table 3 and Table 4, we do 104 experiments. For each experiment we generate:

  • A training set containing R×nsplR\times n_{\rm spl} sentences.

  • A test set containing 10,000 unfamiliar sentences (10 sentences per category).

We then train the neural network with stochastic gradient descent until the training loss reaches 10^{-4} (we use a cross entropy loss). The learning rate is set to 0.01 (constant learning rate), and the batch size to 100. At test time, we either use the neural network to classify the test points (first row of the tables) or we use a nearest neighbor classification rule on top of the features extracted by the neural network (second row of the tables). The mean and standard deviation of the 104 test accuracies, for each of the 10 settings, and for each of the two evaluation strategies, are reported in the first two rows of Table 3 and Table 4.
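As a reference point, here is a hedged PyTorch sketch (ours, not the authors' code) of the setup just described: assuming, as the input dimension d_in = 150 = n_w suggests, that each word enters as a one-hot vector, MLP 1 is applied to each of the L = 9 words, the resulting 10-dimensional word features are concatenated into a 90-dimensional sentence feature, and MLP 2 maps this feature to the R = 1000 categories. The text does not specify the activation functions; ReLU is our assumption, and the training loop is only indicated.

import torch
import torch.nn as nn

mlp1 = nn.Sequential(nn.Linear(150, 500), nn.ReLU(), nn.Linear(500, 10))
mlp2 = nn.Sequential(nn.Linear(90, 2000), nn.ReLU(), nn.Linear(2000, 1000))

def forward(x_onehot):                  # x_onehot: (batch, 9, 150), one-hot per word
    word_feat = mlp1(x_onehot)          # (batch, 9, 10)
    sent_feat = word_feat.flatten(1)    # (batch, 90): features used by the NN rule
    return mlp2(sent_feat), sent_feat   # logits over the 1000 categories, and features

optimizer = torch.optim.SGD(list(mlp1.parameters()) + list(mlp2.parameters()), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
# Training: iterate over minibatches of size 100 and take SGD steps until the
# training loss drops below 1e-4, as described above.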

E.2 Nearest-Neighbor Experiments

In these experiments we use a nearest neighbor classification rule on top of features extracted by ψ\psi^{\star} (third row of Table 3 and 4) or ψonehot\psi_{\rm one-hot} (fourth row). For each of the 10 possible parameter settings in Table 3 and Table 4, we do 50 experiments. For each experiment we generate:

  • A training set containing R×nsplR\times n_{\rm spl} sentences.

  • A test set containing 1,000 unfamiliar sentences (one sentence per category).

In order to perform the nearest neighbor classification rule on the features extracted by ψ\psi^{\star}, one needs to evaluate the kernel K(𝐱,𝐲)=ψ(𝐱),ψ(𝐲)K^{\star}({\bf x},{\bf y})=\langle\psi^{\star}({\bf x}),\psi^{\star}({\bf y})\rangle_{\mathcal{F}^{\star}} for each pair of sentences. Computing K(𝐱,𝐲)K^{\star}({\bf x},{\bf y}) requires an expensive combinatorial calculation which is the reason why we perform fewer experiments and use a smaller test set than in E.1. In order to break ties, the values of K(𝐱,𝐲)K^{\star}({\bf x},{\bf y}) are perturbed according to (31).
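The classification rule itself is simple once the kernel values are available; a minimal sketch (ours) is given below. Ties are broken here by taking the first maximizer, whereas the experiments perturb the kernel values according to (31).

import numpy as np

def nn_kernel_classify(K, x_test, train_x, train_y):
    # assign to the test point the label of the most similar training point
    similarities = np.array([K(x_test, x) for x in train_x])
    return train_y[int(np.argmax(similarities))]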

With the parameter setting $L=9$, $n_{w}=50$, $n_{c}=5$, and $R=1000$, our theoretical lower bound for the generalization error is

\overline{\text{err}}(\mathcal{F},\psi,\mathfrak{T})\geq 1-0.073\,n^{*}-1/R\qquad\text{for all $\mathcal{F}$ and all $\psi$,} \qquad (112)

which is obtained by choosing $\ell=6$ in inequality (105). This leads to an upper bound of $0.073\,n^{*}+1/R$ on the success rate. This upper bound is evaluated for $n^{*}$ ranging from $1$ to $5$ in the fifth row of Table 4.
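For instance, for $n^{*}=1$ the bound gives $0.073\cdot 1+1/1000=0.074$, i.e. $7.4\%$; the other entries of that row follow from the same computation:

    # success-rate upper bound 0.073 * n_star + 1/R for the Table 4 setting (R = 1000)
    R = 1000
    for n_star in range(1, 6):
        print(n_star, round(100 * (0.073 * n_star + 1 / R), 1))  # 7.4, 14.7, 22.0, 29.3, 36.6 (in %)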

Table 4: Accuracy in % on unfamiliar test points ($L=9$, $n_{w}=50$, $n_{c}=5$, $R=1000$).
$n^{*}=1$  $n^{*}=2$  $n^{*}=3$  $n^{*}=4$  $n^{*}=5$
$n_{\rm spl}=6$  $n_{\rm spl}=7$  $n_{\rm spl}=8$  $n_{\rm spl}=9$  $n_{\rm spl}=10$
Neural network:  $99.9\pm 0.1$  $99.9\pm 0.1$  $99.9\pm 0.1$  $99.9\pm 0.1$  $100\pm 0.1$
NN on feat. extracted by neural net:  $99.9\pm 0.1$  $99.9\pm 0.1$  $99.9\pm 0.1$  $99.9\pm 0.1$  $99.9\pm 0.1$
NN on feat. extracted by $\psi^{\star}$:  $2.4\pm 0.3$  $4.1\pm 0.6$  $5.5\pm 0.6$  $6.9\pm 0.8$  $8.0\pm 0.8$
NN on feat. extracted by $\psi_{\rm one-hot}$:  $2.0\pm 0.3$  $3.4\pm 0.5$  $4.8\pm 0.6$  $5.7\pm 0.5$  $6.7\pm 0.7$
Upper bound ($0.073\,n^{*}+1/1000$):  $7.4$  $14.7$  $22.0$  $29.3$  $36.6$
SVM on feat. extracted by $\psi^{\star}$:  $2.2\pm 0.5$  $5.2\pm 0.9$  $8.6\pm 0.9$  $11.7\pm 0.6$  $15.1\pm 1.2$
SVM on feat. extracted by $\psi_{\rm one-hot}$:  $1.2\pm 0.1$  $3.5\pm 0.2$  $6.4\pm 0.2$  $9.9\pm 0.3$  $13.6\pm 0.4$
SVM with Gaussian kernel:  $2.0\pm 0.1$  $3.7\pm 0.2$  $5.4\pm 0.2$  $8.6\pm 0.3$  $12.1\pm 0.3$

E.3 SVM on Features Extracted by $\psi_{\rm one-hot}$ and SVM with Gaussian Kernel

For each of the 10 possible parameter settings in Table 3 and Table 4, we do $100$ experiments. For each experiment we generate:

  • A training set containing $R\times n_{\rm spl}$ sentences.

  • A test set containing 10,000 unfamiliar sentences (10 sentences per category).

We use the feature map $\psi_{\rm one-hot}$ (which simply concatenates the one-hot encodings of the words composing a sentence) to extract features from each sentence. These features are then normalized according to the formula

\tilde{\bf x}=\frac{\psi_{\rm one-hot}({\bf x})-p}{\sqrt{p(1-p)}}\qquad\text{where }p=1/n_{w} \qquad (113)

so that they are centered around $0$ and are $O(1)$. We then use the SVC function of Scikit-learn [23], which itself relies on the LIBSVM library [8], to run a soft multiclass SVM algorithm on these features. We tried various values of the parameter controlling the $\ell_{2}$ regularization in the soft-SVM formulation and found that, on this task, the algorithm is not sensitive to this choice, so we chose $C=1$. The results are reported in the seventh row of both tables.
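A minimal Scikit-learn sketch of this pipeline is given below. The helper functions are ours and assume that sentences are provided as arrays of integer word indices; we use a linear kernel here, since the Gaussian-kernel variant is reported separately. Only the normalization (113) and the choice $C=1$ come directly from the description above.

    import numpy as np
    from sklearn.svm import SVC

    def one_hot_features(sentences, n_w):
        # sentences: integer array of shape (n_samples, L) with word indices in {0, ..., n_w - 1};
        # returns the concatenated one-hot encodings, shape (n_samples, L * n_w)
        n, L = sentences.shape
        X = np.zeros((n, L * n_w))
        X[np.arange(n)[:, None], np.arange(L)[None, :] * n_w + sentences] = 1.0
        return X

    def normalize(X, n_w):
        p = 1.0 / n_w                        # normalization (113): center and rescale to O(1)
        return (X - p) / np.sqrt(p * (1 - p))

    # train_sentences, train_labels, test_sentences, test_labels: produced by the data model
    # X_train = normalize(one_hot_features(train_sentences, n_w), n_w)
    # X_test  = normalize(one_hot_features(test_sentences, n_w), n_w)
    # clf = SVC(kernel="linear", C=1.0).fit(X_train, train_labels)
    # accuracy = clf.score(X_test, test_labels)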

We also tried a soft SVM with the Gaussian kernel

K({\bf x},{\bf y})=e^{-\gamma\|{\bf x}-{\bf y}\|^{2}}

applied on top of the features extracted by $\psi_{\rm one-hot}$ and normalized according to (113). We use the SVC function of Scikit-learn with the $\ell_{2}$ regularization parameter set to $C=1$. For the experiments in Table 3 ($n_{w}=150$), the parameter $\gamma$ involved in the definition of the kernel was set to $\gamma=0.25$ when $n^{*}\in\{1,2\}$ and to $\gamma=0.1$ when $n^{*}\in\{3,4,5\}$. For the experiments in Table 4 ($n_{w}=50$), it was set to $\gamma=0.75$ when $n^{*}=1$, to $\gamma=0.1$ when $n^{*}=2$, and to $\gamma=0.005$ when $n^{*}\in\{3,4,5\}$.
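The Gaussian-kernel variant only changes the kernel passed to SVC; for example, for the Table 3 setting with $n^{*}\in\{1,2\}$ one would call (variable names as in the previous sketch):

    from sklearn.svm import SVC

    # X_train, X_test: features normalized according to (113); gamma chosen per setting as listed above
    clf = SVC(kernel="rbf", gamma=0.25, C=1.0).fit(X_train, train_labels)
    accuracy = clf.score(X_test, test_labels)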

E.4 SVM on Features Extracted by $\psi^{\star}$

Table 5: Search for the hyperparameter $\alpha$
$\alpha$     $\lambda_{\text{min}}(G^{\text{train}})$     $\lambda_{\text{max}}(G^{\text{train}})$     Test Accuracy
0.001        -90.9        50,583.3     6.1%
0.01         -81.5        32,334.9     6.1%
0.1          -56.8        15,358.7     6.6%
1.0          -22.9        4,191.5      7.6%
10           -2.5         673.4        7.2%
15           -0.138       471.7        7.0%
16           0.2          445.5        7.0%
100          4.7          86.9         5.4%
1000         4.983        15.573       5.0%

Applying an SVM to the features extracted by $\psi^{\star}$ is equivalent to running a kernelized SVM with kernel $K^{\star}$. A naive implementation of such an algorithm leads to very poor results on our data model. For the algorithm not to fail completely, it is important to carefully “rescale” $K^{\star}$ so that the eigenvalues of the corresponding Gram matrix are well behaved. Recall that

K^{\star}({\bf x},{\bf y})=\frac{n_{c}^{L}}{n_{w}^{L}}\;\frac{\big|\{\varphi\in\Phi:\varphi(x_{\ell})=\varphi(y_{\ell})\text{ for all }1\leq\ell\leq L\}\big|}{|\Phi|} \qquad (114)

and let $\xi:\mathbb{R}\to\mathbb{R}$ be a strictly increasing function. Since the nearest neighbor classification rule works by comparing the values of $K^{\star}$ on various pairs of points, it is clear that using the kernel $K^{\star\star}({\bf x},{\bf y})=\xi(K^{\star}({\bf x},{\bf y}))$ is equivalent to using the kernel $K^{\star}({\bf x},{\bf y})$. In particular, choosing $\xi(x):=\log(1+(n_{w}^{L}/\alpha)x)$ gives the following family of optimal kernels:

K_{\alpha}^{\star\star}({\bf x},{\bf y})=\log\left(1+\frac{n_{w}^{L}}{\alpha}K^{\star}({\bf x},{\bf y})\right) \qquad (115)

To be clear, all of these kernels are exactly equivalent to the kernel $K^{\star}$ when using a nearest neighbor classification rule. However, they lead to different algorithms when used for a kernelized SVM. We have experimented with various choices of the function $\xi$ and found that this logarithmic rescaling works well for the kernelized SVM.
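In code, the rescaling (115) is an entrywise transformation of a precomputed matrix of $K^{\star}$ values; the sketch below assumes such a matrix is available and the function name is ours.

    import numpy as np

    def rescale_kernel(K_star, n_w, L, alpha):
        # K_alpha**(x, y) = log(1 + (n_w^L / alpha) * K*(x, y)), applied entrywise; cf. (115)
        return np.log1p((n_w ** L / alpha) * K_star)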

For each of the 10 possible parameter settings in Table 3 and Table 4, we do $10$ experiments. For each experiment we generate:

  • A training set containing $R\times n_{\rm spl}$ sentences.

  • A test set containing 1,000 unfamiliar sentences (one sentence per category).

Let us denote by ${\bf x}^{\text{train}}_{i}$, $1\leq i\leq R\times n_{\rm spl}$, the data points in one of these training sets, and by ${\bf x}^{\text{test}}_{i}$, $1\leq i\leq 1000$, the data points in the corresponding test set. In order to run the kernelized SVM algorithm we need to form the Gram matrices

G^{\text{train}}_{ij}=K_{\alpha}^{\star\star}({\bf x}^{\text{train}}_{i},{\bf x}^{\text{train}}_{j})\qquad\text{and}\qquad G^{\text{test}}_{ij}=K_{\alpha}^{\star\star}({\bf x}^{\text{test}}_{i},{\bf x}^{\text{train}}_{j}) \qquad (116)

Constructing each of these Gram matrices takes a few days on CPU. We then use the SVC function of Scikit-learn to run a soft multiclass kernelized SVM algorithm. We tried various values of the parameter controlling the $\ell_{2}$ regularization and found that the algorithm is not sensitive to this choice, so we chose $C=1$. The algorithm is, on the other hand, quite sensitive to the choice of the hyperparameter $\alpha$ defining the kernel $K_{\alpha}^{\star\star}$. We experimented with various choices of $\alpha$ and found that choosing the smallest $\alpha$ that makes the Gram matrix $G^{\text{train}}$ positive definite works well (note that the Gram matrix should be positive semidefinite for the kernelized SVM method to make sense). In Table 5 we show, on a specific pair of training and test sets (generated by our data model with parameters $L=9$, $n_{w}=50$, $n_{c}=5$, $R=1000$, $n_{\rm spl}=8$, and $n^{*}=3$), how the eigenvalues of $G^{\text{train}}$ and the test accuracy depend on $\alpha$.
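The corresponding computation can be sketched as follows, assuming the unrescaled Gram matrices of $K^{\star}$ have been precomputed; the candidate values of $\alpha$ simply mirror Table 5, and the function names are ours.

    import numpy as np
    from sklearn.svm import SVC

    def smallest_valid_alpha(K_star_train, n_w, L,
                             candidates=(0.001, 0.01, 0.1, 1.0, 10, 15, 16, 100, 1000)):
        # return the smallest candidate alpha for which the rescaled training Gram matrix
        # (entrywise rescaling (115)) is positive definite
        for alpha in sorted(candidates):
            G = np.log1p((n_w ** L / alpha) * K_star_train)
            if np.linalg.eigvalsh(G).min() > 0:
                return alpha
        raise ValueError("no candidate alpha makes the Gram matrix positive definite")

    def fit_and_score(G_train, y_train, G_test, y_test):
        # G_train: (n_train, n_train) Gram matrix of K_alpha** on the training set
        # G_test:  (n_test, n_train) cross-Gram matrix between test and training points
        clf = SVC(kernel="precomputed", C=1.0).fit(G_train, y_train)
        return clf.score(G_test, y_test)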