Incremental Few-shot Text Classification with Multi-round New Classes:
Formulation, Dataset and System
Abstract
Text classification is usually studied by labeling natural language texts with relevant categories from a predefined set. In the real world, new classes might keep challenging the existing system with limited labeled data, and the system should be intelligent enough to recognize upcoming new classes with a few examples. In this work, we define a new task in the NLP domain: "incremental few-shot text classification", where the system first handles some base classes with rich annotations, then copes with multiple rounds of new classes. For each round, there is a batch of new classes with a few labeled examples per class. This new task poses two major challenges: (i) for the learning process, the system should incrementally learn new classes round by round without re-training on the examples of preceding classes; (ii) for the performance, the system should perform well on new classes without much loss on preceding classes. In addition to formalizing the new task, we also release two benchmark datasets in the incremental few-shot setting: intent classification and relation classification. Moreover, we propose an approach, Entailment, which shows promise for solving this novel problem.
1 Introduction
Text classification has achieved great success in the past decades with the development of deep learning techniques kowsari2019text. However, decent performance relies heavily on the quality and quantity of the training data. Recently, few-shot text classification yu2018diverse has attracted increasing attention from researchers, since large-scale labeled data for new classes is rarely available in reality.
Typically, few-shot text classification is formulated as follows: the system first sees a set of base classes $C_b$ with a large number of labeled examples each, then a group of new classes $C_n$ is provided with $k$ examples per class. For a testing instance, the system is required to search for its label in the space of $C_b \cup C_n$ or merely $C_n$.
However, if we think about few-shot text classification in real scenarios, new challenges arise that make it worth further exploration. First, take a bank's customer service system as an example: queries with new intents keep appearing (e.g., in a sequence of rounds) without enough labeled data. The system should be able to keep learning and recognizing new intents round by round. For each query, the system needs to pick the most appropriate intent from the incrementally increasing label space, or return "none of them". Second, existing incremental training strategies suffer from catastrophic forgetting mccloskey1989catastrophic: systems tend to forget the knowledge learned in the past when continually fine-tuning on new classes, which leads to a significant performance drop on base classes. For a real-world application, such as the aforementioned customer service system, the system is expected to perform well on all classes, regardless of when the classes come to the system or how many examples they have.
In this work, our contribution lies in three aspects. First, we formally define the problem of "incremental few-shot text classification". In our definition, the system is first provided with some base classes $C_b$ with rich annotations, then $K$ rounds of new classes (i.e., $C_n^1$, $C_n^2$, $\ldots$, $C_n^K$) come sequentially. Each round $C_n^i$ ($i \in [1, K]$) consists of a group of new classes, and each class has $k$ labeled examples ($k$ is in the range of [1, 5] and varies for different classes). For testing, we require the system to either select the best class from $C_b \cup C_n^1 \cup \cdots \cup C_n^K$ or output "none of them", which means no existing class applies to the input. As far as we know, this is the first work in the NLP community that studies text classification with an incremental size of classes. There are a few papers working on incremental few-shot classification in the field of computer vision DBLPRenLFZ19; however, they only consider one round of new classes. It is difficult to evaluate a system's ability for incremental learning without multiple rounds of new classes.
Furthermore, we consider an extreme case where no base classes are available, which we call incremental few-shot text classification without base classes. This happens whenever a system is built from scratch: in real-world scenarios, base classes with rich annotations might not be available at the beginning. In this setting, we need to solve the cold-start problem where no initial annotations are available. All previous few-shot learning models DBLPSnellSZ17; DBLPGidarisK18; DBLPRenLFZ19 fail here, since they rely on large-scale labeled data for base classes to train a robust system.
Second, to evaluate the aforementioned two settings, incremental few-shot text classification with/without base classes, we release a benchmark dataset, IFS-Intent, for evaluation. This benchmark simulates a task like the bank's customer service mentioned above. Another important feature of our benchmark is that we do not provide dev sets. Existing systems are commonly evaluated on a dev set to choose the best training model. We claim that in real-world (incremental) few-shot applications, we cannot expect extra labeled data beyond the $k$-shot examples. This is in line with the observation in DBLP07676. If a system has to rely on a dev set to find the best parameters, it is not suitable for the incremental few-shot setting.
Third, we propose our approach, Entailment, to solve this new problem. Entailment models text classification as a textual entailment dagan2013recognizing problem. To figure out whether an input $u$ belongs to a class $c$, Entailment tries to infer the truth value of $c$ (i.e., a hypothesis) given $u$ (i.e., the premise). The main benefit of this formulation is that the system learns the task not only from the label-specific examples, but, more importantly, from large-scale entailment datasets. In other words, we make use of indirect supervision from textual entailment datasets to address the target few-shot task. Specifically, for each round $C_n^i$, which has $m$ new classes with $k$ examples each, we first build positive (premise, hypothesis) pairs by accompanying each input with its gold class, and negative (premise, hypothesis) pairs by accompanying the input with the other classes in $C_n^i$. It is worth noting that we only use the few-shot examples and label names in $C_n^i$ for the incremental training.
The current state-of-the-art work for few-shot text classification, zhangdiscriminative, also utilizes textual entailment for pre-training. However, it ignores the information in the class labels: instead of inferring the truth value of a class conditioned on the input text $u$, it tries to infer whether two text inputs $(u_1, u_2)$ are in the same class. This increases the computation cost, since the number of examples is much larger than the number of labels. Moreover, the final prediction is made by searching for the nearest neighbor among all the examples, so the performance highly depends on the examples chosen; bad examples inevitably bring poor results.
2 Related Work
Incremental few-shot learning.
As far as we know, there is no prior work in the NLP domain that studies incremental few-shot text classification. In this section, we mainly introduce related work in the computer vision domain. As mentioned before, these works only assume that a single round of new classes $C_n$ is appended to the base classes $C_b$. Generally, they learn class representations for classification, and different approaches differ in how they represent base classes and new classes. Hereafter, we use $W_b$ and $W_n$ as the representations for $C_b$ and $C_n$, respectively.
DBLPSnellSZ17 proposes the Prototypical Network, in which both $W_b$ and $W_n$ are stored as the average embedding of the few-shot support images for a certain class. Although Prototypical Network was not designed for incremental few-shot learning, it can be easily adapted to the incremental setting by providing the representations for all the classes. It trains a nearest-neighbor algorithm on the base classes and tests directly on the union of base and new classes. DBLPQiBL18 proposes an "imprinting" mechanism: the base representations $W_b$ are learned regularly through supervised pre-training (e.g., the weight matrix in a softmax classifier), and $W_n$ are computed using the averaged representations as in Prototypical Network.
In DBLPGidarisK18, the base representations $W_b$ are learned through supervised pre-training. The representation of a novel class ($w' \in W_n$) comes from two origins: (i) the prototypical averaging, $w'_{avg}$; (ii) an attention-weighted sum over base representations, $w'_{att}$. Namely, $w' = \phi_{avg} \odot w'_{avg} + \phi_{att} \odot w'_{att}$, where $\phi_{avg}$ and $\phi_{att}$ are learnable weight vectors. In the few-shot training stage, the original base classes are split into "new base classes" and "fake novel classes" for each episode. The loss is computed when the system predicts "new base classes" as well as "fake novel classes". In testing, $W_n$, the representations of novel classes, are constructed based on the $k$-shot examples and $W_b$.
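For intuition, the weight composition above can be sketched as follows (a rough re-implementation under our own naming, with the attention scoring simplified; it is not the authors' released code):

```python
import torch
import torch.nn.functional as F

# Compose a novel-class weight from k support embeddings (k x d) and the
# base-class weight matrix W_b (B x d), following DBLPGidarisK18 (simplified).
def novel_class_weight(support_embs, W_b, phi_avg, phi_att):
    w_avg = support_embs.mean(dim=0)                    # prototypical averaging
    attn = F.softmax(support_embs @ W_b.t(), dim=-1)    # attention over base classes
    w_att = attn.mean(dim=0) @ W_b                      # attention-weighted sum of W_b
    return phi_avg * w_avg + phi_att * w_att            # learnable elementwise gating
```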
In DBLPRenLFZ19, both $W_b$ and $W_n$ are learned through supervised training: $W_b$ are classifier parameters pre-trained on base classes, and $W_n$ are classifier parameters learned on new classes. During training, the support set and the query set are constructed differently for new classes: the support set consists of examples only from new classes, while the query set contains examples from both new classes and base classes (because the training goal is to maximize the performance on all classes). The training in this work has two phases. The first phase is few-shot episode training, which learns $W_n$ and optimizes the performance on the support set; the second phase (called meta-learning training) optimizes the performance on the query set. The latter has a regularization term on $W_n$, enforcing $W_n$ to be as close as possible to the attention-weighted sum of $W_b$.
To summarize, compared with DBLPSnellSZ17 and DBLPQiBL18, both DBLPGidarisK18 and DBLPRenLFZ19 build connections between the representations of base classes and new classes. However, these methods cannot be directly applied to our problem for the following reasons. (i) Despite the claims in some literature that they are dealing with incremental or dynamic few-shot problems, they only consider a single round of new classes DBLPQiBL18; DBLPGidarisK18; DBLPRenLFZ19. It is unclear whether these systems can keep their performance when multiple rounds of new classes are considered. (ii) During the training for the new classes, they often rely on extra labeled data beyond the $k$-shot examples, such as the query set in DBLPRenLFZ19. (iii) Different from their setting, we have an extra "out-of-distribution" label in incremental few-shot text classification: it is not guaranteed that the input, such as a customer's utterance, always falls into the range of seen labels.
Using textual entailment for text classification.
zhangdiscriminative is a state-of-the-art work for few-shot text classification. They propose a discriminative nearest-neighbor classification (DNNC) model that compares whether two examples are in the same class. A matching model $S(u_1, u_2)$ is trained as a binary classifier, such that $S(u_1, u_2)$ is close to 1.0 if $u_1$ and $u_2$ belong to the same class, and close to 0.0 otherwise. Thus, their model can be pre-trained with large-scale textual entailment datasets. Given a test query $q$, they compare it with all the labeled examples; the final prediction is made by searching for the nearest neighbor, i.e., the example with the highest matching score with $q$. As mentioned before, the computation cost is high and the performance is highly sensitive to the quality of the chosen examples.
Moreover, comparing whether two examples are in the same class is different from textual entailment. In textual entailment, a human reads a premise to infer whether a hypothesis is true; the fact that two examples are in the same class does not mean they entail each other. Thus, DNNC cannot fully utilize the pre-trained entailment model. Instead, our proposed model, Entailment, checks whether a given utterance entails its label, which is much more efficient and maximizes the utilization of the pre-trained entailment model.
DBLPYinHR19 is another work that utilizes textual entailment, for zero-shot text classification. They convert zero-shot text classification into the problem of filling a label candidate into a hypothesis. For example, they combine "emotion" labels with the question "this text expresses ___?", and ask the model whether this hypothesis is true given the text. That work focuses on zero-shot learning, and different labels require different hand-crafted questions.
3 Problem Formulation
In this section, we give a formal description of the problem “incremental few-shot text classification” with/without base classes.
Training data.
For the setting with base classes, the system is provided with a set of base classes $C_b$; each base class $c \in C_b$ has $e$ labeled examples, where $e$ is usually a large number, like several hundred or several thousand. For the setting without base classes, $C_b$ is ignored. Both settings have in total $K$ rounds of new classes coming sequentially: $\{C_n^1, C_n^2, \ldots, C_n^K\}$. Each round $C_n^i$ has $m$ new classes, namely $C_n^i = \{c_1^i, \ldots, c_m^i\}$. Each new class only has $k$ examples ($1 \le k \le 5$). The value of $k$ is not fixed and varies for different new classes in the same round.
We create the multi-round setting since it evaluates the system more precisely over a long sequence of new classes. In each round, we fix the number of new classes $m$ and allow the flexibility that $k$ varies from class to class. This setting is more in line with reality, in which we can only collect a handful of examples for the upcoming classes and the number of examples cannot be guaranteed.
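For concreteness, the following sketch shows one possible data layout for this formulation (the names are ours, purely illustrative):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class NewRound:
    """One round C_n^i: label name -> its k labeled utterances (k may vary, 1..5)."""
    examples: Dict[str, List[str]] = field(default_factory=dict)

@dataclass
class IncrementalTask:
    base_examples: Dict[str, List[str]]  # rich annotations; empty in the no-base setting
    rounds: List[NewRound]               # C_n^1 ... C_n^K, revealed one round at a time
```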
Development data.
To simulate real scenarios in the experiments, we assume that only the $k$-shot examples are available for the new classes during the incremental training. Thus, our formulation does not provide any development set to help select the best model; it is recommended to select hyper-parameters based on experience or related tasks.
Testing data.
To evaluate the system, the test data consists of examples across all the classes. For the setting with base classes, the potential label space is $C_b \cup C_n^1 \cup \cdots \cup C_n^K \cup C_o$. For the setting without base classes, $C_b$ is excluded and we only search among the remaining classes. $C_o$ is an extra out-of-distribution (OOD) class containing examples that fall outside all the seen classes. It gives us a chance to check the system's ability to detect instances that reject all the known classes. This is crucial for an open-set problem like incremental learning, since there are always examples from upcoming classes that do not belong to any existing class.
Requirements.
(i) For the training of round $i$, the system can only access the newly added few-shot examples in $C_n^i$, together with the names of the preceding classes. The system is not allowed to re-train on the (full or partial) examples of preceding classes. (ii) For the evaluation, we care about the performance on different types of classes, including base classes, different rounds of new classes, and the OOD class $C_o$. We expect a system that can continuously recognize new classes well with few-shot examples. In the meantime, the performance drop on preceding classes is also considered: a system showing more severe catastrophic forgetting is less preferred.
4 Our Model: Entailment
Our approach Entailment casts the text classification problem as textual entailment: the input text acts as a premise, and the class name, such as "open a bank account" in intent detection, acts as a hypothesis. The question of whether the input belongs to a class is then equivalent to asking whether the hypothesis is true given the premise. Transforming text classification into entailment brings two benefits. First, we can make use of indirect supervision from a large-scale entailment dataset williams2018broad to benefit the few-shot settings. Second, it enables us to utilize not only the few-shot examples but also the information in the class names: typical text classification approaches treat classes as indices, while class names usually contain informative signals.
Entailment pairs.
To transform the text classification problem into textual entailment, we construct positive and negative entailment pairs for the training. Positive entailment pairs $(u, l)$ are constructed with an utterance $u$ and its gold label name $l$, where $l \in C_b$ for base classes and $l \in C_n^i$ for new classes. Negative entailment pairs consist of $(u, l')$, where $l'$ is an incorrect label in the current round: for base classes, $l' \in C_b$ but $l' \ne l$; for new classes, $l' \in C_n^i$ but $l' \ne l$.
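For concreteness, a minimal sketch of this pair construction (function and variable names are ours, not from any released code):

```python
def build_entailment_pairs(examples):
    """examples: label name -> list of utterances for one round (or the base group).

    Returns (premise, hypothesis, gold) triples: each utterance forms one positive
    pair with its gold label and one negative pair with every other label.
    """
    labels = list(examples)
    pairs = []
    for gold_label, utterances in examples.items():
        for u in utterances:
            pairs.append((u, gold_label, 1))             # positive pair (u, l)
            pairs.extend((u, other, 0)                   # negative pairs (u, l')
                         for other in labels if other != gold_label)
    return pairs
```

With $m$ classes and $k$ utterances each, this yields the $mk$ positive and $mk(m-1)$ negative pairs counted below.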
Compared to zhangdiscriminative, whose entailment pairs are constructed from two utterances in the same round ($(u_1, u_2)$ is a positive pair if the two belong to the same class, and a negative pair otherwise), we also explore the potential of different combinations with a hybrid entailment model that uses both (utterance, label) pairs $(u, l)$ and (utterance, utterance) pairs $(u_1, u_2)$. In this hybrid model, we train with entailment pairs from both our proposed model and zhangdiscriminative.
In the setting with base classes, where each of the $|C_b|$ base classes has $e$ examples, $|C_b| \cdot e$ positive entailment pairs and $|C_b| \cdot e \cdot (|C_b| - 1)$ negative pairs are generated. For round $C_n^i$, which contains $m$ new classes with $k$ examples each, there are $mk$ positive entailment pairs and $mk(m-1)$ negative entailment pairs; for instance, a round with $m = 10$ and $k = 3$ yields 30 positive and 270 negative pairs. For simplicity, we use the same value $k$ for all new classes here; in real datasets, different new classes may have different numbers of few-shot examples, in which case the number of generated pairs changes accordingly.
For each entailment pair $(u, l)$, whether positive or negative, we concatenate the utterance with the label and feed the sequence into the RoBERTa liu2019roberta encoder. Given an utterance $u$ with $n$ words and a label $l$ with $t$ words, we add a special start-of-sequence ([CLS]) token at the beginning of the input and a special end-of-sequence ([SEP]) token at the end of each sentence. The whole input is ([CLS], $u_1$, $u_2$, ..., $u_n$, [SEP], $l_1$, $l_2$, ..., $l_t$, [SEP]). We use the [CLS] embedding output by the RoBERTa encoder with a fully connected layer for binary textual entailment:

$$\mathbf{e} = \mathrm{RoBERTa}([\mathrm{CLS}], u_1, \ldots, u_n, [\mathrm{SEP}], l_1, \ldots, l_t, [\mathrm{SEP}]) \qquad (1)$$

$$p = \mathrm{softmax}(\mathbf{W}\mathbf{e} + \mathbf{b}) \qquad (2)$$

where $\mathbf{e}$ is the embedding of the [CLS] token, and $\mathbf{W}$ and $\mathbf{b}$ are learnable parameters.
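The scoring step can be sketched with the Huggingface Transformers package used in our experiments (a minimal illustration; the checkpoint name is a placeholder, and treating index 1 as the "entail" class is our assumption):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

def entailment_prob(utterance: str, label_name: str) -> float:
    """Return p(entail) for the pair (premise=utterance, hypothesis=label name)."""
    # The tokenizer produces RoBERTa's analogue of the
    # [CLS] u_1 ... u_n [SEP] l_1 ... l_t [SEP] layout in Eq. (1).
    inputs = tokenizer(utterance, label_name, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits        # shape (1, 2); Eq. (2) before the softmax
    return torch.softmax(logits, dim=-1)[0, 1].item()
```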
Training strategy.
Our model is a binary classification model that can utilize the indirect supervision from textual entailment. First, we pre-train the model with a large-scale entailment dataset williams2018broad. For the setting with base classes, we fine-tune the model on entailment pairs from base classes to obtain a base model, which is then fine-tuned on the new classes of each round. In the setting without base classes, the model is directly fine-tuned on the entailment pairs from the new classes in each round $C_n^i$.
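The full schedule, in sketch form (pretrain_on_mnli is a hypothetical helper; build_entailment_pairs is sketched above and fine_tune in Section 5.2):

```python
model = pretrain_on_mnli(model)          # indirect supervision from MNLI

if base_examples:                        # setting WITH base classes
    model = fine_tune(model, build_entailment_pairs(base_examples))

for round_examples in rounds:            # C_n^1, ..., C_n^K arrive sequentially
    # only this round's few-shot examples are used; examples of
    # preceding classes are never revisited (Section 3, Requirements)
    model = fine_tune(model, build_entailment_pairs(round_examples))
```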
Inference strategy.
After training, we use the model to infer the class of a test input. For each input, we generate entailment pairs by accompanying the input with all classes except $C_o$. Each pair gets a score $p \in [0, 1]$ indicating whether the input belongs to that particular class: $p > 0.5$ indicates "YES", and "NO" otherwise. If at least one class is labeled "YES", the class with the maximal score is returned; otherwise, the system returns $C_o$. We choose the threshold 0.5 because entailment recognition is a binary classification problem.
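A sketch of this decision rule, reusing entailment_prob from above (the OOD marker name is ours):

```python
OOD = "C_o"  # returned when no seen class passes the threshold

def predict(utterance, seen_label_names, threshold=0.5):
    """Pick the best-scoring seen class, or fall back to the OOD class."""
    scores = {l: entailment_prob(utterance, l) for l in seen_label_names}
    best = max(scores, key=scores.get)
    return best if scores[best] > threshold else OOD
```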
Next, we compare our model with some related systems that can potentially be applied to incremental few-shot text classification.
Entailment vs. DNNC.
DNNC zhangdiscriminative also converts text classification into textual entailment, but discriminates whether two text inputs $(u_1, u_2)$ are in the same class. As a result, for a round with $mk$ original examples, this baseline generates positive pairs from all same-class example pairs and negative pairs from all cross-class example pairs, a number quadratic in $mk$. In testing, a query needs to be compared with all the examples of all classes. The computation cost of this baseline is thus much higher than that of our proposed model.
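Assuming unordered pairs, the per-round costs compare roughly as follows (our derivation from the two constructions, not figures from zhangdiscriminative):

```latex
\begin{align*}
\text{DNNC:} \quad & m\binom{k}{2} \text{ positive} + \binom{m}{2}k^{2} \text{ negative pairs}, && mk \text{ comparisons per query};\\
\text{Entailment:} \quad & mk \text{ positive} + mk(m-1) \text{ negative pairs}, && m \text{ comparisons per query}.
\end{align*}
```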
Entailment vs. Prototypical Network.
Prototypical Network DBLPSnellSZ17 tries to solve few-shot target tasks given a collection of training tasks, where the distributions of the training tasks and the target tasks are required to be similar. It uses episode training to learn a nearest-neighbor algorithm, hoping it generalizes well to the target tasks.
The few-shot learning problem solved by Prototypical Network differs slightly from our incremental few-shot setting. In Prototypical Network, the label space of the target tasks contains only the new classes, whereas in the incremental few-shot setting, the target label space keeps growing as new classes are added. Due to this essential distinction, applying Prototypical Network to the incremental few-shot setting is very likely to cause a performance drop on base classes when fine-tuning on new classes.
Entailment vs. Incremental few-shot approaches in computer vision.
In Related Work, we introduced some typical approaches in computer vision that deal with the incremental few-shot problem. Those methods consistently try to learn representations for classes and examples separately (i.e., the $W_b$ and $W_n$ in Section 2). In our model, there are no individual representation vectors for classes or examples; instead, the model learns an overall representation for the whole (input, class) pair. Our solution enables the input and the class to interact with each other during learning, which has widely demonstrated its superiority in modeling the relations of two elements DBLP12808; zhangdiscriminative.
In addition, the approaches in computer vision mostly rely on large-scale labeled data for base classes to train a robust system. We would argue that base classes with rich annotations may not be available in real-world applications. Our system, which can instead be pre-trained with an entailment dataset, does not rely on base classes, making it applicable to a wider range of scenarios.
| | Base | Round 1 | Round 2 | Round 3 | Round 4 | Round 5 | OOD |
|---|---|---|---|---|---|---|---|
| # class | 20 | 10 | 10 | 10 | 10 | 10 | 7 |
| # train | 2088 | 30 | 30 | 30 | 30 | 30 | - |
| # test | 800 | 400 | 400 | 400 | 400 | 400 | 280 |

Table 1: Statistics of IFS-Intent.
5 Experiments
5.1 Datasets
IFS-Intent.
This is our benchmark for incremental few-shot intent detection. IFS-Intent is converted from BANKING77 (https://github.com/PolyAI-LDN/task-specific-datasets) casanueva2020efficient, a single-domain intent detection dataset comprising 13,083 annotated examples over 77 intents (on average, 170 examples per intent). Each intent class is described by a short name, such as "get physical card" or "lost or stolen card".
We split the 77 intents into a base group (i.e., $C_b$), 5 rounds of new intents (i.e., $\{C_n^1, \ldots, C_n^5\}$), and a group of out-of-distribution intents (i.e., $C_o$). Each upcoming round has 10 new classes. We randomly split the 10 classes into 5 groups (each with 2 classes), then intentionally let the 5 groups have different numbers of k-shot examples ($k = 1, \ldots, 5$). Detailed statistics are reported in Table 1.
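A sketch of how one round's k-shot groups can be drawn (class names are placeholders for the BANKING77 intents):

```python
import random

round_classes = [f"intent_{i}" for i in range(10)]   # 10 new classes per round
random.shuffle(round_classes)

k_shot = {}
for group, k in enumerate([1, 2, 3, 4, 5]):          # 5 groups of 2 classes each
    for cls in round_classes[2 * group: 2 * group + 2]:
        k_shot[cls] = k                              # this class keeps k labeled examples

# total labeled examples per round: 2 * (1+2+3+4+5) = 30, matching Table 1
```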
5.2 Experimental setting
Baselines.
Since this is the first work that studies the incremental few-shot text classification problem, there is no prior system that deals with exactly the same task. We compare our model with two different types of baselines. Two baselines DBLPYinHR19; zhangdiscriminative solve text classification as a textual entailment problem and use large-scale entailment datasets for pre-training. We also adapt two incremental few-shot learning models in computer vision to incremental few-shot text classification DBLPSnellSZ17; DBLPGidarisK18. For these two baselines, we replace their encoders with RoBERTa to fit into the text classification task.
• Textual Entailment. DBLPYinHR19 uses a pre-trained textual entailment system to cope with zero-shot text classification, in which the input text acts as a premise and the class name or its definition acts as a hypothesis. It is similar to our approach, except for the reminder mechanism proposed in this work. Thus, this textual entailment baseline keeps fine-tuning on the regular entailment pairs round by round.
• DNNC. zhangdiscriminative proposes an alternative way to implement the entailment idea for few-shot text classification. Instead of inferring the truth value of a class conditioned on the input text $u$, they try to infer whether two text inputs $(u_1, u_2)$ are in the same class. As a result, for a round with $mk$ original examples, this baseline generates same-class example pairs as positive pairs and cross-class example pairs as negative pairs, a number quadratic in $mk$. In testing, a query needs to be compared with all the examples of all classes, so the computation cost of this baseline is high.
• Prototypical Network DBLPSnellSZ17. Prototypical Network is trained on base classes with the episode training method. For each episode, it randomly selects a subset of base classes; each selected class is equipped with $k$-shot support examples and a set of query examples. The representations of both base and new classes are calculated as the average embedding of the $k$-shot support examples. Prototypical Network trains a nearest-neighbor algorithm to optimize the prediction on the query sets. In testing, the model compares the distance of a query example to all the class representations and chooses the nearest neighbor as its label.
• DyFewShot DBLPGidarisK18. We introduced this baseline in Section 2 and extend it to address multi-round few-shot classes: for the present round $i$, all the preceding classes, including those in $C_b$ and $\{C_n^1, \ldots, C_n^{i-1}\}$, are viewed as "base classes".
Implementation and setting.
For all the textual entailment models, we use MNLI williams2018broad to pre-train the model. All systems are implemented with the Huggingface Transformers package (https://github.com/huggingface/transformers). We always fine-tune on each round for 5 epochs with learning rate 1e-6 and batch size 16. We run the same program with 3 different seeds and report the average performance.
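The per-round fine-tuning step with the reported hyper-parameters can be sketched as follows (assuming pair_loader yields batches of tokenized entailment pairs with input_ids, attention_mask, and labels):

```python
import torch
from torch.optim import AdamW

def fine_tune(model, pair_loader, epochs=5, lr=1e-6):
    """One round of fine-tuning: 5 epochs, lr 1e-6, batch size 16 (set in the loader)."""
    optimizer = AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for batch in pair_loader:
            loss = model(**batch).loss   # cross-entropy from the 2-way entailment head
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
    return model
```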
Accuracy is reported for $C_b$ and $\{C_n^1, \ldots, C_n^5\}$, and F1 score for $C_o$.
| Stage | System | Base | Round 1 | Round 2 | Round 3 | Round 4 | Round 5 | OOD |
|---|---|---|---|---|---|---|---|---|
| Base | ProtoNet | 87.25±0.10 | - | - | - | - | - | 53.41±0.68 |
| Base | Entailment | 96.42±0.41 | - | - | - | - | - | 64.73±3.84 |
| Base | Clu4Fewshot | 95.96±0.68 | - | - | - | - | - | 61.89±4.78 |
| Base | DyFewShot | 81.04±1.91 | - | - | - | - | - | 55.01±2.52 |
| Base | Entailment | 96.12±0.12 | - | - | - | - | - | 58.92±1.22 |
| +Round 1 | ProtoNet | 85.83±1.94 | 31.67±1.48 | - | - | - | - | 43.66±3.08 |
| +Round 1 | Entailment | 94.42±0.21 | 75.42±1.56 | - | - | - | - | 56.38±5.29 |
| +Round 1 | Clu4Fewshot | 95.75±0.41 | 74.83±1.64 | - | - | - | - | 64.54±2.02 |
| +Round 1 | DyFewShot | 81.29±1.56 | 0.0±0.0 | - | - | - | - | 39.33±1.25 |
| +Round 1 | Entailment | 95.62±1.00 | 77.75±0.25 | - | - | - | - | 58.41±5.10 |
| +Round 2 | ProtoNet | 83.92±0.33 | 24.92±5.54 | 38.83±3.43 | - | - | - | 31.14±9.83 |
| +Round 2 | Entailment | 94.29±0.16 | 71.92±1.45 | 84.83±1.33 | - | - | - | 48.12±3.20 |
| +Round 2 | Clu4Fewshot | 95.42±0.62 | 72.92±4.37 | 75.08±3.3 | - | - | - | 49.02±3.23 |
| +Round 2 | DyFewShot | 81.29±1.56 | 0.0±0.0 | 0.5±0.71 | - | - | - | 33.94±1.42 |
| +Round 2 | Entailment | 96.44±0.19 | 76.75±2.75 | 75.0±1.0 | - | - | - | 42.11±0.30 |
| +Round 3 | ProtoNet | 81.08±2.06 | 24.33±5.54 | 30.67±6.17 | 22.5±1.34 | - | - | 23.62±6.99 |
| +Round 3 | Entailment | 92.71±0.41 | 70.75±0.54 | 82.83±2.16 | 73.92±2.52 | - | - | 29.34±3.31 |
| +Round 3 | Clu4Fewshot | 95.67±0.33 | 68.17±2.37 | 66.33±5.02 | 71.25±3.78 | - | - | 45.69±1.73 |
| +Round 3 | DyFewShot | 81.29±1.56 | 0.0±0.0 | 0.5±0.71 | 0.0±0.0 | - | - | 27.48±1.24 |
| +Round 3 | Entailment | 95.44±0.44 | 73.62±0.62 | 71.62±2.62 | 73.5±0.75 | - | - | 33.69±3.66 |
| +Round 4 | ProtoNet | 81.17±2.52 | 17.83±2.58 | 31.75±0.94 | 24.92±1.9 | 22.25±3.19 | - | 28.19±4.78 |
| +Round 4 | Entailment | 91.67±0.36 | 65.92±2.18 | 79.92±1.78 | 73.75±0.74 | 69.08±0.12 | - | 45.73±2.80 |
| +Round 4 | Clu4Fewshot | 95.29±0.16 | 68.75±2.35 | 66.75±3.82 | 67.0±3.4 | 57.75±1.41 | - | 42.09±3.72 |
| +Round 4 | DyFewShot | 81.54±1.71 | 0.25±0.35 | 0.17±0.24 | 0.0±0.0 | 0.0±0.0 | - | 23.52±1.51 |
| +Round 4 | Entailment | 95.69±0.06 | 72.12±0.62 | 67.75±1.25 | 70.25±0.25 | 72.62±1.38 | - | 38.85±0.89 |
| +Round 5 | ProtoNet | 80.00±2.65 | 21.83±5.45 | 29.17±3.7 | 24.67±3.12 | 23.17±3.6 | 30.33±4.17 | 29.24±2.96 |
| +Round 5 | Entailment | 89.17±0.60 | 65.08±2.45 | 78.5±0.94 | 69.08±1.12 | 68.25±0.35 | 70.67±1.3 | 39.48±1.45 |
| +Round 5 | Clu4Fewshot | 95.12±0.47 | 67.50±0.89 | 67.92±4.7 | 64.42±4.17 | 52.42±1.2 | 53.33±2.09 | 30.46±5.92 |
| +Round 5 | DyFewShot | 81.50±1.27 | 0.08±0.12 | 0.83±0.62 | 0.0±0.0 | 0.0±0.0 | 0.5±0.71 | 21.23±1.34 |
| +Round 5 | Entailment | 95.56±0.06 | 68.75±2.75 | 67.38±0.62 | 63.75±1.75 | 65.12±3.62 | 61.62±2.38 | 37.65±0.44 |

Table 2: Results on IFS-Intent (mean±std over 3 seeds). Accuracy is reported for Base and Rounds 1-5, F1 for OOD; "Stage" marks the last round the systems have been fine-tuned on.
| Stage | System | Round 1 | Round 2 | Round 3 | Round 4 | Round 5 | OOD |
|---|---|---|---|---|---|---|---|
| +Round 1 | Entailment | 65.17±1.36 | - | - | - | - | 75.43±0.41 |
| +Round 1 | Clu4Fewshot | 55.50±2.27 | - | - | - | - | 72.29±0.20 |
| +Round 1 | Entailment | 70.08±0.77 | - | - | - | - | 78.25±0.19 |
| +Round 2 | Entailment | 64.08±2.04 | 76.33±1.01 | - | - | - | 64.68±0.71 |
| +Round 2 | Clu4Fewshot | 64.58±0.42 | 77.75±1.08 | - | - | - | 61.72±0.90 |
| +Round 2 | Entailment | 74.25±1.34 | 86.67±1.01 | - | - | - | 64.39±0.27 |
| +Round 3 | Entailment | 75.50±1.63 | 83.83±0.62 | 75.25±1.24 | - | - | 56.56±2.43 |
| +Round 3 | Clu4Fewshot | 65.25±1.67 | 79.58±1.50 | 64.67±1.93 | - | - | 50.25±0.52 |
| +Round 3 | Entailment | 74.25±1.08 | 85.92±1.05 | 76.58±1.05 | - | - | 53.09±1.73 |
| +Round 4 | Entailment | 68.33±1.16 | 72.67±0.77 | 68.58±1.9 | 69.50±1.34 | - | 53.92±0.75 |
| +Round 4 | Clu4Fewshot | 66.75±0.54 | 79.08±0.51 | 60.5±2.35 | 62.25±1.08 | - | 42.56±0.76 |
| +Round 4 | Entailment | 73.75±1.41 | 85.50±1.06 | 71.67±1.53 | 75.83±2.44 | - | 52.75±0.63 |
| +Round 5 | Entailment | 67.58±0.82 | 73.50±1.24 | 67.83±0.47 | 71.83±0.66 | 73.75±0.74 | 50.95±0.68 |
| +Round 5 | Clu4Fewshot | 65.33±0.62 | 76.75±1.59 | 62.83±3.17 | 59.75±2.83 | 57.25±2.32 | 36.66±1.07 |
| +Round 5 | Entailment | 70.75±1.27 | 82.50±1.27 | 72.42±0.96 | 76.67±1.05 | 71.0±0.41 | 47.05±1.60 |

Table 3: Results on IFS-Relation (mean±std over 3 seeds). Accuracy is reported for Rounds 1-5, F1 for OOD; "Stage" marks the last round the systems have been fine-tuned on.
5.3 Experimental results
As per the problem formulation in Section 3, we want to investigate two questions. Q1: can our system get better performance on each round? Q2: can our system hold more stable performance during the incremental learning process?
Tables 2 and 3 list the results on the benchmarks IFS-Intent and IFS-Relation, respectively. Our system Entailment is compared with the baselines on the seven batches of testing classes (base, five rounds, and OOD) along the incremental learning process from the base classes to the fifth round.
As for question Q1, we summarize our observations as follows. (i) ProtoNet generally works worst in most cases, regardless of the test classes and the timeline. This should be due to the fact that ProtoNet does not fine-tune on the new classes; thus, there is no incremental learning in ProtoNet. (ii) The baselines "Entailment" and "Clu4Fewshot", which perform incremental fine-tuning, generally outperform ProtoNet and are mostly comparable to each other. (iii) Our system Entailment consistently obtains the best results across all test classes and the timeline.
To answer question Q2, we need to quantify the performance changes of all systems along the timeline, not only on $C_b$ but also on every $C_n^i$. Given a list of result values $\{y_1, \ldots, y_T\}$, we first use linear regression to fit these numbers: we fit a line $y = \alpha t + \beta$, where $\alpha$ is the slope, $\beta$ is the intercept, and $t$ is the time stamp. The performance drop reflected by this list is then calculated as $-\alpha$. Since the linear regression is more reliable when more values are available, we compute the drop values for $C_b$, $C_n^1$ and $C_n^2$ only, and average them as the final evaluation of a system in response to question Q2.
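A minimal implementation of this drop measure (names are ours):

```python
import numpy as np

def performance_drop(values):
    """Fit y = alpha * t + beta to a result sequence; return -alpha as the drop."""
    t = np.arange(len(values))
    alpha, _beta = np.polyfit(t, values, deg=1)    # least-squares line fit
    return -alpha                                  # positive value = performance dropped

def system_drop(base_seq, r1_seq, r2_seq):
    """Average the drops of the three longest sequences (C_b, C_n^1, C_n^2)."""
    return float(np.mean([performance_drop(s) for s in (base_seq, r1_seq, r2_seq)]))
```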
6 Conclusion
In this work, we define a new challenge in the NLP domain: incremental few-shot text classification with multi-round new classes. In addition to the problem formulation, we also release two benchmark datasets for this particular challenge: IFS-Intent and IFS-Relation. A novel approach, Entailment, is proposed to solve this problem. Entailment converts text classification into textual entailment, which can be pre-trained with large-scale entailment datasets. The reminder mechanism in Entailment mitigates the catastrophic forgetting problem in the incremental setting. Experiments on the two benchmark datasets show the effectiveness of our proposed model.