Interactive Continual Learning: Fast and Slow Thinking
Abstract
Advanced life forms, sustained by the synergistic interaction of neural cognitive mechanisms, continually acquire and transfer knowledge throughout their lifespan. In contrast, contemporary machine learning paradigms exhibit limitations in emulating continual learning (CL). Nonetheless, the emergence of large language models (LLMs) presents promising avenues for realizing CL via interactions with these models. Drawing on Complementary Learning System theory, this paper presents a novel Interactive Continual Learning (ICL) framework, enabled by collaborative interactions among models of various sizes. Specifically, we assign a ViT model as System1 and a multimodal LLM as System2. To enable the memory module to deduce tasks from class information and enhance Set2Set retrieval, we propose Class-Knowledge-Task Multi-Head Attention (CKT-MHA). Additionally, to improve memory retrieval in System1 through enhanced geometric representation, we introduce the CL-vMF mechanism, based on the von Mises-Fisher (vMF) distribution. Meanwhile, we introduce the von Mises-Fisher Outlier Detection and Interaction (vMF-ODI) strategy to identify hard examples, thus enhancing collaboration between System1 and System2 for complex reasoning. Comprehensive evaluation of our proposed ICL demonstrates significant resistance to forgetting and superior performance relative to existing methods. Code is available at github.com/ICL.
1 Introduction
Advanced life forms exhibit continual learning (CL) and memory formation, facilitated by neural cognitive interactions that enable collaborative knowledge transfer [9, 48, 26]. These underlying mechanisms enhance memory consolidation and utilization, as well as reasoning abilities in advanced life forms [16, 39]. However, current machine learning paradigms, particularly neural network-based models, face challenges in achieving CL. Specifically, neural networks learning from evolving data face a risk known as catastrophic forgetting [5], wherein the integration of new knowledge frequently disrupts existing knowledge, resulting in notable performance degradation [50, 17, 5].
To tackle this challenge, current CL methods strive to preserve and augment knowledge acquired throughout the learning process [14, 36, 28, 22]. In CL, rehearsal-based methods [32, 36, 40, 44, 46, 45] are the most direct strategy.
However, these methods often ignore the geometric structure [30] of memory representations and face challenges in open-class settings. Another line of work comprises architecture-based methods [22, 21, 31], which allocate distinct parameters for encoding knowledge from different tasks. Early studies centered on convolution-based architectures [28], while recent advances have pivoted towards transformer-based methods such as L2P [46] and DualPrompt [45].
From the perspective of Complementary Learning System (CLS) theory in neurocognitive science [16], current CL framework designs may not be optimal. In a brain-like system, multiple memory modules dynamically maintain a balance between stability and plasticity, with each module possessing predictive capabilities [33, 35, 24]. However, most advanced CL frameworks lean towards purely intuitive systems. Previous CLS-driven methods [28, 25] separate and expand parameters to facilitate the learning of both fast and slow knowledge. For instance, DualNet [28] optimizes this process by emphasizing task-specific pattern separation. Similarly, BiMeCo [25] divides model parameters into two distinct components: a short-term memory module and a long-term memory module. However, these methods [28, 25] are limited to a single backbone model. Viewed through the lens of CLS principles, this limitation underscores the importance of developing an interactive CL framework between models to consistently achieve higher performance.
Meanwhile, recent advances in large language models (LLMs), exemplified by ChatGPT [38] and GPT-4 [5], have demonstrated remarkable reasoning capabilities. These models can employ chains of thought [47] to engage in complex reasoning, much like System2. Consequently, an interesting question arises: can we integrate an intuitive model such as ViT [8] as System1 alongside an LLM-based model as System2 to establish an interactive framework for CL?
To answer this question, we revisit CL by exploring the interaction between a ViT (System1) and a multimodal large language model (System2). In alignment with the standard CL setting, our focus is on adapting the ViT parameters while keeping the System2 parameters fixed. System2 is responsible for handling hard examples and collaborates with System1. To enable continual updating of the ViT, we introduce the Class-Knowledge-Task Multi-Head Attention (CKT-MHA) module, which utilizes category features and the knowledge of the ViT to help System1 acquire task-related knowledge, facilitating knowledge retrieval through Class-Task collections. Furthermore, we introduce the CL-vMF mechanism, which employs von Mises-Fisher distribution modeling to improve memory geometry and enhance retrieval distinguishability through an Expectation-Maximization (EM) update strategy. This design enables System1 to retain old memory parameters, preventing unnecessary updates and addressing catastrophic forgetting. To coordinate System1 and System2 during their reasoning transitions, we introduce the von Mises-Fisher Outlier Detection and Interaction (vMF-ODI) mechanism for assessing sample difficulty, which helps System1 adaptively identify hard examples within each batch. Once identified, these hard examples undergo initial inference by System1, and the resulting predictions serve as background knowledge for System2 to perform more intricate reasoning.
We conduct experiments on various benchmarks, including the demanding ImageNet-R, to validate the proposed Interactive Continual Learning (ICL) framework. The results illustrate that ICL significantly mitigates catastrophic forgetting, surpassing state-of-the-art methods, and maintains consistently high accuracy across different tasks. In summary, our contributions are as follows:
- We propose an ICL framework from a novel perspective that emphasizes the interaction between a fast, intuitive model (ViT) and a slow, deliberate model (multimodal LLM), aligning with CLS principles.
- We propose the CKT-MHA module, which acquires task-related information by leveraging category features and the knowledge of the small model.
- We propose the CL-vMF mechanism, an optimization strategy guided by vMF distribution modeling with EM-style updates, to enhance the geometric structure of retrieved memory representations.
- We propose vMF-ODI, a batch-wise retrieval interaction strategy that adaptively identifies hard examples within each batch, fostering collaborative reasoning between the two systems.
2 Related Works
We discuss three primary categories of CL methods.
Regularization-based methods incorporate regularization terms into the loss function to mitigate catastrophic forgetting of previously learned tasks. These methods [14, 50, 1, 20, 17, 27] primarily revolve around developing metrics to assess task importance, with additional efforts dedicated to characterizing the significance of individual features [42]. Nevertheless, they tend to exhibit reduced performance when applied to more complex datasets.
Rehearsal-based methods utilize previous task data to mitigate catastrophic forgetting within limited memory buffers. Reservoir sampling techniques [7, 34] randomly retain a fixed number of old training samples from each training batch. Further, [12] employs coefficient-based cosine similarity to address sample-number imbalances among categories. To better recover past knowledge, GEM [36] constructs individual constraints from the old training samples of each task to ensure that their loss does not increase. LOGD [40] decomposes the gradients of each task into shared and task-specific components, capitalizing on inter-task information. CVT [44] explores online CL using an external attention strategy.
Architecture-based methods aim to assign independent parameters to new task data, using strategies such as parameter allocation, model division, and modular networks [22, 21, 31]. Earlier studies concentrated on convolution-based architectures, with DualNet [28] optimizing memory through separated representations for specific tasks. Recent work has shifted to transformer-based models: L2P [46] enhances knowledge integration by treating prompts as optimization parameters, and DualPrompt [45] strengthens knowledge memory by constructing dual orthogonal prompt spaces.
3 Methodology

3.1 Problem Setup
A standard paradigm for CL can be defined by a set of task descriptors $\{t\}_{t=1}^{T}$ and a corresponding data distribution for each task. The task-specific dataset is $D_t=\{(x_i^t, y_i^t)\}_{i=1}^{n_t}$, where $n_t$ denotes the number of samples in the training set of the $t$-th task; the dataset is drawn i.i.d. from the sample space. During the training phase, the training samples are fed sequentially, following the task descriptors from $1$ to $T$. In formulating the ICL framework, we define System1 and System2 as follows. System1 is instantiated by a model $f_{\theta}$ parameterized by $\theta$, which updates its parameters to $\theta_t$ in the $t$-th task. Our objective is to determine the $\theta_t$ that minimizes the loss over all tasks seen so far, ensuring the memory capacity of System1. System1 maintains a memory buffer $\mathcal{M}$ and updates $\theta_{t-1}$ to $\theta_t$ using $D_t \cup \mathcal{M}$. Furthermore, System2 is instantiated with a multimodal LLM, denoted $G$, to model complex reasoning abilities. To enable collaborative inference, System2 must handle the hard samples that System1 struggles with, maximizing the probability of the correct label. This requires System1 to filter out these samples, so that System2 can leverage the predictions of System1 for second-stage inference:
$\hat{y} \;=\; \psi^{-1}\big(G(x,\; \psi(\mathrm{TopK}(f_{\theta_t}(x))))\big)$   (1)
here $\psi$ denotes the label-to-prompt operation and $\psi^{-1}$ its inverse. System2 is expected to produce an output that minimizes the prediction error on these hard samples. Next, we present detailed designs for each component of the ICL framework and discuss the optimization strategies.
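To make this setup concrete, the sketch below illustrates the sequential training protocol for System1 with a small rehearsal buffer. It is a minimal sketch, assuming a generic classifier and a reservoir-style buffer; the names `MemoryBuffer` and `train_task` are illustrative, not the paper's implementation.

```python
import random
import torch
import torch.nn.functional as F

class MemoryBuffer:
    """Tiny reservoir-style rehearsal buffer (illustrative)."""
    def __init__(self, capacity):
        self.capacity, self.data, self.seen = capacity, [], 0

    def add(self, x, y):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            j = random.randrange(self.seen)          # reservoir sampling keeps a uniform subset
            if j < self.capacity:
                self.data[j] = (x, y)

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))

def train_task(system1, optimizer, task_loader, buffer, rehearsal_k=10):
    """One sequential pass over task t: each batch is mixed with a few replayed samples."""
    system1.train()
    for x, y in task_loader:
        for xi, yi in zip(x, y):                      # store current samples for later rehearsal
            buffer.add(xi.detach(), yi.detach())
        replay = buffer.sample(rehearsal_k)
        if replay:
            x = torch.cat([x, torch.stack([r[0] for r in replay])])
            y = torch.cat([y, torch.stack([r[1] for r in replay])])
        loss = F.cross_entropy(system1(x), y)         # placeholder objective for updating theta_t
        optimizer.zero_grad(); loss.backward(); optimizer.step()
```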
3.2 Query and Value Memory for System1
In general, deep neural networks implicitly encode data memories within their parameters, and unnecessary parameter changes can result in memory degradation. The prevailing approach to deploying models in downstream tasks is to utilize pre-trained feature extractors and introduce new parameters for adaptation, which has been demonstrated to be effective in the CL setting [46, 45]. Nevertheless, these methods face challenges when the precise number of classes in the downstream task is uncertain. Moreover, updating all parameters of the classification head for each new task worsens forgetting. To address these challenges, we propose separating the model's parameters into two distinct groups: value memory parameters $\theta_v$, consisting of class-specific representations $\{\mu_c\}$, and query memory parameters $\theta_q$. This decoupling strategy enhances operational flexibility. In simpler terms, we envision $\{\mu_c\}$ as a collection of class-specific value memory variables, ensuring that
- Value memory parameters can be augmented as the number of tasks increases, i.e., there is a memory increment. When training on task $t$, $\theta_v$ is updated from $\theta_v^{t-1}$ to $\theta_v^{t}$.
- The value memory of each class is updated only when necessary, i.e., $\mu_c$ is not updated if the input does not contain class $c$; thus $\mu_c$ after training on task $t$ equals its value before training on task $t$.
In this way, System1 is equipped with persistent value memory of old data and updates its query parameters using a portion of the memory buffer as rehearsal samples, so it can effectively handle old data. At the same time, System1 is not constrained by a predefined number of classes, which allows more flexible memory allocation for new tasks.
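The following is a minimal sketch of the class-keyed value memory described above, assuming one unit-norm vector per class that is allocated on demand and frozen once its task has finished; the class name `ValueMemory` and its methods are illustrative rather than the paper's module.

```python
import torch
import torch.nn.functional as F

class ValueMemory(torch.nn.Module):
    """Expandable, class-specific value memory (illustrative sketch)."""
    def __init__(self, dim):
        super().__init__()
        self.dim = dim
        self.vectors = torch.nn.ParameterDict()          # class id (as str) -> memory vector mu_c

    def add_class(self, cls_id):
        """Memory increment: allocate a new vector only when class cls_id first appears."""
        key = str(cls_id)
        if key not in self.vectors:
            v = torch.randn(self.dim)
            self.vectors[key] = torch.nn.Parameter(v / v.norm())

    def freeze_classes(self, cls_ids):
        """Called when a task finishes: its class memories are never updated again."""
        for c in cls_ids:
            self.vectors[str(c)].requires_grad_(False)

    def directions(self, cls_ids):
        """Stacked, re-normalized mean directions for the requested classes."""
        mu = torch.stack([self.vectors[str(c)] for c in cls_ids])
        return F.normalize(mu, dim=-1)
```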
Interactive Query and Value Memory with CKT-MHA
First, we design such a memory module for System1. Specifically, we propose a Set2Set memory retrieval mechanism to further enhance memory retrieval stability. This mechanism first obtains class (cls) information via a learnable projector that maps the extracted image feature into a sequence of class tokens. Here we use the pre-trained ViT as the image feature extractor, followed by a query interactor introduced for memory matching, which yields the query feature. Subsequently, we utilize the class tokens and the pretrained knowledge to construct the Class-Knowledge-Task Multi-Head Attention (CKT-MHA), which captures the task information corresponding to each class:
(2)
(3) |
where $H$ is the number of attention heads in the MHA and each head attends to its corresponding interval. We then combine the class and task feature sets to obtain the classification feature. The interactor parameters comprise the projector, the query interactor, and the CKT-MHA. Finally, we pair a task-specific value memory vector with the class-specific value memory vector of each class to form the final value memory variables for retrieval. During inference, the proposed task-class Set2Set retrieval is performed against these value memory variables.
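The exact layer sizes of CKT-MHA are not given in the text above, so the module below is only a sketch of the described flow: a projector produces class (cls) tokens from the ViT feature, a query interactor produces the matching feature, and a multi-head attention block lets the class tokens attend over the frozen ViT knowledge to extract task tokens. All dimensions and argument names are assumptions.

```python
import torch
import torch.nn as nn

class CKTMHA(nn.Module):
    """Sketch of Class-Knowledge-Task Multi-Head Attention (shapes are assumptions)."""
    def __init__(self, dim, num_heads=8, num_cls_tokens=4):
        super().__init__()
        self.num_cls_tokens = num_cls_tokens
        self.projector = nn.Linear(dim, dim * num_cls_tokens)   # yields the class (cls) tokens
        self.interactor = nn.Linear(dim, dim)                   # query interactor for memory matching
        self.mha = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, vit_feat, vit_tokens):
        """vit_feat: (B, D) pooled ViT feature; vit_tokens: (B, N, D) frozen ViT patch tokens."""
        B, D = vit_feat.shape
        cls_tokens = self.projector(vit_feat).view(B, self.num_cls_tokens, D)
        query = self.interactor(vit_feat)                        # feature matched against the value memory
        # class tokens query the pre-trained knowledge to capture task information
        task_tokens, _ = self.mha(cls_tokens, vit_tokens, vit_tokens)
        # Set2Set retrieval compares the combined class/task features with the value memory variables
        return query, torch.cat([cls_tokens, task_tokens], dim=1)
```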
With decoupled parameters, we also need to decouple the updates of the query memory parameters and the value memory parameters. This preserves the flexibility to adjust the two groups independently and avoids unnecessary updates to the value memory parameters, which prevents biases in the memory variables for future tasks and mitigates catastrophic forgetting. Furthermore, prioritizing the optimization of the value memory parameters guides the optimization of the query parameters. Is it possible to achieve such decoupled optimization while ensuring a consistent optimization objective? The EM algorithm naturally provides a framework that meets these requirements. Specifically, considering Maximum Likelihood Estimation (MLE) under the classification setting, the objective can be expressed as follows:
(4) |
For simplicity, here we only consider the parameters that are being updated. During training on task $t$, System1 aims to determine the optimal $\theta_t$. To achieve this, we model the value memory as hidden variables, alongside a hidden distribution $q$ over them. This allows us to obtain
(5) |
taking the expectation with respect to $q$ on both sides,
(6)
(7) |
the first term on the R.H.S. is referred to as the Evidence Lower Bound (ELBO), and the second term is the KL divergence between $q$ and the true posterior over the hidden variables. Since the KL divergence is non-negative, we can achieve maximum likelihood by iteratively updating $q$ and the parameters using the Generalized Expectation-Maximization (GEM) algorithm as follows:
(8) |
(9) |
In the context of supervised learning, we can define the hidden distribution to be class-specific and specify its prior accordingly. Note that in this case,
(10) |
Hence, we can express the two distinct objectives in eq.(8) and (9) as a unified objective:
(11) |
When training on a sequence of tasks, directly optimizing the objective in eq. (4) for the optimum of the current task, as is typical, inevitably biases the value memory of old tasks and results in catastrophic forgetting. In contrast, our decoupled optimization strategy ensures that the value memory parameters remain optimal for their corresponding tasks, and, combined with rehearsal, it guarantees that the optimization of the query memory parameters is consistently guided by the value memory parameters of old tasks, thereby mitigating catastrophic forgetting while continually adapting to new data. Next, we describe how to model the posterior in eq. (11).
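Before moving on, the decoupled, GEM-style alternation above can be realized with two optimizers over disjoint parameter groups, as sketched below: the value memory of the classes in the batch is refined with the query features detached, and the query memory is then updated against the fixed value memory. `vmf_loss` stands for the batchwise loss of Sec. 3.3; the step order and detaching scheme are assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def em_alternation(query_net, value_memory, opt_query, opt_value, batch, vmf_loss):
    """One decoupled alternation (illustrative): value memory first, then query memory."""
    x, y = batch
    classes = sorted({int(c) for c in y.tolist()})   # classes in this batch (assumed already allocated)

    # (i) update the value memory of classes present in the batch; query features are detached
    feats = F.normalize(query_net(x), dim=-1)
    loss_v = vmf_loss(feats.detach(), y, value_memory.directions(classes), classes)
    opt_value.zero_grad(); loss_v.backward(); opt_value.step()

    # (ii) update the query memory parameters against the refreshed, now-fixed value memory
    feats = F.normalize(query_net(x), dim=-1)
    loss_q = vmf_loss(feats, y, value_memory.directions(classes).detach(), classes)
    opt_query.zero_grad(); loss_q.backward(); opt_query.step()
```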
3.3 Optimizing Memory via CL-vMF
Modelling the Posterior with the vMF Distribution. To make the value memory vectors more discriminative and thereby enable more explicit memory retrieval, we need to construct more separable geometric relationships among them. Improved geometric relationships for the value memory vectors, combined with the EM updating strategy, further ensure that the query features of each class are more separable and closer to the class center. This also helps filter out outliers more effectively, laying the groundwork for screening hard samples for System1, as discussed in Section 3.4. We therefore opt for the von Mises-Fisher (vMF) distribution, which naturally excels at modeling geometric relationships in high-dimensional spaces [2, 18, 29] and has demonstrated effectiveness in downstream tasks [51, 23]. Its probability density function is as follows:
$p(z \mid \mu, \kappa) = C_d(\kappa)\, \exp\!\left(\kappa\, \mu^{\top} z\right)$   (12)
where the sample space is the unit hypersphere $\mathbb{S}^{d-1}$, $\mu$ denotes the mean direction, and $\kappa$ is the concentration parameter. The normalization constant $C_d(\kappa)$ depends only on $\kappa$ and $d$. By setting the normalized value memory parameter $\mu_c$ as the mean direction, we model the distribution of the normalized feature $z$ for class $c$ as a vMF distribution with density $p(z \mid \mu_c, \kappa)$. Utilizing the constructed probability density for Bayesian discrimination, we obtain the posterior probability in the form:
$p(y=c \mid z) = \dfrac{\exp\!\left(\kappa\, \mu_c^{\top} z\right)}{\sum_{c'} \exp\!\left(\kappa\, \mu_{c'}^{\top} z\right)}$   (13)
When a sample pair of a new class is inputted, a new value memory vector can be assigned to it for memory expansion, thereby avoiding the limitation imposed by a predefined number of classes, i.e., the mechanism is class-free. Combined with the proposed memory module, we introduce CL-vMF, a class-free memory retrieval mechanism that accommodates new classes when their total number is unknown; it retrieves memories based on the vMF posterior and ensures that class-specific value memory parameters are updated only when needed.
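Under the reconstructed posterior in eq. (13), retrieval reduces to a softmax over $\kappa$-scaled cosine similarities between the normalized query feature and the stored mean directions. The sketch below assumes a shared concentration `kappa` (the default value is arbitrary) and uniform class priors.

```python
import torch
import torch.nn.functional as F

def vmf_posterior(feat, mu, kappa=16.0):
    """feat: (B, D) query features; mu: (C, D) class mean directions.
    Returns the (B, C) posterior over the currently allocated classes."""
    z = F.normalize(feat, dim=-1)
    mu = F.normalize(mu, dim=-1)
    logits = kappa * z @ mu.t()            # kappa * cosine similarity (log of the vMF density up to C_d)
    return logits.softmax(dim=-1)

def retrieve_class(feat, mu, class_ids, kappa=16.0):
    """Class-free retrieval: argmax over however many classes have been allocated so far."""
    post = vmf_posterior(feat, mu, kappa)
    idx = post.argmax(dim=-1)
    return [class_ids[i] for i in idx.tolist()], post
```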
Implementation of CL-vMF.
We set the value memory vectors as learnable memory embeddings, enabling the model to use the vMF posterior over them to ascertain the class of a sample. The process resembles constructing a new "hippocampus" within the pre-trained model: query memory interactions are learned to adapt and employ the value memory. In the training phase, upon encountering each new class $c$, we allocate a new vector $\mu_c$ and incorporate it into the value memory. To ensure that $\mu_c$ is updated exclusively when the input contains a sample of class $c$, we introduce the following batchwise vMF loss to maximize the likelihood:
(14) |
where the loss is accumulated over the current input batch of task $t$, and
(15) |
where $\mathcal{C}_B$ is the set of classes that appear in the batch and the features are projected onto the unit hypersphere; $\kappa$ is set as a hyperparameter. Hence, our training approach alternates between the EM steps of eq. (8) and eq. (9) throughout the training process. Nonetheless, when delving deeper into the optimization process, it becomes necessary to consider the gradient of the vMF loss:
(16)
thus, when the posterior probability of the ground-truth class approaches one, the gradient with respect to the corresponding value memory vector tends to zero; meanwhile, as the normalized feature approaches its class mean direction, the gradient also vanishes. Likewise, for the query features we have:
(17)
Note that, in the same regime,
(18) |
This means that the gradient with respect to the query memory parameters will also tend to zero. To ensure stable gradients during training and achieve consistent loss reduction, we introduce a gradient stabilization loss
(19) |
where
(20) |
which provides a constant gradient as compensation, with a margin regulating the threshold of the compensation. When the loss becomes small, no further compensation is provided, owing to the instability of the absolute-value function at zero. Similarly, we incorporate the batchwise gradient stabilization loss into the objective, leading to the overall loss:
(21) |
here, the two coefficients are hyperparameters.
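Since the exact forms of eqs. (14)-(21) are not recoverable from the text above, the following is only a plausible sketch of the overall objective: a cross-entropy over $\kappa$-scaled cosine logits restricted to the classes present in the batch, plus a hinge-style stabilization term that keeps supplying a constant gradient until the target cosine similarity exceeds a margin. The values of `kappa`, `margin`, and the weight `lam` are assumed hyperparameters.

```python
import torch
import torch.nn.functional as F

def vmf_loss(feats, labels, mu, classes, kappa=16.0):
    """Batchwise vMF loss (sketch): negative log vMF posterior over the classes in the batch.
    feats are assumed L2-normalized; mu holds the mean directions of `classes`."""
    target = torch.tensor([classes.index(int(c)) for c in labels.tolist()])
    logits = kappa * feats @ F.normalize(mu, dim=-1).t()
    return F.cross_entropy(logits, target)

def grad_stabilization_loss(feats, labels, mu, classes, margin=0.8):
    """Stabilization term (sketch): constant-gradient compensation until the target
    cosine similarity reaches the margin, after which no further pressure is applied."""
    target = torch.tensor([classes.index(int(c)) for c in labels.tolist()])
    cos = (feats * F.normalize(mu, dim=-1)[target]).sum(dim=-1)
    return F.relu(margin - cos).mean()

def overall_loss(feats, labels, mu, classes, lam=0.5):
    """Weighted combination standing in for eq. (21)."""
    return vmf_loss(feats, labels, mu, classes) + lam * grad_stabilization_loss(feats, labels, mu, classes)
```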
The CL-vMF mechanism possesses several advantageous features, including incremental value memory and the ability to handle an arbitrary number of classes. As a result, there is no need to retrain the classification head even when the number of classes exceeds a pre-defined limit. Since the value memory parameters of a class are frozen after the completion of its task, all old memories remain unchanged both before and after training on subsequent tasks, which guarantees stable and persistent value memory. Moreover, the storage cost of the class value memory parameters is proportional to the product of the embedding dimension and the number of classes; importantly, this implies that the storage cost scales linearly with the number of classes.
3.4 Collaborative Inference: System1 and System2
To align with CLS theory, we aim for the ICL framework to activate System2 when System1 fails at fast thinking, i.e., when it encounters hard samples. This activation leverages the complex reasoning capabilities of System2 to achieve collaborative inference. Specifically, we use an MLLM to instantiate System2 and propose a hard-sample filtering strategy, vMF-ODI, to screen data that challenges System1: we use batchwise normalization to filter outliers, thus identifying the hard sample set:
(22) |
where the per-sample score is standardized within the batch using the batch mean and standard deviation, the predicted label is given by the vMF posterior, and a detection threshold determines which samples are flagged as hard. For the filtered hard samples, we utilize the Top-K outputs from System1 to construct an inquiry-based language prompt. The label-to-prompt operation converts labels into language and prompts System2 to perform reasoning and rank the candidates given the context, finally producing the prediction as stated in eq. (1). After the second-stage inference, if System2 provides a precise answer, we use it; otherwise, we fall back on the judgment of System1. This interactive scheme also suggests the possibility of fine-tuning strategies, such as LoRA [13], to further align System2 with System1. Consistent with existing CL setups, we focus on the parameter updates of System1. A detailed algorithmic description can be found in Appendix A. During training, we perform alternating EM steps for each task, with subsequent updates within a task applied only to the query memory parameters.
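The snippet below sketches vMF-ODI and the hand-off to System2: per-sample confidences (the maximum vMF posterior) are standardized within the batch, low-confidence outliers are treated as hard samples, and their Top-K candidates are converted into a language prompt for the MLLM, whose answer is kept only when it names exactly one candidate. The prompt template, the `mllm.generate` interface, and the threshold default are assumptions, not the paper's exact protocol.

```python
import torch

def vmf_odi(posteriors, threshold=2.0):
    """posteriors: (B, C). Return indices whose batch-standardized confidence is an outlier."""
    conf = posteriors.max(dim=-1).values
    z = (conf - conf.mean()) / (conf.std() + 1e-8)       # batchwise normalization
    return (z < -threshold).nonzero(as_tuple=True)[0]    # low-confidence outliers = hard samples

def collaborative_inference(posteriors, images, class_names, mllm, k=3, threshold=2.0):
    preds = posteriors.argmax(dim=-1).tolist()           # System1 fast predictions
    for i in vmf_odi(posteriors, threshold).tolist():
        topk = posteriors[i].topk(k).indices.tolist()
        candidates = ", ".join(class_names[c] for c in topk)
        prompt = f"Which of the following categories best describes this image: {candidates}?"
        answer = mllm.generate(images[i], prompt)         # System2 slow reasoning (assumed API)
        hits = [c for c in topk if class_names[c].lower() in answer.lower()]
        if len(hits) == 1:                                # keep System2's answer only if it is precise
            preds[i] = hits[0]
    return preds
```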
4 Experiments

| Backbone | Method Type | Memory Buffer | Method | CIFAR10 Class-IL | CIFAR10 Task-IL | CIFAR100 Class-IL | CIFAR100 Task-IL | ImageNet-R Class-IL | ImageNet-R Task-IL |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ResNet18 | - | - | JOINT | 92.20 | 98.31 | 70.62 | 86.19 | 7.72 | 25.48 |
| | | | FT-seq | 19.62 | 61.02 | 17.58 | 40.46 | 0.59 | 10.82 |
| | Non-Rehearsal based | 0 | EWC [14] | 17.82 | 83.52 | 7.62 | 55.14 | 1.08 | 21.34 |
| | | | LwF [19] | 18.52 | 84.72 | 8.88 | 61.32 | 1.24 | 45.68 |
| | | | SI [50] | 18.41 | 84.74 | 6.73 | 50.44 | 3.31 | 22.72 |
| | Rehearsal based | 200 | ER [34] | 44.79 | 91.19 | 21.40 | 61.36 | 1.01 | 15.36 |
| | | | A-GEM [6] | 18.58 | 80.19 | 7.97 | 55.20 | 1.23 | 16.24 |
| | | | iCaRL [32] | 23.80 | 67.82 | 7.31 | 33.10 | 0.81 | 9.20 |
| | | | CVT [44] | 30.74 | 75.92 | 12.09 | 43.14 | 1.60 | 9.01 |
| | | | SCoMMER [37] | 66.35 | 92.66 | 38.89 | 67.62 | 1.73 | 10.65 |
| | | | DualNet [28] | 24.50 | 90.70 | 25.30 | 54.60 | 7.01 | 20.70 |
| | | | BiMeCo [25] | 27.92 | 92.75 | 28.71 | 56.65 | 10.41 | 22.75 |
| | | 500/600 | ER [34] | 57.74 | 93.61 | 28.02 | 68.23 | 1.27 | 22.84 |
| | | | A-GEM [6] | 24.85 | 84.80 | 8.89 | 51.47 | 1.23 | 19.35 |
| | | | iCaRL [32] | 29.21 | 67.72 | 4.40 | 23.41 | 1.01 | 7.60 |
| | | | CVT [44] | 40.13 | 79.61 | 13.83 | 46.39 | 1.24 | 6.97 |
| | | | SCoMMER [37] | 73.95 | 94.14 | 49.09 | 74.50 | 1.40 | 10.05 |
| | | | DualNet [28] | 35.00 | 91.90 | 34.65 | 62.70 | 8.70 | 20.40 |
| | | | BiMeCo [25] | 38.40 | 93.95 | 38.05 | 64.75 | 12.13 | 22.45 |
| ViT | - | - | JOINT | 97.49 | 99.54 | 87.23 | 97.64 | 74.75 | 83.39 |
| | | | FT-seq | 22.32 | 86.33 | 20.48 | 83.97 | 32.56 | 49.62 |
| | Non-Rehearsal based | 0 | L2P [46] | 92.22 | 98.99 | 79.68 | 96.24 | 48.68 | 65.38 |
| | | | DualPrompt [45] | 94.43 | 99.32 | 79.98 | 95.92 | 52.20 | 69.22 |
| | Rehearsal based | 200 | L2P [46] | 67.13 | 96.39 | 65.29 | 92.16 | 36.70 | 55.35 |
| | | | DualPrompt [45] | 70.60 | 97.78 | 65.97 | 92.85 | 38.79 | 59.32 |
| | | | ICL w/o System2 | 94.60 | 99.43 | 77.34 | 94.81 | 49.87 | 68.62 |
| | | | ICL w MiniGPT4 | 95.34 | 99.56 | 78.28 | 95.70 | 52.46 | 69.87 |
| | | | ICL w Inf-MLLM | 95.42 | 99.56 | 78.55 | 95.83 | 53.20 | 72.96 |
| | | | ICL w Pure-MM | 95.94 | 99.57 | 79.12 | 95.99 | 53.64 | 73.59 |
| | | 500/600 | L2P [46] | 71.23 | 96.78 | 69.43 | 93.92 | 40.17 | 57.89 |
| | | | DualPrompt [45] | 73.56 | 98.12 | 69.98 | 93.76 | 43.77 | 61.24 |
| | | | ICL w/o System2 | 95.54 | 99.52 | 80.67 | 95.24 | 54.65 | 76.02 |
| | | | ICL w MiniGPT4 | 96.49 | 99.58 | 81.38 | 95.62 | 55.99 | 79.68 |
| | | | ICL w Inf-MLLM | 96.69 | 99.64 | 82.29 | 96.14 | 57.47 | 81.82 |
| | | | ICL w Pure-MM | 96.83 | 99.68 | 82.43 | 96.35 | 58.18 | 82.64 |
4.1 Experimental Setups
Following [41, 43], we investigate two common CL setups: Task-Incremental Learning (Task IL) and Class-Incremental Learning (Class IL). In Task IL, task identifiers are provided during both the training and testing phases. In contrast, the Class IL protocol assigns task identifiers only during the training phase. During the testing phase, the model faces the challenge of predicting all classes encountered up to that point, making it a more demanding scenario.
Datasets. Three datasets are used in our experiments: CIFAR10 [15], CIFAR100 [15], ImageNet-R [11]. Details of these datasets are in Appendix.
Baselines. We compare the proposed ICL framework against the following advanced baselines. Rehearsal-based methods: ER [34], A-GEM [6], iCaRL [32], CVT [44], SCoMMER [37], BiMeCo [25]. Architecture-based methods: DualNet [28], L2P [46], DualPrompt [45]. We also perform comparisons with well-known regularization-based techniques, namely EWC [14], LwF [19], and SI [50]. Additionally, we assess JOINT, which performs supervised fine-tuning across all task training sets and represents an upper performance bound, as well as FT-seq, a sequential fine-tuning technique that partially freezes pre-training weights and generally serves as a lower bound. Both JOINT and FT-seq have two variants, one using ViT [8] and the other using ResNet18 [10] as the backbone. Our main focus is on the comparison with rehearsal- and prompt-based methods.
4.2 Results
Extensive experiments are conducted on CIFAR10, CIFAR100, and ImageNet-R. Specifically, we evaluate our ICL against several state-of-the-art methods to assess its effectiveness.
Results of comparisons with state-of-the-art methods. The quantitative comparisons are summarized in Tab. 1. For the rehearsal-based methods, we experiment with different buffer sizes. The results consistently show that our method outperforms the others. Notably, to better simulate the CL scenario, we restrict the number of epochs to one, so the model encounters the data only once during incremental task learning. This restriction significantly affects the efficacy of regularization techniques such as EWC [14] and LwF [19], as well as rehearsal-based methods such as ER [34], A-GEM [6], iCaRL [32], and CVT [44], among others. Transformer-based approaches such as L2P [46] and DualPrompt [45], however, maintain reasonable performance. Our framework performs strongly even without System2, and incorporating System2 leads to further gains, demonstrated by an increase of over 3% in CL accuracy on the ImageNet-R dataset across 10 consecutive splits with a memory capacity of 600. The introduction of System2 enhances the ability of System1 to recognize and address previously forgotten images or information. To ensure fair comparisons, we incorporate a buffer into the L2P [46] and DualPrompt [45] methods, using a fixed strategy akin to CVT [44]. We observe that the performance of L2P [46] and DualPrompt [45] declines after integrating the buffer, likely due to interference from the replayed samples, which hampers the training of task-specific prompts and leads to greater forgetting.

Results of different task settings. To assess CL strategies across varying numbers of data streams, we follow the protocol outlined in [44, 49], dividing the 200 classes of ImageNet-R into subsets of 40, 20, and 10 classes each, thereby creating 5, 10, and 20 incremental tasks. Tab. 2 offers a comprehensive analysis of the accuracy achieved by the compared methods across these task configurations. The results clearly establish the superiority of our method over both regularization-based and rehearsal-based methods across a wide range of incremental divisions. Notably, even with a fixed memory capacity, our method outperforms architecture-based methods such as L2P [46] and DualPrompt [45]. Furthermore, the results illustrate a consistent improvement in our method's performance as the buffer size increases.
| Memory Buffer | Method | 5 splits Class-IL | 5 splits Task-IL | 10 splits Class-IL | 10 splits Task-IL | 20 splits Class-IL | 20 splits Task-IL |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | EWC [14] | 1.56 | 11.35 | 1.08 | 21.34 | 8.10 | 12.68 |
| | LwF [19] | 1.38 | 14.66 | 1.24 | 45.68 | 0.77 | 14.34 |
| | SI [50] | 1.78 | 11.50 | 3.31 | 22.72 | 3.35 | 40.29 |
| | L2P [46] | 29.87 | 38.58 | 48.68 | 65.38 | 20.08 | 47.98 |
| | DualPrompt [45] | 54.43 | 66.86 | 52.20 | 69.22 | 47.13 | 71.43 |
| 200 | ER [34] | 1.29 | 9.75 | 1.01 | 15.36 | 1.38 | 19.81 |
| | A-GEM [6] | 1.30 | 4.23 | 1.23 | 16.24 | 1.34 | 21.78 |
| | iCaRL [32] | 0.41 | 3.22 | 0.81 | 9.20 | 0.83 | 15.23 |
| | CVT [44] | 1.47 | 5.65 | 1.60 | 9.01 | 1.01 | 13.19 |
| | SCoMMER [37] | 0.80 | 3.78 | 1.73 | 10.65 | 0.32 | 10.36 |
| | DualNet [28] | 9.32 | 13.14 | 7.01 | 20.70 | 4.92 | 25.53 |
| | BiMeCo [25] | 11.18 | 14.27 | 10.41 | 22.75 | 5.86 | 26.33 |
| | ICL w/o System2 (ours) | 49.91 | 74.36 | 49.87 | 68.62 | 48.75 | 73.95 |
| | ICL (ours) | 54.85 | 75.57 | 52.46 | 69.87 | 49.98 | 74.63 |
| 500/600 | ER [34] | 1.30 | 13.02 | 1.27 | 22.84 | 1.56 | 20.42 |
| | A-GEM [6] | 1.31 | 9.55 | 1.23 | 19.35 | 1.58 | 21.89 |
| | iCaRL [32] | 0.43 | 3.62 | 1.01 | 7.60 | 1.62 | 14.23 |
| | CVT [44] | 1.95 | 5.68 | 1.24 | 6.97 | 1.45 | 15.58 |
| | SCoMMER [37] | 0.61 | 3.35 | 1.40 | 10.05 | 0.56 | 12.60 |
| | DualNet [28] | 10.03 | 13.44 | 8.70 | 20.40 | 6.40 | 31.80 |
| | BiMeCo [25] | 11.89 | 14.57 | 12.13 | 22.45 | 7.34 | 32.73 |
| | ICL w/o System2 (ours) | 54.60 | 75.57 | 54.65 | 76.02 | 52.46 | 77.05 |
| | ICL (ours) | 56.34 | 78.36 | 55.99 | 79.68 | 53.60 | 80.47 |
Results of forgetting curve comparison. To illustrate the forgetting process of each compared method over the CL data stream, we record the average test accuracy on the current and all preceding tasks upon completing the training of each task. We then plot the change in accuracy as each task is added, offering a visual representation of the forgetting process. Fig. 2(a) and Fig. 2(b) clearly show that, as new tasks are introduced, most methods exhibit a decline in performance, whereas our method consistently achieves higher accuracy at every stage.
4.3 Ablation Study
Analysis of hyperparameters and margin.
Fig. 2(c) and Fig. 2(d) depict the impact of the loss weight and the margin on performance, using the CIFAR100 and ImageNet-R datasets with memory sizes of 500 and 600, respectively. The heat maps reveal that sensitivity to the loss weight varies between datasets: CIFAR100 achieves optimal performance within a narrow range, while ImageNet-R maintains consistent performance over a wider range. In contrast, variations in the margin do not result in substantial accuracy differences between the datasets, suggesting that the margin regularization term has only a marginal impact on overall performance. Importantly, when both hyperparameters are set to zero (bottom of the heat maps), the absence of the regularization term leads to diminished performance, underscoring the value of incorporating margin regularization.
Analysis of concentration parameter selection. We investigate the influence of the concentration parameter on model performance. Fig. 2(e) indicates no significant variation in performance across different concentration values on CIFAR10. However, for datasets such as CIFAR100 and ImageNet-R, it is important to estimate the concentration in advance to ensure more accurate modeling; these datasets therefore show a noticeable sensitivity to the concentration parameter.
Impact of Top-K results in collaborative inference. To investigate the influence of the number of prompted categories on the reasoning of System2, we conduct experiments with different values of K on multiple datasets, as depicted in Fig. 2(f). The results indicate that memory vectors with well-defined geometric structures enable stable memory retrieval, preventing System1 from deviating significantly from the class centers. Consequently, even a small number of Top-K choices almost always includes the correct category, whereas a larger K tends to introduce erroneous category information into the prompt, increasing confusion for System2. Fig. 2(f) also reports the accuracy achieved without System2 on each dataset (blue horizontal lines), demonstrating the performance enhancement brought by System2.
Impact of separate query and value memory optimization. To assess the impact of the query-value separation strategy on stable value memory modeling in persistent scenarios, we compare its use (Fig. 2(g)) and its absence (Fig. 2(h)). Visualizing the value memory via t-SNE reduction reveals that separately optimizing the query and value parameters yields more focused value memory modeling and enhances task discrimination. The corresponding results consistently validate the effectiveness of our proposed CL-vMF mechanism.
5 Conclusion
In this paper, we introduced ICL, a novel continual learning (CL) paradigm inspired by Complementary Learning System theory in neurocognitive science. ICL combines a ViT with an interactive query and value memory module powered by CKT-MHA, enhancing the efficiency of fast thinking (System1). It further leverages our CL-vMF mechanism to improve the distinguishability of memory representations. ICL also integrates a multimodal large language model (System2) with System1 for advanced reasoning, dynamically modulated by hard examples detected through our vMF-ODI strategy. Our experiments confirm the effectiveness of the framework in reducing forgetting, surpassing contemporary state-of-the-art methods.
Acknowledgement This work was supported in part by the National Key R&D Program of China (No. 2023YFC3305102). We extend our gratitude to the anonymous reviewers for their insightful feedback, which has greatly contributed to the improvement of this paper.
References
- Aljundi et al. [2018] Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach, and Tinne Tuytelaars. Memory aware synapses: Learning what (not) to forget. In Proceedings of the European Conference on Computer Vision (ECCV), pages 139–154, 2018.
- Banerjee et al. [2005] Arindam Banerjee, Inderjit S. Dhillon, Joydeep Ghosh, and Suvrit Sra. Clustering on the unit hypersphere using von mises-fisher distributions. J. Mach. Learn. Res., 6:1345–1382, 2005.
- Buzzega et al. [2020] Pietro Buzzega, Matteo Boschini, Angelo Porrello, Davide Abati, and Simone Calderara. Dark experience for general continual learning: a strong, simple baseline. Advances in neural information processing systems, 33:15920–15930, 2020.
- Buzzega et al. [2021] Pietro Buzzega, Matteo Boschini, Angelo Porrello, and Simone Calderara. Rethinking experience replay: a bag of tricks for continual learning. In 2020 25th International Conference on Pattern Recognition (ICPR), pages 2180–2187. IEEE, 2021.
- Cao et al. [2023] Yihan Cao, Siyu Li, Yixin Liu, Zhiling Yan, Yutong Dai, Philip S Yu, and Lichao Sun. A comprehensive survey of ai-generated content (aigc): A history of generative ai from gan to chatgpt. arXiv preprint arXiv:2303.04226, 2023.
- Chaudhry et al. [2018] Arslan Chaudhry, Marc’Aurelio Ranzato, Marcus Rohrbach, and Mohamed Elhoseiny. Efficient lifelong learning with a-gem. arXiv preprint arXiv:1812.00420, 2018.
- Chaudhry et al. [2019] Arslan Chaudhry, Marcus Rohrbach, Mohamed Elhoseiny, Thalaiyasingam Ajanthan, Puneet K Dokania, Philip HS Torr, and Marc’Aurelio Ranzato. On tiny episodic memories in continual learning. arXiv preprint arXiv:1902.10486, 2019.
- Dosovitskiy et al. [2020] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
- Evans [2003] Jonathan St BT Evans. In two minds: dual-process accounts of reasoning. Trends in cognitive sciences, 7(10):454–459, 2003.
- He et al. [2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
- Hendrycks et al. [2021] Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, Evan Dorundo, Rahul Desai, Tyler Zhu, Samyak Parajuli, Mike Guo, et al. The many faces of robustness: A critical analysis of out-of-distribution generalization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8340–8349, 2021.
- Hou et al. [2019] Saihui Hou, Xinyu Pan, Chen Change Loy, Zilei Wang, and Dahua Lin. Learning a unified classifier incrementally via rebalancing. In Proceedings of the IEEE/CVF conference on Computer Vision and Pattern Recognition, pages 831–839, 2019.
- Hu et al. [2021] Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. Lora: Low-rank adaptation of large language models. In International Conference on Learning Representations, 2021.
- Kirkpatrick et al. [2017] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13):3521–3526, 2017.
- Krizhevsky et al. [2009] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
- Kumaran et al. [2016] Dharshan Kumaran, Demis Hassabis, and James L McClelland. What learning systems do intelligent agents need? complementary learning systems theory updated. Trends in cognitive sciences, 20(7):512–534, 2016.
- Lee et al. [2020] Janghyeon Lee, Hyeong Gwon Hong, Donggyu Joo, and Junmo Kim. Continual learning with extended kronecker-factored approximate curvature. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9001–9010, 2020.
- Levy et al. [2015] Omer Levy, Yoav Goldberg, and Ido Dagan. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3:211–225, 2015.
- Li and Hoiem [2017] Zhizhong Li and Derek Hoiem. Learning without forgetting. IEEE transactions on pattern analysis and machine intelligence, 40(12):2935–2947, 2017.
- Liu et al. [2018] Xialei Liu, Marc Masana, Luis Herranz, Joost Van de Weijer, Antonio M Lopez, and Andrew D Bagdanov. Rotate your networks: Better weight consolidation and less catastrophic forgetting. In 2018 24th International Conference on Pattern Recognition (ICPR), pages 2262–2268. IEEE, 2018.
- Mallya and Lazebnik [2018] Arun Mallya and Svetlana Lazebnik. Packnet: Adding multiple tasks to a single network by iterative pruning. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 7765–7773, 2018.
- Mallya et al. [2018] Arun Mallya, Dillon Davis, and Svetlana Lazebnik. Piggyback: Adapting a single network to multiple tasks by learning to mask weights. In Proceedings of the European conference on computer vision (ECCV), pages 67–82, 2018.
- Meng et al. [2019] Yu Meng, Jiaming Shen, Chao Zhang, and Jiawei Han. Weakly-supervised hierarchical text classification. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 6826–6833, 2019.
- Mermillod et al. [2013] Martial Mermillod, Aurélia Bugaiska, and Patrick Bonin. The stability-plasticity dilemma: Investigating the continuum from catastrophic forgetting to age-limited learning effects, 2013.
- Nie et al. [2023] Xing Nie, Shixiong Xu, Xiyan Liu, Gaofeng Meng, Chunlei Huo, and Shiming Xiang. Bilateral memory consolidation for continual learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16026–16035, 2023.
- O’Reilly and Norman [2002] Randall C O’Reilly and Kenneth A Norman. Hippocampal and neocortical contributions to memory: Advances in the complementary learning systems framework. Trends in cognitive sciences, 6(12):505–510, 2002.
- Park et al. [2019] Dongmin Park, Seokil Hong, Bohyung Han, and Kyoung Mu Lee. Continual learning by asymmetric loss approximation with single-side overestimation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3335–3344, 2019.
- Pham et al. [2021] Quang Pham, Chenghao Liu, and Steven Hoi. Dualnet: Continual learning, fast and slow. Advances in Neural Information Processing Systems, 34:16131–16144, 2021.
- Qi et al. [2023a] Biqing Qi, Bowen Zhou, Weinan Zhang, Jianxing Liu, and Ligang Wu. Improving robustness of intent detection under adversarial attacks: A geometric constraint perspective. IEEE transactions on neural networks and learning systems, PP, 2023a.
- Qi et al. [2023b] Biqing Qi, Bowen Zhou, Weinan Zhang, Jianxing Liu, and Ligang Wu. Improving robustness of intent detection under adversarial attacks: A geometric constraint perspective. IEEE Transactions on Neural Networks and Learning Systems, 2023b.
- Qin et al. [2021] Qi Qin, Wenpeng Hu, Han Peng, Dongyan Zhao, and Bing Liu. Bns: Building network structures dynamically for continual learning. Advances in Neural Information Processing Systems, 34:20608–20620, 2021.
- Rebuffi et al. [2017] Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, and Christoph H Lampert. icarl: Incremental classifier and representation learning. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 2001–2010, 2017.
- Richards and Frankland [2017] Blake A Richards and Paul W Frankland. The persistence and transience of memory. Neuron, 94(6):1071–1084, 2017.
- Riemer et al. [2018] Matthew Riemer, Ignacio Cases, Robert Ajemian, Miao Liu, Irina Rish, Yuhai Tu, and Gerald Tesauro. Learning to learn without forgetting by maximizing transfer and minimizing interference. arXiv preprint arXiv:1810.11910, 2018.
- Ryan and Frankland [2022] Tomás J Ryan and Paul W Frankland. Forgetting as a form of adaptive engram cell plasticity. Nature Reviews Neuroscience, 23(3):173–186, 2022.
- Saha et al. [2021] Gobinda Saha, Isha Garg, and Kaushik Roy. Gradient projection memory for continual learning. arXiv preprint arXiv:2103.09762, 2021.
- Sarfraz et al. [2023] Fahad Sarfraz, Elahe Arani, and Bahram Zonooz. Sparse coding in a dual memory system for lifelong learning. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 9714–9722, 2023.
- Schulman et al. [2022] John Schulman, Barret Zoph, Christina Kim, Jacob Hilton, Jacob Menick, Jiayi Weng, Juan Felipe Ceron Uribe, Liam Fedus, Luke Metz, Michael Pokorny, et al. Chatgpt: Optimizing language models for dialogue. OpenAI blog, 2022.
- Sun et al. [2023] Weinan Sun, Madhu Advani, et al. Organizing memories for generalization in complementary learning systems. Nature Neuroscience, 26(8):1438–1448, 2023.
- Tang et al. [2021] Shixiang Tang, Dapeng Chen, Jinguo Zhu, Shijie Yu, and Wanli Ouyang. Layerwise optimization by gradient decomposition for continual learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9634–9643, 2021.
- Van de Ven and Tolias [2019] Gido M Van de Ven and Andreas S Tolias. Three scenarios for continual learning. arXiv preprint arXiv:1904.07734, 2019.
- Wang et al. [2023] Liyuan Wang, Xingxing Zhang, Hang Su, and Jun Zhu. A comprehensive survey of continual learning: Theory, method and application. arXiv preprint arXiv:2302.00487, 2023.
- Wang et al. [2022a] Zhen Wang, Liu Liu, Yiqun Duan, Yajing Kong, and Dacheng Tao. Continual learning with lifelong vision transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 171–181, 2022a.
- Wang et al. [2022b] Zhen Wang, Liu Liu, Yajing Kong, Jiaxian Guo, and Dacheng Tao. Online continual learning with contrastive vision transformer. In ECCV, 2022b.
- Wang et al. [2022c] Zifeng Wang, Zizhao Zhang, et al. Dualprompt: Complementary prompting for rehearsal-free continual learning. In ECCV, pages 631–648, 2022c.
- Wang et al. [2022d] Zifeng Wang, Zizhao Zhang, Chen-Yu Lee, Han Zhang, Ruoxi Sun, Xiaoqi Ren, Guolong Su, Vincent Perot, Jennifer Dy, and Tomas Pfister. Learning to prompt for continual learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 139–149, 2022d.
- Wei et al. [2022] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.
- Winocur et al. [2007] Gordon Winocur, Morris Moscovitch, and Melanie Sekeres. Memory consolidation or transformation: context manipulation and hippocampal representations of memory. Nature neuroscience, 10(5):555–557, 2007.
- Yan et al. [2021] Shipeng Yan, Jiangwei Xie, and Xuming He. Der: Dynamically expandable representation for class incremental learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3014–3023, 2021.
- Zenke et al. [2017] Friedemann Zenke, Ben Poole, and Surya Ganguli. Continual learning through synaptic intelligence. In International Conference on Machine Learning, pages 3987–3995. PMLR, 2017.
- Zhang et al. [2017] Chao Zhang, Liyuan Liu, Dongming Lei, Quan Yuan, Honglei Zhuang, Timothy Hanratty, and Jiawei Han. Triovecevent: Embedding-based online local event detection in geo-tagged tweet streams. pages 595–604, 2017.
- Zhou et al. [2023] Qiang Zhou, Zhibin Wang, Wei Chu, Yinghui Xu, Hao Li, and Yuan Qi. Infmllm: A unified framework for visual-language tasks, 2023.
- Zhu et al. [2023] Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. Minigpt-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592, 2023.
Supplementary Material
6 Algorithms of ICL
We formalize the training and inference procedures of ICL in Algorithms 1 and 2. Here, we set the detection threshold as an upper-tail percentile of the standard normal distribution.
7 Datasets Settings
CIFAR-10 comprises 10 classes, with 50,000 training and 10,000 test color images in total. CIFAR-100 includes 100 classes, offering 500 training and 100 test images per class. ImageNet-R, an extension of the ImageNet dataset, comprises 200 classes and contains a total of 30,000 images, of which 6,000 are allocated to the test set.
CIFAR-10 was divided into five tasks with two classes per task. CIFAR-100 was divided into ten tasks with ten classes each. Similarly, ImageNet-R was organized into ten tasks of 20 classes each. Input images were resized to a fixed resolution and normalized. ICL was compared against both representative baselines and state-of-the-art methods across diverse buffer sizes and datasets.
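For reference, a class-incremental split such as the ten-task CIFAR-100 setting above can be constructed as follows. This is a sketch assuming a sequential class order and a placeholder transform; the actual class ordering and preprocessing may differ.

```python
import torch
from torch.utils.data import Subset
from torchvision import datasets, transforms

def make_cifar100_tasks(num_tasks=10, classes_per_task=10, root="./data"):
    tfm = transforms.Compose([transforms.Resize((224, 224)),    # placeholder resolution
                              transforms.ToTensor()])
    train = datasets.CIFAR100(root=root, train=True, download=True, transform=tfm)
    targets = torch.tensor(train.targets)
    tasks = []
    for t in range(num_tasks):
        cls = torch.arange(t * classes_per_task, (t + 1) * classes_per_task)
        idx = torch.isin(targets, cls).nonzero(as_tuple=True)[0]
        tasks.append(Subset(train, idx.tolist()))                # tasks[t] holds only its own classes
    return tasks
```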
8 Implementation Details
To ensure a fair comparison between methods, we uniformly resized the images and applied image normalization. Following the settings of [44, 32, 3, 4], we adopted a batch size of 10 and a single epoch for all methods during training, using cross-entropy as the classification loss. For L2P [46] and DualPrompt [45], we followed the implementation details of the original papers and employed ViT as the backbone network, while ResNet18 served as the backbone for the remaining methods. We meticulously reproduced the outcomes by adhering to the original implementations and settings. We set up separate Adam optimizers with a constant learning rate for the query and value memory parameters.
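The decoupled optimization can be wired up as two Adam optimizers over disjoint parameter groups, as in the sketch below; the learning-rate value is a placeholder, since the exact value is not recoverable here. In training, `opt_value` and `opt_query` are then stepped in the alternating fashion sketched in Sec. 3.2.

```python
import torch

def build_optimizers(query_params, value_memory, lr=1e-4):          # lr is a placeholder value
    opt_query = torch.optim.Adam(query_params, lr=lr)               # projector / interactor / CKT-MHA
    opt_value = torch.optim.Adam(value_memory.parameters(), lr=lr)  # class-specific value memory vectors
    return opt_query, opt_value
```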
9 Inference with System 1
We compare against directly applying rehearsal-based fine-tuning with a buffer of the same size, using only the pretrained ViT with a trainable classification head on each dataset. The results, shown in Tab. 3, are significantly lower than those obtained using only System1. This stark contrast provides strong evidence that neither ViT nor MiniGPT-4 relies on pretraining exposure to the three datasets and highlights the effectiveness of our proposed method.
| Memory Buffer | Method | CIFAR10 Class-IL | CIFAR10 Task-IL | CIFAR100 Class-IL | CIFAR100 Task-IL | ImageNet-R Class-IL | ImageNet-R Task-IL |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 200 | ViT Finetune | 33.15 | 96.00 | 32.60 | 91.50 | 20.88 | 64.45 |
| | ICL w/o System2 | 94.60 | 99.43 | 77.34 | 94.81 | 49.87 | 68.62 |
| 500/600 | ViT Finetune | 62.65 | 97.15 | 45.30 | 92.80 | 33.26 | 75.50 |
| | ICL w/o System2 | 95.54 | 99.52 | 80.67 | 95.24 | 54.65 | 76.02 |
10 Inference with System 2
To validate the mutually beneficial interaction between the two systems, we conduct experiments using the pre-trained MiniGPT4 [53] to perform inference on the test sets of CIFAR-10, CIFAR-100, and ImageNet-R. MiniGPT4 loads the official 7B pre-trained parameters, and the prompt used for MiniGPT4 is the same as that used by System2. Since System1 is not used to provide a Top-K option here, we provide all categories to MiniGPT4, allowing it to select a category for image classification based on its image description. Tab. 4 presents the reasoning accuracy, the error rate, and the proportion of cases with no exact response (i.e., either none or more than one of the given classes is returned).
| Dataset | Accuracy (%) | Error (%) | No Response (%) | Total Images |
| --- | --- | --- | --- | --- |
| CIFAR-10 | 9.53 | 15.04 | 75.43 | 10000 |
| CIFAR-100 | 2.45 | 14.53 | 83.02 | 10000 |
| ImageNet-R | 2.67 | 10.33 | 87.00 | 6000 |
The results in the table indicate that, on CIFAR-10, over 75% of the images fed into MiniGPT4 fail to return a specific class. When faced with CIFAR-100 and ImageNet-R, whose prompts include a larger number of classes, MiniGPT4 encounters even greater difficulty in making accurate selections. Among the images for which specific class information was returned, over two-thirds were misclassified. These results demonstrate that relying solely on MiniGPT4 for image classification yields poor performance. Nevertheless, when System1 offers the Top-K option, incorporating MiniGPT4 as System2 enhances the image classification task and improves the final accuracy. This finding demonstrates that interactive inference between System1 and System2 enables mutual promotion and improvement.
The limitations of MiniGPT-4 restricted the performance enhancement from System2. To address this, we adopted more advanced MLLMs as System2. As shown in Tab. 1, this yields a notable 3-4% improvement, especially on the challenging ImageNet-R dataset.