Continual Contrastive Learning for Image Classification
Abstract
Recently, self-supervised representation learning has driven further progress in multimedia technology. Most existing self-supervised learning methods are designed for static, pre-collected data. However, when it comes to streamed data, they suffer from the catastrophic forgetting problem, which has not been studied extensively. In this paper, we make the first attempt to tackle the catastrophic forgetting problem in the mainstream self-supervised methods, i.e., contrastive learning methods. Specifically, we first develop a rehearsal-based framework combined with a novel sampling strategy and self-supervised knowledge distillation to transfer information over time efficiently. Then, we propose an extra sample queue to help the network separate the feature representations of old and new data in the embedding space. Experimental results show that, compared with the naive self-supervised baseline, which learns tasks one by one without any anti-forgetting technique, we improve the image classification accuracy considerably on CIFAR-100, ImageNet-Sub, and ImageNet-Full under the 10-incremental-step setting. Our code will be available at https://github.com/VDIGPKU/ContinualContrastiveLearning.
Index Terms— Self-supervised learning, Continual learning, Classification
1 Introduction
Recently, as a novel branch of unsupervised learning, self-supervised learning has been proposed to learn general feature representations from unlabeled data, and it further boosts the performance of multimedia technology. The great success of self-supervised learning on visual representation relies on its impressive potential for learning from large-scale unlabeled data. Many experiments show that with larger and more diverse training data, a self-supervised model can learn a better feature representation. However, in most practical scenarios, the unlabeled training data arrive as a stream. Limited storage and computational power do not allow us to collect all the data together and train the model on it at once.
For streamed data, directly training the model on it and updating its parameters causes the catastrophic forgetting problem, namely, a drastic performance drop on the old data when the model is trained on new data. The catastrophic forgetting problem has been discovered and studied in supervised learning on many tasks [1][2][3]. However, under the unsupervised learning setting, the problem is seldom studied [4][5]. In our experiments, we find that self-supervised learning also suffers from catastrophic forgetting. For example, Fig. 1(a) illustrates the catastrophic forgetting of several self-supervised learning methods: the classification performance of these methods clearly decreases as they learn from streamed data. Moreover, Fig. 1(b) shows that when the number of incremental steps increases from 2 to 10, MoCoV2 [6] forgets more of the knowledge it has learned. Hence, a continual self-supervised learning method is required.
[Fig. 1: (a) classification accuracy of several self-supervised learning methods on streamed data; (b) forgetting of MoCoV2 [6] as the number of incremental steps increases from 2 to 10.]
In this paper, we try to alleviate catastrophic forgetting in self-supervised learning. Specifically, we focus on the mainstream branch of self-supervised learning, contrastive learning, and consider a specific continual learning task, i.e., Class-Incremental Learning, in which each incremental dataset consists of samples from new classes. We store a subset of samples from the old data at each incremental step and develop a rehearsal-based framework. In detail, first, to select more representative samples, we propose a novel sampling strategy, which differs from previous sampling strategies in supervised continual learning in the following aspects: (1) our sampling strategy does not rely on data labels; (2) samples are ranked by feature variance instead of classification confidence. Second, to utilize the stored samples efficiently, we further introduce self-supervised knowledge distillation into our framework to better transfer the previously learned feature representations. Moreover, although the feature representations of old data are preserved by sample selection and knowledge distillation, when the network learns the feature representations of new data, the region occupied by new-data feature vectors in the embedding space may mix with that of the old data (as illustrated in Fig. 3). To address this issue, we use an extra sample queue to help the network discriminate new data from old data.
The main contributions of this work can be summarized as:
- To the best of our knowledge, we are the first to address the problem of catastrophic forgetting in contrastive learning.
- We develop a rehearsal-based framework, which utilizes a novel sampling strategy and self-supervised knowledge distillation to transfer knowledge from old data.
- We propose an extra sample queue to reduce the interference between feature representation distributions when the network learns a new task.
- We improve the performance of our baseline methods on CIFAR-100 and ImageNet significantly.
2 Related Work
2.1 Self-supervised Learning
The core idea of self-supervised learning is to train a neural network on large-scale unlabeled data by designing specific pretext tasks, so that the network can learn a general and transferable feature representation. At present, instance discrimination [7], or more specifically, contrastive learning [8][9][10][11], has made breakthroughs and achieved new state-of-the-art performance in feature representation learning. In instance discrimination methods, each image is treated as its own category and distinguished from all other images, and the learning objective is to maximize the mutual information between two views of the same image generated by different data augmentation operations.
When training a self-supervised learning network, a large-scale dataset is indispensable. However, directly training on a whole large dataset may be unaffordable in many scenarios. Moreover, new data is usually generated continuously. Collecting old and new data together and training the network again is a time- and resource-consuming solution. Thus, a continual style of self-supervised learning is imperative in practical applications.
2.2 Class Incremental Learning
Class-Incremental Learning (CIL) aims to learn a classification model with the number of classes increasing step by step. Existing works on CIL often adopt rehearsal methods and knowledge distillation to tackle the catastrophic forgetting problem. Rehearsal-based methods [1][12] try to select a representative set of samples from the old data. Specifically, these works use data labels, or a classifier trained with labeled data, to select the samples. However, in self-supervised continual learning, there is no access to data labels. Thus, we propose a novel sampling strategy based on feature variance, which does not rely on data labels. Distillation-based methods [13][14] use a knowledge distillation loss as a regularization term to preserve previous knowledge when learning new data. Many of them use supervised knowledge distillation, i.e., the predicted label logits of the new model are enforced to be close to those of the old model. However, directly applying a supervised knowledge distillation loss to self-supervised continual learning is problematic, since there is no classification predictor. Thus, we adopt a self-supervised knowledge distillation loss [15] in our method, which does not require a classification head. Moreover, unlike traditional knowledge distillation methods, which often fix the teacher network during distillation, we propose a momentum teacher design.
3 Proposed Method

We implement our Continual Contrastive Learning (CCL) method based on the widely used contrastive learning framework MoCoV2 [6], and the overall pipeline is shown in Fig. 2. The main components of our method are introduced as follows.
3.1 MoCoV2
First, we introduce MoCoV2 [6] briefly for better understanding. MoCoV2 [6] contains two encoders, a query encoder $f_q$ and a key encoder $f_k$, and a memory bank. Given an input image $x$, we first transform it into two views $x_q$ and $x_k$ with two different augmentations. Then, we obtain the query vector $q = f_q(x_q)$ and its positive key $k_+ = f_k(x_k)$. The memory bank stores the negative keys $\{k_i^-\}$ of $q$. Finally, MoCoV2 [6] adopts a contrastive loss to pull $q$ toward $k_+$ and push it away from the negative keys.
Specifically, the contrastive loss is defined as:
$$\mathcal{L}_{con} = -\log \frac{\exp(q \cdot k_{+}/\tau)}{\exp(q \cdot k_{+}/\tau) + \sum_{i=1}^{K} \exp(q \cdot k_{i}^{-}/\tau)} \qquad (1)$$
where $\tau$ is the temperature hyper-parameter and $K$ is the number of negative samples.
Moreover, the two encoders $f_q$ and $f_k$ share the same architecture, i.e., the backbone of the network followed by an extra MLP, but with different weights $\theta_q$ and $\theta_k$. The parameters $\theta_q$ are updated by back-propagation, and the parameters $\theta_k$ are updated by a momentum strategy, that is,
$$\theta_k \leftarrow m\,\theta_k + (1-m)\,\theta_q \qquad (2)$$
where $m$ is the momentum coefficient.
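For concreteness, a minimal PyTorch-style sketch of the contrastive loss in Eq. (1) and the momentum update in Eq. (2) is given below. The tensor shapes, default temperature, and momentum value follow common MoCo settings and are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def moco_contrastive_loss(q, k_pos, queue, tau=0.2):
    """InfoNCE loss of Eq. (1).

    q:      (N, C) normalized query features
    k_pos:  (N, C) normalized positive key features
    queue:  (C, K) normalized negative keys from the memory bank
    """
    l_pos = torch.einsum("nc,nc->n", q, k_pos).unsqueeze(-1)   # (N, 1) positive logits
    l_neg = torch.einsum("nc,ck->nk", q, queue)                # (N, K) negative logits
    logits = torch.cat([l_pos, l_neg], dim=1) / tau            # (N, 1+K)
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)                     # positive is class 0

@torch.no_grad()
def momentum_update(f_k, f_q, m=0.999):
    """Eq. (2): theta_k <- m * theta_k + (1 - m) * theta_q."""
    for p_k, p_q in zip(f_k.parameters(), f_q.parameters()):
        p_k.data.mul_(m).add_(p_q.data, alpha=1.0 - m)
```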
3.2 Rehearsal with Knowledge Distillation
Rehearsal is widely used in supervised continual learning. By replaying stored samples from previous tasks, we find that rehearsal methods can also help alleviate catastrophic forgetting in self-supervised continual learning. Besides, inspired by many supervised continual learning methods, we utilize self-supervised knowledge distillation to transfer the contrastive information when the network learns a new task.
In continual learning, given a sequence of datasets $\{D_1, \dots, D_T\}$, the network is trained continuously on the dataset $D_t$ at time $t$. In rehearsal methods, we store a small fraction of data from $D_t$ after the training on $D_t$ is finished. Instead of randomly storing samples from the old data, we propose a novel sampling strategy based on feature variance. Specifically, when the training on $D_t$ is finished, we feed all images of $D_t$ into the query encoder to extract feature vectors. Then we group the vectors with the K-Means algorithm into $K$ clusters. In our experiments, we set $K$ equal to the number of classes in $D_t$. Then, for each image in a cluster, we generate several different views by applying the data augmentations used in MoCoV2 [6]. After that, we calculate the variance of the feature vectors of these views; we find that a small number of views is enough for measuring the variance. Finally, for each cluster, we store the images with the smallest variance. Essentially, the images with the smallest variance are the ones best learned by the network, so their contrastive information can be easily transferred to the new task through self-supervised knowledge distillation.
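A simplified sketch of this sampling strategy is shown below. The number of views, the cluster count, and the per-cluster budget are placeholders (the paper does not fix them here); only the overall procedure — cluster, augment, rank by feature variance — follows the description above.

```python
import torch
from sklearn.cluster import KMeans

@torch.no_grad()
def select_rehearsal_samples(images, encoder, augment, n_clusters,
                             n_views=4, per_cluster=20, device="cuda"):
    """Rank images inside each K-Means cluster by the variance of their
    augmented-view features and keep the lowest-variance ones."""
    encoder.eval()
    feats = torch.stack([encoder(img.unsqueeze(0).to(device)).squeeze(0).cpu()
                         for img in images])
    labels = KMeans(n_clusters=n_clusters).fit_predict(feats.numpy())

    selected = []
    for c in range(n_clusters):
        idxs = [i for i, l in enumerate(labels) if l == c]
        variances = []
        for i in idxs:
            views = torch.stack([augment(images[i]) for _ in range(n_views)]).to(device)
            v = encoder(views)                        # (n_views, C) view features
            variances.append(v.var(dim=0).mean().item())
        order = sorted(range(len(idxs)), key=lambda j: variances[j])
        selected += [idxs[j] for j in order[:per_cluster]]
    return selected  # indices of the images to store for rehearsal
```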
Knowledge distillation is another way to address the catastrophic forgetting problem. Hence, we introduce it into our method to further enhance its ability to handle catastrophic forgetting. Following some self-supervised knowledge distillation methods [15][16], we utilize the contrastive similarity between images from the teacher network as the distillation target for the student network. Specifically, given the sampled images $x_o$ from the old dataset, we obtain $x'_o$ after one data augmentation. Then, $x_o$ and $x'_o$ are mapped and normalized into feature representations $z^t$, $z'^t$, $z^s$, and $z'^s$ of shape $N \times d$, where $N$ is the number of images, $d$ is the feature dimension, and the superscripts $t$ and $s$ denote the query encoders of the teacher and the student, respectively. We compute the similarity matrix $A^t$ between the feature vectors $z^t$ and $z'^t$ extracted by the teacher. After that, the softmax function with a temperature scaling factor is applied to $A^t$ for normalization. We obtain $A^s$ with the same operation from the student. Finally, we adopt the KL-divergence loss between $A^t$ and $A^s$:
$$\mathcal{L}_{kd} = D_{KL}\left(A^{t}\,\|\,A^{s}\right) = \sum_{i,j} A^{t}_{ij} \log \frac{A^{t}_{ij}}{A^{s}_{ij}} \qquad (3)$$
Besides, compared to supervised continual learning, which often fixes the teacher network during distillation, we newly find that a momentum teacher brings better performance. Thus, for each training epoch, we update the teacher network in a momentum style:
$$\theta_{t} \leftarrow m_{t}\,\theta_{t} + (1-m_{t})\,\theta_{s} \qquad (4)$$
where $\theta_t$ and $\theta_s$ are the parameters of the teacher and the student query encoders, and the momentum coefficient $m_t$ is fixed in our experiments.
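The following sketch illustrates one way to compute the distillation loss of Eq. (3) with a momentum teacher as in Eq. (4). The temperature `tau_kd` and momentum `m_t` defaults are assumptions of ours, not the paper's reported settings.

```python
import torch
import torch.nn.functional as F

def similarity_distillation_loss(z_t, zp_t, z_s, zp_s, tau_kd=0.1):
    """Eq. (3): KL divergence between teacher and student similarity matrices.

    z_t, zp_t: (N, d) L2-normalized teacher features of x_o and its augmented view
    z_s, zp_s: (N, d) L2-normalized student features of the same images
    """
    a_t = F.softmax(z_t @ zp_t.t() / tau_kd, dim=-1)          # teacher similarity rows A^t
    log_a_s = F.log_softmax(z_s @ zp_s.t() / tau_kd, dim=-1)  # student similarity rows A^s
    return F.kl_div(log_a_s, a_t, reduction="batchmean")      # KL(A^t || A^s)

@torch.no_grad()
def update_teacher(teacher, student, m_t=0.99):
    """Eq. (4): momentum update of the teacher after each training epoch."""
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.data.mul_(m_t).add_(p_s.data, alpha=1.0 - m_t)
```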

3.3 Extra Sample Queue
As mentioned in the introduction, learning the feature representations of a new task causes a mixture of the feature spaces of the old and new data. As illustrated in Fig. 3, when the network learns the feature representations (orange circles) of new data, it may interfere with the feature representations (blue triangles) of old data. Thus, we propose an extra sample queue (red triangles) to push the features of new data away from the features of old data.
Specifically, the extra sample queue only contains the negative samples from the sampled old images. For a mini-batch $\{x_o, x_n\}$, where $x_o$ denotes the sampled images from old data and $x_n$ the images from new data, we send them to the two encoders of MoCoV2 [6] and obtain the feature representations $q$ and $k$. The network discriminates the data of the new dataset from the data of the old datasets by computing a contrastive loss between $q_n$, $k_n$, and the extra sample queue. In every iteration, we pick out the old-data key features $k_o$ and use them to update the extra sample queue.
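A sketch of how the extra sample queue could work is given below; the FIFO update mirrors MoCo's memory bank and is an assumption of ours, while the default size of 128 follows the implementation details in Section 4.

```python
import torch
import torch.nn.functional as F

class ExtraSampleQueue:
    """FIFO buffer holding key features k_o of rehearsed old-data samples."""

    def __init__(self, dim, size=128):
        self.buffer = F.normalize(torch.randn(dim, size), dim=0)  # (C, size)
        self.ptr, self.size = 0, size

    @torch.no_grad()
    def enqueue(self, k_o):
        """Replace the oldest entries with the latest old-data keys k_o of shape (N, C)."""
        self.buffer = self.buffer.to(k_o.device)
        n = k_o.size(0)
        idx = torch.arange(self.ptr, self.ptr + n, device=k_o.device) % self.size
        self.buffer[:, idx] = k_o.t()
        self.ptr = (self.ptr + n) % self.size

    def loss(self, q_new, k_new, tau=0.2):
        """Contrastive loss pushing new-data queries away from stored old-data keys."""
        l_pos = torch.einsum("nc,nc->n", q_new, k_new).unsqueeze(-1)
        l_neg = q_new @ self.buffer.to(q_new.device)
        logits = torch.cat([l_pos, l_neg], dim=1) / tau
        labels = torch.zeros(logits.size(0), dtype=torch.long, device=q_new.device)
        return F.cross_entropy(logits, labels)
```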
To sum up, the total loss is given by:
$$\mathcal{L} = \lambda_{1}\mathcal{L}_{con} + \lambda_{2}\mathcal{L}_{kd} + \lambda_{3}\mathcal{L}_{esq} \qquad (5)$$
where $\lambda_1$, $\lambda_2$, and $\lambda_3$ are balancing weights, which are fixed in our experiments.
Method | CIFAR-100 (T=5) | CIFAR-100 (T=10) | ImageNet-Sub (T=5) | ImageNet-Sub (T=10) | ImageNet-Full (T=10)
---|---|---|---|---|---
Upper Bound | 57.63 ± 0.12 | 57.63 ± 0.12 | 60.54 | 60.54 | 67.50
Finetuning | 51.08 ± 0.34 | 48.40 ± 0.42 | 56.56 | 52.62 | 61.50
Simple Rehearsal | 52.61 ± 0.41 | 49.44 ± 0.36 | 57.98 | 54.28 | 62.05
Our | – | – | 58.92 | 55.48 | 62.79
4 Experiment
4.1 Settings
4.1.1 Datasets.
We evaluate our method on two popular datasets for class-incremental learning, i.e., CIFAR-100 and ImageNet-Sub&Full. Specifically, ImageNet-Sub is a subset of ImageNet-Full, which contains 100 classes of images selected from ImageNet-Full randomly. For a given dataset, the classes are first arranged in a fixed random order and then cut into T incremental splits that come sequentially.
4.1.2 Protocols.
We follow [17] to split the dataset into incremental steps and manage the memory budget. 1) For each split dataset, all classes are divided equally to come in T incremental steps. 2) For the old training set, a constant number of images are stored after each incremental step.
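For clarity, a minimal sketch of this protocol (a fixed random class order split equally into T incremental steps) might look as follows; the seed and variable names are illustrative.

```python
import random

def make_incremental_splits(class_names, T, seed=0):
    """Shuffle classes once in a fixed random order, then split them equally into T steps."""
    rng = random.Random(seed)
    classes = list(class_names)
    rng.shuffle(classes)
    per_step = len(classes) // T
    return [classes[i * per_step:(i + 1) * per_step] for i in range(T)]

# e.g. CIFAR-100 with T=10 gives 10 incremental steps of 10 classes each
splits = make_incremental_splits([f"class_{i}" for i in range(100)], T=10)
```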
4.1.3 Evaluation Metrics.
Following common practice in self-supervised learning, we report the top-1 accuracy of a linear classifier trained on top of the frozen backbone after each incremental step.
4.1.4 Implementation Details.
The experiments are conducted on CIFAR-100 with a modified ResNet18 as the backbone and on ImageNet-Sub&Full with ResNet50 as the backbone. We adopt MoCoV2 [6] as our basic method. Each incremental training step consists of 200 epochs. For ImageNet-Sub&Full, we use the same training hyperparameters as MoCoV2 [6]. For CIFAR-100, we follow the training hyperparameters in the official implementation of MoCoV2 [6] on CIFAR-10, and Split BN is used to simulate the 8-GPU behavior of BatchNorm on a single GPU. The size of the extra sample queue is set to 128 for both ImageNet-Sub&Full and CIFAR-100. For linear classification, we follow the setting of MoCoV2 [6]. For each dataset, we divide the classes equally into T parts in a random order. We store 20 images per class at every incremental step. The results on CIFAR-100 are averaged over 6 trials.
4.2 Main Results
Besides our method, we implement and test two naive continual contrastive learning methods, Finetuning and Simple Rehearsal. Finetuning learns new tasks one by one without taking any technique to prevent catastrophic forgetting. Simple Rehearsal randomly stores a constant number of images after training on the dataset $D_t$ and adds the stored images to the training set of the subsequent task $D_{t+1}$. It is worth noting that Simple Rehearsal does not use techniques like knowledge distillation or the extra sample queue.
Table 1 shows the performance of different continual contrastive learning methods on CIFAR-100 and ImageNet. One can see from each column of Table 1 that our method outperforms the others. More specifically, under the 10-incremental-step setting, the top-1 accuracy is improved from 52.62 to 55.48 on ImageNet-Sub and from 61.50 to 62.79 on ImageNet-Full, with a consistent improvement on CIFAR-100. These results show that our method works well on both small and large datasets, and demonstrate the effectiveness of our proposed continual contrastive learning method.
Besides MoCoV2 [6], we also apply our method to other contrastive learning methods, including SimCLR [9] and InsDisc [7]. More details can be found in the Appendix. The results on CIFAR-100 are shown in Fig. 4. We can see that our method also improves the linear classification accuracy of other contrastive learning methods, which demonstrates the generalization ability of our method.

Method | Top-1 Acc |
---|---|
Finetuning | 52.62 |
+ Random Sampling | 54.28 (+1.66) |
+ Our Sampling Strategy | 55.18 (+2.56) |
+ Knowledge Distillation | 55.21 (+2.60) |
+ Extra Sample Queue | 55.48 (+2.86) |
4.3 Ablation Study
To test the effectiveness of each component in our approach, we conduct ablation studies on ImageNet-Sub under 10 incremental steps setting.
In the first experiment, we evaluate the main components of our method: the sampling strategy with feature variance measurement, self-supervised knowledge distillation, and the extra sample queue. As described in Sections 3.2 and 3.3, the data sampling strategy and knowledge distillation are exploited to distill the contrastive information from the previous contrastive learning process, and the extra sample queue is exploited to separate the feature spaces of the new and old data. As shown in Table 2, our novel sampling strategy, knowledge distillation, and extra sample queue boost the performance consistently, yielding a total accuracy improvement of 2.86 points over Finetuning when all of them are applied.

Besides, since the number of categories of the unlabeled data cannot be determined, the hyper-parameter K of the K-Means algorithm is set as prior knowledge in our experiments. Fig. 5 shows how the top-1 accuracy on ImageNet-Sub varies with the choice of K. We find that when K varies within a certain range, from 5 to 15, the performance of our method changes little. However, when the gap between K and the real number of classes is too large, the accuracy decreases rapidly.
Size | Top-1 Acc
---|---
32 | 53.01 ± 0.33
64 | 53.61 ± 0.28
128 | 53.80 ± 0.37
256 | 53.59 ± 0.35
512 | 53.23 ± 0.34
We further explore the effect of the size of the extra sample queue and conduct the experiment on CIFAR-100. Table 3 shows that when the size of the extra sample queue increases, the performance is not improved consistently. A possible reason is that, when the size is large, the update rate of the extra sample queue is slow. Thus, many feature vectors in the extra sample queue are from previous iterations, which are detrimental to the update of the model.
4.4 Limitations
In this paper, we try to solve the problem of continual contrastive learning. The experimental results show that our method works well on both small and large datasets. However, our work still has many limitations. First, our method is only applicable to contrastive learning methods. Applying it to other self-supervised learning methods, such as GAN [18], BYOL [11], and MAE [19], requires some additional revisions. Second, though our method narrows the gap between continual contrastive learning and its upper bound, there is still a large gap between them. We hope our work can give researchers some insights into the catastrophic forgetting problem in contrastive learning, and inspire more research in this direction.
5 Conclusion
In this paper, we propose a rehearsal-based continual contrastive learning framework to alleviate catastrophic forgetting in contrastive learning. Our method stores a small number of images from old data with a novel sampling strategy and rehearses them while learning the new dataset. Besides, we exploit self-supervised knowledge distillation and propose an extra sample queue to make the network learn a better feature representation from old and new data. Extensive experimental results and analyses demonstrate the effectiveness of our method.
References
- [1] Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, and Christoph H. Lampert, “icarl: Incremental classifier and representation learning,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 5533–5542.
- [2] Konstantin Shmelkov, Cordelia Schmid, and Karteek Alahari, “Incremental learning of object detectors without catastrophic forgetting,” in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 3420–3429.
- [3] Arthur Douillard, Yifu Chen, Arnaud Dapogny, and Matthieu Cord, “Plop: Learning without forgetting for continual semantic segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2021, pp. 4040–4050.
- [4] Mark Schutera, Frank M. Hafner, Jochen Abhau, Veit Hagenmeyer, Ralf Mikut, and Markus Reischl, “Cuepervision: self-supervised learning for continuous domain adaptation without catastrophic forgetting,” Image and Vision Computing, vol. 106, pp. 104079, 2021.
- [5] Kibok Lee, Kimin Lee, Jinwoo Shin, and Honglak Lee, “Overcoming catastrophic forgetting with unlabeled data in the wild,” in Proceedings of the IEEE International Conference on Computer Vision, 2019, pp. 312–321.
- [6] Xinlei Chen, Haoqi Fan, Ross B. Girshick, and Kaiming He, “Improved baselines with momentum contrastive learning,” arXiv preprint arXiv:2003.04297, 2020.
- [7] Zhirong Wu, Yuanjun Xiong, Stella X. Yu, and Dahua Lin, “Unsupervised feature learning via non-parametric instance discrimination,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 3733–3742.
- [8] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross B. Girshick, “Momentum contrast for unsupervised visual representation learning,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2020, pp. 9726–9735.
- [9] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton, “A simple framework for contrastive learning of visual representations,” in International conference on machine learning, 2020, pp. 1597–1607.
- [10] Xinlei Chen and Kaiming He, “Exploring simple siamese representation learning,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2021, pp. 15750–15758.
- [11] Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H. Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Ávila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, Bilal Piot, Koray Kavukcuoglu, Rémi Munos, and Michal Valko, “Bootstrap your own latent - A new approach to self-supervised learning,” in Advances in Neural Information Processing Systems, 2020.
- [12] Saihui Hou, Xinyu Pan, Chen Change Loy, Zilei Wang, and Dahua Lin, “Learning a unified classifier incrementally via rebalancing,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 831–839.
- [13] Zhizhong Li and Derek Hoiem, “Learning without forgetting,” in Proceedings of the European conference on computer vision, 2016, pp. 614–629.
- [14] Hongjoon Ahn, Jihwan Kwak, Subin Lim, Hyeonsu Bang, Hyojun Kim, and Taesup Moon, “Ss-il: Separated softmax for incremental learning,” in Proceedings of the IEEE International Conference on Computer Vision, 2021, pp. 844–853.
- [15] Zhiyuan Fang, Jianfeng Wang, Lijuan Wang, Lei Zhang, Yezhou Yang, and Zicheng Liu, “SEED: self-supervised distillation for visual representation,” in Proceedings of International Conference on Learning Representations, 2021.
- [16] Guodong Xu, Ziwei Liu, Xiaoxiao Li, and Chen Change Loy, “Knowledge distillation meets self-supervision,” in Proceedings of the European Conference on Computer Vision, 2020, pp. 588–604.
- [17] Saihui Hou, Xinyu Pan, Chen Change Loy, Zilei Wang, and Dahua Lin, “Learning a unified classifier incrementally via rebalancing,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 831–839.
- [18] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio, “Generative adversarial networks,” arXiv preprint arXiv:1406.2661, 2014.
- [19] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick, “Masked autoencoders are scalable vision learners,” arXiv preprint arXiv:2111.06377, 2021.
Appendix
A Pseudo-code
We provide the pseudo-code of our method here.
Input: datasets {D_1, ..., D_T}, query encoder f_q, key encoder f_k, teacher encoder f_t,
       augmentation aug(·), queue Q, extra sample queue Q_e, stored sample set M = ∅
for t = 1, ..., T do
    Params(f_t) ← Params(f_q)
    for epoch = 1, ..., E do
        for minibatch x = {x_o, x_n} in D_t ∪ M do
            # get positive and negative pairs
            q = f_q(aug(x))
            k = f_k(aug(x))
            # calculate MoCo loss and ESQ loss
            L_con = ContrastiveLoss(q, k, Q)
            L_esq = ContrastiveLoss(q_n, k_n, Q_e)
            # knowledge distillation
            # get embeddings of the stored old samples x_o
            z_t, z'_t = f_t(x_o), f_t(aug(x_o))
            z_s, z'_s = f_q(x_o), f_q(aug(x_o))
            # calculate the similarity matrices
            A_t = Softmax(Similarity(z_t, z'_t) / τ, dim=-1)
            A_s = Softmax(Similarity(z_s, z'_s) / τ, dim=-1)
            # calculate the knowledge distillation loss
            L_kd = KLDivergence(A_t, A_s)
            L = λ1·L_con + λ2·L_kd + λ3·L_esq
            # update the model
            Update(f_q, L)
            # update the momentum encoder
            MomentumUpdate(f_k, f_q, m)
            # update the queue and the extra sample queue
            QueueUpdate(Q, k)
            k_o = keys in k belonging to the stored old samples x_o
            QueueUpdate(Q_e, k_o)
        end for
        # update the teacher network
        MomentumUpdate(f_t, f_q, m_t)
    end for
    # select samples from the current dataset to store
    M = M ∪ SampleStrategy(D_t, f_q)
end for
return f_q
B Additional results
Besides linear evaluation, we also evaluate our method under the Forgetting (F) and Forward Transfer (TF) metrics, that is,
$$\mathrm{F} = \frac{1}{T-1}\sum_{i=1}^{T-1}\left(a_{i,i} - a_{T,i}\right), \qquad \mathrm{TF} = \frac{1}{T-1}\sum_{i=2}^{T}\left(a_{i-1,i} - \bar{a}_{i}\right),$$
where $a_{i,j}$ is the linear evaluation accuracy of the model on dataset $D_j$ after observing the last sample from dataset $D_i$, and $\bar{a}_j$ is the linear evaluation accuracy of a randomly initialized model on dataset $D_j$.
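Under the definitions above (and assuming the average-over-tasks formulation of the Forgetting metric), these metrics can be computed from the accuracy matrix as in the sketch below; the array layout is an assumption of ours.

```python
import numpy as np

def forgetting_and_forward_transfer(acc, acc_rand):
    """acc[i, j]: linear-eval accuracy on dataset j after training through dataset i (T x T, 0-indexed).
    acc_rand[j]: linear-eval accuracy of a randomly initialized model on dataset j."""
    T = acc.shape[0]
    f = np.mean([acc[i, i] - acc[T - 1, i] for i in range(T - 1)])      # Forgetting
    tf = np.mean([acc[i - 1, i] - acc_rand[i] for i in range(1, T)])    # Forward Transfer
    return f, tf
```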
The results are shown in Table A. We find that our method outperforms Finetuning under both the Forgetting and Forward Transfer metrics. The results further demonstrate that our method can alleviate catastrophic forgetting in self-supervised learning.
Method | F (T=5, ↓) | TF (T=5, ↑) | F (T=10, ↓) | TF (T=10, ↑)
---|---|---|---|---
Finetuning | 0.7 | 47.1 | 3.5 | 45.8
Simple Rehearsal | 0.3 | 48.4 | 2.0 | 47.0
Our | 0.3 | 48.3 | 1.6 | 47.3
C Implementation Detail
To demonstrate the generalization ability of our method, we apply it to SimCLR and InsDisc. We give the implementation details here.
Two loss terms, the knowledge distillation loss $\mathcal{L}_{kd}$ and the ESQ loss $\mathcal{L}_{esq}$, need to be added. First, the knowledge distillation term can be directly added to SimCLR and InsDisc without any modification. Second, the extra sample queue provides extra negative samples from the old data. For both SimCLR and InsDisc, these extra negative samples are used to compute a contrastive loss with the original positive samples, which forms the ESQ loss term.
The hyperparameters, including the balancing weights, the temperature, and the size of the extra sample queue, are the same as those used in our MoCoV2-based experiments.