
Continual Contrastive Learning for Image Classification

Abstract

Recently, self-supervised representation learning has brought further progress to multimedia technology. Most existing self-supervised learning methods are designed for packaged data. However, when it comes to streamed data, they suffer from the catastrophic forgetting problem, which has not been studied extensively. In this paper, we make the first attempt to tackle the catastrophic forgetting problem in the mainstream self-supervised methods, i.e., contrastive learning methods. Specifically, we first develop a rehearsal-based framework combined with a novel sampling strategy and self-supervised knowledge distillation to transfer information over time efficiently. Then, we propose an extra sample queue to help the network separate the feature representations of old and new data in the embedding space. Experimental results show that, compared with the naive self-supervised baseline, which learns tasks one by one without taking any measure against forgetting, we improve the image classification accuracy by 1.60% on CIFAR-100, 2.86% on ImageNet-Sub, and 1.29% on ImageNet-Full under the 10-incremental-step setting. Our code will be available at https://github.com/VDIGPKU/ContinualContrastiveLearning.

Index Terms—  Self-supervised learning, Continual learning, Classification

1 Introduction

Recently, as a novel branch of unsupervised learning, self-supervised learning has been proposed to learn general feature representations from unlabeled data, further boosting the performance of multimedia technology. The great success of self-supervised learning on visual representation relies on its impressive potential for learning from large-scale unlabeled data. Many experiments show that, with larger and more diverse training data, a self-supervised model can learn a better feature representation. However, in most practical scenarios, the unlabeled training data arrive as a stream. The limitations of storage and computational power do not allow us to collect all data together and use it to train the model.

For streamed data, directly training the model on it and updating its parameters causes the catastrophic forgetting problem, namely, a drastic performance drop on old data when the model is trained on new data. The catastrophic forgetting problem has been discovered and studied in supervised learning on many tasks [1][2][3]. However, under the unsupervised learning setting, the problem is seldom studied [4][5]. In our experiments, we find that self-supervised learning also suffers from catastrophic forgetting. For example, Fig. 1(a) illustrates the catastrophic forgetting in several self-supervised learning methods: the classification performance of these methods decreases as they learn from the streamed data. Moreover, Fig. 1(b) shows that when the number of incremental steps increases from 2 to 10, MoCoV2 [6] forgets more of the knowledge it has learned. Hence, a continual self-supervised learning method is required.


Fig. 1: Illustration of the catastrophic forgetting in self-supervised learning. (a) The performance degradations of several self-supervised learning methods on CIFAR-100; (b) Linear evaluation top-1 accuracy on CIFAR-100 of MoCoV2 under different Class-Incremental Learning settings.

In this paper, we try to alleviate the catastrophic forgetting in self-supervised learning. Specifically, we focus on the mainstream of self-supervised learning, contrastive learning, and consider a specific continual learning task, i.e., Class-Incremental Learning, in which each incremental dataset consists of samples from new classes. We store a subset of samples from the old data at each incremental step and develop a rehearsal-based framework. In detail, first, to select more representative samples, we propose a novel sampling strategy, which differs from previous sampling strategies in supervised continual learning in the following aspects: (1) our sampling strategy does not rely on data labels; (2) the samples are sorted by feature variance instead of classification confidence. Second, to utilize the stored samples efficiently, we further introduce self-supervised knowledge distillation into our framework to better transfer the feature representations learned before. Moreover, though the properties of the feature representation of old data are sustained by sample selection and knowledge distillation, when the network learns feature representations of new data, the region occupied by the feature vectors of new data in the embedding space may mix with that of old data (as illustrated in Fig. 3). To address this issue, we use an extra sample queue to help the network discriminate new data from old data.

The main contributions of this work can be summarized as:

  • To the best of our knowledge, we are the first to address the problem of catastrophic forgetting in contrastive learning.

  • We develop a rehearsal-based framework, which utilizes a novel sampling strategy and self-supervised knowledge distillation to transfer knowledge from old data.

  • We propose an extra sample queue to reduce the interference of feature representation distribution when the network learns a new task.

  • We improve the performance of our baseline methods on CIFAR-100 and ImageNet significantly.

2 Related Work

2.1 Self-supervised Learning

The core idea of self-supervised learning is to train a neural network on large-scale unlabeled data by designing specific pretext tasks, so that the network can learn a general and transferable feature representation. At present, instance discrimination [7], or more specifically, contrastive learning [8][9][10][11], has made breakthroughs and achieved new state-of-the-art performance in feature representation learning. In instance discrimination methods, each image is treated as its own category and distinguished from all other images, and the learning objective is to maximize the mutual information between two views of the same image generated by different data augmentation operations.

When training a self-supervised learning network, a large-scale dataset is indispensable. However, directly training on the whole large dataset may be unaffordable in many scenarios. Moreover, new data is usually generated continuously. Collecting old and new data together and training the network again is a time- and resource-consuming solution. Thus, a continual self-supervised learning method is imperative for practical applications.

2.2 Class Incremental Learning

Class-Incremental Learning (CIL) aims to learn a classification model with the number of classes increasing step by step. Existing works on CIL often adopt rehearsal methods and knowledge distillation to tackle the catastrophic forgetting problem. Rehearsal-based methods [1][12] try to select a representative set of samples from the old data. Specifically, these works use data labels, or a classifier trained with labeled data, to select the samples. However, in self-supervised continual learning, there is no access to data labels. Thus, in our method, we propose a novel sampling strategy based on a feature variance measurement, which does not rely on data labels. Distillation-based methods [13][14] use a knowledge distillation loss as a regularization term to preserve previous knowledge when learning new data. Many distillation-based methods use supervised knowledge distillation, i.e., the predicted label logits of the new model are enforced to be close to those of the old model. However, directly applying a supervised knowledge distillation loss to self-supervised continual learning is problematic, since there is no classification predictor. Thus, we adopt a self-supervised knowledge distillation loss [15] in our method, which does not require a classification head. Moreover, unlike traditional knowledge distillation methods, which often fix the teacher network during distillation, we propose a momentum teacher design.

3 Proposed Method

Fig. 2: The pipeline of our Continual Contrastive Learning method.

We implement our Continual Contrastive Learning (CCL) method based on the widely used contrastive learning framework MoCoV2 [6], and the overall pipeline is shown in Fig. 2. The main components of our method are introduced as follows.

3.1 MoCoV2

First, we introduce MoCoV2 [6] briefly for better understanding. MoCoV2 [6] contains two encoders, f_q and f_k, and a memory bank. Given an input image x, we first transform it into two views x_q and x_k with two different augmentations. Then, we obtain the query vector q and its positive key sample k_+ by feeding x_q and x_k into f_q and f_k, respectively. The memory bank stores negative key samples Q = {k_1, k_2, k_3, ..., k_n} of q. Finally, MoCoV2 [6] adopts a contrastive loss to pull q towards k_+ and push q away from Q.

Specifically, the contrastive loss is defined as:

\mathcal{L}_{contrast}(q, k_{+}, Q) = -\log\frac{\exp(q\cdot k_{+}/\tau)}{\sum_{k_{i}\in Q\cup\{k_{+}\}}\exp(q\cdot k_{i}/\tau)},    (1)

where τ is the temperature hyper-parameter and n is the number of negative samples.

Moreover, the two encoders f_q and f_k share the same architecture, i.e., the backbone of the network followed by an extra MLP, but with different weights θ_q and θ_k. The parameters θ_q are updated by back-propagation, and the parameters θ_k are updated with a momentum strategy, that is,

\theta_{k} \leftarrow m\theta_{k} + (1-m)\theta_{q}.    (2)
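To make the two updates above concrete, the following is a minimal PyTorch-style sketch of the contrastive loss in Eq. (1) and the momentum update in Eq. (2). It is an illustrative sketch rather than the authors' implementation; the function names are ours, and we assume all features are L2-normalized.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(q, k_pos, queue, tau=0.2):
    """InfoNCE loss of Eq. (1).

    q:      (B, D) query features from f_q, assumed L2-normalized
    k_pos:  (B, D) positive key features from f_k, assumed L2-normalized
    queue:  (n, D) negative key features stored in the memory bank
    """
    l_pos = torch.einsum("bd,bd->b", q, k_pos).unsqueeze(-1)   # (B, 1)
    l_neg = torch.einsum("bd,nd->bn", q, queue)                # (B, n)
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    # the positive key sits at index 0 of every row
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)

@torch.no_grad()
def momentum_update(f_q, f_k, m=0.999):
    """Momentum update of the key encoder parameters, Eq. (2)."""
    for p_q, p_k in zip(f_q.parameters(), f_k.parameters()):
        p_k.data.mul_(m).add_(p_q.data, alpha=1 - m)
```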

3.2 Rehearsal with Knowledge Distillation

Rehearsal is widely used in supervised continual learning. By replaying stored samples from previous tasks, we find that rehearsal methods can also help alleviate catastrophic forgetting in self-supervised continual learning. Besides, inspired by many supervised continual learning methods, we utilize self-supervised knowledge distillation to transfer the contrastive information when the network learns a new task.

In continual learning, given a set of datasets {D_1, D_2, ..., D_t, ..., D_N}, the network is trained continuously on dataset D_t at time t. In rehearsal methods, we store a small fraction of data from D_t after the training on D_t is finished. Instead of randomly storing samples from the old data, we propose a novel sampling strategy based on a feature variance measurement. Specifically, when the training on D_t is finished, we feed all images of D_t into the encoder f_q to extract feature vectors. Then we group the vectors with the K-Means algorithm and divide them into C classes. In our experiments, we set C equal to the number of classes in D_t. Then, for each image x in class c, we obtain different views {𝒯_1(x), 𝒯_2(x), ..., 𝒯_l(x)} of x by applying the different data augmentations 𝒯 used in MoCoV2 [6]. After that, we calculate the variance of the feature vectors of these views {f_q(𝒯_1(x)), f_q(𝒯_2(x)), ..., f_q(𝒯_l(x))}. We find l = 6 is enough for measuring the variance. Finally, we store the n images with the smallest variance for each class. Essentially, the images with the smallest variance are the ones best learned by the network, so their contrastive information can be easily transferred to the new task through self-supervised knowledge distillation.
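As an illustration of this sampling strategy, the snippet below sketches the variance-based selection using scikit-learn's K-Means. It is a sketch under our own assumptions: `encoder` stands for the frozen query encoder f_q and is assumed to return a NumPy feature vector, `augment` stands for a MoCoV2-style augmentation pipeline, and the helper names (`select_samples`, `n_per_class`, `num_views`) are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_samples(images, encoder, augment, num_clusters, n_per_class=20, num_views=6):
    """Variance-based sampling sketch (Section 3.2)."""
    # 1) cluster the un-augmented features into C pseudo-classes
    feats = np.stack([encoder(x) for x in images])
    labels = KMeans(n_clusters=num_clusters).fit_predict(feats)

    # 2) per image, measure the feature variance over several augmented views
    variances = []
    for x in images:
        views = np.stack([encoder(augment(x)) for _ in range(num_views)])
        variances.append(views.var(axis=0).mean())
    variances = np.asarray(variances)

    # 3) per pseudo-class, keep the n images with the smallest variance
    selected = []
    for c in range(num_clusters):
        idx = np.where(labels == c)[0]
        keep = idx[np.argsort(variances[idx])[:n_per_class]]
        selected.extend(keep.tolist())
    return selected
```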

Knowledge distillation is another way to address the catastrophic forgetting problem. Hence, we introduce it into our method to further enhance its ability to handle catastrophic forgetting. Following some self-supervised knowledge distillation methods [15][16], we use the contrastive similarity between images computed by the teacher network as the distillation target for the student network. Specifically, given the sampled images x from the old dataset, we obtain x_q after one data augmentation. Then, x and x_q are mapped and normalized into the feature representations z^T = f_q^T(x), z_q^T = f_q^T(x_q), z^S = f_q^S(x), and z_q^S = f_q^S(x_q) ∈ R^{B×D}, where B is the number of images, D is the feature dimension, and f_q^T and f_q^S denote the query encoders of the teacher and the student, respectively. We compute the similarity matrix P^T(z^T, z_q^T) ∈ R^{B×B} between the feature vectors z^T and z_q^T extracted by the teacher. After that, the softmax function with temperature scaling factor τ is applied to P^T for normalization. We obtain P^S with the same operation from the student. Finally, we apply a KL-divergence loss to P^T and P^S:

\mathcal{L}_{kd} = -\sum_{i=1}^{B}\sum_{j=1}^{B} P^{T}_{ij}\log(P^{S}_{ij}).    (3)

Besides, unlike supervised continual learning methods, which often fix the teacher network during distillation, we find that a momentum teacher brings better performance. Thus, at each training epoch, we update the teacher network in a momentum style:

\theta_{t} \leftarrow m_{t}\theta_{t} + (1-m_{t})\theta_{q},    (4)

where m_t = 0.996 in our experiments.
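The following is a minimal PyTorch-style sketch of the distillation loss in Eq. (3) and the momentum-teacher update in Eq. (4); the function names are ours, and the inputs are assumed to be L2-normalized feature batches.

```python
import torch
import torch.nn.functional as F

def distillation_loss(z_t, zq_t, z_s, zq_s, tau=0.2):
    """Self-supervised KD loss of Eq. (3).

    z_t, zq_t: (B, D) teacher features of x and its augmented view x_q
    z_s, zq_s: (B, D) student features of the same inputs
    """
    # similarity matrices between the two views, normalized by a temperature softmax
    p_t = F.softmax(z_t @ zq_t.t() / tau, dim=-1)            # P^T, shape (B, B)
    log_p_s = F.log_softmax(z_s @ zq_s.t() / tau, dim=-1)    # log P^S, shape (B, B)
    # cross-entropy form of Eq. (3) (the teacher entropy term is a constant)
    return -(p_t * log_p_s).sum()

@torch.no_grad()
def update_teacher(student_f_q, teacher_f_q, m_t=0.996):
    """Momentum-teacher update of Eq. (4), applied once per training epoch."""
    for p_s, p_t in zip(student_f_q.parameters(), teacher_f_q.parameters()):
        p_t.data.mul_(m_t).add_(p_s.data, alpha=1 - m_t)
```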

Fig. 3: Illustration of the feature-space interference between old and new data and the effect of the extra sample queue. The blue triangles denote the feature vectors of old data. The orange circles denote the feature vectors of new data. The red triangles denote the negative samples stored in the extra sample queue.

3.3 Extra Sample Queue

As mentioned in the introduction, learning feature representations of a new task causes the feature spaces of the old and new data to mix. As illustrated in Fig. 3, when the network learns the feature representations (orange circles) of new data, it may interfere with the feature representations (blue triangles) of old data. Thus, we propose an extra sample queue (red triangles) to push the features of new data away from the features of old data.

Specifically, the extra sample queue only contains negative samples of the images sampled from old data. For a data mini-batch B = B_S ∪ B_D, where B_S contains the sampled images from old data and B_D contains the images from new data, we feed them into the two encoders of MoCoV2 [6] and obtain the feature representations z_q^S = q_S ∪ q_D and z_k^S = k_S ∪ k_D. The network discriminates data of the new dataset from data of the old datasets by computing the contrastive loss L_ESQ between z_q^S, z_k^S, and the extra sample queue. In every iteration, we pick out the features k_S and use them to update the extra sample queue.

To sum up, the total loss is given by:

\mathcal{L} = \lambda_{1}\mathcal{L}_{MoCo} + \lambda_{2}\mathcal{L}_{ESQ} + \lambda_{3}\mathcal{L}_{kd},    (5)

where λ_1, λ_2, and λ_3 are balancing weights. In our experiments, we set λ_1 = 0.9, λ_2 = 0.1, and λ_3 = 0.1.
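Putting the pieces together, the sketch below outlines one training iteration combining the three loss terms of Eq. (5). It reuses the `contrastive_loss` and `distillation_loss` sketches shown above, treats `augment` as an assumed MoCoV2-style augmentation pipeline, and reflects our reading of the pipeline in Fig. 2 rather than a reference implementation.

```python
import torch

def training_step(batch_old, batch_new, f_q, f_k, f_q_teacher, queue, esq,
                  augment, lambdas=(0.9, 0.1, 0.1), tau=0.2):
    """One training iteration combining the three terms of Eq. (5).

    batch_old: rehearsed images sampled from old datasets (B_S)
    batch_new: images from the current dataset D_t (B_D)
    queue:     the regular MoCo memory bank
    esq:       the extra sample queue holding negatives of the old data
    """
    batch = torch.cat([batch_old, batch_new], dim=0)   # B = B_S ∪ B_D
    q = f_q(augment(batch))                            # z_q^S = q_S ∪ q_D
    with torch.no_grad():
        k = f_k(augment(batch))                        # z_k^S = k_S ∪ k_D

    # standard MoCo loss against the regular memory bank
    loss_moco = contrastive_loss(q, k, queue, tau)
    # ESQ loss: separate new-data features from the stored old-data negatives
    loss_esq = contrastive_loss(q, k, esq, tau)

    # self-supervised KD on the rehearsed images only (Section 3.2)
    with torch.no_grad():
        z_t, zq_t = f_q_teacher(batch_old), f_q_teacher(augment(batch_old))
    z_s, zq_s = f_q(batch_old), f_q(augment(batch_old))
    loss_kd = distillation_loss(z_t, zq_t, z_s, zq_s, tau)

    l1, l2, l3 = lambdas
    loss = l1 * loss_moco + l2 * loss_esq + l3 * loss_kd

    # the key features of the old-data images (k_S) later refresh the ESQ
    k_old = k[: batch_old.size(0)]
    return loss, k_old
```

After back-propagation, `k_old` would be enqueued into the extra sample queue with the same dequeue/enqueue scheme as the regular MoCo memory bank.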

Table 1: Linear evaluation top-1 accuracy on CIFAR-100 and ImageNet-Sub&Full under different incremental step settings.

Method           | CIFAR-100 (T=5) | CIFAR-100 (T=10) | ImageNet-Sub (T=5) | ImageNet-Sub (T=10) | ImageNet-Full (T=10)
Upper Bound      | 57.63 ± 0.12    | 57.63 ± 0.12     | 60.54              | 60.54               | 67.50
Finetuning       | 51.08 ± 0.34    | 48.40 ± 0.42     | 56.56              | 52.62               | 61.50
Simple Rehearsal | 52.61 ± 0.41    | 49.44 ± 0.36     | 57.98              | 54.28               | 62.05
Ours             | 53.80 ± 0.37    | 50.10 ± 0.35     | 58.92              | 55.48               | 62.79

4 Experiment

4.1 Settings

4.1.1 Datasets.

We evaluate our method on two popular datasets for class-incremental learning, i.e., CIFAR-100 and ImageNet-Sub&Full. Specifically, ImageNet-Sub is a subset of ImageNet-Full, which contains 100 classes of images randomly selected from ImageNet-Full. For a given dataset, the classes are first arranged in a fixed random order and then cut into T incremental splits that arrive sequentially.

4.1.2 Protocols.

We follow [17] to split the dataset into incremental steps and manage the memory budget. 1) For each dataset, all classes are divided equally among the T incremental steps. 2) For the old training sets, a constant number of images is stored after each incremental step.
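For concreteness, a minimal sketch of this class-splitting protocol is shown below; the function name and the seed handling are our own assumptions.

```python
import numpy as np

def make_incremental_splits(class_ids, num_steps, seed=0):
    """Arrange classes in a fixed random order and cut them into T equal splits."""
    rng = np.random.RandomState(seed)
    order = rng.permutation(class_ids)
    return np.array_split(order, num_steps)

# example: 100 CIFAR-100 classes split into T=10 incremental steps of 10 classes each
splits = make_incremental_splits(list(range(100)), num_steps=10)
```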

4.1.3 Evaluation Metrics.

All models are evaluated after the last incremental step. To evaluate the encoders, we follow previous self-supervised learning works [7][8][9], that is, we evaluate the encoders via linear classification on frozen features in terms of top-1 accuracy.
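The linear evaluation protocol can be sketched as follows; the concrete hyperparameters (e.g., the learning rate) are assumptions on our side that follow common MoCo-style linear-probing practice rather than numbers reported in this paper.

```python
import torch
import torch.nn as nn

def linear_evaluation(encoder, train_loader, num_classes, feat_dim, epochs=100, lr=30.0):
    """Train a linear classifier on frozen features (linear evaluation protocol)."""
    encoder.eval()                              # freeze the backbone
    for p in encoder.parameters():
        p.requires_grad = False

    classifier = nn.Linear(feat_dim, num_classes)
    optimizer = torch.optim.SGD(classifier.parameters(), lr=lr, momentum=0.9)
    criterion = nn.CrossEntropyLoss()

    for _ in range(epochs):
        for images, labels in train_loader:
            with torch.no_grad():
                feats = encoder(images)         # frozen features
            loss = criterion(classifier(feats), labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return classifier
```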

4.1.4 Implementation Details.

The experiments are conducted on CIFAR-100 with a modified ResNet18 backbone and on ImageNet-Sub&Full with a ResNet50 backbone. We adopt MoCoV2 [6] as our base method. Each incremental training step consists of 200 epochs. For ImageNet-Sub&Full, we use the same training hyperparameters as MoCoV2 [6]. For CIFAR-100, we follow the training hyperparameters in the official implementation of MoCoV2 [6] on CIFAR-10, and Split BN is used to simulate the 8-GPU behavior of BatchNorm on 1 GPU. The size of the extra sample queue is set to 128 for both ImageNet-Sub&Full and CIFAR-100. For linear classification, we follow the setting of MoCoV2 [6]. For each dataset, we divide the classes into T equal parts in a random order. We store 20 images per class at every incremental step. The results on CIFAR-100 are averaged over 6 trials.

4.2 Main Results

Besides our method, we implement and test two naive continual contrastive learning methods, Finetuning and Simple Rehearsal. Finetuning learns new tasks one by one without any technique to prevent catastrophic forgetting. Simple Rehearsal stores a constant number of images randomly after training on dataset D_{t-1}. For the subsequent task t, it adds the stored images to the new training set D_t to train the model. It is worth noting that Simple Rehearsal does not use techniques like knowledge distillation or the extra sample queue.

Table 1 shows the performance of different continual contrastive learning methods on CIFAR-100 and ImageNet. As each column of Table 1 shows, our method outperforms the others. More specifically, under the 10-incremental-step setting, the top-1 accuracy is improved from 48.40% to 50.10% on CIFAR-100, from 52.62% to 55.48% on ImageNet-Sub, and from 61.50% to 62.79% on ImageNet-Full. These results show that our method works well on both small and large datasets and demonstrate the effectiveness of the proposed continual contrastive learning method.

Besides MoCoV2 [6], we also apply our method to other contrastive learning methods, including SimCLR [9] and InsDisc [7]. More details can be found in the Appendix. The results on CIFAR-100 are shown in Fig. 4. Our method also improves the linear classification accuracy of these contrastive learning methods, which demonstrates its generalization.

Fig. 4: Linear evaluation top-1 accuracy of three contrastive learning methods on CIFAR-100 under 5 incremental steps.
Table 2: Ablation studies of the main components of our method. The numbers in brackets are the performance improvements compared with Finetuning.

Method                    | Top-1 Acc
Finetuning                | 52.62
+ Random Sampling         | 54.28 (+1.66)
+ Our Sampling Strategy   | 55.18 (+2.56)
+ Knowledge Distillation  | 55.21 (+2.60)
+ Extra Sample Queue      | 55.48 (+2.86)

4.3 Ablation Study

To test the effectiveness of each component of our approach, we conduct ablation studies on ImageNet-Sub under the 10-incremental-step setting.

In the first experiment, we evaluate the main components of our method: the sampling strategy with feature variance measurement, self-supervised knowledge distillation, and the extra sample queue. As described in Sections 3.2 and 3.3, the data sampling strategy and knowledge distillation distill the contrastive information from the previous contrast process, and the extra sample queue separates the feature spaces of the new and old data. As shown in Table 2, our novel sampling strategy, knowledge distillation, and extra sample queue boost the performance consistently, yielding a 2.86% accuracy improvement when all of them are applied.

Fig. 5: Linear evaluation top-1 accuracy for different values of K on ImageNet-Sub under the 10-incremental-step setting.

Besides, since it is impossible to determine the number of categories of the unlabeled data, the hyper-parameter K of the K-Means algorithm is set using prior knowledge in our experiments. Fig. 5 shows how the top-1 accuracy on ImageNet-Sub varies with the choice of K. We find that when K varies within a certain range, from 5 to 15, the performance of our method changes little. However, when the gap between K and the real number of classes is too large, the accuracy decreases rapidly.

Table 3: Linear evaluation top-1 accuracy for different sizes of the extra sample queue. The best result is achieved when the size is 128.

Size | Top-1 Acc
32   | 53.01 ± 0.33
64   | 53.61 ± 0.28
128  | 53.80 ± 0.37
256  | 53.59 ± 0.35
512  | 53.23 ± 0.34

We further explore the effect of the size of the extra sample queue with an experiment on CIFAR-100. Table 3 shows that the performance does not improve consistently as the size of the extra sample queue increases. A possible reason is that, when the size is large, the update rate of the extra sample queue is slow. Thus, many feature vectors in the extra sample queue come from earlier iterations, which is detrimental to the update of the model.

4.4 Limitations

In this paper, we try to solve the problem of continual contrastive learning. The experimental results show that our method works well on both small and large datasets. However, our work still has several limitations. First, our method is only applicable to contrastive learning methods. When applying it to other self-supervised learning methods, such as GAN [18], BYOL [11], and MAE [19], some additional revisions are required. Second, though our method narrows the gap between continual contrastive learning and its upper bound, a large gap still remains. We hope our work can give researchers some insights into the catastrophic forgetting problem in contrastive learning and inspire more research in this direction.

5 Conclusion

In this paper, we propose a rehearsal-based continual contrastive learning framework to alleviate the catastrophic forgetting in contrastive learning. Our method stores a small number of images of old data with a novel sampling strategy and rehearses them while learning the new dataset. Besides, we exploit self-supervised knowledge distillation and propose an extra sample queue to make the network learn a better feature representation from old and new data. Extensive experimental results and analyses demonstrate the effectiveness of our method.

References

  • [1] Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, and Christoph H. Lampert, “icarl: Incremental classifier and representation learning,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 5533–5542.
  • [2] Konstantin Shmelkov, Cordelia Schmid, and Karteek Alahari, “Incremental learning of object detectors without catastrophic forgetting,” in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 3420–3429.
  • [3] Arthur Douillard, Yifu Chen, Arnaud Dapogny, and Matthieu Cord, “Plop: Learning without forgetting for continual semantic segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2021, pp. 4040–4050.
  • [4] Mark Schutera, Frank M. Hafner, Jochen Abhau, Veit Hagenmeyer, Ralf Mikut, and Markus Reischl, “Cuepervision: self-supervised learning for continuous domain adaptation without catastrophic forgetting,” Image and Vision Computing, vol. 106, pp. 104079, 2021.
  • [5] Kibok Lee, Kimin Lee, Jinwoo Shin, and Honglak Lee, “Overcoming catastrophic forgetting with unlabeled data in the wild,” in Proceedings of the IEEE International Conference on Computer Vision, 2019, pp. 312–321.
  • [6] Xinlei Chen, Haoqi Fan, Ross B. Girshick, and Kaiming He, “Improved baselines with momentum contrastive learning,” arXiv preprint arXiv:2003.04297, 2020.
  • [7] Zhirong Wu, Yuanjun Xiong, Stella X. Yu, and Dahua Lin, “Unsupervised feature learning via non-parametric instance discrimination,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 3733–3742.
  • [8] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross B. Girshick, “Momentum contrast for unsupervised visual representation learning,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2020, pp. 9726–9735.
  • [9] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton, “A simple framework for contrastive learning of visual representations,” in International conference on machine learning, 2020, pp. 1597–1607.
  • [10] Xinlei Chen and Kaiming He, “Exploring simple siamese representation learning,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2021, pp. 15750–15758.
  • [11] Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H. Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Ávila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, Bilal Piot, Koray Kavukcuoglu, Rémi Munos, and Michal Valko, “Bootstrap your own latent - A new approach to self-supervised learning,” in Advances in Neural Information Processing Systems, 2020.
  • [12] Saihui Hou, Xinyu Pan, Chen Change Loy, Zilei Wang, and Dahua Lin, “Learning a unified classifier incrementally via rebalancing,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 831–839.
  • [13] Zhizhong Li and Derek Hoiem, “Learning without forgetting,” in Proceedings of the European conference on computer vision, 2016, pp. 614–629.
  • [14] Hongjoon Ahn, Jihwan Kwak, Subin Lim, Hyeonsu Bang, Hyojun Kim, and Taesup Moon, “Ss-il: Separated softmax for incremental learning,” in Proceedings of the IEEE International Conference on Computer Vision, 2021, pp. 844–853.
  • [15] Zhiyuan Fang, Jianfeng Wang, Lijuan Wang, Lei Zhang, Yezhou Yang, and Zicheng Liu, “SEED: self-supervised distillation for visual representation,” in Proceedings of International Conference on Learning Representations, 2021.
  • [16] Guodong Xu, Ziwei Liu, Xiaoxiao Li, and Chen Change Loy, “Knowledge distillation meets self-supervision,” in Proceedings of the European Conference on Computer Vision, 2020, pp. 588–604.
  • [17] Saihui Hou, Xinyu Pan, Chen Change Loy, Zilei Wang, and Dahua Lin, “Learning a unified classifier incrementally via rebalancing,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 831–839.
  • [18] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio, “Generative adversarial networks,” arXiv preprint arXiv:1406.2661, 2014.
  • [19] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick, “Masked autoencoders are scalable vision learners,” arXiv preprint arXiv:2111.06377, 2021.

Appendix

A Pseudo-code

We provide the pseudo-code of our training method here.

Algorithm 1 Pseudo-code for training

Input: datasets {D_1, D_2, ..., D_N}; student encoders f_q^S and f_k^S; teacher encoder f_q^T; augmentation 𝒯; queue Q; extra sample queue ESQ; stored sample set S

S ← ∅
for t = 1, ..., N do
    D = D_t ∪ S
    Params(f_q^T) ← Params(f_q^S)
    for epoch = 1, ..., K do
        for minibatch B = B_D ∪ B_S ~ D do
            # get positive and negative pairs
            z_q^S = f_q^S(𝒯(B))
            z_k^S = f_k^S(𝒯(B))
            # calculate the MoCo loss and the ESQ loss
            L_MoCo = Contrastive_loss(z_q^S, z_k^S, Q)
            L_ESQ = Contrastive_loss(z_q^S, z_k^S, ESQ)
            # knowledge distillation: get embeddings
            z^S, z^T = f_q^S(B_S), f_q^T(B_S)
            z_q^S, z_q^T = f_q^S(𝒯(B_S)), f_q^T(𝒯(B_S))
            # calculate the similarity matrices
            P^T = Softmax(Similarity(z^T, z_q^T) / τ, dim=-1)
            P^S = Softmax(Similarity(z^S, z_q^S) / τ, dim=-1)
            # calculate the knowledge distillation loss
            L_kd = KL_divergence(P^T, P^S)
            L_total = λ_1 L_MoCo + λ_2 L_ESQ + λ_3 L_kd
            # update the model
            Update(f_q^S, L_total)
            # update the momentum encoder
            Momentum_update(f_q^S, f_k^S, m)
            # update the queue and the extra sample queue
            Queue_update(Q, z_k^S)
            k_S = f_k^S(𝒯(B_S))
            Queue_update(ESQ, k_S)
        end for
        # update the teacher network
        Momentum_update(f_q^S, f_q^T, m_t)
    end for
    # store samples from the current dataset
    S = S ∪ Sample_strategy(D_t)
end for
return f_q^S

B Additional results

Besides linear evaluation, we also evaluate our method with the Forgetting and Forward Transfer metrics, defined as

Forgetting: F = \frac{1}{T-1}\sum_{i=1}^{T-1}\max_{t\in\{1,...,T\}}(a_{t,i} - a_{T,i}),
Forward Transfer: FT = \frac{1}{T-1}\sum_{i=2}^{T}(a_{i-1,i} - R_{i}),

where a_{i,j} is the linear evaluation accuracy of the model on dataset D_j after observing the last sample of dataset D_i, and R_i is the linear evaluation accuracy of a randomly initialized model on dataset D_i.
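Given an accuracy matrix acc (with acc[t, i] the accuracy on D_i after training on D_t, 0-indexed) and the random-initialization accuracies R_i, the two metrics can be computed as in the short sketch below; the helper names are ours.

```python
import numpy as np

def forgetting(acc):
    """F: average drop from the best accuracy on each old dataset to the final one."""
    T = acc.shape[0]
    return np.mean([acc[:, i].max() - acc[T - 1, i] for i in range(T - 1)])

def forward_transfer(acc, random_acc):
    """FT: average gain over a randomly initialized model before seeing each dataset."""
    T = acc.shape[0]
    return np.mean([acc[i - 1, i] - random_acc[i] for i in range(1, T)])
```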

The results are shown in Table A. Our method outperforms Finetuning under both the Forgetting and Forward Transfer metrics. The results further demonstrate that our method can alleviate the catastrophic forgetting in self-supervised learning.

Table A: Forgetting (F) and Forward Transfer (FT) results on ImageNet-Sub under the 5/10 incremental step settings.

Method           | T=5: F(↓) | T=5: FT(↑) | T=10: F(↓) | T=10: FT(↑)
Finetuning       | 0.7       | 47.1       | 3.5        | 45.8
Simple Rehearsal | 0.3       | 48.4       | 2.0        | 47.0
Ours             | 0.3       | 48.3       | 1.6        | 47.3

C Implementation Detail

To demonstrate the generalization of our method, we apply it to SimCLR and InsDisc. We give the implementation details here.

Two loss terms, L_kd and L_ESQ, need to be added. First, the knowledge distillation term can be added to SimCLR and InsDisc directly without any modification. Second, the extra sample queue provides extra negative samples from old data. For both SimCLR and InsDisc, these extra negative samples are contrasted against the original positive pairs to form the ESQ loss term.
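As an illustration of how the ESQ term could look for SimCLR, the sketch below contrasts the original positive pairs of the NT-Xent loss against negatives drawn only from the extra sample queue. This is our reading of the description above, not the authors' code; the function name and the temperature default are assumptions.

```python
import torch
import torch.nn.functional as F

def simclr_esq_loss(z1, z2, esq, tau=0.5):
    """ESQ loss term for SimCLR (a sketch): the original positive pairs
    are contrasted against negatives taken from the extra sample queue.

    z1, z2: (B, D) L2-normalized features of two views of the same images
    esq:    (M, D) L2-normalized old-data negatives in the extra sample queue
    """
    B = z1.size(0)
    # positive logits: each view against its counterpart, shape (2B, 1)
    l_pos = torch.cat([(z1 * z2).sum(dim=1), (z2 * z1).sum(dim=1)]).unsqueeze(-1)
    # negative logits against the ESQ only, shape (2B, M)
    l_neg = torch.cat([z1, z2], dim=0) @ esq.t()
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    # the positive always sits at index 0
    targets = torch.zeros(2 * B, dtype=torch.long, device=z1.device)
    return F.cross_entropy(logits, targets)
```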

The hyperparameters, including the balancing weights, the temperature, and the size of the extra sample queue, are the same as those used for MoCoV2.