Progressive Class-based Expansion Learning For Image Classification
Abstract
In this paper, we propose a novel learning scheme for image classification, called class-based expansion learning, which aims at increasing the supervision-stimulation frequency for the samples of the confusing classes. Class-based expansion learning takes a bottom-up growing strategy in a class-based expansion optimization fashion, which pays more attention to the quality of learning the fine-grained classification boundaries for the preferentially selected classes. Besides, we develop a class confusion criterion to preferentially select confusing classes for training. In this way, the classification boundaries of the confusing classes are frequently stimulated, resulting in a fine-grained form. Experimental results demonstrate the effectiveness of the proposed scheme on several benchmarks.
Index Terms:
Class-based expansion optimization, image classification.

I Introduction
Convolutional neural networks (CNNs) [1, 2, 3, 4, 5] have attracted considerable attention in image classification due to their effectiveness in representation learning [6, 7]. Since they require computationally expensive and memory-consuming operations, CNN training typically resorts to stochastic gradient descent (SGD) [8, 9] for iterative batch-level learning, which traverses the entire training dataset across randomly generated batches throughout successive epochs. With this epoch-by-epoch learning procedure, the classification boundaries of the CNN model are dynamically updated until convergence. Due to memory limits, the samples within a small batch are usually distributed very diversely and sparsely, resulting in a low supervision-stimulation frequency for each sample. This low frequency in turn causes the learning process to pay more attention to the quality of the coarse-grained classification boundaries while ignoring fine-grained details. Therefore, seeking an effective and stable image classification strategy remains a key issue in CNN learning.

To date, curriculum learning [10, 11, 12, 13, 14, 15] and self-paced learning [16, 17, 18, 19, 20, 21, 22] have been proposed to improve the speed at which training converges to a minimum and the quality of the local minima obtained. The key concept behind these methods is inspired by human behavior: people tend to learn new things from "easy" to "complex". However, these methods still do not consider the fine-grained classification boundaries.
In this letter, we propose a new learning pattern inspired by biological learning mechanisms. Hebbian theory [23] offers an important insight: the increase in synaptic efficacy arises from repeated and sustained stimulation. Meanwhile, the human learning pattern usually follows a progressive knowledge-expansion pipeline, in which new knowledge is learned dynamically while old knowledge is frequently reviewed; knowledge that is frequently reviewed is often better learned. Inspired by this biological learning mechanism, we propose a progressive learning pipeline that aims at effectively enhancing the supervision-stimulation frequency for each sample so as to improve the quality of the fine-grained classification boundaries, as shown in Fig. 1. Specifically, we present a progressive piecewise class-based expansion learning scheme, which first learns fine-grained classification boundaries for a small portion of classes and subsequently expands the classification boundaries as new classes are added. The presented class-based expansion learning scheme therefore takes a bottom-up growing strategy in a class-based expansion optimization fashion, which puts more emphasis on the quality of learning the fine-grained classification boundaries for dynamically growing local classes. Besides, we propose a class confusion criterion to sort the classes involved in the class-based expansion learning process. The classes whose samples have large intra-class and small inter-class distances on average (i.e., class-confusing samples) are preferentially involved in the class-based expansion learning process. Once a particular class is selected, all the samples belonging to this class are added to the training sample pool for CNN model learning. Such an expansion procedure is repeated until the samples of all classes have participated in the training process. In this way, the classification network model is dynamically refined based on the updated training sample pool, and the classification boundaries of the preferentially selected classes are frequently stimulated, resulting in a fine-grained form.
The main contributions of this work are summarized as follows: i) Motivated by Hebbian theory, we investigate the influence of the "stimulation frequency" on neural network learning and observe that the poor performance on the confusing classes is partially a result of their low stimulation frequency. ii) We propose a novel class-based expansion learning pipeline to deal with this learning problem. This pipeline progressively trains the CNN model in a hard-to-easy class-based growing manner, so that the classification boundaries of the preferentially selected confusing classes are frequently stimulated. iii) We develop two class confusion criteria to sort the classes for the class-based expansion learning process. Extensive experiments demonstrate the effectiveness of our work against conventional learning pipelines on several benchmarks.

II Method
In this section, we detail the proposed class-based expansion learning scheme. We first formally define the problem in Section II-A and then describe our algorithm in Sections II-B and II-C.
II-A Problem Definition
Given a $K$-class dataset $\mathcal{D} = \{\mathcal{D}_1, \ldots, \mathcal{D}_K\}$, the $k$-th class $\mathcal{D}_k$ contains $n_k$ samples $x_i^k$ and their corresponding labels $y_i^k$:

$\mathcal{D}_k = \{(x_i^k, y_i^k)\}_{i=1}^{n_k}, \quad k = 1, \ldots, K$  (1)
Let $f(\cdot\,;\theta)$ denote the mapping function of the CNN model, where $\theta$ represents the model parameters inside $f$. In a typical training process, the goal is to learn an optimal $\theta^*$:

$\theta^* = \arg\min_{\theta} \sum_{k=1}^{K} \sum_{i=1}^{n_k} \mathcal{L}\big(f(x_i^k;\theta),\, y_i^k\big)$  (2)

where $\mathcal{L}(\cdot,\cdot)$ is the loss function (e.g., the cross-entropy loss).
II-B Class Confusion Criterion
In this section, we introduce our metric for deciding the order in which classes are presented to the class-based expansion framework. Ideally, we want a class to receive a high score when it is easily confused with other classes.
We use a pre-trained tiny network $g$ to evaluate the confusion score of each class. Note that the training cost of $g$ is much lower than that of $f$. To obtain the score of each class, we start by using $g$ to transform the samples from the image space into the feature space and the logits space:

$z_i^k = g_{\mathrm{fea}}(x_i^k), \quad p_i^k = g_{\mathrm{cls}}(z_i^k)$  (3)

where $g_{\mathrm{fea}}$ and $g_{\mathrm{cls}}$ denote the feature extractor and the classifier of the network $g$, respectively. Afterwards, we propose two kinds of class confusion criteria:
Distance-based Criterion. To obtain it, we first calculate the class center of each class:

$c_k = \frac{1}{n_k} \sum_{i=1}^{n_k} z_i^k$  (4)

where $n_k$ is the number of samples in class $k$. Then, the confusion score of class $k$ can be formulated as:
$s_k^{\mathrm{dist}} = \frac{1}{n_k} \sum_{i=1}^{n_k} \frac{d(z_i^k, c_k)}{\min_{j \neq k} d(z_i^k, c_j)}$  (5)

where $d(\cdot,\cdot)$ denotes the squared Euclidean distance.
Entropy-based Criterion. This criterion is formulated as:

$s_k^{\mathrm{ent}} = -\frac{1}{n_k} \sum_{i=1}^{n_k} \sum_{j=1}^{K} \sigma(p_i^k)_j \log \sigma(p_i^k)_j$  (6)

where $\sigma(\cdot)$ denotes the softmax function and $s_k^{\mathrm{ent}}$ denotes the confusion score of class $k$.
We can observe that the above two confusion criteria measure the confusion score in different spaces. The confusion score obtained by the distance-based criterion is measured in feature space, which rises as the features in a certain class move away from the center of that class and approach other class centers. The confusion score obtained by the entropy-based criterion is measured in logits space, which rises when the logits of samples of a certain class move away from the one-hot vector.
Based on the obtained scores, we can construct an ordered dataset:

$\tilde{\mathcal{D}} = \{\mathcal{D}_{r_1}, \mathcal{D}_{r_2}, \ldots, \mathcal{D}_{r_K}\}$  (7)

where $r_m$ is the index of the class with the $m$-th largest confusion score. The sorting process is detailed in Fig. 2.
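To make the two criteria concrete, the snippet below gives a minimal NumPy sketch of the scoring and sorting procedure under the notation above. It assumes that `feats`, `logits`, and `labels` have already been extracted by the tiny network $g$; the function name and the small stabilizing constants are illustrative and not part of the paper.

```python
import numpy as np

def confusion_scores(feats, logits, labels, num_classes):
    """Sketch of the distance-based and entropy-based class confusion scores.

    feats:  (N, d) features z from the tiny network's feature extractor
    logits: (N, K) raw scores p from the tiny network's classifier
    labels: (N,)   integer class labels
    """
    # Class centers in feature space (Eq. 4).
    centers = np.stack([feats[labels == k].mean(axis=0)
                        for k in range(num_classes)])

    dist_scores = np.zeros(num_classes)
    ent_scores = np.zeros(num_classes)
    for k in range(num_classes):
        z_k = feats[labels == k]                                     # (n_k, d)
        # Squared Euclidean distances to all class centers.
        d = ((z_k[:, None, :] - centers[None, :, :]) ** 2).sum(-1)   # (n_k, K)
        d_own = d[:, k]
        d_other = np.delete(d, k, axis=1).min(axis=1)
        # Rises when samples drift away from their own center and
        # approach other class centers (Eq. 5, assumed ratio form).
        dist_scores[k] = np.mean(d_own / (d_other + 1e-12))

        # Softmax entropy: rises as predictions move away from a
        # one-hot vector (Eq. 6).
        p = logits[labels == k]
        p = np.exp(p - p.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        ent_scores[k] = np.mean(-(p * np.log(p + 1e-12)).sum(axis=1))

    return dist_scores, ent_scores

# Ordered class indices (Eq. 7): most confusing class first.
# order = np.argsort(-dist_scores)
```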

II-C Progressive Expansion Learning
We now describe our proposed progressive expansion learning pattern for CNN models. With the ordered dataset $\tilde{\mathcal{D}}$, we split the optimization of Eq. 2 into $T$ stages (for convenience, we assume $K$ is divisible by $T$). We start with an empty training sample pool $\mathcal{P}_0 = \emptyset$. At the first stage, the first $K/T$ classes of the ordered dataset are added, so that the training sample pool is expanded to $\mathcal{P}_1$:

$\mathcal{P}_1 = \mathcal{P}_0 \cup \mathcal{D}_{r_1} \cup \cdots \cup \mathcal{D}_{r_{K/T}}$  (8)

The target optimization function for $\mathcal{P}_1$ is:

$\theta_1^* = \arg\min_{\theta} \sum_{(x, y) \in \mathcal{P}_1} \mathcal{L}\big(f(x;\theta),\, y\big)$  (9)

where $\theta$ is randomly initialized and $\theta_1^*$ represents the optimal model parameters learned from $\mathcal{P}_1$. At the $t$-th stage ($2 \le t \le T$), the training sample pool is expanded to $\mathcal{P}_t$:

$\mathcal{P}_t = \mathcal{P}_{t-1} \cup \mathcal{D}_{r_{(t-1)K/T+1}} \cup \cdots \cup \mathcal{D}_{r_{tK/T}}$  (10)

where the last $K/T$ classes of $\mathcal{P}_t$ are newly added. To find the optimal model parameters $\theta_t^*$ for $\mathcal{P}_t$, we have:

$\theta_t^* = \arg\min_{\theta} \sum_{(x, y) \in \mathcal{P}_t} \mathcal{L}\big(f(x;\theta),\, y\big)$  (11)

where $\theta$ is initialized with the optimal model parameters $\theta_{t-1}^*$ learned from $\mathcal{P}_{t-1}$.
In this simplest form of class-based expansion learning, $K/T$ classes of the ordered dataset are progressively added to the training sample pool at each stage. Proceeding in this progressive way, we eventually solve the problem in Eq. 2 once the samples of all classes have participated in the training process.
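The progressive procedure can be summarized by the short sketch below, written against PyTorch-style per-class datasets. The names `ordered_classes`, `epochs_per_stage`, and the generic `train` routine are placeholders we introduce for illustration; the paper's actual implementation (in Theano) may differ.

```python
from torch.utils.data import ConcatDataset

def class_expansion_learning(model, ordered_classes, num_stages,
                             epochs_per_stage, train):
    """Progressively grow the training pool by K/T classes per stage.

    ordered_classes:  list of K per-class datasets, most confusing first
    num_stages:       T, the number of expansion stages
    epochs_per_stage: list of T epoch budgets (smaller for early stages)
    train:            callable that fine-tunes `model` on a given dataset
    """
    classes_per_stage = len(ordered_classes) // num_stages   # K / T
    pool = []                                                # empty pool P_0
    for t in range(num_stages):
        # Expand the pool with the next K/T classes (Eqs. 8 and 10).
        start = t * classes_per_stage
        pool.extend(ordered_classes[start:start + classes_per_stage])
        # Re-optimize on the enlarged pool (Eqs. 9 and 11).
        train(model, ConcatDataset(pool), epochs_per_stage[t])
    return model
```

The warm-starting between stages is implicit here: the same `model` object is passed to every call of `train`, so stage $t$ starts from the parameters produced by stage $t-1$.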
II-D Complexity Analysis
In this section, we consider the time complexity of class-based expansion learning (CEL). Let $C$ be the time cost of a normal training process. For CEL, if we use the same number of epochs at each stage, the time cost at stage $t$ is $\frac{t}{T}C$ (the ratio of the dataset size at stage $t$ to the size of the entire dataset is $\frac{t}{T}$). The total time cost of class-based expansion learning is then:

$\sum_{t=1}^{T} \frac{t}{T} C = \frac{T+1}{2} C$  (12)

which grows linearly with the number of stages $T$.
In our experiments, we observe that reducing the number of epochs in the early stages by a factor of $\alpha$ does not sacrifice accuracy. We therefore train the network with the full number of epochs only at the final stage and reduce the number of epochs at the other stages. In this way, the time cost is:

$\sum_{t=1}^{T-1} \frac{t}{\alpha T} C + C = \frac{T-1}{2\alpha} C + C$  (13)

We can thus reduce the time cost of CEL by controlling the value of $\alpha$. With a large $\alpha$, only a small amount of time is consumed in the early stages, making the training time of our strategy comparable to that of normal training.
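As a quick numeric check of Eqs. 12 and 13, the snippet below evaluates the relative cost of CEL with respect to one normal training run (cost $C = 1$); the values of $T$ and $\alpha$ used here are only illustrative, not the paper's settings.

```python
def cel_relative_cost(T, alpha=None):
    """Relative CEL training cost, with the normal training cost C = 1."""
    if alpha is None:
        # Same epoch budget at every stage (Eq. 12): (T + 1) / 2.
        return sum(t / T for t in range(1, T + 1))
    # Reduced epochs at the first T - 1 stages, full epochs at the last
    # stage (Eq. 13): (T - 1) / (2 * alpha) + 1.
    return sum(t / (alpha * T) for t in range(1, T)) + 1.0

print(cel_relative_cost(5))            # 3.0 -> 3x normal training
print(cel_relative_cost(5, alpha=4))   # 1.5 -> close to normal training
```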
III Experiments
III-A Experimental Settings
Table I: Overall test error (%) and test error of each class (%) on CIFAR10.

| Method | Network | Runs | Test error (%) | Airplane | Automobile | Bird | Cat | Deer | Dog | Frog | Horse | Ship | Truck |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Normal Training | ResNet-32 | 5 | 7.08 | 6.00 | 3.50 | 9.60 | 14.40 | 5.70 | 11.90 | 4.70 | 4.90 | 5.00 | 4.80 |
| Normal Training | ResNet-110 | 5 | 6.24 | 4.40 | 2.40 | 9.70 | 12.40 | 4.70 | 12.00 | 3.70 | 4.50 | 4.10 | 4.50 |
| CEL | ResNet-32 | 5 | 6.16 | 4.50 | 3.00 | 7.80 | 12.50 | 4.30 | 11.60 | 4.90 | 5.10 | 3.40 | 4.50 |
| CEL | ResNet-110 | 5 | 5.71 | 4.90 | 2.60 | 7.40 | 11.80 | 4.30 | 9.60 | 2.90 | 4.60 | 4.30 | 4.70 |
Table III: Test error (%) on ImageNet100.

| Dataset | Network | Normal Training | CEL |
|---|---|---|---|
| ImageNet100 | ResNet-18 | 29.83 | 26.86 |
Table IV: Class order of CIFAR10 produced by the confusion criterion (most confusing first).

| Ranking | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| Class name | Cat | Bird | Dog | Airplane | Deer | Frog | Horse | Truck | Ship | Automobile |
Dataset
We conduct our experiments on three datasets, namely CIFAR10, CIFAR100, and ImageNet100. The CIFAR10 [27] dataset is a labeled subset of the 80 million tiny images dataset [28], which consists of 60,000 RGB images of resolution 32×32 in 10 classes, with 5,000 images per class for training and 1,000 per class for testing. The CIFAR100 dataset [27] is similar to CIFAR10, except that it has 100 classes containing 600 images each; there are 500 training images and 100 testing images for each class. ImageNet100 is a subset of ImageNet [1] used in the ImageNet Large Scale Visual Recognition Challenge 2012. It contains 129,395 training images and 5,000 validation images from the first 100 classes of ImageNet.
Data preprocessing
On CIFAR10 and CIFAR100, we follow the simple data augmentation used in ResNet [4] for training, including random cropping of the 4-pixel-padded image, per-pixel mean subtraction, and horizontal flipping. On ImageNet100, the augmentation strategies we use are 224×224 random cropping and horizontal flipping.
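For reference, a roughly equivalent augmentation pipeline could be expressed with torchvision transforms as below. This is only a sketch under our own assumptions: the paper's implementation used Theano, the normalization here is per-channel rather than the per-pixel mean subtraction described above, and the channel means are commonly used CIFAR10 statistics rather than values from the paper.

```python
from torchvision import transforms

# CIFAR10 / CIFAR100: pad by 4 pixels, random 32x32 crop, horizontal flip,
# then mean subtraction (approximated here with per-channel normalization).
cifar_train_tf = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.4914, 0.4822, 0.4465), std=(1.0, 1.0, 1.0)),
])

# ImageNet100: 224x224 random cropping (a resized crop is assumed here)
# and horizontal flip.
imagenet_train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
```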
Implementation details
We conduct our class-based expansion learning scheme on CIFAR10, CIFAR100, and ImageNet100 using state-of-the-art CNN models, including ResNet-18, ResNet-32, and ResNet-110. On CIFAR10, as described in Section II-B, we use a pre-trained ResNet-20 (trained for 60 epochs) to determine the order of classes. Then, as described in Section II-C, we divide the learning of the ordered dataset into five stages. At the first four stages, we train the network with a reduced number of epochs, and at the last stage we train it with the full number of epochs. At each stage, we train the network using SGD with a mini-batch size of 128, together with weight decay and momentum; the initial learning rate is divided by a constant factor at two fixed fractions of the total number of epochs. On CIFAR100, we also use a pre-trained ResNet-50 to determine the order of classes. Afterwards, we divide the learning of the ordered dataset into ten stages; at the first nine stages we train the network with a reduced number of epochs, and at the final stage we use the full number of epochs. The other parameters are the same as those used on CIFAR10. On ImageNet100, we use a pre-trained ResNet-18 (trained for 30 epochs) to determine the order of classes. We divide the learning of the dataset into ten stages using the original class order. At the first nine stages we use 60 epochs to train the network, and at the final stage we train it with the full number of epochs. The initial learning rate is set to 2 and is divided by 5 after 20, 30, 40, and 50 epochs. The rest of the settings are the same as those on CIFAR10. We implement our scheme with Theano [29] and use an NVIDIA TITAN 1080 Ti GPU to train the network.
III-B Comparisons with the State of the Art
We compare our class-based expansion learning (CEL) with several state-of-the-art methods: normal training, CBS [24], DIHCL [25], and Curriculum [26]. Normal training denotes the standard training pipeline, CEL adopts the distance-based class confusion criterion, and CEL-2 adopts the entropy-based class confusion criterion. The results are summarized in Table II. As shown in Table II, CEL and CEL-2 outperform the other state-of-the-art methods, which demonstrates their effectiveness.
We also employ ResNet-18 to conduct experiments on ImageNet100. The results are summarized in Table III. Similar conclusions to those on the CIFAR10 dataset can be drawn. These results demonstrate the generalization ability of our approach.



III-C Ablation Experiments
Analysis of the class order
We present the test error of each class of CIFAR10 in Table I and the class order in Table IV. As shown in Table IV, the cat, bird, and dog classes are all confusing classes according to the distance-based confusion criterion, and their error rates are the largest ones in Table I. Table I also shows that our method outperforms the normal training method with both ResNet-32 and ResNet-110. In addition, we observe that the improvement in performance is mainly due to the preferentially selected classes (i.e., cat, bird, deer, dog, and airplane).
Analysis of the individual components
We carry out an experiment on CIFAR10 to analyze the individual components of the CEL method. In this experiment, without the sorted class order obtained by $g$, we perform class-based expansion learning in a random class order, which is denoted by "w/o $g$". The results in Table V indicate that "w/o $g$" performs better than normal training thanks to the class-based expansion learning process. In addition, "w/ $g$" further improves upon "w/o $g$", showing the effectiveness of the sorted class order obtained by $g$.
Table V: Ablation study of the individual components (test error, %).

| Dataset | Normal Training | w/ $g$ | w/o $g$ |
|---|---|---|---|
| CIFAR10 | 7.08 | 6.16 | 6.32 |
| CIFAR100 | 30.40 | 29.82 | 30.16 |
Table VI: Comparison between normal training and CEL under the same training budget.

| Dataset | Network | Normal Training: Epoch time | Normal Training: Test error | CEL: Epoch time | CEL: Test error |
|---|---|---|---|---|---|
| CIFAR10 | ResNet-32 | 420 | 6.50 | 420 | 6.16 |
| ImageNet100 | ResNet-18 | 330 | 27.52 | 330 | 26.86 |
Convergence performance of final stage
The convergence performance of the final stage is shown in Fig. 4. From Fig. 4, we observe that our method converges faster than normal training at the beginning and performs better in most cases. These observations indicate that learning local classes in advance can effectively accelerate network convergence.
Impact of longer training time
To evaluate the impact of longer training time, we conduct experiments on CIFAR10 and ImageNet100 in which we make the time cost of normal training the same as that of CEL. In these experiments, we increase the number of epochs in normal training to match that used in the CEL method. Table VI gives the results, which show that our method outperforms the normal training method under the same training budget.
IV Conclusion
In this letter, we have presented a novel class-based expansion learning scheme for CNNs, which learns the whole dataset by progressively training the CNN model in a bottom-up class-growing manner. With this scheme, the classification boundaries of the preferentially selected classes are frequently stimulated, resulting in a fine-grained form. Based on the characteristics of the scheme, we have also proposed a class confusion criterion that prioritizes the classes that are easily confused. Extensive experimental results demonstrate the effectiveness of our work.
References
- [1] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Neural Information Processing Systems, 2012, pp. 1097–1105.
- [2] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
- [3] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Computer Vision and Pattern Recognition, 2015, pp. 1–9.
- [4] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Computer Vision and Pattern Recognition, 2016, pp. 770–778.
- [5] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in Computer Vision and Pattern Recognition, 2017, pp. 4700–4708.
- [6] Y. Bengio, A. Courville, and P. Vincent, "Representation learning: A review and new perspectives," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 8, pp. 1798–1828, 2013.
- [7] D. Zhang, J. Yin, X. Zhu, and C. Zhang, "Network representation learning: A survey," IEEE Transactions on Big Data, vol. 6, no. 1, pp. 3–28, 2018.
- [8] H. Robbins and S. Monro, “A stochastic approximation method,” The Annals of Mathematical Statistics, pp. 400–407, 1951.
- [9] D. E. Rumelhart, G. E. Hinton, R. J. Williams et al., “Learning representations by back-propagating errors,” Cognitive Modeling, vol. 5, no. 3, p. 1, 1988.
- [10] Y. Bengio, J. Louradour, R. Collobert, and J. Weston, “Curriculum learning,” in International Conference on Machine Learning. ACM, 2009, pp. 41–48.
- [11] V. I. Spitkovsky, H. Alshawi, and D. Jurafsky, “From baby steps to leapfrog: How less is more in unsupervised dependency parsing,” in North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, 2010, pp. 751–759.
- [12] S. Basu and J. Christensen, “Teaching classification boundaries to humans,” in American Association for Artificial Intelligence, 2013.
- [13] A. Graves, M. G. Bellemare, J. Menick, R. Munos, and K. Kavukcuoglu, “Automated curriculum learning for neural networks,” in International Conference on Machine Learning, 2017, pp. 1311–1320.
- [14] X. Zhu, J. Qian, H. Wang, and P. Liu, “Curriculum enhanced supervised attention network for person re-identification,” IEEE Signal Processing Letters, vol. 27, pp. 1665–1669, 2020.
- [15] X. Wang, Y. Chen, and W. Zhu, “A survey on curriculum learning,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
- [16] M. P. Kumar, B. Packer, and D. Koller, “Self-paced learning for latent variable models,” in Neural Information Processing Systems, 2010, pp. 1189–1197.
- [17] L. Jiang, D. Meng, S.-I. Yu, Z. Lan, S. Shan, and A. Hauptmann, “Self-paced learning with diversity,” in Neural Information Processing Systems, 2014, pp. 2078–2086.
- [18] L. Jiang, D. Meng, Q. Zhao, S. Shan, and A. Hauptmann, “Self-paced curriculum learning,” in American Association for Artificial Intelligence, 2015.
- [19] D. Meng, Q. Zhao, and L. Jiang, “A theoretical understanding of self-paced learning,” Information Sciences, vol. 414, pp. 319–328, 2017.
- [20] N. Gu, M. Fan, and D. Meng, “Robust semi-supervised classification for noisy labels based on self-paced learning,” IEEE Signal Processing Letters, vol. 23, no. 12, pp. 1806–1810, 2016.
- [21] T. Yu, C. Guo, L. Wang, S. Xiang, and C. Pan, “Self-paced autoencoder,” IEEE Signal Processing Letters, vol. 25, no. 7, pp. 1054–1058, 2018.
- [22] P. Soviany, R. T. Ionescu, P. Rota, and N. Sebe, “Curriculum self-paced learning for cross-domain object detection,” Computer Vision and Image Understanding, vol. 204, p. 103166, 2021.
- [23] D. O. Hebb, The organization of behavior: A neuropsychological theory. Psychology Press, 2005.
- [24] S. Sinha, A. Garg, and H. Larochelle, “Curriculum by smoothing,” in Neural Information Processing Systems, 2020.
- [25] T. Zhou, S. Wang, and J. A. Bilmes, “Curriculum learning by dynamic instance hardness,” in Neural Information Processing Systems, 2020.
- [26] X. Wu, E. Dyer, and B. Neyshabur, “When do curricula work?” in International Conference on Learning Representations, 2021.
- [27] A. Krizhevsky and G. Hinton, “Learning multiple layers of features from tiny images,” Citeseer, Tech. Rep., 2009.
- [28] A. Torralba, R. Fergus, and W. T. Freeman, “80 million tiny images: A large data set for nonparametric object and scene recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 11, pp. 1958–1970, Nov 2008.
- [29] J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, D. Warde-Farley, and Y. Bengio, "Theano: A CPU and GPU math expression compiler," in Scientific Computing with Python Conference, 2010.