PEPL: Precision-Enhanced Pseudo-Labeling for Fine-Grained Image Classification in Semi-Supervised Learning
Abstract
Fine-grained image classification has witnessed significant advancements with the advent of deep learning and computer vision technologies. However, the scarcity of detailed annotations remains a major challenge, especially in scenarios where obtaining high-quality labeled data is costly or time-consuming. To address this limitation, we introduce a Precision-Enhanced Pseudo-Labeling (PEPL) approach specifically designed for fine-grained image classification within a semi-supervised learning framework. Our method leverages the abundance of unlabeled data by generating high-quality pseudo-labels that are progressively refined through two key phases: initial pseudo-label generation and semantic-mixed pseudo-label generation. These phases utilize Class Activation Maps (CAMs) to accurately estimate the semantic content and generate refined labels that capture the essential details necessary for fine-grained classification. By focusing on semantic-level information, our approach effectively addresses the limitations of standard data augmentation and image-mixing techniques in preserving critical fine-grained features. We achieve state-of-the-art performance on benchmark datasets, demonstrating significant improvements over existing semi-supervised strategies, with notable boosts in accuracy and robustness. Our code is open-sourced at https://github.com/TianSuya/SemiFG.
Index Terms:
Fine-Grained Image Classification, Semi-Supervised Learning, Label Mixing
I Introduction
Fine-grained image classification [1, 2, 3], which involves distinguishing between visually similar classes, plays a crucial role in various applications such as species identification, product categorization, and medical diagnostics. Despite the remarkable success of deep learning in computer vision [4, 5, 6], achieving high accuracy in fine-grained classification remains challenging due to the scarcity of labeled data and the subtlety of distinguishing features [7].
The limited availability of labeled data, particularly in fine-grained domains, hinders the development of robust models. To mitigate this issue, semi-supervised learning (SSL) [8, 9, 10] techniques have been proposed to leverage large amounts of unlabeled data alongside a small labeled dataset. SSL methods, including pseudo-labeling [11] and consistency regularization [12], have shown promise in improving model performance with limited supervision. However, existing SSL approaches face significant challenges when applied to fine-grained image classification. Standard data augmentation techniques [13, 14] can disrupt critical visual cues and destroy fine-grained image features, and image-region mixing may overlook the fine details essential for accurate classification [15].

To address these challenges, we present a novel Precision-Enhanced Pseudo-Labeling (PEPL) approach tailored for fine-grained image classification. PEPL leverages CAMs [16] to generate high-quality pseudo-labels that capture the essential details necessary for fine-grained classification. Specifically, our method consists of two key phases: Initial Pseudo-Label Generation and Semantic-Mixed Pseudo-Label Generation. These phases utilize CAMs [16, 17, 18] to accurately estimate the semantic content [19] and generate refined labels that capture the essential details necessary for fine-grained classification. By focusing on semantic-level information, our approach effectively addresses the limitations of standard data augmentation and image-mixing techniques in preserving critical fine-grained features. We conducted extensive experiments on two commonly used fine-grained classification datasets, and the results show that our method substantially exceeds the most advanced and representative semi-supervised methods [20, 21]: on the CUB_200_2011 dataset, PEPL improves accuracy by 13% over the fully supervised model when using 20% labeled data, and it matches supervised learning while using only 30% labeled data.

The key contributions of our work are as follows:
(i) We propose the Precision-Enhanced Pseudo-Labeling (PEPL) approach specifically designed for fine-grained image classification.
(ii) Our method generates high-quality pseudo-labels using CAMs, which are progressively refined to enhance the precision of the pseudo-labels.
(iii) We demonstrate significant improvements in performance on benchmark datasets, outperforming existing semi-supervised strategies and achieving state-of-the-art accuracy.
II Methods
II-A Stage I: Initial Pseudo-Label Generation
Inspired by the concept of FreeMatch [22], our approach relies on the adaptive selection of confidence thresholds, which are dynamically adjusted based on the model's predictive performance on unlabeled data. Rather than adopting a static threshold, we holistically evaluate the model's predictions across all classes at each iteration.
After each round of predictions, we apply the following update to the class outputs of every unlabeled sample; this collective consideration of all classes ensures that the thresholds are not only category-specific but also responsive to the evolving model performance:

$$\tau_t = \begin{cases} \dfrac{1}{C}, & t = 0,\\[4pt] \lambda\,\tau_{t-1} + (1-\lambda)\,\dfrac{1}{\mu B}\displaystyle\sum_{b=1}^{\mu B}\max\big(q_b\big), & \text{otherwise}, \end{cases}$$

where $C$ represents the total number of categories, $\lambda$ is a pre-set hyperparameter that controls the ratio of the EMA, $\mu B$ indicates the batch size of the current unlabeled data, $B$ indicates the batch size of the labeled data, $\mu$ is the preset multiple factor, $q_b$ represents the output of the model's predictions for the $b$-th unlabeled sample, and $\tau_t$ represents the global threshold at step $t$.
To address the issue of class imbalance in the model's predictive capability, we compute an individual model prediction threshold for each class using the following formula:

$$\tilde{p}_t(c) = \begin{cases} \dfrac{1}{C}, & t = 0,\\[4pt] \lambda\,\tilde{p}_{t-1}(c) + (1-\lambda)\,\dfrac{1}{\mu B}\displaystyle\sum_{b=1}^{\mu B} q_b(c), & \text{otherwise}, \end{cases}$$

where $c$ represents the current category number.
After obtaining the individual prediction thresholds for each class, we combine the overall threshold and the class-specific thresholds to determine the confidence selection threshold for each class at the current moment:

$$\tau_t(c) = \frac{\tilde{p}_t(c)}{\max_{c'}\,\tilde{p}_t(c')}\cdot\tau_t,$$

where $c \in \{1,\dots,C\}$. This integration of both global and class-wise thresholds allows us to strike a balance between the general performance of the model and the unique characteristics of each class. By doing so, we can effectively select confident predictions for each class, enhancing the reliability of the pseudo-labels in the semi-supervised learning process.
With the thresholds calculated for each class, we can now employ them in the initial generation of pseudo-labels:

$$\hat{y}_b = \arg\max_c\, q_b(c), \qquad m_b = \mathbb{1}\big(\max_c\, q_b(c) \geq \tau_t(\hat{y}_b)\big),$$

where $\mathbb{1}(\cdot)$ represents the indicator function, which is 1 when the condition is met and 0 otherwise, and $m_b$ marks whether sample $b$ receives a pseudo-label. This step assigns provisional labels to unlabeled samples based on their highest predicted probabilities and the derived thresholds, creating a set of pseudo-labels that reflects the model's confidence and will be refined in subsequent training iterations of the semi-supervised learning algorithm.
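For concreteness, the following sketch implements the threshold updates and the selection rule above in PyTorch. It is a minimal reconstruction under FreeMatch-style assumptions [22], not the authors' released code; the names `SelfAdaptiveThreshold`, `num_classes`, and `ema_lambda` are ours.

```python
import torch
import torch.nn.functional as F

class SelfAdaptiveThreshold:
    """Stage I thresholding sketch: global EMA threshold tau_t, class-wise
    EMA expectations p~_t(c), and their combination into tau_t(c)."""

    def __init__(self, num_classes: int, ema_lambda: float = 0.999):
        self.lam = ema_lambda
        # tau_0 = 1/C and p~_0(c) = 1/C (uniform initialization).
        self.tau_global = torch.tensor(1.0 / num_classes)
        self.p_class = torch.full((num_classes,), 1.0 / num_classes)

    @torch.no_grad()
    def update(self, logits_u: torch.Tensor) -> torch.Tensor:
        """Update from one unlabeled batch of logits with shape (mu*B, C);
        returns the per-class thresholds tau_t(c)."""
        q = F.softmax(logits_u, dim=-1)
        # Global threshold: EMA of the batch-mean top-class confidence.
        self.tau_global = self.lam * self.tau_global \
            + (1.0 - self.lam) * q.max(dim=-1).values.mean()
        # Class-wise expectation: EMA of the batch-mean class probabilities.
        self.p_class = self.lam * self.p_class + (1.0 - self.lam) * q.mean(dim=0)
        # Combine: normalize the class-wise values, then scale the global threshold.
        return self.p_class / self.p_class.max() * self.tau_global


@torch.no_grad()
def initial_pseudo_labels(logits_u: torch.Tensor, tau_c: torch.Tensor):
    """Keep a sample only if its top confidence exceeds its class's threshold,
    i.e. the indicator 1(max q_b >= tau_t(argmax q_b))."""
    q = F.softmax(logits_u, dim=-1)
    conf, y_hat = q.max(dim=-1)
    mask = conf >= tau_c[y_hat]
    return y_hat, mask
```

In a training loop, `update` would be called once per unlabeled batch, and the returned per-class thresholds fed to `initial_pseudo_labels` to mask out low-confidence samples.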
II-B Stage II: Hybrid Semantic Pseudo-Label Generation
The utility of pseudo-labels alone in enhancing model performance is somewhat limited. To better exploit the potential of unlabeled images, this stage proceeds in two steps: we first randomly blend images, and then estimate the semantic information contained in the mixed images. Based on the pseudo-labels generated in Stage I, we create hybrid semantic pseudo-labels for the mixed images. To quantify the semantic composition of the mixed images, we need to measure the semantic correlation between each original image's pixels and their corresponding labels. An effective approach to achieve this is through Class Activation Maps (CAMs), which reveal how image regions relate to semantic classes. We initially employ an attention mechanism [23] to compute the class activation map for the input image. Let the class-wise activation map at the $l$-th layer be represented as:

$$A_l = f_l(x) \in \mathbb{R}^{C \times H_l \times W_l},$$

where $x$ represents the input image, and $C$, $H_l$, and $W_l$ represent the number of categories, height, and width of the feature map, respectively. We can match the activation map from the $l$-th layer to the size of the input image using upsampling operations:

$$M = U(A_l),$$
where $M$ represents an activation map of the same size as the input image, and $U(\cdot)$ represents the upsampling operation. Next, we normalize $M$ to obtain a map whose values sum to 1:

$$\hat{M}(i,j) = \frac{M(i,j)}{\sum_{i',j'} M(i',j')},$$

where $\hat{M}$ represents the normalized activation map: the CAM associated with the image is transformed such that the sum of all its values equals unity. When blending images, we infer the label of the mixed image based on the semantic proportions of each component in the original images:

$$\tilde{y} = \lambda_a\,\hat{y}_a + \lambda_b\,\hat{y}_b.$$
The process of estimating the label for the blended image involves considering the relative semantic contributions of the individual parts from the original images. For each blended image, obtained by pasting a region $B$ of input $x_b$ onto input $x_a$, we estimate the proportions $\lambda_a$ and $\lambda_b$ of the semantic pseudo-labels $\hat{y}_a$ and $\hat{y}_b$, respectively:

$$\lambda_a = 1 - \sum_{(i,j)\in B}\hat{M}_a(i,j), \qquad \lambda_b = \sum_{(i,j)\in B}\hat{M}_b(i,j),$$

where $\sum_{(i,j)\in B}\hat{M}_a(i,j)$ measures the part of input $x_a$ that is removed, and $\sum_{(i,j)\in B}\hat{M}_b(i,j)$ measures the part of input $x_b$ that is blended into input $x_a$. We derive $\lambda_a$ by subtracting the removed portion from 1, and $\lambda_b$ by estimating the semantic proportion of the blended part. These proportions reflect the combined semantic content of the blended image.
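The sketch below illustrates this estimation under stated assumptions: a CutMix-style rectangular blend as the mixing operator and one-hot Stage-I pseudo-labels; the helper names `normalized_cam` and `hybrid_semantic_label` are hypothetical, not the authors' API.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def normalized_cam(act_map: torch.Tensor, class_idx: int, size) -> torch.Tensor:
    """Upsample one class's activation map A_l to the input size (M = U(A_l))
    and normalize it so its values sum to 1 (M^). `act_map` is assumed to be a
    (C, H_l, W_l) class-wise activation tensor; `size` is (H, W)."""
    cam = act_map[class_idx].clamp(min=0)  # keep positive class evidence
    cam = F.interpolate(cam[None, None], size=size,
                        mode="bilinear", align_corners=False)[0, 0]
    return cam / cam.sum().clamp(min=1e-8)

@torch.no_grad()
def hybrid_semantic_label(x_a, x_b, cam_a, cam_b, y_a, y_b, box):
    """Blend a rectangular region of x_b into x_a and weight the two one-hot
    pseudo-labels y_a, y_b by the CAM mass inside that region; cam_a and cam_b
    are the normalized maps M^_a and M^_b (each summing to 1)."""
    x1, y1, x2, y2 = box
    mixed = x_a.clone()
    mixed[..., y1:y2, x1:x2] = x_b[..., y1:y2, x1:x2]
    lam_a = 1.0 - cam_a[y1:y2, x1:x2].sum()   # 1 - semantic mass removed from x_a
    lam_b = cam_b[y1:y2, x1:x2].sum()         # semantic mass of x_b pasted in
    label = lam_a * y_a + lam_b * y_b
    return mixed, label / label.sum().clamp(min=1e-8)  # keep a valid distribution
```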
For each batch of unlabeled data, we first generate preliminary pseudo-labels. Then, we randomly combine these pseudo-labelled samples to create mixed instances along with their corresponding hybrid semantic pseudo-labels. These hybrid labels are used to iteratively refine and optimize the model during training.
II-C Loss Function for Whole Framework
We can divide the overall loss function $\mathcal{L}$ into a supervised loss $\mathcal{L}_s$ and an unsupervised loss $\mathcal{L}_u$. The calculation of the loss function can be expressed as follows:

$$\mathcal{L}_s = \frac{1}{B}\sum_{b=1}^{B} H\big(y_b,\,p(x_b;\theta)\big), \qquad \mathcal{L}_u = \frac{1}{\mu B}\sum_{b=1}^{\mu B}\Big[\lambda_a\,H\big(\hat{y}_a,\,p(\tilde{x}_b;\theta)\big) + \lambda_b\,H\big(\hat{y}_b,\,p(\tilde{x}_b;\theta)\big)\Big],$$

$$\mathcal{L} = w_s\,\mathcal{L}_s + w_u\,\mathcal{L}_u,$$

where $p(x;\theta)$ represents the predicted output for input $x$ when the parameter is $\theta$, and $H(\cdot,\cdot)$ represents the cross-entropy loss function. $\hat{y}_a$ and $\hat{y}_b$ represent the two semantic pseudo-labels generated by the steps above. $w_s$ and $w_u$ represent the weights of the supervised and unsupervised losses, respectively.
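A minimal PyTorch sketch of this objective, assuming hard labels on the labeled batch and a soft hybrid label $\tilde{y} = \lambda_a\hat{y}_a + \lambda_b\hat{y}_b$ on the mixed images (for which the soft-target cross-entropy equals $\lambda_a H(\hat{y}_a,\cdot) + \lambda_b H(\hat{y}_b,\cdot)$); the function name is ours:

```python
import torch
import torch.nn.functional as F

def pepl_loss(logits_l: torch.Tensor, y_l: torch.Tensor,
              logits_mix: torch.Tensor, y_mix: torch.Tensor,
              w_s: float = 1.0, w_u: float = 1.0) -> torch.Tensor:
    """Total loss L = w_s * L_s + w_u * L_u."""
    # L_s: standard cross-entropy on the labeled batch (y_l holds class indices).
    loss_s = F.cross_entropy(logits_l, y_l)
    # L_u: soft-target cross-entropy against the hybrid semantic pseudo-labels,
    # i.e. -sum_c y~(c) log p(c) averaged over the mixed batch.
    loss_u = -(y_mix * F.log_softmax(logits_mix, dim=-1)).sum(dim=-1).mean()
    return w_s * loss_s + w_u * loss_u
```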
III Experiments and Results
TABLE I: Classification accuracy (%) of PEPL and baseline methods on CUB_200_2011 and Stanford Cars under different label ratios.

| Dataset | Method | 10% Label | 20% Label | 30% Label | 100% Label |
| --- | --- | --- | --- | --- | --- |
| CUB_200_2011 | Supervised-Only | 28.61 | 51.87 | 65.77 | 85.76 |
| CUB_200_2011 | Pi-Model | 25.52 | 50.65 | 60.79 | 75.56 |
| CUB_200_2011 | Pseudo-Label | 32.71 | 54.42 | 68.93 | 86.77 |
| CUB_200_2011 | FlexMatch | 30.61 | 55.71 | 70.15 | 87.98 |
| CUB_200_2011 | FreeMatch | 30.78 | 56.68 | 67.62 | 88.39 |
| CUB_200_2011 | PEPL (Ours) | 38.53 | 64.60 | 76.97 | 88.75 |
| Stanford Cars | Supervised-Only | 24.54 | 54.13 | 70.71 | 90.09 |
| Stanford Cars | Pi-Model | 13.07 | 48.52 | 67.75 | 85.49 |
| Stanford Cars | Pseudo-Label | 26.12 | 60.10 | 74.35 | 90.19 |
| Stanford Cars | FlexMatch | 26.70 | 61.32 | 73.31 | 90.79 |
| Stanford Cars | FreeMatch | 26.10 | 62.67 | 75.97 | 89.26 |
| Stanford Cars | PEPL (Ours) | 32.72 | 74.79 | 86.52 | 91.09 |
III-A Setup
Datasets. To evaluate the effectiveness of PEPL, we conducted experiments on two standard fine-grained classification datasets: CUB_200_2011 [24] and Stanford Cars [25]. The first dataset, introduced by Caltech in 2011, comprises 11,788 images across 200 bird species, with 5,994 images for training and 5,794 for testing; it is widely used as a benchmark for fine-grained classification and recognition research. Stanford Cars, released by the Stanford AI Lab in 2013, includes 16,185 images of 196 car models, with 8,144 images for training and 8,041 for testing. It is designed for fine-grained classification tasks and categorizes cars by brand, model, and year.
TABLE II: Ablation on CUB_200_2011: classification accuracy (%) with and without semantic-aware mixing.

| Label Ratio | Semantic Aware | Without Semantic Aware |
| --- | --- | --- |
| 10% | 38.53 | 27.47 |
| 20% | 64.60 | 56.23 |
| 30% | 76.97 | 72.50 |
| 100% | 88.75 | 85.08 |
Settings. We conducted experiments using a single NVIDIA A800 80G GPU. A ResNet50 pre-trained on ImageNet served as the base classification model. Training ran for 200 epochs, with a batch size of 16 for labeled data. For unlabeled data, the batch size was set to 112 (i.e., $\mu = 7$). The initial learning rate was 0.01, decreasing by a factor of 0.1 every 80 epochs. After reaching 0.0001, a cosine annealing scheduler was applied to gradually reduce the learning rate to 0 over the last 40 epochs. The EMA hyperparameter $\lambda$ for pseudo-label generation in Stage I was set to 0.999 to ensure a stable growth trend. Both the loss weights $w_s$ and $w_u$ for supervised and unsupervised learning were set to 1.
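As a reading aid, the following is a small sketch of the learning-rate schedule as we interpret the description above; a reconstruction, not the authors' training code:

```python
import math

def learning_rate(epoch: int, base_lr: float = 0.01) -> float:
    """Decay by 0.1 every 80 epochs (0.01 -> 0.001 -> 0.0001), then cosine-anneal
    from 1e-4 down to 0 over the final 40 of the 200 training epochs."""
    if epoch < 160:
        return base_lr * (0.1 ** (epoch // 80))
    t = (epoch - 160) / 40.0          # progress through the last 40 epochs
    return 1e-4 * 0.5 * (1.0 + math.cos(math.pi * t))
```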
III-B Results and Analysis
Evaluation Metric. We chose multi-class classification accuracy as our evaluation metric, defined below:

$$\text{Accuracy} = \frac{TP + TN}{ALL},$$

where $TP + TN$ represents the number of correctly classified samples, and $ALL$ represents the total number of samples.
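In PyTorch terms, this metric is simply the fraction of matching predictions; a one-line sketch:

```python
import torch

def accuracy(preds: torch.Tensor, targets: torch.Tensor) -> float:
    """Multi-class accuracy: correctly classified samples over all samples,
    matching the (TP + TN) / ALL definition above."""
    return (preds == targets).float().mean().item()
```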
Performance. The main experimental results are summarized in Table I. We compared our method with the classic semi-supervised learning approaches Pi-Model [12] and Pseudo-Label [26], as well as the state-of-the-art methods FlexMatch [27] and FreeMatch [22], under scenarios with 10%, 20%, and 30% of the total data labeled, and also when all labeled data were used. We also compared with purely supervised learning (Supervised-Only). The perturbation method of Pi-Model and the strong augmentation methods of FlexMatch and FreeMatch all used RandAugment [13]. The classification accuracy on the two datasets clearly demonstrates that our proposed PEPL method consistently outperforms other semi-supervised learning methods under different label proportions. Using just 30% of the labels, our method achieves results comparable to supervised training with 100% of the labeled data. With 10% and 20% of the labeled data, our method outperforms state-of-the-art semi-supervised methods by approximately 8%, and improves accuracy by about 10% to 13% compared to purely supervised training. These results demonstrate the effectiveness of the proposed PEPL semi-supervised learning framework in enhancing fine-grained classification performance across different datasets.
Ablation Study. To further validate the effectiveness of the semantically mixed pseudo-labels introduced by PEPL, we compared them with pseudo-labels produced by direct mixing without semantic awareness on CUB_200_2011. As shown in Table II, and combined with Table I, we find that while direct mixing without semantic awareness still achieves some improvement over purely supervised learning, adding semantic mixing yields an additional performance gain of about 4% to 9%. This fully demonstrates the rationale for introducing semantically mixed pseudo-labels in PEPL.
Case Study. To more intuitively demonstrate the superiority of the PEPL method, we exported models trained with FreeMatch and with PEPL under semi-supervised training on 30% labeled data. We computed class attention maps from the output of the last convolutional layer and visualized them. As shown in Figure 3, the class attention maps of the PEPL-trained model focus more on areas where the current class may differ from other classes at a fine-grained level (such as car logos and rearview mirrors). This intuitively indicates that PEPL better enhances the model's perception of fine-grained features.

IV Conclusion
In this paper, we introduced the PEPL method, which effectively addresses the challenges faced by semi-supervised learning methods in the domain of fine-grained image classification. By leveraging CAMs to generate high-quality pseudo-labels, PEPL overcomes the limitations of standard data augmentation and image-mixing techniques. The simplicity and effectiveness of PEPL make it a valuable addition to the toolkit of researchers and practitioners working in fine-grained classification, alleviating the severe label-scarcity problem. Its flexibility and strong performance position PEPL as a method that can significantly advance the state of the art in semi-supervised learning and inspire further research into innovative approaches for fine-grained image classification.
References
- [1] Yafei Wang and Zepeng Wang, “A survey of recent work on fine-grained image classification techniques,” Journal of Visual Communication and Image Representation, vol. 59, pp. 210–214, 2019.
- [2] Yao Rong, Wenjia Xu, Zeynep Akata, and Enkelejda Kasneci, “Human attention in fine-grained classification,” arXiv preprint arXiv:2111.01628, 2021.
- [3] Peiqin Zhuang, Yali Wang, and Yu Qiao, “Learning attentive pairwise interaction for fine-grained classification,” in Proceedings of the AAAI conference on artificial intelligence, 2020, vol. 34, pp. 13130–13137.
- [4] Mohammed Abdullahi, Olaide Nathaniel Oyelade, Armand Florentin Donfack Kana, Mustapha Aminu Bagiwa, Fatimah Binta Abdullahi, Sahalu Balarabe Junaidu, Ibrahim Iliyasu, Ajayi Ore-ofe, and Haruna Chiroma, “A systematic literature review of visual feature learning: deep learning techniques, applications, challenges and future directions,” Multimedia Tools and Applications, pp. 1–58, 2024.
- [5] Md Eshmam Rayed, SM Sajibul Islam, Sadia Islam Niha, Jamin Rahman Jim, Md Mohsin Kabir, and MF Mridha, “Deep learning for medical image segmentation: State-of-the-art advancements and challenges,” Informatics in Medicine Unlocked, p. 101504, 2024.
- [6] Songning Lai, Xifeng Hu, Haoxuan Xu, Zhaoxia Ren, and Zhi Liu, “Multimodal sentiment analysis: A survey,” Displays, p. 102563, 2023.
- [7] Jingcai Guo, Zhijie Rao, Song Guo, Jingren Zhou, and Dacheng Tao, “Fine-grained zero-shot learning: Advances, challenges, and prospects,” arXiv preprint arXiv:2401.17766, 2024.
- [8] Yves Grandvalet and Yoshua Bengio, “Semi-supervised learning by entropy minimization,” NeurIPS, vol. 17, 2004.
- [9] Fan Yang, Kai Wu, Shuyi Zhang, Guannan Jiang, Yong Liu, Feng Zheng, Wei Zhang, Chengjie Wang, and Long Zeng, “Class-aware contrastive semi-supervised learning,” in CVPR, 2022, pp. 14421–14430.
- [10] Changyu Zeng, Wei Wang, Anh Nguyen, and Yutao Yue, “Self-supervised learning for point cloud data: A survey,” Expert Systems with Applications, p. 121354, 2023.
- [11] Dong-Hyun Lee, “Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks,” ICML 2013 Workshop: Challenges in Representation Learning (WREPL), 2013.
- [12] Samuli Laine and Timo Aila, “Temporal ensembling for semi-supervised learning,” arXiv preprint arXiv:1610.02242, 2016.
- [13] Ekin D Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V Le, “Randaugment: Practical automated data augmentation with a reduced search space,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, 2020, pp. 702–703.
- [14] Ekin D. Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V. Le, “Autoaugment: Learning augmentation policies from data,” 2019.
- [15] Jong-Chyi Su, Zezhou Cheng, and Subhransu Maji, “A realistic evaluation of semi-supervised learning for fine-grained classification,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 12966–12975.
- [16] Peng-Tao Jiang, Chang-Bin Zhang, Qibin Hou, Ming-Ming Cheng, and Yunchao Wei, “Layercam: Exploring hierarchical class activation maps for localization,” IEEE Transactions on Image Processing, vol. 30, pp. 5875–5888, 2021.
- [17] Mohammed Bany Muhammad and Mohammed Yeasin, “Eigen-cam: Class activation map using principal components,” in 2020 international joint conference on neural networks (IJCNN). IEEE, 2020, pp. 1–7.
- [18] Hanwei Zhang, Felipe Torres, Ronan Sicre, Yannis Avrithis, and Stephane Ayache, “Opti-cam: Optimizing saliency maps for interpretability,” Computer Vision and Image Understanding, p. 104101, 2024.
- [19] Zhaozheng Chen, Tan Wang, Xiongwei Wu, Xian-Sheng Hua, Hanwang Zhang, and Qianru Sun, “Class re-activation maps for weakly-supervised semantic segmentation,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2022, pp. 969–978.
- [20] Xiangli Yang, Zixing Song, Irwin King, and Zenglin Xu, “A survey on deep semi-supervised learning,” IEEE Transactions on Knowledge and Data Engineering, vol. 35, no. 9, pp. 8934–8954, 2022.
- [21] Yassine Ouali, Céline Hudelot, and Myriam Tami, “An overview of deep semi-supervised learning,” arXiv preprint arXiv:2006.05278, 2020.
- [22] Yidong Wang, Hao Chen, Qiang Heng, Wenxin Hou, Yue Fan, Zhen Wu, Jindong Wang, Marios Savvides, Takahiro Shinozaki, Bhiksha Raj, et al., “Freematch: Self-adaptive thresholding for semi-supervised learning,” arXiv preprint arXiv:2205.07246, 2022.
- [23] Fei Wang, Mengqing Jiang, Chen Qian, Shuo Yang, Cheng Li, Honggang Zhang, Xiaogang Wang, and Xiaoou Tang, “Residual attention network for image classification,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 3156–3164.
- [24] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie, “The Caltech-UCSD Birds-200-2011 Dataset,” Tech. Rep. CNS-TR-2011-001, California Institute of Technology, 2011.
- [25] Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei, “3d object representations for fine-grained categorization,” in Proceedings of the IEEE international conference on computer vision workshops, 2013, pp. 554–561.
- [26] Dong-Hyun Lee, “Pseudo-label : The simple and efficient semi-supervised learning method for deep neural networks,” ICML 2013 Workshop : Challenges in Representation Learning (WREPL), 07 2013.
- [27] Bowen Zhang, Yidong Wang, Wenxin Hou, Hao Wu, Jindong Wang, Manabu Okumura, and Takahiro Shinozaki, “Flexmatch: Boosting semi-supervised learning with curriculum pseudo labeling,” NeurIPS, vol. 34, pp. 18408–18419, 2021.