Email: {qqiao, 20224227022}@stu.suda.edu.cn
Email: [email protected]
Email: [email protected]
A Simple Task-Aware Contrastive Local Descriptor Selection Strategy for Few-Shot Learning Based on Inter-Class and Intra-Class Discrimination
Abstract
Few-shot image classification aims to classify novel classes with few labeled samples. Recent research indicates that deep local descriptors have better representational capabilities than image-level features. Such studies recognize the impact of background noise on classification performance: they typically filter query descriptors using all local descriptors in the support classes, or perform bidirectional selection between support and query descriptors. However, they ignore the fact that background features may be useful for the classification performance of specific tasks. This paper proposes a novel task-aware contrastive local descriptor selection network (TCDSNet). First, we calculate a contrastive discriminative score for each local descriptor in the support class and select discriminative local descriptors to form a support descriptor subset. Then, we leverage the support descriptor subsets to adaptively select discriminative query descriptors for specific tasks. Extensive experiments demonstrate that our method outperforms state-of-the-art methods on both general and fine-grained datasets.
Keywords: few-shot learning · task-aware · local descriptor · image classification

1 Introduction
The purpose of few-shot learning is to enable models to adapt quickly to new tasks with only a small number of training samples in scenarios where data is scarce. Generally, these methods can be divided into three groups: optimization-based methods [5, 1, 11], metric-based methods [23, 24, 10], and data augmentation-based methods [2, 21, 29, 9, 6].
This work is based on a few-shot learning method using local descriptors and falls within the realm of metric learning. Features based on local descriptors exhibit superior representational capabilities compared to image-level features. Among previous works, [14] proposed DN4, which directly utilizes all query descriptors: it selects k support descriptors for each query local descriptor through k-nearest neighbors (k-NN) and approximates the relationship between query samples and support classes using cosine similarity. Building on DN4, [16] introduced DMN4, which argues that not all query descriptors are task-relevant and that many carry significant background noise; it establishes mutual nearest neighbor (MNN) relationships to explicitly select the query descriptors most relevant to each task, thereby avoiding the impact of background noise on classification performance. Also based on DN4, [4] and [30] proposed ATL-Net and TADNet, respectively; both measure the relationships between each query local descriptor and all support classes and adaptively select discriminative query descriptors for classification. [19] introduced TALDS-Net, which likewise recognizes background noise in query descriptors: it first adaptively selects optimal subsets of support class local descriptors and then adaptively chooses query descriptors from those subsets for classification.

However, all these methods aim to eliminate background noise, either by filtering query descriptors through all support class local descriptors or by bidirectionally selecting between support and query descriptors. We observe that, from a human cognitive perspective, background features can be informative. Consider an image of a dog and an image of a dolphin: not only do the target features differ significantly, but the background features also exhibit substantial distinctions (e.g., dolphin backgrounds are unlikely to be grassy, whereas dog backgrounds might include grass). In such cases, background features can contribute to classification. Conversely, for two images both belonging to the dolphin category, the differences in background features are less pronounced, and background features can then be regarded as noise. For instance, when dealing with unfamiliar images, an ocean background helps narrow the classification down to objects commonly found in the ocean, which aids in identifying the target category. Thus, background features within the same category might positively impact classification performance, and background features between different categories might likewise enhance it. Determining discriminative local descriptors is therefore a challenging task for descriptor-based methods, and background noise must be judiciously retained or discarded in the process.
In response to this challenge, a straightforward solution is to select local descriptors in the support class to form a support descriptor subset, and then use the support descriptor subset to select query descriptors. Experimental results have also demonstrated the effectiveness of this simple method.
The method described above is our proposed Task-Aware Contrastive Discriminative Local Descriptor Selection Network (TCDSNet). Specifically, we first select local descriptors from the support class. For each support descriptor, we compute its similarity to the remaining support descriptors of the same category as the intra-class similarity score, and its similarity to support descriptors from other categories as the inter-class similarity score. A high intra-class similarity score indicates that the support descriptor has strong representational capability for its class, while a low inter-class similarity score suggests that it is highly discriminative against other classes. We obtain the discriminative score by dividing the intra-class similarity score by the inter-class similarity score, which we term the contrastive discriminative score. We then rank support descriptors by this score in descending order and select the top ones. Finally, we utilize the selected support descriptors to choose query descriptors: we employ a simple learnable module to adaptively predict a threshold and, using the learned threshold together with a score map, select the most discriminative query descriptors for final classification. This approach enhances the model's classification and generalization capabilities.
In summary, our main contributions are threefold:
• We propose a novel method that calculates contrastive discriminative scores (CDS) for local descriptors in the support class. This enhances the model's adaptability to different tasks and strengthens the performance of local descriptors in few-shot learning tasks.
• We propose a novel Task-Aware Contrastive Discriminative Local Descriptor Selection Network (TCDSNet) that not only selects a subset of support descriptors based on discriminative scores but also incorporates a learnable module for adaptively selecting discriminative query descriptors.
• Extensive experimental results demonstrate that TCDSNet outperforms state-of-the-art methods on multiple general and fine-grained datasets.
2 METHOD
Fig. 1 shows an overview of the proposed method.
2.1 Problem Definition
In this paper, we follow the same setting as previous methods [14, 4, 30, 19]. Given a support set $\mathcal{S}$, a query set $\mathcal{Q}$, and an auxiliary set $\mathcal{A}$, the label space of the auxiliary set is disjoint from $\mathcal{S}$ and is used to learn transferable knowledge. The support set $\mathcal{S}$ contains $N$ classes, each with $K$ labeled samples, while the samples in the query set $\mathcal{Q}$ are unlabeled and share the same label space as $\mathcal{S}$. Given $\mathcal{S}$ and a query image, the task is to classify the query image into one of the $N$ support classes; this constitutes the $N$-way $K$-shot few-shot classification problem. Under this setting, we adopt the episodic training mechanism, a meta-training mechanism [25]: we randomly sample from the auxiliary set $\mathcal{A}$ to construct $N$-way $K$-shot tasks, each consisting of a support set $\mathcal{A}_S$ and a query set $\mathcal{A}_Q$. During the training phase, we construct tens of thousands of such tasks to learn transferable knowledge.
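To make the episodic mechanism concrete, the following minimal Python sketch shows how one $N$-way $K$-shot episode could be sampled from an auxiliary set. The `dataset` structure (a mapping from class label to samples), the function name, and the query count per class are illustrative assumptions, not details from the paper.

```python
import random

def sample_episode(dataset, n_way=5, k_shot=1, n_query=15):
    """Sample one n-way k-shot episode from an auxiliary set.

    `dataset` is assumed to be a dict {class_label: [samples]};
    labels are re-indexed to 0..n_way-1 within the episode.
    """
    classes = random.sample(list(dataset.keys()), n_way)
    support, query = [], []
    for episode_label, cls in enumerate(classes):
        picks = random.sample(dataset[cls], k_shot + n_query)
        support += [(x, episode_label) for x in picks[:k_shot]]
        query += [(x, episode_label) for x in picks[k_shot:]]
    return support, query
```

During meta-training, tens of thousands of such episodes are drawn so that the model learns transferable knowledge rather than memorizing specific classes.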
2.2 Image Representation Based on Local Descriptors
We obtain a three-dimensional feature representation for an image $X$ through the embedding module $f_\theta$, which is viewed as a set of $d$-dimensional local descriptors (LDs):

$$f_\theta(X) = [\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_m] \in \mathbb{R}^{d \times m} \qquad (1)$$

where $\mathbf{x}_i$ denotes the $i$-th deep local descriptor (LD) and $m = h \times w$ is the number of LDs. Similar to other descriptor-based methods [14, 4, 16, 30, 19], we treat the $d \times h \times w$ feature map as a set of $m$ $d$-dimensional descriptors.
In each episode, each support class has $K$ images. We denote the descriptor set from category $c$ as $S_c$, where there are $N$ classes in total, and represent the descriptor set of each query image $q$ as $Q$. When using shallower embedding modules (e.g., Conv-4), each support category is represented in its original form; when using deeper embedding modules (e.g., ResNet-12), each support category is represented by the empirical mean of its support descriptors.
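As a concrete illustration of Eq. (1), the short PyTorch snippet below (our own sketch; the function name is ours) flattens a batch of $d \times h \times w$ feature maps into sets of $m = h \times w$ $d$-dimensional local descriptors.

```python
import torch

def to_local_descriptors(feature_map: torch.Tensor) -> torch.Tensor:
    """Reshape a (B, d, h, w) feature map into (B, m, d) sets of
    m = h * w d-dimensional local descriptors, as in Eq. (1)."""
    b, d, h, w = feature_map.shape
    return feature_map.flatten(2).transpose(1, 2)  # (B, h*w, d)
```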
Table 1: 5-way 1-shot and 5-shot classification accuracy (%) on miniImageNet (mini) and tieredImageNet (tiered). The first four result columns use the Conv-4 backbone; the last four use ResNet-12. Confidence intervals are shown where reported.

Method | mini 1-shot | mini 5-shot | tiered 1-shot | tiered 5-shot | mini 1-shot | mini 5-shot | tiered 1-shot | tiered 5-shot
MatchingNet[25] | 43.56±0.84 | 55.31±0.73 | - | - | 63.08±0.20 | 75.99±0.15 | 68.50±0.92 | 80.60±0.71
ProtoNet[23] | 51.20±0.26 | 68.94±0.78 | 53.45±0.15 | 72.32±0.57 | 62.33±0.12 | 80.88±0.41 | 68.40±0.14 | 84.06±0.26
RelationNet[24] | 50.44±0.82 | 65.32±0.70 | 54.48±0.93 | 71.31±0.78 | 60.97 | 75.12 | 64.71 | 78.41
FRN[28] | 54.87 | 71.56 | 55.54 | 74.68 | 66.45±0.19 | 82.83±0.13 | 72.06±0.22 | 86.89±0.14
Meta-OLE[27] | 56.82±0.84 | 73.87±0.67 | 58.82±0.88 | 75.85±0.87 | 67.04±0.72 | 82.23±0.67 | 68.82±0.71 | 85.51±0.59
Approximate GAP[12] | 53.52±0.88 | 70.75±0.67 | 57.47±0.99 | 71.66±0.76 | - | - | - | -
GAP[12] | 54.86±0.85 | 71.55±0.61 | 58.56±0.93 | 72.82±0.77 | - | - | - | -
DeepEMD[31] | 51.72±0.20 | 65.10±0.39 | 51.22±0.14 | 65.81±0.68 | 65.91±0.82 | 82.41±0.56 | 71.16±0.87 | 86.03±0.58
DN4[14] | 51.24±0.74 | 71.02±0.64 | 52.89±0.23 | 73.36±0.73 | 65.35 | 81.10 | 69.60 | 83.41
DMN4[16] | 55.77 | 74.22 | 56.99 | 74.13 | 66.58 | 83.52 | 72.10 | 85.72
ATL-Net[4] | 54.30±0.76 | 73.22±0.63 | - | - | - | - | - | -
TADNet[30] | 56.14±0.20 | 74.68±0.15 | 57.88±0.21 | 75.98±0.17 | 67.26±0.20 | 84.23±0.13 | 71.29±0.22 | 86.46±0.15
TCDSNet(ours) | 57.14±0.22 | 75.89±0.35 | 58.67±0.61 | 76.06±0.33 | 68.53±0.19 | 85.12±0.42 | 72.43±0.72 | 87.35±0.55
2.3 Contrastive Discriminative Scores for Support Local Descriptors Selection
As mentioned above, an image $X$ in a support class $c$ is fed into the embedding module $f_\theta$ to obtain local descriptors $[\mathbf{x}_1, \ldots, \mathbf{x}_m]$, where $m = h \times w$. Here, $\mathbf{x}_i$ denotes a support local descriptor in $S_c$, $\hat{S}_c$ represents the set of remaining support descriptors in $S_c$ excluding the current $\mathbf{x}_i$, and $\bar{S}_c$ represents the set of local descriptors from the remaining support classes. Thus, we obtain $m$ $d$-dimensional local descriptors (LDs) for each image in the support class; under the $N$-way $K$-shot setting, there are $N \times K \times m$ $d$-dimensional support LDs in total. Previous methods [19] only considered the average similarity between each LD and the remaining LDs within the same class as the discriminative score. However, our goal is to maintain discriminative relationships not only within the same class but also across other classes. For each $\mathbf{x}_i$, we calculate its average similarity with all other LDs within the same support class, referred to as the intra-class similarity, and its average similarity with LDs from the remaining support classes, referred to as the inter-class similarity. We seek support LDs with high intra-class similarity and low inter-class similarity: high intra-class similarity indicates strong representational capability of the support LD for its own class, while low inter-class similarity signifies poor representational capability for other classes. Support LDs exhibiting both characteristics are discriminative and may incorporate discriminative background features that enhance classification results. Therefore, the intra-class and inter-class similarities are calculated as follows:
$$s_i^{\mathrm{intra}} = \frac{1}{|\hat{S}_c|} \sum_{\mathbf{x}_j \in \hat{S}_c} \cos(\mathbf{x}_i, \mathbf{x}_j), \qquad s_i^{\mathrm{inter}} = \frac{1}{|\bar{S}_c|} \sum_{\mathbf{x}_j \in \bar{S}_c} \cos(\mathbf{x}_i, \mathbf{x}_j) \qquad (2)$$

where $\hat{S}_c$ represents the set of remaining support descriptors in $S_c$ excluding the current $\mathbf{x}_i$ (in the 1-shot case, it corresponds to the remaining local descriptors of the current image), $\bar{S}_c$ denotes the set of local descriptors from the remaining support classes, $s_i^{\mathrm{intra}}$ denotes the intra-class similarity score, and $s_i^{\mathrm{inter}}$ denotes the inter-class similarity score. Furthermore, we normalize these two similarity scores and subsequently calculate their discriminative scores:
$$d_i^{\mathrm{intra}} = \frac{s_i^{\mathrm{intra}}}{\sum_{j} s_j^{\mathrm{intra}}}, \qquad d_i^{\mathrm{inter}} = \frac{s_i^{\mathrm{inter}}}{\sum_{j} s_j^{\mathrm{inter}}} \qquad (3)$$

where $d_i^{\mathrm{intra}}$ denotes the discriminative score of the local descriptor within its own class, and $d_i^{\mathrm{inter}}$ represents its discriminative score across classes.
Based on the above results, we can calculate the two discriminative scores $d_i^{\mathrm{intra}}$ and $d_i^{\mathrm{inter}}$ for each support descriptor in a contrastive manner. Subsequently, an optimized Contrastive Discriminative Score (CDS) can be computed:

$$\mathrm{CDS}_i = \sigma\!\left(\frac{d_i^{\mathrm{intra}}}{d_i^{\mathrm{inter}}}\right) \qquad (4)$$

We can observe that $\mathrm{CDS}_i$ aligns well with our initial idea: a high value indicates that the current local descriptor exhibits high similarity with other local descriptors within the same class and low similarity with local descriptors from other classes; here $\sigma$ denotes the sigmoid function. Furthermore, as illustrated in Fig. 2, based on the descending order of $\mathrm{CDS}$, we select the top $K\%$ of support descriptors with the highest contrastive discriminative scores for each class, forming a discriminative support descriptor set:
$$\tilde{S}_c = \left\{\, \mathbf{x}_i \in S_c \;:\; \mathrm{CDS}_i \text{ ranks in the top } K\% \text{ of class } c \,\right\} \qquad (5)$$
The value of $K$ will be discussed in the ablation studies (Section 3.4). The support descriptors retained by this screening form the set $\tilde{S} = \{\tilde{S}_1, \ldots, \tilde{S}_N\}$.
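The following PyTorch sketch puts Eqs. (2)-(5) together for one episode. Since the exact normalization of Eq. (3) is not fully specified above, the sum-normalization used here is an assumption, as are the tensor layout and the helper name.

```python
import torch
import torch.nn.functional as F

def select_support_descriptors(support_lds: torch.Tensor, top_percent: float):
    """Contrastive discriminative selection over support LDs (Eqs. 2-5).

    support_lds: (N, K*m, d) tensor holding all LDs of each support class.
    Returns per-class indices of the top `top_percent` LDs ranked by CDS.
    """
    n, km, d = support_lds.shape
    x = F.normalize(support_lds, dim=-1)             # unit norm => dot = cosine
    flat = x.reshape(n * km, d)
    sim = flat @ flat.t()                            # pairwise cosine similarities

    cls = torch.arange(n).repeat_interleave(km)      # class id of each LD
    same = cls[:, None] == cls[None, :]
    eye = torch.eye(n * km, dtype=torch.bool)

    intra = (sim * (same & ~eye)).sum(1) / (km - 1)  # Eq. (2), intra-class mean
    inter = (sim * ~same).sum(1) / ((n - 1) * km)    # Eq. (2), inter-class mean

    d_intra = intra / intra.sum()                    # Eq. (3), assumed normalization
    d_inter = inter / inter.sum()
    cds = torch.sigmoid(d_intra / d_inter.clamp_min(1e-8))  # Eq. (4)

    k = max(1, int(top_percent * km))                # keep top K% per class
    return cds.reshape(n, km).topk(k, dim=1).indices # Eq. (5)
```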
2.4 Query Local Descriptors Selection
Given a query image $q$ embedded as $Q = [\mathbf{q}_1, \ldots, \mathbf{q}_m]$, let $\mathbf{q}_i$ denote a query descriptor in $Q$. Previous works [30, 19] employed $k$-NN to select support descriptors from each support class. However, we have observed that, once the discriminative support descriptor set $\tilde{S}_c$ has been computed, it is no longer necessary to use $k$-NN to select support LDs from it. We directly compute the sum of similarities between each query descriptor $\mathbf{q}_i$ and the discriminative support descriptor set $\tilde{S}_c$ of each support class $c$:

$$s_{i,c} = \sum_{\mathbf{x}_j \in \tilde{S}_c} \cos(\mathbf{q}_i, \mathbf{x}_j) \qquad (6)$$
where $c$ denotes a support class and $\mathbf{x}_j$ is one discriminative support LD from the discriminative support descriptor set $\tilde{S}_c$ of class $c$. Similarly, the discriminative score for each query descriptor is calculated as:

$$d_i = \frac{\max_{c}\, s_{i,c}}{\sum_{c=1}^{N} s_{i,c}} \qquad (7)$$
Previous works [14, 16, 4, 30, 19] selected query descriptors either with a fixed threshold or by taking the top-$k$ query descriptors with the highest similarity. However, both strategies generalize poorly, as they may overlook some discriminative LDs. Thus, inspired by [4, 30, 19, 8], we employ a network consisting of two fully connected layers, $f_{\mathrm{MLP}}$, to adaptively predict a threshold for each query descriptor, and we use the predicted threshold to learn a query descriptor weight map $w$. We feed the discriminative support descriptor set $\tilde{S}$ and the query descriptor $\mathbf{q}_i$ into $f_{\mathrm{MLP}}$, ultimately predicting the threshold $\beta_i$:

$$\beta_i = f_{\mathrm{MLP}}(\mathbf{q}_i, \tilde{S}) \qquad (8)$$
where $\mathbf{q}_i$ denotes a query LD and $\lambda$ is a scaling factor. The query descriptor weight map is finally calculated as:

$$w_i = \frac{1}{1 + \exp\!\big(-\lambda\,(d_i - \beta_i)\big)} \qquad (9)$$

When $\lambda$ is sufficiently large and $d_i > \beta_i$, the value of $w_i$ approximates 1; conversely, it approximates 0.
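A minimal PyTorch sketch of Eqs. (6)-(9) follows. The inputs fed to $f_{\mathrm{MLP}}$ and the exact form of the query score in Eq. (7) are assumptions (we feed the per-class similarity vector of each query LD and use a max-over-sum score in the style of ATL-Net [4]); the scaling factor `lam` and the hidden width are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ThresholdMLP(nn.Module):
    """Two fully connected layers predicting a per-query-LD threshold (Eq. 8)."""
    def __init__(self, n_way: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_way, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, class_sims):                    # (m, N) similarities
        return torch.sigmoid(self.net(class_sims)).squeeze(-1)  # beta in (0, 1)

def query_weights(query_lds, selected_support, mlp, lam=20.0):
    """Per-class similarities, query scores, and weight map (Eqs. 6-9).

    query_lds: (m, d); selected_support: list of N tensors of shape (k_c, d).
    """
    q = F.normalize(query_lds, dim=-1)
    sims = torch.stack([(q @ F.normalize(s, dim=-1).t()).sum(1)
                        for s in selected_support], dim=1)    # (m, N), Eq. (6)
    score = sims.max(1).values / sims.sum(1).clamp_min(1e-8)  # Eq. (7)
    beta = mlp(sims)                                          # Eq. (8)
    w = torch.sigmoid(lam * (score - beta))                   # Eq. (9)
    return sims, w
```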
Therefore, we can utilize $w$ to select query LDs. The similarity score between each query image $q$ and each support class $c$ is calculated as:

$$\mathrm{Sim}(q, c) = \sum_{i=1}^{m} w_i \cdot s_{i,c} \qquad (10)$$
The cross-entropy loss is used to meta-train the network:

$$p(c \mid q) = \frac{\exp\big(\mathrm{Sim}(q, c)\big)}{\sum_{c'=1}^{N} \exp\big(\mathrm{Sim}(q, c')\big)} \qquad (11)$$

$$\mathcal{L} = -\frac{1}{|\mathcal{Q}|} \sum_{q \in \mathcal{Q}} \log p\big(y_q \mid q\big) \qquad (12)$$
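For completeness, a short sketch of Eqs. (10)-(12) under the standard softmax cross-entropy assumption, reusing `sims` and `w` from the sketch above:

```python
import torch
import torch.nn.functional as F

def classify_and_loss(sims: torch.Tensor, w: torch.Tensor, label: int):
    """Weighted image-to-class logits and cross-entropy loss (Eqs. 10-12).

    sims: (m, N) per-descriptor class similarities (Eq. 6);
    w: (m,) query-descriptor weight map (Eq. 9); label: true class index.
    """
    logits = (w.unsqueeze(1) * sims).sum(dim=0)               # Eq. (10)
    loss = F.cross_entropy(logits.unsqueeze(0),
                           torch.tensor([label]))             # Eqs. (11)-(12)
    return logits, loss
```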
3 EXPERIMENTS
In this section, we validate the effectiveness of our proposed method on several few-shot benchmark datasets and compare it with other state-of-the-art LDs-based methods. Additionally, we compare our method with few-shot methods that use backbones with different parameter counts. Furthermore, we conduct ablation experiments to further analyze and validate the effectiveness of our proposed method.
3.1 Datasets
miniImageNet[25] is a subset of ImageNet [3]. It contains 100 classes, each with 600 images of size 84×84 pixels, and is divided into a training set with 64 classes, a validation set with 16 classes, and a test set with 20 classes.
tieredImageNet[20] is another subset of ImageNet. It comprises 608 classes with 779,165 images in total, divided into 351 classes for training, 97 for validation, and 160 for testing.
CUB-200[26] is a fine-grained dataset of 11,788 bird images encompassing 200 bird species. We partition its classes into disjoint training, validation, and test splits. For this fine-grained dataset, we resize the images to the same size as miniImageNet, i.e., 84×84 pixels.
Table 2: 5-way 1-shot and 5-shot classification accuracy (%) on the fine-grained CUB-200 dataset.

Method | Conv-4 1-shot | Conv-4 5-shot | ResNet-12 1-shot | ResNet-12 5-shot
ProtoNet[23] | 63.73 | 81.50 | 66.09 | 82.50
DSN[22] | 66.01 | 85.41 | 80.80 | 91.19
FRN[28] | 73.48 | 88.43 | 83.16 | 92.59
Meta-OLE[27] | 71.32 | 86.11 | - | -
Approximate GAP[12] | 43.77 | 62.92 | - | -
GAP[12] | 44.74 | 64.88 | - | -
DeepEMD[31] | - | - | 77.14 | 88.98
DN4[14] | 73.42 | 90.38 | - | -
DMN4[16] | 78.36 | 92.16 | - | -
TADNet[30] | 82.47 | 93.36 | 87.62 | 94.80
TCDSNet(ours) | 82.73 | 95.04 | 88.71 | 95.82
Table 3: 5-way 5-shot accuracy (%) on miniImageNet and CUB-200 for different values of K, with Conv-4 (left) and ResNet-12 (right) backbones.

K | miniImageNet | CUB-200 | K | miniImageNet | CUB-200
1% | 74.94 | 90.11 | 3% | 83.92 | 89.21
2% | 75.89 | 90.23 | 5% | 85.12 | 89.25
5% | 74.23 | 92.37 | 10% | 84.39 | 92.33
10% | 72.02 | 95.04 | 25% | 84.41 | 95.82
30% | 71.11 | 94.57 | 30% | 83.83 | 94.29
3.2 Implementation Details
Model architecture. We use Conv-4 and ResNet-12 as the feature extraction network $f_\theta$, as in previous work [14, 4, 30, 19]. Conv-4 consists of 4 convolutional blocks, each containing a convolutional layer, a batch normalization layer, and a Leaky ReLU layer. ResNet-12 is composed of 4 residual blocks, each consisting of 3 convolutional layers with 3×3 kernels, 3 batch normalization layers, 3 Leaky ReLU layers, and a max-pooling layer. Conv-4 and ResNet-12 generate $d \times h \times w$ feature maps for input images, whose spatial positions serve as local descriptors. These feature maps are then mapped through a transformation layer consisting of a convolutional layer, a batch normalization layer, and a Leaky ReLU layer. Finally, $f_{\mathrm{MLP}}$ is implemented with two fully connected layers.
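For reference, a minimal PyTorch sketch of a Conv-4-style backbone is given below. The channel widths, the LeakyReLU slope, and which blocks apply max-pooling are assumptions in the spirit of DN4-style backbones, not the paper's exact specification.

```python
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int, pool: bool = True) -> nn.Sequential:
    layers = [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
              nn.BatchNorm2d(out_ch),
              nn.LeakyReLU(0.2)]
    if pool:
        layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

# Pooling only in the first two blocks keeps a large spatial grid of LDs.
conv4 = nn.Sequential(
    conv_block(3, 64), conv_block(64, 64),
    conv_block(64, 64, pool=False), conv_block(64, 64, pool=False))
```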
Training and evaluation details. During the meta-training phase, we follow the settings in [16, 30, 19]. For Conv-4, we train for 30 epochs with the Adam optimizer, decaying the learning rate at fixed epoch intervals. For ResNet-12, we first pre-train it and then conduct meta-training for 40 epochs using momentum SGD, likewise decaying the initial learning rate at fixed epoch intervals. During testing, as in [16, 30, 19], we randomly construct episodes from the test set to calculate the classification accuracy; this process is repeated five times, and we report the average accuracy along with a confidence interval.
3.3 Comparisons with State-of-the-art Methods
We choose generic state-of-the-art few-shot learning baselines [25, 23, 24, 28, 27, 12], as well as SOTA baselines based on LDs [31, 14, 16, 4, 30]. For the fine-grained dataset, we also select state-of-the-art baselines [23, 14, 22, 12, 27, 31, 28, 16, 30].
Results on miniImageNet dataset. As shown in Table 1, the performance of our method in the 5-way 1-shot and 5-shot settings exceeds that of all current LDs-based methods [31, 14, 16, 4, 30]. Compared to the baseline DN4, our method exhibits significant improvement: in the 5-way 1-shot and 5-shot settings with Conv-4 as the backbone, it achieves improvements of 5.90% and 4.87%, respectively, and it surpasses the state-of-the-art LDs-based method (TADNet) by 1.00% and 1.21%. With ResNet-12 as the backbone, improvements over DN4 of 3.18% and 4.02% are achieved, surpassing the SOTA by 1.27% and 0.89%, respectively.
Results on tieredImageNet dataset. As shown in Table 1, our method also outperforms the current state-of-the-art LDs-based methods. In the 5-way 1-shot and 5-shot settings with Conv-4 as the backbone, our method improves by 0.79% and 0.08%, respectively, over the best LDs-based method (TADNet). With ResNet-12 as the backbone, our method improves by 0.33% and 0.89%, respectively, over the best LDs-based methods (DMN4 and TADNet).
Results on fine-grained CUB-200 dataset. As shown in Table 2, our method also achieves state-of-the-art performance on the fine-grained dataset. In the 5-way 1-shot and 5-shot settings with Conv-4 as the backbone, our method improves by 0.26% and 1.68%, respectively, over the best LDs-based method (TADNet); with ResNet-12 as the backbone, the improvements are 1.09% and 1.02%, respectively.
Table 4: Comparison with methods using backbones of different sizes on miniImageNet (5-way 1-shot and 5-shot accuracy, %).

Method | Backbone | Params | 1-shot | 5-shot
CTM[13] | ResNet-18 | 11.7 M | 64.12±0.82 | 80.51±0.13
Neg-Cosine[15] | ResNet-18 | 11.7 M | 62.33±0.82 | 80.94±0.59
UniSiam+dist[17] | ResNet-18 | 11.7 M | 64.10±0.36 | 82.26±0.25
Meta-OLE[27] | WRN-28-10 | 36.5 M | 75.22±0.30 | 86.12±0.28
MetaQDA[32] | WRN-28-10 | 36.5 M | 67.83±0.64 | 84.28±0.69
OM[18] | WRN-28-10 | 36.5 M | 66.78±0.30 | 85.29±0.41
FewTURE[7] | ViT-Small | 22 M | 68.02±0.88 | 84.51±0.53
FewTURE[7] | Swin-Tiny | 29 M | 72.40±0.78 | 86.38±0.49
TCDSNet(ours) | ResNet-12 | 12.4 M | 68.53±0.19 | 85.12±0.42
3.4 Ablation Studies
Influence of top $K$ in support LDs selection. In subsection 2.3, we selected the top $K\%$ of LDs (with $K$ expressed as a percentage) based on $\mathrm{CDS}$ for each support class to form the discriminative LDs set. As shown in Table 3, we conducted experiments on the miniImageNet and CUB-200 datasets under the 5-way 5-shot setting. With Conv-4 as the backbone, we set $K$ to 1%, 2%, 5%, 10%, and 30%; with ResNet-12, we set $K$ to 3%, 5%, 10%, 25%, and 30%. We found that with Conv-4 the performance is best at $K = 2\%$ on miniImageNet and $K = 10\%$ on CUB-200, and with ResNet-12 it is best at $K = 5\%$ on miniImageNet and $K = 25\%$ on CUB-200. The experimental results indicate that, compared to general datasets, fine-grained datasets require more discriminative LDs. The same trend holds under the 5-way 1-shot setting, where the best value of $K$ on the fine-grained dataset is likewise larger than on the general dataset for both backbones.
Comparison with methods using backbones with different parameters. As shown in Table 4, we select three baselines [13, 17, 15] using ResNet-18 as the backbone, three baselines [27, 32, 18] using WRN-28-10, and FewTURE [7] using ViT-Small and Swin-Tiny; none of these are LDs-based baselines. Compared to the ResNet-18 baselines, our method outperforms the best-performing method by 4.41% in the 1-shot setting (vs. CTM) and 2.86% in the 5-shot setting (vs. UniSiam+dist). Compared with the WRN-28-10 baselines, our method achieves a 0.70% improvement over MetaQDA [32] in the 1-shot setting and is only 0.17% lower than OM [18] in the 5-shot setting, despite WRN-28-10 having roughly three times the parameters of ResNet-12. Compared to FewTURE with ViT-Small as the backbone, our method achieves improvements of 0.51% and 0.61%, and it is only 1.26% lower than FewTURE with Swin-Tiny in the 5-shot setting, although Swin-Tiny has about 2.3 times the parameters of ResNet-12. Additionally, FewTURE's ViT-Small and Swin-Tiny models were trained using 4 and 8 Nvidia A100 40GB GPUs, respectively, making their GPU requirements relatively high.
4 CONCLUSION
We propose a novel Task-Aware Contrastive Discriminative Local Descriptor Selection Network (TCDSNet), which utilizes a novel contrastive discriminative measure to filter discriminative local descriptors from the support classes and then selects discriminative query local descriptors against the filtered support descriptors, ensuring that the chosen query local descriptors are task-relevant. Extensive experiments validate the superiority and effectiveness of our proposed method. We anticipate that TCDSNet will provide a new perspective for research on few-shot learning based on local descriptors.
Acknowledgment
This work was supported in part by the National Key R&D Program of China (2018YFA0701700, 2018YFA0701701) and by the National Natural Science Foundation of China under Grant Nos. 61672364, 62176172, and 62002253.
References
- [1] Antoniou, A., Edwards, H., Storkey, A.: How to train your maml. arXiv preprint arXiv:1810.09502 (2018)
- [2] Antoniou, A., Storkey, A., Edwards, H.: Data augmentation generative adversarial networks. arXiv preprint arXiv:1711.04340 (2017)
- [3] Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: A large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition. pp. 248–255. IEEE (2009)
- [4] Dong, C., Li, W., Huo, J., Gu, Z., Gao, Y.: Learning task-aware local representations for few-shot learning. In: Proceedings of the Twenty-Ninth International Conference on International Joint Conferences on Artificial Intelligence. pp. 716–722 (2021)
- [5] Finn, C., Abbeel, P., Levine, S.: Model-agnostic meta-learning for fast adaptation of deep networks. In: International conference on machine learning. pp. 1126–1135. PMLR (2017)
- [6] He, F., Li, G., Zhang, M., Yan, L., Si, L., Li, F.: Freestyle: Free lunch for text-guided style transfer using diffusion models (2024)
- [7] Hiller, M., Ma, R., Harandi, M., Drummond, T.: Rethinking generalization in few-shot classification. Advances in Neural Information Processing Systems 35, 3582–3595 (2022)
- [8] Huang, S., Cao, Z., Qin, L., Gao, J., Zhang, J.: Contrastive learning with high-quality and low-quality augmented data for query-focused summarization. In: ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). pp. 11536–11540. IEEE (2024)
- [9] Huang, S., Qin, L., Cao, Z.: Diffusion language model with query-document relevance for query-focused summarization. In: Findings of the Association for Computational Linguistics: EMNLP 2023. pp. 11020–11030 (2023)
- [10] Jiang, M., Li, F.: Lie group continual meta learning algorithm. Applied Intelligence 52(10), 10965–10978 (2022)
- [11] Jiang, M., Li, F., Liu, L.: Continual meta-learning algorithm. Applied Intelligence pp. 1–16 (2022)
- [12] Kang, S., Hwang, D., Eo, M., Kim, T., Rhee, W.: Meta-learning with a geometry-adaptive preconditioner. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 16080–16090 (2023)
- [13] Li, H., Eigen, D., Dodge, S., Zeiler, M., Wang, X.: Finding task-relevant features for few-shot learning by category traversal. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. pp. 1–10 (2019)
- [14] Li, W., Wang, L., Xu, J., Huo, J., Gao, Y., Luo, J.: Revisiting local descriptor based image-to-class measure for few-shot learning. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. pp. 7260–7268 (2019)
- [15] Liu, B., Cao, Y., Lin, Y., Li, Q., Zhang, Z., Long, M., Hu, H.: Negative margin matters: Understanding margin in few-shot classification. In: ECCV (2020)
- [16] Liu, Y., Zheng, T., Song, J., Cai, D., He, X.: Dmn4: Few-shot learning via discriminative mutual nearest neighbor neural network. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 36, pp. 1828–1836 (2022)
- [17] Lu, Y., Wen, L., Liu, J., Liu, Y., Tian, X.: Self-supervision can be a good few-shot learner. In: European Conference on Computer Vision. pp. 740–758. Springer (2022)
- [18] Qi, G., Yu, H., Lu, Z., Li, S.: Transductive few-shot classification on the oblique manifold. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 8412–8422 (2021)
- [19] Qiao, Q., Xie, Y., Zeng, Z., Li, F.: Talds-net: Task-aware adaptive local descriptors selection for few-shot image classification. In: ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). pp. 3750–3754. IEEE (2024)
- [20] Ren, M., Triantafillou, E., Ravi, S., Snell, J., Swersky, K., Tenenbaum, J.B., Larochelle, H., Zemel, R.S.: Meta-learning for semi-supervised few-shot classification. arXiv preprint arXiv:1803.00676 (2018)
- [21] Schwartz, E., Karlinsky, L., Shtok, J., Harary, S., Marder, M., Kumar, A., Feris, R., Giryes, R., Bronstein, A.: Delta-encoder: an effective sample synthesis method for few-shot object recognition. Advances in neural information processing systems 31 (2018)
- [22] Simon, C., Koniusz, P., Nock, R., Harandi, M.: Adaptive subspaces for few-shot learning. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. pp. 4136–4145 (2020)
- [23] Snell, J., Swersky, K., Zemel, R.: Prototypical networks for few-shot learning. Advances in neural information processing systems 30 (2017)
- [24] Sung, F., Yang, Y., Zhang, L., Xiang, T., Torr, P.H., Hospedales, T.M.: Learning to compare: Relation network for few-shot learning. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 1199–1208 (2018)
- [25] Vinyals, O., Blundell, C., Lillicrap, T., Wierstra, D., et al.: Matching networks for one shot learning. Advances in neural information processing systems 29 (2016)
- [26] Wah, C., Branson, S., Welinder, P., Perona, P., Belongie, S.: The caltech-ucsd birds-200-2011 dataset (2011)
- [27] Wang, Z., Lu, Y., Qiu, Q.: Meta-ole: Meta-learned orthogonal low-rank embedding. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. pp. 5305–5314 (2023)
- [28] Wertheimer, D., Tang, L., Hariharan, B.: Few-shot classification with feature map reconstruction networks. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. pp. 8012–8021 (2021)
- [29] Xian, Y., Sharma, S., Schiele, B., Akata, Z.: f-vaegan-d2: A feature generating framework for any-shot learning. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. pp. 10275–10284 (2019)
- [30] Yan, L., Li, F., Zheng, X., Zhang, L.: Few-shot learning via task-aware discriminant local descriptors network. In: Proceedings of the 32nd ACM International Conference on Information and Knowledge Management. pp. 2887–2894 (2023)
- [31] Zhang, C., Cai, Y., Lin, G., Shen, C.: Deepemd: Few-shot image classification with differentiable earth mover’s distance and structured classifiers. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. pp. 12203–12213 (2020)
- [32] Zhang, X., Meng, D., Gouk, H., Hospedales, T.M.: Shallow bayesian meta learning for real-world few-shot recognition. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 651–660 (2021)