Anjan Dutta
[email protected]
Zeynep Akata
Cluster of Excellence Machine Learning, University of Tübingen, Tübingen AI Center, 72076 Tübingen, Germany
[email protected]
Semantically Tied Paired Cycle Consistency for Any-Shot Sketch-based Image Retrieval
Abstract
Low-shot sketch-based image retrieval is an emerging task in computer vision which aims to retrieve natural images relevant to hand-drawn sketch queries from classes that are rarely (or never) seen during the training phase. Related prior work either requires aligned sketch-image pairs that are costly to obtain, or relies on memory-inefficient fusion layers for mapping the visual information to a semantic space. In this paper, we address any-shot, i.e. zero-shot and few-shot, sketch-based image retrieval (SBIR), and in doing so introduce the few-shot setting for SBIR. For solving these tasks, we propose a semantically aligned paired cycle-consistent generative adversarial network (SEM-PCYC) for any-shot SBIR, where each branch of the generative adversarial network maps the visual information from sketch and image to a common semantic space via adversarial training. Each of these branches maintains a cycle consistency that only requires supervision at the category level, and avoids the need of aligned sketch-image pairs. A classification criterion on the generators’ outputs ensures that the visual-to-semantic mapping is class-specific. Furthermore, we propose to combine textual and hierarchical side information via an auto-encoder that selects discriminating side information within the same end-to-end model. Our results demonstrate a significant boost in any-shot SBIR performance over the state-of-the-art on the extended versions of the challenging Sketchy, TU-Berlin and QuickDraw datasets.
1 Introduction
Matching natural images with free-hand sketches, i.e. sketch-based image retrieval (SBIR) Yu2015 ; Yu2016 ; Liu2017DSH ; Pang2017FGSBIR ; Song2017SpatSemAttn ; Shen2018ZSIH ; Zhang2018GDH ; Chen2018DeepSB3DSR ; Yelamarthi2018ZSBIR ; Dutta2019SEM-PCYC ; Dey2019doodle2search , has received a lot of attention. Since sketches can effectively express the shape, pose and some fine-grained details of the target images, SBIR offers a favorable scenario complementary to conventional text-image cross-modal retrieval and the classical content-based image retrieval protocol: in some situations it is difficult to provide a textual description or a suitable example image of the desired query, whereas a user can easily draw a sketch of the desired object on a touch screen.

Existing SBIR methods perform well when the training and test classes overlap, as the visual information from all classes is explored by the system during training Zhang2018GDH . Since for practical applications there is no guarantee that the training data will include all possible queries, a more realistic setting is low-shot or any-shot SBIR (AS-SBIR) Shen2018ZSIH ; Yelamarthi2018ZSBIR ; Dutta2019SEM-PCYC ; Dey2019doodle2search , which combines zero- and few-shot learning Lampert2014ZSL ; Vinyals2016MatchingNet ; Xian2018ZSLGBU ; Ravi2017FSL and SBIR into a single task, where the aim is an accurate class prediction and a competent retrieval performance. However, this is an extremely challenging task, as it simultaneously deals with the domain gap, intra-class variability and limited or no knowledge of novel classes. Additionally, fine-grained SBIR Pang2017FGSBIR ; Pang2019GenFGSBIR is an alternative sketch-based image retrieval task that allows searching for specific object images, and it has already received remarkable attention in the computer vision community. However, it has never been explored in the low-shot setting, which is extremely challenging and at the same time of high practical relevance.
One of the major shortcomings of the prior work on any-shot SBIR is that a natural image is retrieved after learning a mapping from an input sketch to an output image using a training set of labelled, aligned pairs Yelamarthi2018ZSBIR . The supervision provided by the pair correspondence enhances the correlation of multi-modal data (here, sketch and image) so that learning can be guided by semantics. However, for many realistic scenarios, paired (aligned) training data is either unavailable or very expensive to obtain. Furthermore, a joint representation of two or more modalities is often learned by using a fusion layer Shen2018ZSIH , such as tensor fusion Hu2017TFN or bilinear pooling Yu2017MFBP . These fusion layers are often expensive in terms of memory Yu2017MFBP , and extracting useful information from the resulting high dimensional space can cause information loss Yu2018MHBN .
To alleviate these shortcomings, we propose a semantically aligned paired cycle consistent generative adversarial network (SEM-PCYC) model for the any-shot SBIR task, where each branch maps either the sketch or the image features to a common semantic space via adversarial training. These two branches dealing with the two modalities (sketch and image) constitute an essential component for solving the SBIR task. The cycle consistency constraint on each branch guarantees that sketch or image features mapped to the common semantic space can be translated back to the original modality, avoiding the necessity of aligned sketch-image pairs. Imposing a classification loss on the semantically aligned outputs from the sketch and image space enforces the generated features in the semantic space to be discriminative, which is crucial for effective any-shot SBIR. Furthermore, inspired by previous work on label embedding Akata2015OutputEmbedding , we propose to combine side information from text-based and hierarchical models via a feature selection auto-encoder Wang2017FSAE which selects discriminating side information based on intra- and inter-class covariance.
This paper extends our CVPR 2019 conference paper Dutta2019SEM-PCYC with the following additional contributions: (1) We propose to apply the SEM-PCYC model to the any-shot SBIR task, i.e., in addition to the zero-shot paradigm, we introduce the few-shot setting for SBIR and combine it with the generalized setting, which we show experimentally to be effective for difficult or confusing classes. (2) We adapt the recent zero-shot SBIR models and ours to fine-grained SBIR in the generalized low-shot setting and provide an extensive benchmark including quantitative and qualitative evaluations. (3) We evaluate our model on a recent dataset, QuickDraw, in addition to extending our experiments to new settings on Sketchy and TU-Berlin. We show that our proposed model consistently improves the state-of-the-art results of any-shot SBIR on all three datasets.
2 Related Work
As our work lies at the intersection of sketch-based image retrieval and any-shot learning, we briefly review the relevant literature from these fields.
Sketch Based Image Retrieval (SBIR). Attempts at solving the SBIR task mostly focus on bridging the domain gap between sketch and image, and can roughly be grouped into hand-crafted and cross-domain deep learning-based methods Liu2017DSH . Hand-crafted methods mostly work by extracting an edge map from the natural image and then matching it to the sketch using a Bag-of-Words model on top of specifically designed SBIR features, viz., gradient field HOG Hu2013 , histogram of oriented edges Saavedra2014SoftComp , learned key shapes Saavedra2015LKS etc. However, the difficulty of reducing the domain gap remained unresolved, as it is extremely challenging to match edge maps with unaligned hand-drawn sketches. This domain shift issue is further addressed by neural network models where domain-transferable features from sketch to image are learned in an end-to-end manner. The majority of such models use variants of siamese networks Qi2016SBIRSiamese ; Sangkloy2016 ; Yu2016 ; Song2017FineGrained that are suitable for cross-modal retrieval. These frameworks use either generic ranking losses, viz., contrastive loss Chopra2005 and triplet ranking loss Sangkloy2016 , or a more sophisticated HOLEF based loss Song2017SpatSemAttn . Beyond these discriminative losses, Pang et al. Pang2017FGSBIR introduced a discriminative-generative hybrid model for preserving all the domain-invariant information useful for reducing the domain gap between sketch and image. Alternatively, Liu2017DSH ; Zhang2018GDH focus on learning cross-modal hash codes for category-level SBIR within an end-to-end deep model.
In addition to the above coarse-grained SBIR models, fine-grained sketch-based image retrieval (FG-SBIR) has gained popularity recently Li2014FGSBIRDeformPart ; Song2017FineGrained ; Song2017SpatSemAttn ; Pang2017FGSBIR . In this more realistic setting, a FG-SBIR model allows searching for a specific object or image. Early models tackled this task using a deformable part model and graph matching Li2014FGSBIRDeformPart . More recently, different ranking frameworks and corresponding losses, such as siamese Pang2017FGSBIR , triplet Sangkloy2016 and quadruplet Song2017FineGrained networks, were used for the same purpose. Song2017SpatSemAttn proposed an attention model for the FG-SBIR task, while Zhang2018GDH improved retrieval efficiency using a hashing scheme.
Zero-Shot Learning (ZSL) and Few-Shot Learning (FSL). Zero-shot learning in computer vision refers to recognizing objects whose instances are not seen during the training phase; a comprehensive and detailed survey on ZSL is available in Xian2018ZSLGBU . Early works on ZSL Lampert2014ZSL ; Jayaraman2014ZSR ; Changpinyo2016ZSL ; Al-Halah2016ZSL make use of attributes within a two-stage approach to infer the label of an image that belongs to the unseen classes. More recent works Frome2013Devise ; Romera-Paredes2015ESA ; Akata2015OutputEmbedding ; Akata2016LabelEmbedding ; Kodirov2017SAE directly learn a mapping from the image feature space to a semantic space. Many other ZSL approaches learn a non-linear multi-modal embedding Socher2013ZSLCrossModalT ; Akata2016LabelEmbedding ; Xian2016ZSLLatentEmbedding ; Changpinyo2017ZSL ; Zhang2017ZSLDeepEmbedding , where most of the methods focus on learning a non-linear mapping from the image space to the semantic space. Mapping both image and semantic features into a common intermediate space is another direction that ZSL approaches adopt Zhang2015ZSLSemSim ; Fu2015ZSOR ; Zhang2016ZSLJointLatentSim ; Akata2016ZSLSS ; Long2017ZSL . Although most of the deep neural network models in this domain are trained using a discriminative loss function, a few generative models also exist Wang2018ZSL ; Xian2018ZSL ; Chen2018ZSVR that are used as a data augmentation mechanism. In ZSL, some form of side information is required so that the knowledge learned from seen classes gets transferred to unseen classes. One popular form of side information is attributes Lampert2014ZSL , which, however, require costly expert annotation. Thus, a large group of studies Mensink2014COSTA ; Akata2015OutputEmbedding ; Xian2016ZSLLatentEmbedding ; Reed2016LDR ; Qiao2016LiM ; Ding2017 utilize other auxiliary information, such as text-based Mikolov2013a or hierarchical models Miller1995WN , for label embedding.
On the other hand, few-shot learning (FSL) refers to the task of recognizing images or detecting objects with a model trained on very few samples Xian2019VAEGAND2 ; Schonfeld2018GenZFSL . Directly training a model with a small amount of training samples carries the risk of overfitting. Hence, a common strategy to overcome this hurdle is to initially train the model on classes with sufficient examples, and then generalize it to classes with fewer examples without learning any new parameters. This setup has already attracted a lot of attention within the computer vision community. One of the first attempts Koch2015SiameseOneShot is a siamese convolutional network model for computing the similarity between a pair of images, where the learned similarity is used to solve the one-shot problem via k-nearest neighbour classification. The matching network model Vinyals2016MatchingNet uses cosine distance to predict image labels based on support sets and applies an episodic training strategy that mimics few-shot learning. As an extension, the prototypical network Snell2017PrototypNet uses Euclidean distance instead of cosine distance and builds a prototype representation of each class for the few-shot learning scenario. In an orthogonal direction, Ravi2017FSL introduced a meta-learning framework for FSL, which updates the weights of a classifier for a given episode. The model-agnostic meta-learner Finn2017MAML learns a weight initialization capable of generalizing in the FSL scenario with fewer gradient descent steps. There also exist a few low-shot methods that learn a generator from the base class data to generate novel class features for data augmentation Girshick2015 ; Wang2018LowShotImaginaryData . Alternatively, GNNs Kipf2016 were also proposed as a framework for the few-shot learning task Garcia2018 .
Our work. The prior work on zero-shot sketch-based image retrieval (ZS-SBIR) Shen2018ZSIH proposed a generative cross-modal hashing scheme using a graph convolution network for aligning the sketch and image in the semantic space. Inspired by this, Yelamarthi2018ZSBIR proposed two similar autoencoder-based generative models for zero-shot SBIR, which use aligned pairs of sketch and image for learning the semantics between them. In this work, we propose a paired cycle consistent generative model where each branch maps either sketch or image features to a common semantic space via adversarial training, which we found to be effective for reducing the domain gap between sketch and image. The cycle consistency constraint on each branch allows supervision only at the category level, and avoids the need of aligned sketch-image pairs. Furthermore, we address zero-shot and few-shot cross-modal (sketch to image) retrieval; to that end, we effectively combine different side information within an end-to-end framework, and map visual information to the semantic space through adversarial training. Finally, we unify low-shot learning models and generalize them to the fine-grained SBIR scenario.
3 Semantically Aligned Paired Cycle Consistent GAN (SEM-PCYC)

Our Semantically Aligned Paired Cycle Consistent GAN (SEM-PCYC) model uses the sketch and image data from the seen categories for training the underlying model. It then encodes and matches the sketch and image categories that remain novel during the training phase. The overall pipeline of our end-to-end deep architecture is shown in Fig. 2.
We define the training set as a collection of sketch-image data from the training (seen) categories, containing sketches as well as natural images that are not necessarily aligned as pairs. Without loss of generality, a sketch and an image with the same index share the same category label. A set of side information is used for transferring knowledge from the seen to the novel classes (a.k.a. unseen classes in the zero-shot learning literature). In our setting, we also use an auxiliary training set from the unseen classes, disjoint from the main training set, where the number of samples per class is fixed to k.

Our aim is to learn two deep functions, one for sketch and one for image, that map their inputs to a common semantic space, so that the learned knowledge can be applied to the novel classes. Given a second set from the test categories, the proposed deep networks map the sketch and the natural image from their original feature space to a common low-dimensional semantic representation in which the retrieval is performed. Depending on k, i.e. the number of samples per class considered as an auxiliary set, the scenario is called k-shot. In the classical zero-shot sketch-based image retrieval setting, the test categories are disjoint from the training categories; in other words, at test time the assumption is that every image comes from a previously unseen class. This is not realistic, as the true generalization performance of the model can only be measured by how well it generalizes to unseen classes without forgetting the classes it has seen. Hence, in the generalized zero-shot sketch-based image retrieval scenario, the search space contains both seen and unseen classes; that is, at test time an image may come either from a previously seen or an unseen class. As this setting is significantly more challenging, the accuracy decreases for all the methods considered.
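To make the protocol concrete, the following minimal sketch illustrates how a class-disjoint seen/unseen split, a k-shot auxiliary set and a (generalized) retrieval gallery could be constructed; the function names, the random split and the sample format are illustrative assumptions, not the exact data pipeline of our code.

```python
import random
from collections import defaultdict

def make_splits(samples, num_unseen=25, k=0, seed=0):
    """samples: list of (path, label, modality) with modality in {'sketch', 'image'}.
    Returns the seen-class training pool, a k-shot auxiliary pool and the test pool."""
    rng = random.Random(seed)
    classes = sorted({lbl for _, lbl, _ in samples})
    unseen = set(rng.sample(classes, num_unseen))            # novel / test categories
    seen = [c for c in classes if c not in unseen]

    by_class = defaultdict(list)
    for s in samples:
        by_class[s[1]].append(s)

    train = [s for c in seen for s in by_class[c]]           # seen-class data for training
    aux, test = [], []
    for c in unseen:                                         # k-shot auxiliary set (k=0 -> zero-shot)
        pool = by_class[c][:]
        rng.shuffle(pool)
        aux.extend(pool[:k])
        test.extend(pool[k:])
    return train, aux, test

def gallery(train, test, generalized=False):
    """Retrieval gallery: unseen-class images only, or seen + unseen in the generalized setting."""
    images = [s for s in test if s[2] == 'image']
    if generalized:
        images += [s for s in train if s[2] == 'image']
    return images
```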
3.1 Paired Cycle Consistent Generative Model
To achieve the flexibility to handle sketch and image individually during training, i.e. even without aligned sketch-image pairs, we propose a cycle consistent generative model whose two branches are semantically aligned through a common discriminator. The cycle consistency constraint on each branch of the model ensures that sketch or image features mapped to the common semantic space can be translated back to the original modality, which only requires supervision at the category level. Imposing a classification loss on the outputs of the two semantic mappings allows generating highly discriminative features.
Our main goal is to learn two mappings that translate the unaligned sketch and natural image representations to a common semantic space. Zhu et al. Zhu2017CycleGAN pointed out the existence of an underlying intrinsic relationship between modalities and domains; for example, a sketch and an image of the same object category share the same semantic meaning and thus possess such a relationship. Even though we lack visual supervision, as we do not have access to aligned pairs, we can exploit semantic supervision at the category level. We train a mapping from sketch features to the semantic space so that its output is indistinguishable from the corresponding side information of the category, via an adversarial discriminator that tries to tell the generated semantic representation apart from the original one. The optimal mapping thereby translates the sketch modality into a semantic representation that is identically distributed to the side information. Similarly, a second mapping from image features to the semantic space can be trained via the same discriminator.
Adversarial Loss. As shown in Fig. 2, for mapping the sketch and image representations to a common semantic space, we introduce four generators: $G_{sk \to se}$ and $G_{im \to se}$, which respectively map sketch and image features to the semantic space, and $G_{se \to sk}$ and $G_{se \to im}$, which map semantic features back to the sketch and image feature spaces. In addition, we bring in three adversarial discriminators: $D_{se}$, $D_{sk}$ and $D_{im}$, where $D_{se}$ discriminates among original side information, sketch features transformed to side information and image features transformed to side information; likewise $D_{sk}$ discriminates between original sketch representations and side information transformed to sketch representations; in a similar way $D_{im}$ distinguishes between original image representations and side information transformed to image representations. For the generators $G_{sk \to se}$, $G_{im \to se}$ and their common discriminator $D_{se}$, the objective is:

$\mathcal{L}_{adv}^{se}(G_{sk \to se}, G_{im \to se}, D_{se}) = \mathbb{E}_{s}\big[\log D_{se}(s)\big] + \mathbb{E}_{x_{sk}}\big[\log\big(1 - D_{se}(G_{sk \to se}(x_{sk}))\big)\big] + \mathbb{E}_{x_{im}}\big[\log\big(1 - D_{se}(G_{im \to se}(x_{im}))\big)\big]$   (1)

where $G_{sk \to se}$ and $G_{im \to se}$ generate side information similar to the original side information $s$, while $D_{se}$ distinguishes between the generated and the original side information. Here, $G_{sk \to se}$ and $G_{im \to se}$ minimize the objective against an opponent that tries to maximize it, namely $\min_{G_{sk \to se}, G_{im \to se}} \max_{D_{se}} \mathcal{L}_{adv}^{se}$.
In a similar way, for the generator $G_{se \to sk}$ and its discriminator $D_{sk}$, the objective is
$\mathcal{L}_{adv}^{sk}(G_{se \to sk}, D_{sk}) = \mathbb{E}_{x_{sk}}\big[\log D_{sk}(x_{sk})\big] + \mathbb{E}_{s}\big[\log\big(1 - D_{sk}(G_{se \to sk}(s))\big)\big]$,
where $G_{se \to sk}$ minimizes the objective and its adversary $D_{sk}$ intends to maximize it, namely $\min_{G_{se \to sk}} \max_{D_{sk}} \mathcal{L}_{adv}^{sk}$.
Similarly, a third adversarial loss $\mathcal{L}_{adv}^{im}(G_{se \to im}, D_{im})$ is introduced for the mapping $G_{se \to im}$ and its discriminator $D_{im}$.
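As an illustration only, the following PyTorch-style sketch shows how the adversarial objective for the semantic-space discriminator could be written with small fully-connected generators and discriminators; the layer sizes, the module names and the use of binary cross-entropy are assumptions of this sketch, not a verbatim excerpt of our implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Single fc layer + ReLU, as used for the mappings between feature and semantic space."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.fc = nn.Linear(in_dim, out_dim)
    def forward(self, x):
        return F.relu(self.fc(x))

class Discriminator(nn.Module):
    """Maps its input to a single real/fake logit."""
    def __init__(self, in_dim):
        super().__init__()
        self.fc = nn.Linear(in_dim, 1)
    def forward(self, x):
        return self.fc(x)

def adversarial_losses(G_sk2se, G_im2se, D_se, x_sk, x_im, s):
    """Non-saturating GAN losses for the semantic-space discriminator, as in eqn. (1)."""
    real, fake = torch.ones, torch.zeros
    se_sk, se_im = G_sk2se(x_sk), G_im2se(x_im)
    # discriminator: original side information vs. generated semantic features
    d_loss = (F.binary_cross_entropy_with_logits(D_se(s), real(len(s), 1))
              + F.binary_cross_entropy_with_logits(D_se(se_sk.detach()), fake(len(x_sk), 1))
              + F.binary_cross_entropy_with_logits(D_se(se_im.detach()), fake(len(x_im), 1)))
    # generators: try to fool the discriminator
    g_loss = (F.binary_cross_entropy_with_logits(D_se(se_sk), real(len(x_sk), 1))
              + F.binary_cross_entropy_with_logits(D_se(se_im), real(len(x_im), 1)))
    return d_loss, g_loss
```

The losses for $D_{sk}$ and $D_{im}$ follow the same pattern with $G_{se \to sk}$ and $G_{se \to im}$.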
Cycle Consistency Loss. The adversarial mechanism effectively reduces the domain or modality gap; however, it does not guarantee that an input and an output are matched well. To this end, we impose cycle consistency Zhu2017CycleGAN : when we map the feature of a sketch of an object to the semantic space, and then translate it back from the semantic space to the sketch feature space, we should arrive back at the original sketch feature. This cycle consistency loss also assists in learning mappings across domains where paired or aligned examples are not available. Specifically, the forward mapping $G_{sk \to se}$ and the backward mapping $G_{se \to sk}$ are reverses of each other, and hence form a one-to-one correspondence or bijective mapping.
The resulting loss $\mathcal{L}_{cyc}^{sk}$ penalizes the reconstruction error of the sketch features, where the semantic feature of the class corresponding to the category label of the input acts as the anchor in the semantic space. Similarly, a cycle consistency loss $\mathcal{L}_{cyc}^{im}$ is imposed for the mappings $G_{im \to se}$ and $G_{se \to im}$. These consistency losses also behave as a regularizer for the adversarial training, ensuring that the learned functions map a specific input to the desired output.
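A minimal PyTorch-style sketch of such a cycle-consistency term is given below, assuming the generators from the previous sketch and an l1 reconstruction penalty; whether the backward mapping is applied to the generated semantic features or to the class side information is a design choice of this sketch.

```python
import torch.nn.functional as F

def cycle_consistency_loss(G_sk2se, G_se2sk, x_sk):
    """Sketch branch: sketch features -> semantic space -> back to sketch features.
    The image branch (G_im2se, G_se2im) is analogous.  In the formulation above, the
    semantic anchor is tied to the side information of the input's category; this
    sketch simply reconstructs through the composed generators."""
    reconstructed = G_se2sk(G_sk2se(x_sk))
    return F.l1_loss(reconstructed, x_sk)
```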
Classification Loss. Adversarial training and cycle-consistency constraints do not explicitly ensure that the features generated by the mappings $G_{sk \to se}$ and $G_{im \to se}$ are class discriminative, which is a requirement for the zero-shot sketch-based image retrieval task. We conjecture that this issue can be alleviated by introducing a discriminative classifier pre-trained on the input data. To this end, we minimize a classification loss over the generated features,
$\mathcal{L}_{cls}^{sk} = -\mathbb{E}_{x_{sk}}\big[\log p\big(c(x_{sk}) \mid G_{sk \to se}(x_{sk}); \theta\big)\big]$,
where $c(x_{sk})$ is the category label of $x_{sk}$ and $p(c(x_{sk}) \mid G_{sk \to se}(x_{sk}); \theta)$ denotes the probability of the generated feature being predicted with its true class label. This conditional probability is computed by a linear softmax classifier parameterized by $\theta$. Similarly, a classification loss $\mathcal{L}_{cls}^{im}$ is imposed on the generator $G_{im \to se}$.
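Illustratively, such a classification criterion can be written as a cross-entropy over the generator outputs with a frozen linear classifier; the sketch below assumes the generator from the earlier sketches and is not the exact implementation.

```python
import torch.nn.functional as F

def classification_loss(G_sk2se, classifier, x_sk, labels):
    """Cross-entropy on generated semantic features.

    classifier : a pre-trained linear softmax classifier (e.g. torch.nn.Linear), kept frozen
    labels     : (batch,) ground-truth category indices of the sketches
    """
    for p in classifier.parameters():      # the classifier only scores, it is not updated
        p.requires_grad_(False)
    logits = classifier(G_sk2se(x_sk))
    return F.cross_entropy(logits, labels)
```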
3.2 Selection of Side Information
Learning a compatibility or matching function between multiple modalities in the zero-shot scenario Shen2018ZSIH ; Dey2019doodle2search ; Liu2019SKP requires structure in the class embedding space to which the image features are mapped. Attributes provide one such structured class embedding space Lampert2014ZSL ; however, obtaining attributes requires costly human annotation. On the other hand, side information can also be learned at a much lower cost from large-scale text corpora such as Wikipedia. Similarly, output embeddings built from a hierarchical organization of classes such as WordNet can also provide structure in the output space and substitute for attributes. Motivated by attribute selection for zero-shot learning Guo2018ZSL , which indicates that a subset of discriminative attributes is more effective than the whole set of attributes for ZSL, we incorporate a joint learning framework integrating an auto-encoder to select side information. Let $\mathbf{S}$ be the matrix of original side information. The loss function is

$\mathcal{L}_{ae} = \big\| \mathbf{S} - f_{dec}\big(f_{enc}(\mathbf{S})\big) \big\|_F^2 + \lambda \big\| \mathbf{W}_{enc} \big\|_{2,1}$   (2)

where $f_{enc}$ and $f_{dec}$ are the encoder and decoder, with $\mathbf{W}_{enc}$, $\mathbf{W}_{dec}$ and $\mathbf{b}_{enc}$, $\mathbf{b}_{dec}$ respectively as their weights and biases. Additionally, $\|\cdot\|_F$ denotes the Frobenius norm, defined as the square root of the sum of the absolute squares of the elements, and $\|\cdot\|_{2,1}$ indicates the $\ell_{2,1}$ norm Nie2010 . Selecting side information reduces the dimensionality of the embeddings, which further improves retrieval time. Therefore, the training objective of our model is

$\mathcal{L} = \lambda_{adv}\,\mathcal{L}_{adv} + \lambda_{cyc}\,\mathcal{L}_{cyc} + \lambda_{cls}\,\mathcal{L}_{cls} + \lambda_{ae}\,\mathcal{L}_{ae}$   (3)

where the different $\lambda$s are the weights on the respective loss terms, $\mathcal{L}_{adv}$, $\mathcal{L}_{cyc}$ and $\mathcal{L}_{cls}$ collect the adversarial, cycle consistency and classification terms of the sketch and image branches, and $\mathcal{L}_{ae}$ is the side information selection loss. For obtaining the initial side information, we combine a text-based and a hierarchical model, which are complementary and robust Akata2015OutputEmbedding . Below, we provide a description of our text-based and hierarchical models for side information.
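For concreteness, a minimal PyTorch-style sketch of such an $\ell_{2,1}$-regularized side-information auto-encoder, as in eqn. (2), is given below; the single-layer encoder/decoder shapes and the weight on the regularizer are assumptions of this sketch.

```python
import torch
import torch.nn as nn

class SideInfoAutoEncoder(nn.Module):
    """Compresses the concatenated side information and selects discriminative entries."""
    def __init__(self, in_dim, code_dim):
        super().__init__()
        self.encoder = nn.Linear(in_dim, code_dim)
        self.decoder = nn.Linear(code_dim, in_dim)

    def forward(self, s):
        code = torch.relu(self.encoder(s))
        return self.decoder(code), code

def l21_norm(weight):
    """Sum of l2 norms of the encoder weight columns (one column per side-information
    dimension), encouraging entire input dimensions to be dropped."""
    return weight.norm(p=2, dim=0).sum()

def autoencoder_loss(model, s, reg_weight=0.1):
    recon, _ = model(s)
    recon_err = torch.norm(s - recon, p='fro') ** 2      # Frobenius reconstruction error
    return recon_err + reg_weight * l21_norm(model.encoder.weight)
```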
Text-based Model. We use three different text-based sources of side information. (1) Word2Vec Mikolov2013 is a two-layer neural network trained to reconstruct the linguistic context of words. During training, it takes a large corpus of text and creates a vector space of several hundred dimensions, with each unique word being assigned a corresponding vector in that space. The model can be trained with a hierarchical softmax using either the skip-gram or the continuous bag-of-words formulation for target prediction. (2) GloVe Pennington2014GloVe considers global word-word co-occurrence statistics in a corpus. Intuitively, co-occurrence statistics encode important semantic information; the objective is to learn word vectors such that their dot product relates to the probability of their co-occurrence. (3) FastText Joulin2016FastText extends the Word2Vec model: instead of learning vectors for words directly, FastText represents each word as a bag of character n-grams and then trains a skip-gram model to learn the embeddings. FastText works well with rare words; even if a word was not seen during training, it can be broken down into n-grams to obtain its embedding, which is a huge advantage of this model.
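As an illustration, class-name embeddings can be looked up from pre-trained vectors as sketched below, here with gensim and Word2Vec-format vectors; the file name and the token-averaging rule for multi-word class names are assumptions of this sketch.

```python
import numpy as np
from gensim.models import KeyedVectors

# hypothetical path to pre-trained 300-d Word2Vec-format vectors
kv = KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)

def class_embedding(class_name):
    """Average the vectors of the tokens in a class name, e.g. 'pickup_truck'."""
    tokens = [t for t in class_name.replace('-', '_').split('_') if t in kv]
    if not tokens:
        return np.zeros(kv.vector_size)
    return np.mean([kv[t] for t in tokens], axis=0)

text_embeddings = {c: class_embedding(c) for c in ['cat', 'pickup_truck', 'jack-o-lantern']}
```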
Hierarchical Model. The semantic distance (or similarity) between words can also be approximated by their distance (or similarity) in a large ontology of English words such as WordNet (https://wordnet.princeton.edu). One can measure the similarity $\sigma$ in eqn. (4) between words represented as nodes in the ontology using techniques such as path similarity, e.g. counting the number of hops required to reach one node from the other, or Jiang-Conrath Jiang1997SemSim . For a set of nodes $\mathcal{V}$ in a dictionary built from the set of classes, the similarities between a class $c$ and all the nodes, considered in a fixed order, determine the entries of the class embedding vector Akata2015OutputEmbedding of $c$:

$\theta_{hi}(c) = \big[\sigma(c, v_1), \sigma(c, v_2), \ldots, \sigma(c, v_{|\mathcal{V}|})\big]$   (4)

Note that $\mathcal{V}$ considers all the nodes on the path from each class node to its highest level ancestor. The WordNet hierarchy contains most of the classes of the Sketchy Sangkloy2016 , TU-Berlin Eitz2012TUBerlin and QuickDraw Dey2019doodle2search datasets. The few exceptions are: jack-o-lantern, which we replace with lantern that appears higher in the hierarchy; similarly human skeleton with skeleton, and octopus with octopods, etc. The resulting number of nodes $|\mathcal{V}|$, and hence the embedding dimension, differs for the Sketchy, TU-Berlin and QuickDraw datasets.
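A small sketch of how such a hierarchy-based class embedding could be computed with NLTK's WordNet interface is shown below; picking the first noun synset per class name and falling back to zero for undefined similarities are assumptions of this sketch.

```python
import numpy as np
from nltk.corpus import wordnet as wn   # requires: nltk.download('wordnet')

def first_noun_synset(name):
    """Pick the first noun synset of a (possibly multi-word) class name."""
    synsets = wn.synsets(name.replace('-', '_'), pos=wn.NOUN)
    return synsets[0] if synsets else None

def build_nodes(class_names):
    """All nodes on the paths from each class synset up to its top-level ancestor."""
    nodes = []
    for name in class_names:
        syn = first_noun_synset(name)
        if syn is None:
            continue
        for path in syn.hypernym_paths():
            for node in path:
                if node not in nodes:
                    nodes.append(node)
    return nodes

def hierarchical_embedding(name, nodes):
    """Entries are path similarities between the class and every node, as in eqn. (4)."""
    syn = first_noun_synset(name)
    emb = np.zeros(len(nodes))
    if syn is None:
        return emb
    for j, node in enumerate(nodes):
        sim = syn.path_similarity(node)
        emb[j] = sim if sim is not None else 0.0
    return emb

classes = ['cat', 'castle', 'guitar']
nodes = build_nodes(classes)
embeddings = {c: hierarchical_embedding(c, nodes) for c in classes}
```

Lin and Jiang-Conrath similarities (as used in Table 2) can be computed analogously with NLTK's wordnet_ic information-content corpora.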
4 Experiments
In this section, we detail our datasets and implementation protocol, and present our results in the (generalized) zero-shot, (generalized) few-shot and fine-grained settings.
Datasets. We experimentally validate our model on three popular SBIR datasets, namely Sketchy (Extended), TU-Berlin (Extended) and QuickDraw (Extended). For brevity, we refer to these extended datasets as Sketchy, TU-Berlin and QuickDraw respectively.
The Sketchy Dataset Sangkloy2016 is a large collection of sketch-photo pairs spanning a diverse set of object classes. The sketches of the objects that appear in the photos are collected via crowd sourcing. This dataset also contains a fine-grained correspondence (alignment) between particular photos and sketches, as well as various data augmentations for deep learning based methods. Liu et al. Liu2017DSH extended the dataset with additional natural images. We randomly pick a subset of classes as the novel test set and use the data from the remaining classes for training.
The original TU-Berlin Dataset Eitz2012TUBerlin contains a large number of sketch categories, and was extended by Liu2017DSH with natural images corresponding to the sketch classes. A subset of classes of sketches and images is randomly chosen to form the query set and the retrieval gallery respectively, while the remaining classes are utilized for training. We follow Shen et al. Shen2018ZSIH and select test classes that contain a sufficient number of images.
The QuickDraw (Extended) dataset, a large-scale dataset proposed recently in Dey2019doodle2search , contains sketch-image pairs from a large number of classes, with many images and sketches per class. The main difference of this dataset from the previous ones lies in the abstractness of the sketches, which are collected from the Quick, Draw! (https://quickdraw.withgoogle.com) online game. The increased abstraction of the drawings enlarges the sketch-image domain gap, and hence increases the challenge of the SBIR task.
Implementation details. We implement the SEM-PCYC model using the PyTorch Paszke2017PyTorch deep learning toolbox (our code and models are available at https://github.com/AnjanDutta/sem-pcyc-ijcv) on a single TITAN Xp or TITAN V graphics card. Unless otherwise mentioned, we extract sketch and image features from the VGG-16 Simonyan2014 network pre-trained on ImageNet Deng2009ImageNet (before the last pooling layer). In Section 4.1, we compare the VGG-16 features with SE-ResNet-50 features for the zero-shot SBIR task; this comparison is restricted to that experiment. Since in this work we deal with single object retrieval, and an object usually spans only certain regions of a sketch or image, we apply an attention mechanism inspired by Song et al. Song2017SpatSemAttn , without the shortcut connection, for extracting only the informative regions from sketch and image. The attended representation is obtained by a pooling operation guided by the attention model, followed by a fully connected (fc) layer. This entire model is fine-tuned on the training classes of each dataset (Sketchy, TU-Berlin and QuickDraw). Both the generators $G_{sk \to se}$ and $G_{im \to se}$ are built with an fc layer followed by a ReLU non-linearity that takes the attended visual feature vector and outputs a compact semantic representation, whereas the generators $G_{se \to sk}$ and $G_{se \to im}$ take semantic features and produce visual feature vectors. Accordingly, all discriminators are designed to take the output of the respective generators and produce a single dimensional output. The auto-encoder is designed by stacking two non-linear fc layers, respectively as encoder and decoder, for obtaining a compressed and encoded representation of the side information. We experimentally set the loss weights and remaining hyper-parameters to give the optimum performance of our model.
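The attention-guided pooling described above could look roughly like the following PyTorch-style sketch, where the 1x1-convolution attention head, the spatial softmax and the output dimension are assumptions of this illustration rather than the exact configuration used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionPool(nn.Module):
    """Weights the spatial cells of a convolutional feature map and pools them."""
    def __init__(self, channels=512, out_dim=512):
        super().__init__()
        self.attn = nn.Conv2d(channels, 1, kernel_size=1)   # one attention score per cell
        self.fc = nn.Linear(channels, out_dim)

    def forward(self, feat):                                 # feat: (batch, channels, h, w)
        b, c, h, w = feat.shape
        scores = self.attn(feat).view(b, 1, h * w)
        weights = F.softmax(scores, dim=-1)                  # spatial attention weights
        pooled = (feat.view(b, c, h * w) * weights).sum(dim=-1)
        return self.fc(pooled)                               # attended representation

# e.g. applied on top of the last VGG-16 convolutional block:
features = torch.randn(4, 512, 7, 7)
attended = AttentionPool()(features)                         # shape (4, 512)
```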
While constructing the hierarchy for the class embedding, we only consider the training classes belonging to each dataset. In this way, the WordNet hierarchy, or knowledge graph, for the Sketchy, TU-Berlin and QuickDraw datasets contains a different number of nodes per dataset. Although our method does not produce a binary hash code as the final representation for matching sketch and image, for the sake of comparison with related works that produce hash codes, such as ZSH Yang2016ZSH , ZSIH Shen2018ZSIH and GDH Zhang2018GDH , we use the iterative quantization (ITQ) Gong2013ITQ algorithm to obtain binary codes for sketch and image. We use the final representations of sketches and images from the train set to learn the optimized rotation, which is later applied to our final representations for obtaining the binary codes.
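For reference, a compact NumPy sketch of the ITQ rotation-learning step is given below: zero-centering, a PCA projection, then alternating between binarization and an orthogonal Procrustes update; the number of bits and iterations are assumptions of this sketch.

```python
import numpy as np

def learn_itq(features, n_bits=64, n_iter=50, seed=0):
    """Learn a PCA projection and ITQ rotation from training features of shape (n, d)."""
    rng = np.random.RandomState(seed)
    mean = features.mean(axis=0)
    x = features - mean
    # PCA projection to n_bits dimensions
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    pca = vt[:n_bits].T                                   # (d, n_bits)
    v = x @ pca
    # random orthogonal initialization of the rotation
    r, _ = np.linalg.qr(rng.randn(n_bits, n_bits))
    for _ in range(n_iter):
        b = np.sign(v @ r)                                # fix R, update binary codes
        u, _, wt = np.linalg.svd(v.T @ b)                 # fix B, orthogonal Procrustes step
        r = u @ wt
    return mean, pca, r

def binarize(features, mean, pca, r):
    """Binary codes in {0, 1} for new sketch/image representations."""
    return ((features - mean) @ pca @ r > 0).astype(np.uint8)
```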
4.1 (Generalized) Zero-Shot Sketch-based Image Retrieval
Apart from the two prior zero-shot SBIR works closest to ours, i.e. ZSIH Shen2018ZSIH and ZS-SBIR Yelamarthi2018ZSBIR , we adapt fourteen ZSL and SBIR models to the zero-shot SBIR task. Note that in this setting, the training classes are indicated as “seen” and the novel classes as “unseen”, since none of the sketches of the novel classes are visible to the model during training.
The SBIR methods that we evaluate are SaN Yu2015 , 3D Shape Wang2015a , Siamese CNN Qi2016SBIRSiamese , GN Triplet Sangkloy2016 , DSH Liu2017DSH and GDH Zhang2018GDH . A softmax baseline is also added, which is based on computing a VGG-16 Simonyan2014 feature vector pre-trained on the seen classes for nearest neighbour search. The ZSL methods that we evaluate are CMT Socher2013ZSLCrossModalT , DeViSE Frome2013Devise , SSE Zhang2015ZSLSemSim , JLSE Zhang2016ZSLJointLatentSim , ZSH Yang2016ZSH , SAE Kodirov2017SAE and FRWGAN Felix2018FRWGAN . We use the same seen-unseen splits of categories for all the experiments for a fair comparison. For performance evaluation and comparison, we compute the mean average precision (mAP@all) and the precision over the top 100 retrievals (Precision@100) Su2015PerfEvalIR ; Shen2018ZSIH .
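The two retrieval metrics can be computed, for instance, as in the short NumPy sketch below, which assumes a precomputed similarity matrix between each sketch query and the gallery images, with binary relevance given by matching class labels.

```python
import numpy as np

def average_precision(relevance):
    """AP over a full ranked list of 0/1 relevance flags (mAP@all averages this over queries)."""
    hits = np.cumsum(relevance)
    ranks = np.arange(1, len(relevance) + 1)
    precisions = hits / ranks
    return (precisions * relevance).sum() / max(relevance.sum(), 1)

def evaluate(sim, query_labels, gallery_labels, top_k=100):
    """sim: (n_queries, n_gallery) similarity matrix.  Returns (mAP@all, Precision@top_k)."""
    aps, precs = [], []
    for i in range(sim.shape[0]):
        order = np.argsort(-sim[i])                                   # best matches first
        rel = (gallery_labels[order] == query_labels[i]).astype(float)
        aps.append(average_precision(rel))
        precs.append(rel[:top_k].mean())                              # e.g. Precision@100
    return np.mean(aps), np.mean(precs)
```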
Table 1: Zero-shot and generalized zero-shot SBIR comparison on the Sketchy (Extended), TU-Berlin (Extended) and QuickDraw (Extended) datasets, reporting mAP@all, Precision@100, feature dimension and retrieval time (s) for each method. Compared methods: SBIR: Softmax Baseline, Siamese CNN Qi2016SBIRSiamese , SaN Yu2016a , GN Triplet Sangkloy2016 , 3D Shape Wang2015 , DSH (binary) Liu2017DSH , GDH (binary) Zhang2018GDH ; ZSL: CMT Socher2013ZSLCrossModalT , DeViSE Frome2013Devise , SSE Zhang2015BitScalable , JLSE Zhang2016ZSLJointLatentSim , SAE Kodirov2017SAE , FRWGAN Felix2018FRWGAN , ZSH (binary) Yang2016 ; Zero-Shot SBIR and Generalized Zero-Shot SBIR: ZSIH (binary) Shen2018ZSIH , ZS-SBIR Yelamarthi2018ZSBIR , SEM-PCYC and SEM-PCYC (binary).
Fig. 3: (a)-(c) Precision-recall curves of SEM-PCYC and the compared baselines on the Sketchy (Extended), TU-Berlin (Extended) and QuickDraw (Extended) datasets; (d) additional analysis (see text).
Table 1 shows that most of the SBIR and ZSL methods perform worse than the zero-shot SBIR methods. Among them, the ZSL methods usually suffer from the domain gap between the sketch and image modalities. Although the majority of SBIR methods perform better than their ZSL counterparts, they fail to generalize the learned representations to unseen classes. However, GN Triplet Sangkloy2016 , DSH Liu2017DSH and GDH Zhang2018GDH show a reasonable potential to generalize information, though only for objects with common shapes.
As expected, the specialized zero-shot SBIR methods surpass most of the ZSL and SBIR baselines, as they possess both the ability to reduce the domain gap and to generalize the learned information to the unseen classes. ZS-SBIR learns to generalize between sketch and image from aligned sketch-image pairs; as a result it performs well on the Sketchy dataset, but not on the TU-Berlin or QuickDraw datasets, where aligned sketch-image pairs are not available. Our proposed method exceeds the state-of-the-art in terms of mAP@all on the Sketchy, TU-Berlin and QuickDraw datasets, which shows the effectiveness of our proposed SEM-PCYC model thanks to the cycle consistency between sketch, image and semantic space, as well as the compact and discriminative side information.
In general, the main challenge in the TU-Berlin dataset is the large number of visually similar and overlapping classes. In the QuickDraw dataset, on the other hand, there is a large domain gap that was intentionally introduced for designing future realistic models; ambiguity in annotation, e.g. non-professional sketches, is another major challenge in this dataset. Although our results are encouraging, in that they show that cycle consistency helps the zero-shot SBIR task and our model sets the new state-of-the-art in this domain, we hope that our work will encourage further research in improving these results.
Finally, the PR-curves of SEM-PCYC and the considered baselines on Sketchy, TU-Berlin and QuickDraw are shown in Fig. 3(a)-(c), respectively; the precision-recall curves corresponding to our SEM-PCYC model (dark blue line) are consistently above those of the other methods. This indicates that our proposed model exhibits superior performance on all three datasets, which clearly shows the benefit of our proposal.
Generalized Zero-Shot Sketch-based Image Retrieval. We also conduct experiments in the generalized ZS-SBIR setting, where the search space contains both seen and unseen classes. This task is significantly more challenging than ZS-SBIR, as the seen classes act as distractors for the test queries. Our results in Table 1 show that our model significantly outperforms both existing models Shen2018ZSIH ; Yelamarthi2018ZSBIR , owing to our cross-modal adversarial mechanism and the heterogeneous side information.
Fig. 4: Examples of the visually similar TU-Berlin classes swan, duck, owl, penguin and standing bird.
Fig. 5: Qualitative zero-shot SBIR results of SEM-PCYC on the Sketchy (Extended) dataset: top retrievals per sketch query, where ✓ and ✗ indicate correct and incorrect retrievals.
Fig. 6: Qualitative zero-shot SBIR results of SEM-PCYC on the TU-Berlin (Extended) dataset: top retrievals per sketch query, where ✓ and ✗ indicate correct and incorrect retrievals.
Fig. 7: Qualitative zero-shot SBIR results of SEM-PCYC on the QuickDraw (Extended) dataset: top retrievals per sketch query, where ✓ and ✗ indicate correct and incorrect retrievals.
Qualitative Results. We analyze the retrieval performance of our proposed model qualitatively in Fig. 5, Fig. 6 and Fig. 7. Some notable examples are as follows. A sketch query of tank retrieves some examples of motorcycle, probably because both of them have wheels in common (Fig. 5); a similar explanation holds for car and motorcycle (Fig. 7). Due to visual and semantic similarity, sketching guitar retrieves some violins (Fig. 5); this can also be observed for train and van in Fig. 7.

Similarly, querying bear retrieves some squirrels (Fig. 5). Querying objects with wheels (e.g., wheelchair, motorcycle) sometimes wrongly retrieves other vehicles, probably because of the wheels they have in common (Fig. 5). A sketch query of spoon retrieves some examples of racket (Fig. 5), possibly due to significant visual similarity. A sketch of burger retrieves some examples of jack-o-lantern (Fig. 5), perhaps because of their similar shape. Querying castle retrieves images containing a large portion of sky (Fig. 6), because the images of its semantically similar classes, such as skyscraper and church, are mostly captured with sky in the background; a similar phenomenon can be observed for tree and electrical post in Fig. 7. Querying duck retrieves images of swan or shark (Fig. 6), probably because of the watery background they have in common. A sketch of pickup truck retrieves some images from the traffic light class that contain a truck-like object in the scene (Fig. 6). Sketching bookshelf retrieves some examples of cabinet, which have significant visual and semantic similarity (Fig. 6).

Sometimes too much abstraction in a sketch produces wrong retrieval results. For example, for one query in Fig. 7 it is difficult to tell whether the sketch depicts the eiffel tower, another tower or a hill. Furthermore, we have observed certain ambiguities in the annotation of images in the QuickDraw dataset. The images are often complex and frequently contain two or more objects, while most of the currently available SBIR datasets provide a single object annotation, ignoring the objects in the background. For example, in Fig. 7 many of the wrongly retrieved images for a flower query actually contain a flower, whereas some of them are annotated as tower or tree. Additionally, as the images of the QuickDraw dataset are collected from the Flickr website, it contains many consecutive captures that can be confused as identical frames; hence, although some retrievals on the QuickDraw dataset appear identical, they are not identical in terms of actual pixel values.

In general, we observe that the wrongly retrieved candidates mostly have a close visual and semantic relevance to the queried ones. This effect is more prominent in the TU-Berlin dataset, which may be due to the inter-class similarity of sketches between different classes. As shown in Fig. 4, the classes swan, duck and owl, penguin have substantial visual similarity, and all of them resemble standing bird, which is a separate class of the same dataset. Therefore, for the TU-Berlin dataset it is challenging to generalize to the unseen classes from the representation learned on the seen classes.
Table 2: Effect of side information: zero-shot SBIR performance on the Sketchy (Extended), TU-Berlin (Extended) and QuickDraw (Extended) datasets for different embedding dimensions, using each text embedding (GloVe Pennington2014GloVe , Word2Vec Mikolov2013 , FastText Joulin2016FastText ) and hierarchical embedding (Path, Lin Lin1998ITSim , Ji-Cn Jiang1997SemSim ) individually, as well as their pairwise text-hierarchy combinations.
Effect of Side Information. In zero-shot learning, side information is as important as the visual information, as it is the only means by which the model can discover similarities between classes. As the type of side information has a strong effect on the performance of any method, we analyze its effect and present zero-shot SBIR results for different kinds of side information and their combinations. We compare GloVe Pennington2014GloVe , Word2Vec Mikolov2013a and FastText Joulin2016FastText as text-based models, and three similarity measures, i.e. path, Lin Lin1998ITSim and Jiang-Conrath Jiang1997SemSim , for constructing three different kinds of side information based on the WordNet hierarchy. Table 2 contains the quantitative results on the Sketchy, TU-Berlin and QuickDraw datasets with the different kinds of side information mentioned above and their combinations. We observe that in the majority of cases combining different side information increases the performance by a noticeable margin.
On Sketchy, the combination of Word2Vec and Jiang-Conrath hierarchical similarity, as well as FastText and path similarity, reach the highest mAP with a 64d embedding, while on TU-Berlin, in addition to the combination of Word2Vec and path similarity, FastText and path similarity lead with a 64d embedding, and for QuickDraw the combination of GloVe and Lin hierarchical similarity performs best. We conclude from these experiments that text-based and hierarchy-based class embeddings are indeed complementary.
Effect of Visual Features. Visual features are also crucial for the zero-shot SBIR task. To get an overview of their effect, in addition to the VGG-16 Simonyan2014 features obtained before the last fc layer, we also consider SE-ResNet-50 Hu2019SENet ; He2015ResNet features, and perform zero-shot SBIR experiments on the Sketchy, TU-Berlin and QuickDraw datasets with the different semantic models mentioned above. In Table 3, we present the mAP@all values obtained by the considered visual features and semantic models, where we observe that SE-ResNet-50 features work consistently better than VGG-16 on all three datasets. The performance gain on the challenging TU-Berlin dataset is particularly noteworthy, which we attribute to the feature calibration strategy of the SE blocks that effectively produces robust features, minimizing the inter-class confusion illustrated in Fig. 4.
Table 3: mAP@all for zero-shot SBIR on the Sketchy (Extended), TU-Berlin (Extended) and QuickDraw (Extended) datasets using VGG-16 Simonyan2014 and SE-ResNet-50 Hu2019SENet ; He2015ResNet visual features in combination with each semantic model (GloVe Pennington2014GloVe , Word2Vec Mikolov2013 , FastText Joulin2016FastText , Path, Lin Lin1998ITSim , Ji-Cn Jiang1997SemSim ).
Model Ablations. The baselines of our ablation study are built by modifying parts of the SEM-PCYC model in order to analyze the effect of its different losses. First, we train the model only with the adversarial loss, and then alternately add the cycle consistency and classification losses during training. Second, we train our model after removing only the adversarial loss for the semantic domain, which indicates the effect of the side information in our case. We also train the model without the side information selection mechanism, i.e. we take the original text or hierarchical embedding, or their combination, as side information, which gives an idea of the advantage of selecting side information via the auto-encoder. Next, we experiment with reducing the dimensionality of the class embedding to a percentage of its full dimensionality. Finally, to demonstrate the effectiveness of the regularizer used in the auto-encoder for selecting discriminative side information, we experiment with setting the weight of the regularization term in eqn. (2) to zero.
Table 4: Ablation study: mAP@all on the Sketchy (Extended), TU-Berlin (Extended) and QuickDraw (Extended) datasets for the following configurations: only adversarial loss; adversarial + cycle consistency loss; adversarial + classification loss; adversarial (sketch + image) + cycle consistency + classification loss; without selecting side information; without the regularizer in eqn. (2); SEM-PCYC (full model).
The mAP@all values obtained by the respective baselines mentioned above are shown in Table 4. We consider the best side information setting according to Table 2, depending on the dataset. The assessed baselines typically underperform the full SEM-PCYC model. With only the adversarial losses, the performance of our system drops significantly. We suspect that although adversarial training alone maps the sketch and image inputs to a semantic space, there is no guarantee that sketch-image pairs of the same category are matched. This is because adversarial training only ensures that the mapping of the input modality to the target modality matches its empirical distribution Zhu2017CycleGAN , but does not guarantee that an individual input and output are paired up.
Imposing the cycle-consistency constraint ensures a one-to-one correspondence of sketch-image categories. However, the performance of our system does not improve substantially when the model is trained with both the adversarial and the cycle consistency loss. We speculate that this is due to a lack of inter-category discriminating power of the learned embedding functions; for that reason, we add a classification criterion to train discriminating cross-modal embedding functions. We further observe that imposing only the classification criterion together with the adversarial loss does not improve the retrieval results either. We conjecture that in this case the learned embedding may be very discriminative, but the two modalities might be matched in the wrong way. Hence, it can be concluded that all three losses are complementary to each other and absolutely essential for effective zero-shot SBIR.
Next, we analyze the effect of the side information and notice that without the adversarial loss for the semantic domain, our model performs better than the three previously mentioned configurations but does not come close to the full model. This is due to the fact that without the semantic mapping, the resulting embeddings are not semantically related to each other, which does not help cross-modal retrieval in the zero-shot scenario. We further observe that without the encoded and compact side information, we achieve a better mAP@all at the cost of retrieval time, as the original dimension of the considered side information is much higher than the encoded one. We further investigate by reducing its dimension as a percentage of the original one (see Fig. 3(c)), and observe that, at the beginning, removing a small part usually leads to better performance, which reveals that not all the side information is necessary for effective zero-shot SBIR, and some of it is even harmful. In fact, the first removed dimensions have low information content and can be regarded as noise.
We also perceive that removing more side information deteriorates the performance of the system, which is justifiable, because the compressing mechanism of the auto-encoder then progressively removes important side information. However, even with highly compressed side information, our model provides a very good trade-off between performance and retrieval time.
Finally, without the regularizer in eqn. (2), our system still performs reasonably, but the mAP@all value remains lower than the best obtained performance. We explain this as a benefit of the $\ell_{2,1}$-norm based regularizer, which effectively selects representative side information.
Fig. 8: Few-shot SBIR performance of SEM-PCYC, ZSIH Shen2018ZSIH and ZS-SBIR Yelamarthi2018ZSBIR as a function of the number of shots on (a) Sketchy (Extended), (b) TU-Berlin (Extended) and (c) QuickDraw (Extended), and the corresponding generalized few-shot results in (d)-(f).
Fig. 9: Qualitative results of SEM-PCYC on the Sketchy (Extended) dataset for an increasing number of shots (0-, 1-, 5- and 10-shot); ✓ and ✗ indicate correct and incorrect retrievals.
Fig. 10: Qualitative results of SEM-PCYC on the TU-Berlin (Extended) dataset for an increasing number of shots (0-, 1-, 5- and 10-shot); ✓ and ✗ indicate correct and incorrect retrievals.
Fig. 11: Qualitative results of SEM-PCYC on the QuickDraw (Extended) dataset for an increasing number of shots (0-, 1-, 5- and 10-shot); ✓ and ✗ indicate correct and incorrect retrievals.
4.2 (Generalized) Few-Shot Sketch-based Image Retrieval
For the few-shot scenario, we start with the model pre-trained in the zero-shot setting, and then fine-tune it using a few example images, e.g. k-shot, from the “novel” classes. For fine-tuning the model in the k-shot setting, we consider k different sketch and image instances from each of the unseen classes and cross-combine them according to the coarse-grained or fine-grained setting. The performance is evaluated on the rest of the instances from each class at test time; a sketch of how such a k-shot fine-tuning set could be assembled is given below.
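The following minimal Python sketch illustrates one way to build such a fine-tuning set; the pairing rules for the coarse-grained setting (any same-class image) and the fine-grained setting (the aligned image only) are assumptions of this illustration.

```python
import random
from collections import defaultdict

def kshot_pairs(sketches, images, k=5, fine_grained=False, seed=0):
    """sketches, images: lists of (id, label); aligned pairs share the same id.
    Returns (sketch, image) fine-tuning pairs for the novel classes."""
    rng = random.Random(seed)
    img_by_label = defaultdict(list)
    for item in images:
        img_by_label[item[1]].append(item)

    sk_by_label = defaultdict(list)
    for item in sketches:
        sk_by_label[item[1]].append(item)

    pairs = []
    for label, sks in sk_by_label.items():
        chosen = rng.sample(sks, min(k, len(sks)))           # k sketches per novel class
        for sk in chosen:
            if fine_grained:                                  # only the aligned photo
                pairs += [(sk, im) for im in img_by_label[label] if im[0] == sk[0]]
            else:                                             # cross-combine within the class
                pool = img_by_label[label]
                pairs += [(sk, im) for im in rng.sample(pool, min(k, len(pool)))]
    return pairs
```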
Few-Shot Sketch-based Image Retrieval. Fig. 8(a)-(c) present the few-shot SBIR performance of our SEM-PCYC model together with ZSIH Shen2018ZSIH and ZS-SBIR Yelamarthi2018ZSBIR on the Sketchy, TU-Berlin and QuickDraw datasets respectively. All these plots show that the performance of the considered methods improves consistently as the number of shots increases; however, this growth slowly saturates after a few shots. In this setting as well, our proposed SEM-PCYC model consistently outperforms the prior works, which clearly demonstrates the advantage of our proposal.
Generalized Few-Shot Sketch-based Image Retrieval. We also test our few-shot model in the generalized scenario, where during the test phase the search space includes both the seen and the novel classes. This setting typically poses a remarkably challenging scenario, as the seen classes may create significant confusion for the novel queries. However, the generalized setting is more realistic, as it allows querying the system with sketches from any class. In this setting as well, we consider ZSIH Shen2018ZSIH and ZS-SBIR Yelamarthi2018ZSBIR as the two benchmark methods and train them with the same experimental settings as ours. In generalized FS-SBIR, the results follow the non-generalized setting quite closely (see Fig. 8(d)-(f)), which indicates the generalization ability of the different models. Here too, our proposed model steadily surpasses both benchmark models, which indicates the advantage of our proposed model.
Qualitative Results. Fig. 9, Fig. 10 and Fig. 11 present a selection of qualitative results obtained by our SEM-PCYC model on the Sketchy, TU-Berlin and QuickDraw datasets, respectively, for an increasing number of shots k and for classes where the 0-shot results are weak. These results show that sometimes a single example from an unseen class is sufficient to retrieve correct images, while other classes need more examples before the confusion with similar classes is resolved (see the respective rows of Fig. 9, Fig. 10 and Fig. 11). This confusion may stem from either visual or semantic similarity. As expected, increasing the number of examples also improves the performance.
Table 5: mAP@all of the SEM-PCYC model baselines in the 5-shot setting.
Description | Sketchy (5-shot) | TU-Berlin (5-shot) | QuickDraw (5-shot)
Only adversarial loss | | |
Adversarial + cycle consistency loss | | |
Adversarial + classification loss | | |
Adversarial (sketch + image) + cycle consistency + classification loss | | |
Without regularizer in eqn. (2) | | |
SEM-PCYC (full model) | | |
Model Ablations. Similar to the zero-shot setting, we perform an ablation study for the few-shot scenario, considering the same model baselines as in Table 4. The mAP@all values obtained by these baselines in the 5-shot scenario are reported in Table 5. All baselines achieve a considerably better performance than their zero-shot counterparts, which is expected, since the model is already trained in the zero-shot setting and a few examples from the novel classes provide an additional gain for any combination of losses. The first three configurations (first three rows of Table 5) perform very similarly, and we do not observe a prominent difference among them on any of the three datasets. The baselines with more criteria or losses (bottom three rows of Table 5), however, achieve a clearly better performance than the first three. Among these, we do not find much difference between the variants that do and do not use side information: the pre-trained zero-shot model already incorporates the side information, so training with it again is partly redundant.
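For reference, the following is one way the reported mAP@all numbers can be computed from a query-gallery similarity matrix; it reflects the standard retrieval definition of the metric rather than the authors' exact evaluation script, and the function name map_at_all is hypothetical.

```python
import numpy as np

def map_at_all(similarities, query_labels, gallery_labels):
    """mAP@all: average precision over the fully ranked gallery for every query,
    then mean over queries. `similarities` has shape (num_queries, num_gallery);
    the label arguments are 1-D integer arrays of class labels."""
    aps = []
    for sims, ql in zip(similarities, query_labels):
        order = np.argsort(-sims)                          # rank gallery by similarity
        rel = (gallery_labels[order] == ql).astype(float)  # 1 where the class matches
        if rel.sum() == 0:
            aps.append(0.0)
            continue
        precision_at_k = np.cumsum(rel) / (np.arange(len(rel)) + 1)
        aps.append((precision_at_k * rel).sum() / rel.sum())
    return float(np.mean(aps))
```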
Fig. 12: (a), (b) fine-grained (generalized) few-shot SBIR results on Sketchy for SEM-PCYC, ZSIH and ZS-SBIR.
Fine-Grained Settings. We further evaluate our model in the fine-grained setting, where the task is to retrieve the specific object image corresponding to a drawn sketch, and we combine it with the above mentioned k-shot scenarios. For this experiment we only consider the Sketchy dataset, as it is the only corpus containing aligned sketch-image pairs, which are typically used for fine-grained SBIR evaluation. We do not consider other fine-grained datasets, such as the shoe and chair datasets Song2017FineGrained , as they do not contain class information, which we need for the semantic space mapping. For this setting as well, we consider ZSIH Shen2018ZSIH and ZS-SBIR Yelamarthi2018ZSBIR as the two benchmark methods and follow the same experimental protocol.
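In this fine-grained setting a retrieval only counts as correct if the specific image paired with the query sketch is returned, not merely any image of the same class. A minimal sketch of such an evaluation is given below, with the hypothetical name fine_grained_acc_at_k and under the assumption that the Sketchy sketch-image alignment provides the paired gallery index for every query.

```python
import numpy as np

def fine_grained_acc_at_k(similarities, paired_image_idx, k=10):
    """Fine-grained accuracy@k: a query is correct only if the specific image
    aligned with the sketch appears among its top-k retrievals.
    `similarities` has shape (num_queries, num_gallery); `paired_image_idx[i]`
    is the gallery index of the image aligned with sketch i."""
    topk = np.argsort(-similarities, axis=1)[:, :k]        # top-k gallery indices per query
    hits = [paired_image_idx[i] in topk[i] for i in range(len(topk))]
    return float(np.mean(hits))
```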
Fig. 12(a) and Fig. 12(b) show the performance of our model in the fine-grained (generalized) few-shot setting together with ZSIH Shen2018ZSIH and ZS-SBIR Yelamarthi2018ZSBIR . In the fine-grained setting all methods perform remarkably poorly. We attribute this to the semantic space mapping, which maps the visual information of sketches and images of the same class to the same neighborhood and thus ignores fine-grained, instance-level information. The proposed solution to the low-shot task and the requirements of the fine-grained problem therefore conflict, and as a consequence the performance of all considered models deteriorates. In the generalized setting we observe that all models perform slightly better. We conjecture that the models memorize some fine-grained information of the seen training samples, which yields a small gain (as these samples are few in number) in the generalized scenario. We nevertheless consider the low-shot fine-grained paradigm very important for SBIR, while acknowledging that it is an extremely challenging task that requires substantial further research.
5 Conclusion
In this paper, we proposed the SEM-PCYC model for the any-shot SBIR task. SEM-PCYC is a semantically aligned paired cycle-consistent generative adversarial network in which each branch maps either a sketch or an image to a common semantic space via adversarial training with a shared discriminator. Thanks to the cycle consistency on both branches, our model does not require aligned sketch-image pairs; the cycle consistency also acts as a regularizer during adversarial training. Classification losses on the generators' outputs guarantee that the features are discriminative. We showed that combining heterogeneous side information through an auto-encoder, which produces a compact side-information representation useful for adversarial training, is effective. In addition to the model, we introduced (generalized) few-shot SBIR as a new task and combined it with the fine-grained setting. We considered three benchmark datasets of varying difficulty and performed an exhaustive evaluation under the above-mentioned paradigms. Our assessment on these three datasets shows that our model consistently outperforms existing methods in the (generalized) zero-shot, few-shot and fine-grained settings. We encourage future work to evaluate sketch-based image retrieval methods in these increasingly challenging and realistic settings.
Acknowledgments
This work has received funding from the European Union under Marie Skłodowska-Curie grant agreement No. 665919, from the ERC under the Horizon 2020 program (grant agreement No. 853489), the Spanish Ministry project RTI2018-102285-A-I00 and the DFG cluster of excellence EXC 2064/1, project number 390727645. The TITAN Xp and TITAN V used for this research were donated by the NVIDIA Corporation.
References
- (1) Akata, Z., Malinowski, M., Fritz, M., Schiele, B.: Multi-cue zero-shot learning with strong supervision. In: CVPR, pp. 59–68 (2016)
- (2) Akata, Z., Perronnin, F., Harchaoui, Z., Schmid, C.: Label-embedding for image classification. IEEE TPAMI 38(7), 1425–1438 (2016)
- (3) Akata, Z., Reed, S., Walter, D., Lee, H., Schiele, B.: Evaluation of output embeddings for fine-grained image classification. In: CVPR, pp. 2927–2936 (2015)
- (4) Al-Halah, Z., Tapaswi, M., Stiefelhagen, R.: Recovering the missing link: Predicting class-attribute associations for unsupervised zero-shot learning. In: CVPR, pp. 5975–5984 (2016)
- (5) Changpinyo, S., Chao, W., Gong, B., Sha, F.: Synthesized classifiers for zero-shot learning. In: CVPR, pp. 5327–5336 (2016)
- (6) Changpinyo, S., Chao, W., Sha, F.: Predicting visual exemplars of unseen classes for zero-shot learning. In: ICCV, pp. 3496–3505 (2017)
- (7) Chen, J., Fang, Y.: Deep cross-modality adaptation via semantics preserving adversarial learning for sketch-based 3d shape retrieval. In: ECCV, pp. 624–640 (2018)
- (8) Chen, L., Zhang, H., Xiao, J., Liu, W., Chang, S.: Zero-shot visual recognition using semantics-preserving adversarial embedding networks. In: CVPR, pp. 1043–1052 (2018)
- (9) Chopra, S., Hadsell, R., LeCun, Y.: Learning a similarity metric discriminatively, with application to face verification. In: CVPR, pp. 539–546 (2005)
- (10) Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: A Large-Scale Hierarchical Image Database. In: CVPR, pp. 248–255 (2009)
- (11) Dey, S., Riba, P., Dutta, A., Lladós, J., Song, Y.Z.: Doodle to search: Practical zero-shot sketch-based image retrieval. In: CVPR (2019)
- (12) Ding, Z., Shao, M., Fu, Y.: Low-rank embedded ensemble semantic dictionary for zero-shot learning. In: CVPR, pp. 6005–6013 (2017)
- (13) Dutta, A., Akata, Z.: Semantically tied paired cycle consistency for zero-shot sketch-based image retrieval. In: CVPR (2019)
- (14) Eitz, M., Hays, J., Alexa, M.: How do humans sketch objects? ACM TOG 31(4), 1–10 (2012)
- (15) Felix, R., Kumar, V.B.G., Reid, I., Carneiro, G.: Multi-modal cycle-consistent generalized zero-shot learning. In: ECCV, pp. 21–37 (2018)
- (16) Finn, C., Abbeel, P., Levine, S.: Model-agnostic meta-learning for fast adaptation of deep networks. In: ICML, pp. 1126–1135 (2017)
- (17) Frome, A., Corrado, G.S., Shlens, J., Bengio, S., Dean, J., Ranzato, M.A., Mikolov, T.: Devise: A deep visual-semantic embedding model. In: NIPS, pp. 2121–2129 (2013)
- (18) Fu, Z., Xiang, T., Kodirov, E., Gong, S.: Zero-shot object recognition by semantic manifold distance. In: CVPR, pp. 2635–2644 (2015)
- (19) Girshick, R.: Fast r-cnn. In: ICCV, pp. 1440–1448 (2015)
- (20) Gong, Y., Lazebnik, S., Gordo, A., Perronnin, F.: Iterative quantization: A procrustean approach to learning binary codes for large-scale image retrieval. IEEE TPAMI 35(12), 2916–2929 (2013)
- (21) Guo, Y., Ding, G., Han, J., Tang, S.: Zero-shot learning with attribute selection. In: AAAI, pp. 6870–6877 (2018)
- (22) He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. arXiv abs/1512.03385 (2015)
- (23) Hu, G., Hua, Y., Yuan, Y., Zhang, Z., Lu, Z., Mukherjee, S.S., Hospedales, T.M., Robertson, N.M., Yang, Y.: Attribute-enhanced face recognition with neural tensor fusion networks. In: ICCV, pp. 3764–3773 (2017)
- (24) Hu, J., Shen, L., Albanie, S., Sun, G., Wu, E.: Squeeze-and-excitation networks. IEEE TPAMI pp. 1–1 (2019)
- (25) Hu, R., Collomosse, J.: A performance evaluation of gradient field hog descriptor for sketch based image retrieval. CVIU 117(7), 790 – 806 (2013)
- (26) Jayaraman, D., Grauman, K.: Zero-shot recognition with unreliable attributes. In: NIPS, pp. 3464–3472 (2014)
- (27) Jiang, J.J., Conrath, D.W.: Semantic similarity based on corpus statistics and lexical taxonomy. In: ROCLING, pp. 19–33 (1997)
- (28) Joulin, A., Grave, E., Bojanowski, P., Douze, M., Jégou, H., Mikolov, T.: Fasttext.zip: Compressing text classification models. In: ICLR (2017)
- (29) Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. In: ICLR, pp. 1–10 (2017)
- (30) Kiran Yelamarthi, S., Krishna Reddy, S., Mishra, A., Mittal, A.: A zero-shot framework for sketch based image retrieval. In: ECCV, pp. 316–333 (2018)
- (31) Koch, G., Zemel, R., Salakhutdinov, R.: Siamese neural networks for one-shot image recognition. In: ICML DLW, pp. 1–8 (2015)
- (32) Kodirov, E., Xiang, T., Gong, S.: Semantic autoencoder for zero-shot learning. In: CVPR, pp. 4447–4456 (2017)
- (33) Lampert, C.H., Nickisch, H., Harmeling, S.: Attribute-based classification for zero-shot visual object categorization. IEEE TPAMI 36(3), 453–465 (2014)
- (34) Li, Y., Hospedales, T.M., Song, Y.Z., Gong, S.: Fine-grained sketch-based image retrieval by matching deformable part models. In: BMVC (2014)
- (35) Lin, D.: An information-theoretic definition of similarity. In: ICML, pp. 296–304 (1998)
- (36) Liu, L., Shen, F., Shen, Y., Liu, X., Shao, L.: Deep sketch hashing: Fast free-hand sketch-based image retrieval. In: CVPR, pp. 2298–2307 (2017)
- (37) Liu, Q., Xie, L., Wang, H., Yuille, A.L.: Semantic-aware knowledge preservation for zero-shot sketch-based image retrieval. In: ICCV (2019)
- (38) Long, Y., Liu, L., Shao, L., Shen, F., Ding, G., Han, J.: From zero-shot learning to conventional supervised classification: Unseen visual data synthesis. In: CVPR, pp. 6165–6174 (2017)
- (39) Mensink, T., Gavves, E., Snoek, C.G.M.: Costa: Co-occurrence statistics for zero-shot classification. In: CVPR, pp. 2441–2448 (2014)
- (40) Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. In: ICLR (2013)
- (41) Mikolov, T., Sutskever, I., Chen, K., Corrado, G.S., Dean, J.: Distributed representations of words and phrases and their compositionality. In: NIPS, pp. 3111–3119 (2013)
- (42) Miller, G.A.: WordNet: A lexical database for English. Communications of the ACM 38(11), 39–41 (1995)
- (43) Nie, F., Huang, H., Cai, X., Ding, C.H.: Efficient and robust feature selection via joint ℓ2,1-norms minimization. In: NIPS, pp. 1813–1821 (2010)
- (44) Pang, K., Li, K., Yang, Y., Zhang, H., Hospedales, T.M., Xiang, T., Song, Y.Z.: Generalising fine-grained sketch-based image retrieval. In: CVPR (2019)
- (45) Pang, K., Song, Y.Z., Xiang, T., Hospedales, T.M.: Cross-domain generative learning for fine-grained sketch-based image retrieval. In: BMVC, pp. 1–12 (2017)
- (46) Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., Lerer, A.: Automatic differentiation in PyTorch. In: NIPS-W (2017)
- (47) Pennington, J., Socher, R., Manning, C.D.: Glove: Global vectors for word representation. In: EMNLP, pp. 1532–1543 (2014)
- (48) Qi, Y., Song, Y.Z., Zhang, H., Liu, J.: Sketch-based image retrieval via siamese convolutional neural network. In: ICIP, pp. 2460–2464 (2016)
- (49) Qiao, R., Liu, L., Shen, C., v. d. Hengel, A.: Less is more: Zero-shot learning from online textual documents with noise suppression. In: CVPR, pp. 2249–2257 (2016)
- (50) Ravi, S., Larochelle, H.: Optimization as a model for few-shot learning. In: ICLR (2017)
- (51) Reed, S., Akata, Z., Lee, H., Schiele, B.: Learning deep representations of fine-grained visual descriptions. In: CVPR, pp. 49–58 (2016)
- (52) Romera-Paredes, B., Torr, P.H.S.: An embarrassingly simple approach to zero-shot learning. In: ICML, pp. 2152–2161 (2015)
- (53) Saavedra, J.M.: Sketch based image retrieval using a soft computation of the histogram of edge local orientations (s-helo). In: ICIP, pp. 2998–3002 (2014)
- (54) Saavedra, J.M., Barrios, J.M.: Sketch based image retrieval using learned keyshapes (lks). In: BMVC, pp. 1–11 (2015)
- (55) Sangkloy, P., Burnell, N., Ham, C., Hays, J.: The sketchy database: Learning to retrieve badly drawn bunnies. ACM TOG 35(4), 1–12 (2016)
- (56) Satorras, V.G., Estrach, J.B.: Few-shot learning with graph neural networks. In: ICLR (2018)
- (57) Schönfeld, E., Ebrahimi, S., Sinha, S., Darrell, T., Akata, Z.: Generalized zero- and few-shot learning via aligned variational autoencoders. In: CVPR (2018)
- (58) Shen, Y., Liu, L., Shen, F., Shao, L.: Zero-shot sketch-image hashing. In: CVPR (2018)
- (59) Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv abs/1409.1556 (2014)
- (60) Snell, J., Swersky, K., Zemel, R.: Prototypical networks for few-shot learning. In: NIPS, pp. 4077–4087 (2017)
- (61) Socher, R., Ganjoo, M., Manning, C.D., Ng, A.: Zero-shot learning through cross-modal transfer. In: NIPS, pp. 935–943 (2013)
- (62) Song, J., Song, Y.Z., Xiang, T., Hospedales, T.: Fine-grained image retrieval: the text/sketch input dilemma. In: BMVC, pp. 1–12 (2017)
- (63) Song, J., Yu, Q., Song, Y.Z., Xiang, T., Hospedales, T.M.: Deep spatial-semantic attention for fine-grained sketch-based image retrieval. In: ICCV, pp. 5552–5561 (2017)
- (64) Su, W., Yuan, Y., Zhu, M.: A relationship between the average precision and the area under the roc curve. In: ICTIR, pp. 349–352 (2015)
- (65) Vinyals, O., Blundell, C., Lillicrap, T., kavukcuoglu, k., Wierstra, D.: Matching networks for one shot learning. In: NIPS, pp. 3630–3638 (2016)
- (66) Wang, F., Kang, L., Li, Y.: Sketch-based 3d shape retrieval using convolutional neural networks. In: CVPR, pp. 1875–1883 (2015)
- (67) Wang, M., Wang, C., Yu, J.X., Zhang, J.: Community detection in social networks: An in-depth benchmarking study with a procedure-oriented framework. In: VLDB, pp. 998–1009 (2015)
- (68) Wang, S., Ding, Z., Fu, Y.: Feature selection guided auto-encoder. In: AAAI, pp. 2725–2731 (2017)
- (69) Wang, W., Pu, Y., Verma, V.K., Fan, K., Zhang, Y., Chen, C., Rai, P., Carin, L.: Zero-shot learning via class-conditioned deep generative models. In: AAAI (2018)
- (70) Wang, Y., Girshick, R., Hebert, M., Hariharan, B.: Low-shot learning from imaginary data. In: CVPR, pp. 7278–7286 (2018)
- (71) Xian, Y., Akata, Z., Sharma, G., Nguyen, Q., Hein, M., Schiele, B.: Latent embeddings for zero-shot classification. In: CVPR, pp. 69–77 (2016)
- (72) Xian, Y., Lampert, C.H., Schiele, B., Akata, Z.: Zero-shot learning - a comprehensive evaluation of the good, the bad and the ugly. IEEE TPAMI pp. 1–14 (2018)
- (73) Xian, Y., Lorenz, T., Schiele, B., Akata, Z.: Feature generating networks for zero-shot learning. In: CVPR, pp. 5542–5551 (2018)
- (74) Xian, Y., Sharma, S., Schiele, B., Akata, Z.: f-vaegan-d2: A feature generating framework for any-shot learning. In: CVPR (2019)
- (75) Yang, Y., Luo, Y., Chen, W., Shen, F., Shao, J., Shen, H.T.: Zero-shot hashing via transferring supervised knowledge. In: ACM MM, pp. 1286–1295 (2016)
- (76) Yang, Z., Cohen, W.W., Salakhutdinov, R.: Revisiting semi-supervised learning with graph embeddings. In: ICML, pp. 40–48 (2016)
- (77) Yu, Q., Liu, F., Song, Y.Z., Xiang, T., Hospedales, T.M., Loy, C.C.: Sketch me that shoe. In: CVPR, pp. 799–807 (2016)
- (78) Yu, Q., Yang, Y., Liu, F., Song, Y.Z., Xiang, T., Hospedales, T.M.: Sketch-a-net: A deep neural network that beats humans. IJCV pp. 1–15 (2016)
- (79) Yu, Q., Yang, Y., Song, Y.Z., Xiang, T., Hospedales, T.: Sketch-a-net that beats humans. In: BMVC, pp. 1–12 (2015)
- (80) Yu, T., Meng, J., Yuan, J.: Multi-view harmonized bilinear network for 3d object recognition. In: CVPR, pp. 186–194 (2018)
- (81) Yu, Z., Yu, J., Fan, J., Tao, D.: Multi-modal factorized bilinear pooling with co-attention learning for visual question answering. In: ICCV, pp. 1839–1848 (2017)
- (82) Zhang, J., Shen, F., Liu, L., Zhu, F., Yu, M., Shao, L., Tao Shen, H., Van Gool, L.: Generative domain-migration hashing for sketch-to-image retrieval. In: ECCV, pp. 304–321 (2018)
- (83) Zhang, L., Xiang, T., Gong, S.: Learning a deep embedding model for zero-shot learning. In: CVPR, pp. 3010–3019 (2017)
- (84) Zhang, R., Lin, L., Zhang, R., Zuo, W., Zhang, L.: Bit-scalable deep hashing with regularized similarity learning for image retrieval and person re-identification. IEEE TIP 24(12), 4766–4779 (2015)
- (85) Zhang, Z., Saligrama, V.: Zero-shot learning via semantic similarity embedding. In: ICCV, pp. 4166–4174 (2015)
- (86) Zhang, Z., Saligrama, V.: Zero-shot learning via joint latent similarity embedding. In: CVPR, pp. 6034–6042 (2016)
- (87) Zhu, J., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: ICCV, pp. 2242–2251 (2017)