DALL-E for Detection: Language-driven Compositional Image Synthesis for Object Detection
Abstract
We propose a new paradigm to automatically generate training data with accurate labels at scale using text-to-image synthesis frameworks (e.g., DALL-E, Stable Diffusion). The proposed approach decouples training data generation into foreground object mask generation and background (context) image generation. For foreground object mask generation, we use a simple textual template with the object class name as input to DALL-E to generate a diverse set of foreground images. A foreground-background segmentation algorithm is then used to generate foreground object masks. Next, in order to generate context images, a language description of the context is first produced by applying an image captioning method to a small set of images representing the context. These language descriptions are then used to generate diverse sets of context images with the DALL-E framework. These are then composited with the object masks generated in the first step to provide an augmented training set for a classifier. We demonstrate the advantages of our approach on four object detection datasets, including the Pascal VOC and COCO object detection tasks. Furthermore, we also highlight the compositional nature of our data generation approach in out-of-distribution and zero-shot data generation scenarios.
1 Introduction
Training modern deep learning models requires large labeled datasets [38, 22, 44]. Obtaining such datasets is both expensive and time-consuming due to the large amount of human effort required. This raises a question: can we efficiently generate large-scale labelled data that also achieves high accuracy on a new downstream task? We hypothesize that any such approach should satisfy the following qualities (Table 1): no (or minimal) human involvement, automatic generalization of the images to any new classes and environments, scalability, generation of a high-quality and diverse set of images, explainability, compositionality, and privacy preservation.
To this end, synthetic techniques could be used as possible sources for generating labelled data for training computer vision models. One popular approach is to use computer graphics to generate data [41, 45, 48]. However, these approaches may require gathering 3D models of both objects and scenes, which can demand a large amount of skilled labor (e.g., 3D modeling expertise) and prohibits the scalability of graphics-generated synthetic labelled data. Another approach is object cut-and-paste [12], a 2D synthetic generation approach, but such approaches are limited because they still require a source of foreground objects and accurate foreground masks for those objects. A third approach is to use machine-learning-based neural rendering techniques such as NeRF-based approaches [33, 15]. These approaches generally require retraining models for every new object class and so cannot easily scale to a large number of object classes.

Recently, there has been a revolution in large-scale text-to-image synthesis models such as DALL-E [37], RU-DALLE [1], CogView [8], and Stable Diffusion [42]. They have been shown to achieve photorealism even for complex scenes and are able to capture the semantics and compositional nature of the real world. In addition, these models understand language descriptions of a scene and can therefore act as a natural bridge between humans and synthesis approaches. Given these qualities, can these text-to-image synthesis approaches be used to generate large-scale training data with accurate labels for computer vision problems? For simplicity, from now on, we use DALL-E to refer to text-to-image synthesis models in general (including Stable Diffusion and RU-DALLE).
Method | Quality | No Human | Adapt | Scalable | Explainable | Privacy | Comp. |
---|---|---|---|---|---|---|---|
Human capture | ✓ | ✗ | ✗ | ✗ | ✓ | ✗ | ✓ |
Web image | ✓ | ✓ | ✗ | ✓ | ✗ | ✗ | ✗ |
Public dataset | ✓ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗ |
Generative models | ✓ | ✓ | ✗ | ✓ | ✗ | ✓ | ✗ |
Ours | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
In this work, we propose a new text-to-image synthesis paradigm involving two components to generate large-scale training data with accurate labels for tasks like object detection and instance segmentation. The first component involves accurately generating foreground object masks for the object classes of interest. This is necessary because DALL-E cannot generate annotations; for example, it cannot produce bounding boxes for objects. In order to generate diverse foreground object masks, we first generate images containing mostly one object corresponding to the class of interest. We use a simple template with the class name as input to the DALL-E pipeline. For example, in order to generate images with a cat, we use the input prompt "an image of cat on pure background". Next, a simple background-foreground segmentation approach is used to obtain foreground object masks for the classes of interest.
The second step involves generating diverse background images that provide good context information for training recognition models. Context plays an important role in learning a good object recognition model; Divvala et al. [9] provide empirical evidence to support this claim. Dvornik et al. [11] showed that finding congruent context helped improve accuracy on object detection tasks. For example, placing airplanes and boats in their natural context helped to improve accuracy, since airplanes are generally found in the sky and boats on the water.
We again leverage DALL-E to generate a diverse set of high-quality context images. At the core of our approach lies an interplay between language descriptions of context and language-driven image generation. Given a small number of images that represent the context environment, we use image captioning to automatically generate a high-level language description of the context. The language description of the context is then used within a text-to-image generation pipeline (e.g., DALL-E) to generate a diverse set of images, which serve as context images.
Finally, to generate labelled data, we follow a simple strategy where we paste the foreground object masks (from step one) onto random context images (from step two), as in object cut-and-paste approaches [12]. The proposed pipeline satisfies all the desired properties of labelled image generation (Fig. 2). The data can be generated efficiently and without human involvement. Language descriptions for both the foreground and the background image generation provide explainable and compositional data generation: adding or removing objects or settings can be done easily in the language domain. For example, a description such as "an environment with a table" can be modified to a kitchen environment by utilizing the compositional properties of language, yielding "a kitchen environment with a table". Table 1 shows the benefit of our approach in generating context images over other approaches.
We have conducted extensive experiments on four publicly available benchmark object detection datasets and compared against different ways of generating context images. We demonstrate that our approach achieves much better accuracy than prior approaches. We also demonstrate the benefit of our approach in out-of-distribution context and zero-shot data generation scenarios that utilize the compositional nature of our method. Our main contributions are: (1) We propose a language-driven compositional image generation approach to automatically generate both foreground objects and background context images and form large-scale datasets. (2) We demonstrate the benefit of our proposed pipeline over several prior approaches. (3) We highlight its compositional nature through context image generation from out-of-distribution images and zero-shot data generation scenarios. To the best of our knowledge, this is the first work to use vision and language models for generating object detection and segmentation datasets.
2 Related works
Text-to-Image Synthesis Frameworks.
Text-to-image synthesis approaches have driven a new revolution in generating high-quality images that capture the semantics and compositionality of real-world scenes. Some of these approaches include DALL-E [37], RU-DALLE [1], Stable Diffusion [42], and CogView [8]. These approaches leverage large-scale transformer-based models trained on large-scale vision and text data. Though they have been shown to generate high-quality images of complex real-world scenes, they cannot generate ground-truth labels for objects; for example, they cannot provide bounding boxes or per-pixel annotations for objects of interest. In our work, we propose an automatic approach to generate high-quality images with ground-truth bounding box and per-pixel labels.
Synthetic Data Generation.
A series of works have used synthetic data for training computer vision models. Some use a graphics pipeline or computer games to generate high-quality labelled data [41, 40, 43, 26, 49, 21]. Using a graphics pipeline generally requires 3D models of both objects and environments, which may limit scalability. Others use generative models (e.g., GANs) [6, 16] or zero-shot synthesis [14] to augment datasets and remove bias. However, they need a relatively large initial dataset to train the model and do not easily generalize to new domains.
The idea of pasting foreground objects onto background images has emerged as an easy and scalable approach for large-scale data generation. It has been used for object instance segmentation [12], object detection and pose estimation [46, 23, 35, 47, 50], optical flow [10], domain adaptation [55], and semi-supervised learning [18]. These approaches generally require accurate foreground object masks, which limits their scalability. While we also utilize a cut-and-paste approach, in contrast to previous works in this space, our method can generate foreground object masks for any new object class.
Language for object recognition.
Language has been used to solve computer vision tasks. Vision-language models have been developed for image captioning [52, 39, 4, 20], visual question answering [5, 51, 2, 7], and other tasks [32, 54, 29]. In recent years, vision-and-language multi-modal models have been developed for self-supervised training. CLIP [36] showed how training a model on a large image-text pair dataset can generalize to several benchmark image classification datasets where purely image-based models performed poorly. Language information has also been used alongside vision information to solve other computer vision tasks such as object detection [19, 27, 30] and semantic image segmentation [28]. These works demonstrate the benefits of leveraging language information in solving computer vision tasks.
Another line of work uses large image-text pair corpora for text-to-image generation. These models learn to generate realistic environments, including out-of-context settings that may not be present during the training phase. Examples include DALL-E [37], RU-DALLE [1], CogView [8], and Stable Diffusion [42]. A recent concurrent work, X-Paste [57], has used Stable Diffusion for object detection. Our work is motivated by the generation quality of these text-to-image generation methods.
3 Method
The goal of this paper is to efficiently generate a large set of labeled data for training object detection models using text-to-image synthesis frameworks. In particular, the proposed approach decouples training data generation into generating a diverse set of foreground object masks and a diverse set of background (context) images. The foreground object masks are then composited onto the background images following the object cut-and-paste strategy [12]. The proposed approach also allows for compositional and explainable data generation. Our pipeline leverages off-the-shelf generative frameworks [24, 42] under textual guidance [25] to generate both the foreground masks and the context images.
3.1 Language-driven Context generation

Context image generation consists of two steps (a visualization of our pipeline is provided in Fig. 2). The first step generates a diverse set of language descriptions of the context by applying image captioning methods to a few images taken from the context environment. In the second step, a text-to-image generation pipeline synthesizes a large set of context images from the language descriptions.
Context description images (CDI). We assume that we are given a small set of images that describe the context, which can be as small as one image. We call them context description images (CDIs). These images can come from an environment that contextually looks similar to the test environment. For example, if the test scenario includes a kitchen environment, the small set of kitchen images can be taken from any public dataset or from web images.
Image caption. The next step involves describing the context information contained in the given CDIs. Language can provide a concise description of the context and is, in addition, both compositional and explainable in nature. Given these advantages, we leverage language descriptions to represent context. This raises a question: how can these language descriptions be generated?
In order to automatically generate such a description of the context, we use image captioning methods, which generate a set of diverse textual captions for input images. Over the years, many image captioning methods have been developed [39, 4, 20]. We use the self-critical sequence training (SCST) image captioning method developed by Rennie et al. [39]. SCST has been shown to achieve very good accuracy on different caption generation datasets, including the COCO caption generation challenge. However, our method does not rely on any specific image captioning method; any popular or new state-of-the-art method can be used in place of SCST. For each CDI, language descriptions are generated using SCST, so the caption generation step provides a set of natural language descriptions per CDI. Next, we use two approaches to create new context description sentences. Approach 1: given a small number of captions, we manually select context words (e.g., grass field) and fill them into a set of templates, for instance, "a real image of grass field". Approach 2: we further augment related context words based on prior knowledge (for example, animals such as dogs can be found in a forest) and fill them into the set of templates. Moreover, the manual extraction of context words could be substituted by noun extraction from the sentences followed by interest-class removal using WordNet, and ConceptNet could be used to automatically provide augmented context words. Approach 2 can also be used in the extreme (zero-shot) setting, where we have no CDIs.
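As a concrete illustration of Approach 1 and its automated variant, the sketch below fills extracted context nouns into prompt templates. The template strings, the interest-class list, and the NLTK-based noun extraction are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch: turn CDI captions into context description prompts.
# Templates, class list, and noun extraction are illustrative stand-ins.
import nltk  # assumes 'punkt' and 'averaged_perceptron_tagger' data are installed

INTEREST_CLASSES = {"dog", "cat", "person", "car"}  # subset of interest classes to exclude
TEMPLATES = [
    "a real image of {}",
    "a realistic photo of {}",
    "a photo taken in {}",
]

def extract_context_words(caption: str) -> list:
    """Pull candidate context nouns out of a caption, dropping interest-class words."""
    tokens = nltk.word_tokenize(caption.lower())
    tagged = nltk.pos_tag(tokens)
    nouns = [w for w, tag in tagged if tag.startswith("NN")]
    return [w for w in nouns if w not in INTEREST_CLASSES]

def build_context_prompts(captions: list) -> list:
    """Fill every extracted context word into every template."""
    prompts = []
    for cap in captions:
        for word in extract_context_words(cap):
            prompts.extend(t.format(word) for t in TEMPLATES)
    return sorted(set(prompts))

# Example: captions produced by the image-captioning model on one CDI.
print(build_context_prompts(["a dog running on a grass field near a fence"]))
```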
Image generation. The next step generates a diverse set of images for each language description of the context. Recent years have seen a remarkable breakthrough in text-to-image generation; popular frameworks include DALL-E [37], Stable Diffusion [42], CogView [8], and RU-DALLE [1]. They use large-scale text-image data pairs to train large transformer models in a self-supervised fashion, which allows them to generate high-quality photorealistic images of real-world environments while incorporating semantic and compositional knowledge of the world. For each text description, we use DALL-E to generate a batch of images, so even from a single CDI we can automatically generate a large set of new context images. As a post-processing step, we use CLIP [36] to filter the generated images and ensure that they contain no objects of the interest classes.
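A minimal sketch of this generation step is shown below, assuming the Hugging Face diffusers Stable Diffusion pipeline as the text-to-image backend; the checkpoint name and the number of images per prompt are illustrative choices, not the paper's settings.

```python
# Sketch: generate several context images per text description with Stable Diffusion.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def generate_context_images(prompts, images_per_prompt=8):
    """Return a flat list of PIL images, several per context description."""
    images = []
    for prompt in prompts:
        out = pipe(prompt, num_images_per_prompt=images_per_prompt)
        images.extend(out.images)
    return images

context_images = generate_context_images(["a real image of a grass field"])
```

The generated images would then go through the CLIP-based filtering step described above before being used as backgrounds.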
3.2 Language-driven Foreground generation
We note that the image generation process can also be used to generate diverse and high-quality foreground images. We manually design several fixed prompt templates, such as "A photo of <object>", "A realistic photo of <object>", and "A photo of <object> in pure background" (the full set of prompt templates is provided in the appendix). <object> is replaced by each category label, and the prompt is fed into DALL-E to generate high-quality iconic object foreground images. We highlight that our foreground generation is zero-shot: only category labels are required. A benefit of this process is that we can easily separate the objects, since prompt engineering allows us to generate objects on simple isolated backgrounds. We then use a generic unsupervised foreground extraction method [34] to obtain the masks.
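The sketch below shows this zero-shot prompt construction for foregrounds; the template list mirrors the one reported in the appendix (Table 7), while the helper function name is our own.

```python
# Sketch of zero-shot foreground prompt construction: only the class name is needed.
FOREGROUND_TEMPLATES = [
    "A photo of {}",
    "A realistic photo of {}",
    "A photo of {} in pure background",
    "{} in a white background",
    "{} without background",
    "{} isolated on white background",
]

def foreground_prompts(class_name: str) -> list:
    """Fill the class label into every foreground template."""
    return [t.format(class_name) for t in FOREGROUND_TEMPLATES]

# e.g. foreground_prompts("bus") -> ["A photo of bus", ..., "bus isolated on white background"]
# Each prompt is fed to the text-to-image model; an unsupervised foreground
# extractor (e.g., entity segmentation [34]) then produces the object mask.
```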
The text-to-image generation model has several benefits. First, it is a compact version of web-scale image-text pair data. That makes it both portable and scalable. Also, being a generative model, the generation pipeline could create new scenarios that were not present in the training data. Furthermore, the synthetic nature of the data generation procedure allows our method to be privacy-preserving. Finally, language based data generation allows the model to be compositional. This whole approach allows us to generate large scale foreground object masks and background context images. Many of these generated images have been provided in the supplementary material.
3.3 Compositional dataset generation
We describe how to combine foreground object masks and background images obtained in Section 3.1 and Section 3.2. We composite foreground object masks onto the background images to create labelled synthetic data.
Label data generation. At each step, a group of foreground object masks is selected and pasted onto a sampled background image, and this procedure is repeated until all foreground object masks are pasted. Each foreground mask, after 2D geometric augmentation such as rotation and scaling, is pasted at a random location in the image. In addition, following [12, 18], we apply a Gaussian blur on the object boundary to blend the pasted foregrounds.
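A minimal PIL-based sketch of this paste step is given below; the scale range, rotation range, and blur radius are illustrative values, since the exact augmentation parameters are not reproduced here.

```python
# Sketch: paste one RGBA foreground cutout onto a background with simple
# geometric augmentation and Gaussian-blurred boundary blending.
import random
from PIL import Image, ImageFilter

def paste_object(background: Image.Image, fg_rgba: Image.Image,
                 scale_range=(0.3, 1.0), max_rotation=30, blur_radius=2):
    """Return (composited image, bounding box) for one pasted foreground."""
    bg = background.copy()
    # 2D geometric augmentation: random scaling and rotation of the cutout.
    scale = random.uniform(*scale_range)
    fg = fg_rgba.resize((max(1, int(fg_rgba.width * scale)),
                         max(1, int(fg_rgba.height * scale))))
    fg = fg.rotate(random.uniform(-max_rotation, max_rotation), expand=True)
    # Blur only the alpha channel so the pasted boundary blends smoothly.
    alpha = fg.split()[-1].filter(ImageFilter.GaussianBlur(blur_radius))
    # Random location such that the object stays inside the image.
    x = random.randint(0, max(0, bg.width - fg.width))
    y = random.randint(0, max(0, bg.height - fg.height))
    bg.paste(fg, (x, y), mask=alpha)
    box = (x, y, x + fg.width, y + fg.height)  # bounding-box label for free
    return bg, box
```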
Compositional data generation. The natural language description allows compositional data generation: we can intervene in the language description to add or remove a key word (Fig. 3). For example, the word kitchen can be added to generate kitchen context. This allows us to generate new images with new context information. Similarly, we can remove unwanted objects. For example, if the initial context description involves people, we can remove people from the generated images by simply not mentioning the word people; the text-to-image generation pipeline can then generate a large set of images without people. More details are in the supplementary material.
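A toy sketch of such a language intervention is shown below; the distractor word list and the string edits are illustrative stand-ins for the word detection used in the pipeline.

```python
# Toy sketch of intervening on a caption before re-generation.
DISTRACTOR_WORDS = {"people", "person", "man", "woman"}

def remove_distractors(caption: str) -> str:
    """Drop distractor words so the generated context no longer contains them."""
    return " ".join(w for w in caption.split() if w.lower() not in DISTRACTOR_WORDS)

def add_context(caption: str, extra: str = "kitchen") -> str:
    """e.g. 'an environment with a table' -> 'a kitchen environment with a table'."""
    return caption.replace("an environment", f"a {extra} environment", 1)

print(remove_distractors("a kitchen with people standing near a table"))
print(add_context("an environment with a table"))
```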
4 Experiments
We demonstrate the effectiveness of the proposed approach in automatically generating large datasets in three scenarios. First, we evaluate our method on large-scale object detection tasks on the Pascal VOC and COCO datasets. Here, large-scale training data is created by synthetically generating the foreground masks for both VOC and COCO object classes as well as the background (context) images, as discussed in the method section. Next, we highlight the benefits of our approach on instance detection tasks, considering the GMU-Kitchen [17], Active Vision [3], and YCB-video [53] datasets. We follow the training strategy of [12]; since these datasets already provide object masks, we use our method only to generate good context images for the downstream object instance detection tasks on these three datasets. Finally, we provide results highlighting the compositional nature of our data generation process. For the Pascal VOC and COCO experiments, we use Stable Diffusion [42] as the text-to-image generation model, and for the other experiments we use RU-DALLE [1].
Training procedure and evaluation criterion. We use Faster R-CNN [38] with a ResNet-50 [22] backbone to train both the object detection and the object instance detection networks. Models are trained until convergence for both the baselines and our approaches, using a fixed learning rate and weight decay. Furthermore, we report the standard mean average precision (mAP) for the evaluation of object detection results.
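A sketch of this detector setup is given below, assuming the torchvision implementation of Faster R-CNN with a ResNet-50 FPN backbone; the learning rate and weight decay shown are placeholder values, not the ones used in the paper.

```python
# Sketch: Faster R-CNN (ResNet-50 FPN) trained on cut-and-paste data.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 21  # 20 VOC classes + background
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Placeholder hyperparameters (the paper's exact values are not reproduced here).
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9, weight_decay=1e-4)

def train_step(images, targets):
    """images: list of CHW tensors; targets: list of dicts with 'boxes' and 'labels'
    produced by the cut-and-paste pipeline."""
    model.train()
    loss_dict = model(images, targets)  # detector returns a dict of losses in train mode
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```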

4.1 DALL-E for general large dataset: both foreground and context generation
EXP id | Dataset | #CDI | Foreground | Background | mAP@50 | mAP |
---|---|---|---|---|---|---|
G-1 EXP-1 | VOC 1.4k train | 1464 | Real | Real | 45.50 | 17.00 |
G-2 EXP-1 | VOC 0 shot | 0 | Syn | Syn | 43.24 | 19.78 |
G-2 EXP-2 | | | Syn | Web-bg | 38.35 | 17.76 |
G-3 EXP-1 | VOC 1 shot | 20 | Real | Real | 0.14 | 0.04 |
G-3 EXP-2 | + cut paste | | Real | Real | 6.03 | 2.07 |
G-3 EXP-3 | use syn fg | | Syn + Real | Real | 37.97 | 17.53 |
G-3 EXP-4 | only syn | | Syn | Syn | 44.24 | 20.63 |
G-3 EXP-5 | syn + real | | Syn + Real | Syn + Real | 45.62 | 21.45 |
G-3 EXP-6 | COCO-bg + real | | Syn + Real | COCO-bg + Real | 38.79 | 18.04 |
G-4 EXP-1 | VOC 10 shot | 200 | Real | Real | 9.12 | 2.35 |
G-4 EXP-2 | + cut paste | | Real | Real | 29.60 | 10.82 |
G-4 EXP-3 | use syn fg | | Syn + Real | Real | 48.14 | 21.62 |
G-4 EXP-4 | only syn | | Syn | Syn | 44.59 | 20.72 |
G-4 EXP-5 | syn + real | | Syn + Real | Syn + Real | 51.82 | 25.87 |
G-4 EXP-6 | COCO-bg + real | | Syn + Real | COCO-bg + Real | 48.20 | 23.95 |
EXP id | Dataset | #CDI | Foreground | Background | mAP@50 | mAP |
---|---|---|---|---|---|---|
EXP-1 | COCO 1 shot | 80 | Real | Real | 1.47 | 0.92 |
EXP-2 | + cut paste | | Real | Real | 2.89 | 1.23 |
EXP-3 | use syn fg | | Syn + Real | Real | 17.87 | 8.64 |
EXP-4 | only syn | | Syn | Syn | 16.22 | 7.66 |
EXP-5 | syn + real | | Syn + Real | Syn + Real | 20.82 | 10.63 |
4.1.1 PASCAL-VOC dataset
We first evaluate our method on PASCAL VOC 2012 object detection task [13].
Dataset. The dataset has 20 foreground classes. The training and validation sets consist of 1,464 and 1,449 images, respectively, with bounding boxes and instance segmentation masks. We use the instance segmentation masks from the training set ground-truth labels as our real foreground masks.
Experiment setup. As shown in Table 2, we conduct four groups of experiments. The first group (G-1) is the baseline experiment: we train the model using all 1,464 training images without any foreground or background synthesis. The second group (G-2) is the zero-shot setting, where we use no CDIs for background image generation (Approach 1) and rely only on Approach 2 (class-specific context words from prior knowledge) for the context descriptions (G-2 EXP-1). To compare our background context image generation pipeline with directly collecting background images from the internet, we build a web-image baseline (G-2 EXP-2), where we collect the same number of images using the same context description sentences with the Google Search engine.
The third group (G-3) is VOC-1-shot, where we randomly sample 1 image per class (20 images total) from the 1,464 real training images as context description images (CDIs). The VOC-1-shot group consists of 6 experiments. G-3 EXP-1 is the baseline, which trains the object detection model on the 20 real training images without any cut-and-paste augmentation. G-3 EXP-2 applies the cut-and-paste method (Section 3.3) to the baseline: we randomly paste the real foreground object masks from the CDIs onto the real training images. For the remaining experiments, we first use the information provided by the CDIs to generate context images with Stable Diffusion (syn-back), and we also use Stable Diffusion to synthesize foreground images and obtain foreground masks (syn-obj) with our method. G-3 EXP-3 uses both the real foreground object masks and syn-obj as foregrounds and pastes them on the real CDIs as backgrounds. G-3 EXP-4 uses only synthetic foregrounds (syn-obj) pasted on the synthetic backgrounds (syn-back). G-3 EXP-5 uses both the real object masks and syn-obj as foregrounds and both the real CDIs and syn-back as backgrounds; the combined real+syn foregrounds are then pasted on the combined real+syn backgrounds. G-3 EXP-6 substitutes the syn-back in EXP-5 with random COCO images (COCO-bg); the rest of the setting is the same. We select COCO images that contain only objects from the remaining 60 COCO classes so that they are disjoint from the VOC objects.
The fourth group (G-4) is VOC-10-shot, which consists of the same 6 experiments as the third group. The only difference is the number of CDIs: we randomly sample 10 images per class (200 images total) from the 1,464 real training images as CDIs. For all the above experiments, we repeat the foregrounds and backgrounds accordingly so that the total size of the synthesized dataset after pasting is 60,000 images.
Results. Table 2 shows the results of the four groups of experiments. First, simply using only synthetic foreground and background images in the zero-shot setting achieves 43.2% mAP@50, which is close to the fully supervised setting that uses all 1,464 Pascal training images (G-1 EXP-1). If we instead use web images collected from Google Search (G-2 EXP-2), we observe a 4.89% decrease in mAP@50 compared to our method. This shows that our proposed approach provides better context images than collected web images.
Next, we provide results when CDIs are available, in the VOC-1-shot (G-3) and VOC-10-shot (G-4) settings. In G-3 EXP-4, after adding 1-shot CDIs, our approach with just synthetic foregrounds and synthetic backgrounds generated from Stable Diffusion achieves a +38.21% improvement over the baseline trained on the 20 CDI images (1-shot per class) with cut-and-paste (G-3 EXP-2). Additionally, if we combine just around 1.4% of the real training images (the 20 CDIs' real foregrounds and real backgrounds) with our synthesized foregrounds and backgrounds, we obtain better results (45.62 mAP@50, G-3 EXP-5) than the baseline that uses all 1,464 Pascal training images (G-1 EXP-1). To further show the effectiveness of the synthesized background images, we substitute the Stable Diffusion backgrounds in G-3 EXP-5 with random COCO images (G-3 EXP-6) and observe a 6.83% decrease in mAP@50, which demonstrates that our caption descriptions are important for generating congruent context images.
If we further increase the number of CDIs to 10 real images per class (200 images in total, the G-4 experiments), we observe significant improvements of our method over the corresponding baselines. For instance, combining only 200 real images with our synthetic pipeline (G-4 EXP-5), the model trained with our dataset achieves 51.82% mAP@50, a +6.32% improvement over the baseline that uses all 1,464 Pascal training images (G-1 EXP-1). Note that the real images used in G-4 EXP-5 (200 CDIs) are just 13.7% of the 1,464 training images used in G-1 EXP-1.
4.1.2 COCO dataset
We also evaluate our method on the COCO dataset [31], using the same experimental setup for the model, training parameters, and evaluation metrics as for the PASCAL VOC dataset in Sec. 4.1.1. We only conduct the 1-shot experiments.
Results. Table 3 shows the results of the different experiments in the one-shot setting. First, our approach with just synthetic foregrounds and synthetic backgrounds generated from Stable Diffusion (EXP-4) achieves a +14.75% improvement over the baseline trained on the 80 CDI images (1-shot per class) (EXP-1). In addition, we observe a +13.33% improvement over applying cut-and-paste augmentation to the baseline (EXP-2). Finally, if we combine the 80 CDIs' real foregrounds and real backgrounds with our synthesized foregrounds and backgrounds, we observe a further +4.60-point improvement in mAP@50 (EXP-5) over (EXP-4).
4.2 DALL-E for instance-specific dataset: only context generation
Next, we present results of our method on object instance detection tasks on three datasets: GMU-Kitchen, Active Vision, and YCB-video. We follow the experimental setup of the prior approach [12] and use the object instance masks provided with each dataset.
Dataset. GMU-Kitchen [17] consists of images from 9 kitchen scenes for 11 object instance classes; following the object cut-and-paste paper [12], we use 3-fold cross-validation for testing on this dataset. Active Vision [3] provides training and test images for 6 object classes. We also evaluate our approach on the YCB-video dataset [53], which consists of 1000 training and 1000 test images for 21 object classes. Additional details about these datasets, including information about the foreground masks for each dataset, are provided in the supplementary material.
Baselines. We compare our approach with other standard approaches for selecting context images. The main baseline is the approach described in the object cut-and-paste method [12], which selects context images from the UW dataset [17]. We also compare with other choices of background: black images, images from the COCO dataset [31], the context description images (CDIs) themselves, and random images generated by RU-DALLE.
4.2.1 Results

GMU-Kitchen dataset. Quantitative results are shown in Table 4. We observe that our approach obtains a 2.2-point improvement over the Object Cut-and-Paste baseline (78.3 vs. 76.1 mAP), which highlights that our approach generates a more diverse set of context images than the images from the UW dataset. Furthermore, we also observe improvements of 37.1, 63.3, and 15.5 points over the black, CDI, and COCO backgrounds, respectively. Next, we conduct experiments to demonstrate that caption descriptions are important for generating congruent context images: we use random language descriptions to generate images from RU-DALLE and observe an 11.5-point improvement of our approach over these random RU-DALLE images. Finally, combining real-world GMU training data with our synthetic data achieves the best performance on the GMU test set, i.e., a 5.1-point improvement over training with the real GMU training data alone. Compared to all the baselines, our approach achieves better performance, which highlights the benefit of using our language-driven context image generation approach to synthesize congruent context images.
Dataset | #CDI | mAP |
---|---|---|
Real GMU train | - | 86.3 |
Black | 1500 | 41.2 |
CDI | 10 | 15.0 |
Random (COCO) | 1500 | 62.8 |
Random (DALL-E) | 1500 | 66.8 |
UW-Kitchen | 1500 | 76.1 |
DALL-E (ours) | 1500 | 78.3 |
DALL-E (ours) | 2400 | 80.1 |
DALL-E (ours)+Real | 1500 | 91.4 |
Dataset | Active Vision | YCB-video |
---|---|---|
UW-Kitchen | 22.6 | 38.3 |
DALL-E (ours) | 25.8 | 45.5 |
Active Vision dataset. Quantitative results are shown in Table 5. We observe that our approach obtains a 3.2-point mAP improvement over the object cut-and-paste baseline, which highlights that our approach generates better context images than the images from the UW dataset.
YCB-video dataset. Table 5 shows quantitative numbers for both the baseline and our approach. Our approach obtains a 7.2-point improvement over Object Cut-and-Paste [12] with UW-Kitchen real context images. This highlights that our language-driven, DALL-E-generated images provide more congruent context than real-world images from other public datasets.
4.3 Compositional Model
Here we demonstrate the compositional nature of our approach and highlight how language, as a self-interpretable modality with compositionality property, can provide several benefits for synthetic data generation.
Dataset | No Intervention | After Intervention |
---|---|---|
Cartoon Kitchen | 70.0 | 76.7 |
Skeleton Kitchen | 64.6 | 74.8 |
Objects in Kitchen | 71.8 | 77.0 |
Kitchens with Human | 70.9 | 76.9 |
Out-of-distribution CDIs. We first consider scenarios where the context description images are out-of-distribution. For example, suppose the task is evaluation in a real kitchen environment, but the context description images are sketches or cartoon images of a kitchen. Even in these scenarios our approach can generate very good context images, because the image captioning method still works on these out-of-distribution images. Some of these out-of-distribution images, their corresponding captions, and the context images generated by our approach are shown in Fig. 3.
Language intervention. The compositional nature of our language-based context image generation allows us to remove unnecessary or noisy information from, or add relevant missing information to, the original textual description of the CDIs. For instance, suppose the test scenario is a real kitchen with people present. The language descriptions of these CDIs may contain people as a distractor, which may hamper the quality of the generated images and affect accuracy. Using our language-based image generation pipeline, we can remove the distractor by automatically detecting and removing the distractor word (people) from the caption via word detection before feeding it to the DALL-E framework. A similar approach can also add relevant missing information to the original textual description of the CDIs.
We form two datasets: one from context images generated with the original captions of the noisy CDIs, and one from context images generated with the modified captions, and use them to demonstrate our compositional advantage in Table 6. As seen, using context images generated after modification improves performance by 6.7, 10.2, 5.2, and 6.0 points over images generated from the unmodified captions on the four out-of-distribution scenarios (Cartoon Kitchen, Skeleton Kitchen, Objects in Kitchen, and Kitchens with Human). Please refer to the supplementary material for more qualitative images and further analysis.
5 Conclusion
We have proposed a new paradigm to generate large-scale labelled data for object detection and segmentation tasks using large vision-and-language text-to-image synthesis frameworks. We demonstrate effortless labelled data generation on popular benchmarks for object detection tasks. Computer vision models trained on these data improve over models trained with large amounts of real data, reducing the need for an expensive human labeling process. We also highlight the compositional nature of our data generation approach in out-of-distribution and zero-shot data generation scenarios.
There are several interesting extensions. Our approach can be used for other tasks such as instance segmentation and for handling long-tailed and imbalanced data. The presented approach can also be used to make models more robust by iteratively generating data where the detection model fails and using it to improve robustness. We believe the presented idea of using vision-text models for automatically generating large labelled datasets opens the door to democratizing computer vision models.
Acknowledgments This work was supported in part by C-BRIC (one of six centers in JUMP, a Semiconductor Research Corporation (SRC) program sponsored by DARPA), DARPA (HR00112190134) and the Army Research Office (W911NF2020053). The authors affirm that the views expressed herein are solely their own, and do not represent the views of the United States government or any agency thereof.
References
- [1] sberbank-ai/ru-dalle: Generate images from texts. in russian. https://github.com/sberbank-ai/ru-dalle. (Accessed on 03/07/2022).
- [2] Aishwarya Agrawal, Dhruv Batra, Devi Parikh, and Aniruddha Kembhavi. Don’t just assume; look and answer: Overcoming priors for visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4971–4980, 2018.
- [3] Phil Ammirato, Patrick Poirson, Eunbyung Park, Jana Košecká, and Alexander C Berg. A dataset for developing and benchmarking active vision. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 1378–1385. IEEE, 2017.
- [4] Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6077–6086, 2018.
- [5] Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. Vqa: Visual question answering. In Proceedings of the IEEE international conference on computer vision, pages 2425–2433, 2015.
- [6] Christopher Bowles, Liang Chen, Ricardo Guerrero, Paul Bentley, Roger Gunn, Alexander Hammers, David Alexander Dickie, Maria Valdés Hernández, Joanna Wardlaw, and Daniel Rueckert. Gan augmentation: Augmenting training data using generative adversarial networks. arXiv preprint arXiv:1810.10863, 2018.
- [7] Abhishek Das, Samyak Datta, Georgia Gkioxari, Stefan Lee, Devi Parikh, and Dhruv Batra. Embodied question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1–10, 2018.
- [8] Ming Ding, Zhuoyi Yang, Wenyi Hong, Wendi Zheng, Chang Zhou, Da Yin, Junyang Lin, Xu Zou, Zhou Shao, Hongxia Yang, et al. Cogview: Mastering text-to-image generation via transformers. Advances in Neural Information Processing Systems, 34, 2021.
- [9] Santosh K Divvala, Derek Hoiem, James H Hays, Alexei A Efros, and Martial Hebert. An empirical study of context in object detection. In 2009 IEEE Conference on computer vision and Pattern Recognition, pages 1271–1278. IEEE, 2009.
- [10] Alexey Dosovitskiy, Philipp Fischer, Eddy Ilg, Philip Hausser, Caner Hazirbas, Vladimir Golkov, Patrick van der Smagt, Daniel Cremers, and Thomas Brox. Flownet: Learning optical flow with convolutional networks. In ICCV, December 2015.
- [11] Nikita Dvornik, Julien Mairal, and Cordelia Schmid. Modeling visual context is key to augmenting object detection datasets. In Proceedings of the European Conference on Computer Vision (ECCV), pages 364–380, 2018.
- [12] Debidatta Dwibedi, Ishan Misra, and Martial Hebert. Cut, paste and learn: Surprisingly easy synthesis for instance detection. In Proceedings of the IEEE international conference on computer vision, pages 1301–1310, 2017.
- [13] Mark Everingham, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. The pascal visual object classes (voc) challenge. International journal of computer vision, 88(2):303–338, 2010.
- [14] Yunhao Ge, Sami Abu-El-Haija, Gan Xin, and Laurent Itti. Zero-shot synthesis with group-supervised learning. arXiv preprint arXiv:2009.06586, 2020.
- [15] Yunhao Ge, Harkirat Behl, Jiashu Xu, Suriya Gunasekar, Neel Joshi, Yale Song, Xin Wang, Laurent Itti, and Vibhav Vineet. Neural-sim: Learning to generate training data with nerf. In Shai Avidan, Gabriel Brostow, Moustapha Cissé, Giovanni Maria Farinella, and Tal Hassner, editors, Computer Vision – ECCV 2022, pages 477–493, Cham, 2022. Springer Nature Switzerland.
- [16] Yunhao Ge, Jiaping Zhao, and Laurent Itti. Pose augmentation: Class-agnostic object pose transformation for object recognition. In European Conference on Computer Vision, pages 138–155. Springer, 2020.
- [17] Georgios Georgakis, Md Alimoor Reza, Arsalan Mousavian, Phi-Hung Le, and Jana Košecká. Multiview rgb-d dataset for object instance detection. In 2016 Fourth International Conference on 3D Vision (3DV), pages 426–434. IEEE, 2016.
- [18] Golnaz Ghiasi, Yin Cui, Aravind Srinivas, Rui Qian, Tsung-Yi Lin, Ekin D Cubuk, Quoc V Le, and Barret Zoph. Simple copy-paste is a strong data augmentation method for instance segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2918–2928, 2021.
- [19] Xiuye Gu, Tsung-Yi Lin, Weicheng Kuo, and Yin Cui. Zero-shot detection via vision and language knowledge distillation. arXiv e-prints, pages arXiv–2104, 2021.
- [20] Danna Gurari, Yinan Zhao, Meng Zhang, and Nilavra Bhattacharya. Captioning images taken by people who are blind. In European Conference on Computer Vision, pages 417–434. Springer, 2020.
- [21] Ankur Handa, Viorica Patraucean, Vijay Badrinarayanan, Simon Stent, and Roberto Cipolla. Scenenet: Understanding real world indoor scenes with synthetic data. CoRR, abs/1511.07041, 2015.
- [22] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
- [23] Stefan Hinterstoisser, Vincent Lepetit, Paul Wohlhart, and Kurt Konolige. On pre-trained image features and synthetic images for deep learning. arXiv preprint arXiv:1710.10710, 2017.
- [24] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840–6851, 2020.
- [25] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022.
- [26] Tomáš Hodaň, Vibhav Vineet, Ran Gal, Emanuel Shalev, Jon Hanzelka, Treb Connell, Pedro Urbina, Sudipta Sinha, and Brian Guenter. Photorealistic image synthesis for object instance detection. ICIP, 2019.
- [27] Aishwarya Kamath, Mannat Singh, Yann LeCun, Gabriel Synnaeve, Ishan Misra, and Nicolas Carion. Mdetr-modulated detection for end-to-end multi-modal understanding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1780–1790, 2021.
- [28] Boyi Li, Kilian Q Weinberger, Serge Belongie, Vladlen Koltun, and René Ranftl. Language-driven semantic segmentation. arXiv preprint arXiv:2201.03546, 2022.
- [29] Gen Li, Nan Duan, Yuejian Fang, Ming Gong, and Daxin Jiang. Unicoder-vl: A universal encoder for vision and language by cross-modal pre-training. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 11336–11344, 2020.
- [30] Liunian Harold Li, Pengchuan Zhang, Haotian Zhang, Jianwei Yang, Chunyuan Li, Yiwu Zhong, Lijuan Wang, Lu Yuan, Lei Zhang, Jenq-Neng Hwang, et al. Grounded language-image pre-training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10965–10975, 2022.
- [31] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740–755. Springer, 2014.
- [32] Jiasen Lu, Vedanuj Goswami, Marcus Rohrbach, Devi Parikh, and Stefan Lee. 12-in-1: Multi-task vision and language representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10437–10446, 2020.
- [33] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In European conference on computer vision, pages 405–421. Springer, 2020.
- [34] Lu Qi, Jason Kuen, Yi Wang, Jiuxiang Gu, Hengshuang Zhao, Zhe Lin, Philip Torr, and Jiaya Jia. Open-world entity segmentation. arXiv preprint arXiv:2107.14228, 2021.
- [35] Mahdi Rad and Vincent Lepetit. Bb8: A scalable, accurate, robust to partial occlusion method for predicting the 3d poses of challenging objects without using depth. In International Conference on Computer Vision, volume 1, page 5, 2017.
- [36] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763. PMLR, 2021.
- [37] Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. In International Conference on Machine Learning, pages 8821–8831. PMLR, 2021.
- [38] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis & Machine Intelligence, (6):1137–1149, 2017.
- [39] Steven J Rennie, Etienne Marcheret, Youssef Mroueh, Jerret Ross, and Vaibhava Goel. Self-critical sequence training for image captioning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7008–7024, 2017.
- [40] Stephan R. Richter, Zeeshan Hayder, and Vladlen Koltun. Playing for benchmarks. In ICCV, 2017.
- [41] Stephan R Richter, Vibhav Vineet, Stefan Roth, and Vladlen Koltun. Playing for data: Ground truth from computer games. In European conference on computer vision, pages 102–118. Springer, 2016.
- [42] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684–10695, 2022.
- [43] Germán Ros, Laura Sellart, Joanna Materzynska, David Vázquez, and Antonio M. López. The SYNTHIA dataset: A large collection of synthetic images for semantic segmentation of urban scenes. In CVPR, 2016.
- [44] Evan Shelhamer, Jonathan Long, and Trevor Darrell. Fully convolutional networks for semantic segmentation. PAMI, 2017.
- [45] Shuran Song, Fisher Yu, Andy Zeng, Angel X Chang, Manolis Savva, and Thomas Funkhouser. Semantic scene completion from a single depth image. Proceedings of 30th IEEE Conference on Computer Vision and Pattern Recognition, 2017.
- [46] Hao Su, Charles Ruizhongtai Qi, Yangyan Li, and Leonidas J. Guibas. Render for CNN: viewpoint estimation in images using cnns trained with rendered 3d model views. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pages 2686–2694, 2015.
- [47] Bugra Tekin, Sudipta N. Sinha, and Pascal Fua. Real-time seamless single shot 6d object pose prediction. CoRR, abs/1711.08848, 2017.
- [48] J. Tremblay, T. To, and S. Birchfield. Falling things: A synthetic dataset for 3d object detection and pose estimation. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 2119–21193, Los Alamitos, CA, USA, jun 2018. IEEE Computer Society.
- [49] Jonathan Tremblay, Thang To, and Stan Birchfield. Falling things: A synthetic dataset for 3d object detection and pose estimation. In CVPR, 2018.
- [50] Shashank Tripathi, Siddhartha Chandra, Amit Agrawal, Ambrish Tyagi, James M Rehg, and Visesh Chari. Learning to generate synthetic data via compositing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 461–470, 2019.
- [51] Ramakrishna Vedantam, Karan Desai, Stefan Lee, Marcus Rohrbach, Dhruv Batra, and Devi Parikh. Probabilistic neural symbolic models for interpretable visual question answering. In International Conference on Machine Learning, pages 6428–6437. PMLR, 2019.
- [52] Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3156–3164, 2015.
- [53] Yu Xiang, Tanner Schmidt, Venkatraman Narayanan, and Dieter Fox. Posecnn: A convolutional neural network for 6d object pose estimation in cluttered scenes. arXiv preprint arXiv:1711.00199, 2017.
- [54] Xu Yang, Hanwang Zhang, Guojun Qi, and Jianfei Cai. Causal attention for vision-language tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9847–9857, 2021.
- [55] Woo-Han Yun, Taewoo Kim, Jaeyeon Lee, Jaehong Kim, and Junmo Kim. Cut-and-paste dataset generation for balancing domain gaps in object instance detection. IEEE Access, 9:14319–14329, 2021.
- [56] Lingzhi Zhang, Tarmily Wen, Jie Min, Jiancong Wang, David Han, and Jianbo Shi. Learning object placement by inpainting for compositional data augmentation. In European Conference on Computer Vision, pages 566–581. Springer, 2020.
- [57] Hanqing Zhao, Dianmo Sheng, Jianmin Bao, Dongdong Chen, Dong Chen, Fang Wen, Lu Yuan, Ce Liu, Wenbo Zhou, Qi Chu, et al. X-paste: Revisit copy-paste at scale with clip and stablediffusion. arXiv preprint arXiv:2212.03863, 2022.
Appendix
We provide details of our methodology in Section 6, including ablation studies of various components in our pipeline and some observations on those components' importance. Further, we show that our pipeline naturally extends to other tasks such as low-resource instance segmentation in Section 7. Moreover, we demonstrate that blending real data can further boost performance in Section 8. Lastly, in Section 9 we show that by utilizing the compositionality of natural language we can narrow the domain gap and tackle the out-of-domain problem. For completeness, we show model predictions in Section 10.
6 Details of our method
We provide more details of our synthesized dataset here, taking the VOC [13] dataset with 20 object classes as an example (synthetic datasets for other benchmarks like COCO [31] are generated similarly).
For context background generation, Approach 1 uses the CDIs to provide context descriptions. Specifically, several captions are generated for each CDI, and each caption is used to generate a batch of images. As a post-processing step, we use CLIP [36] to filter the images such that only context images remain; each CDI therefore yields a fixed number of generated context images, so the 10-shot setting provides proportionally more synthesized context images than the 1-shot setting. For Approach 2, we design 16 templates and generate 600 images per template. We observe that the generated images are of high quality and generally do not contain the objects of interest, so we prune only 5% of the images with CLIP, which results in 9,120 synthetic backgrounds.
For synthesizing images of the foreground objects (Section 6.2), we generate 500 images for each template and use CLIP [36] to select the 250 best images per template. With 6 templates and 20 VOC classes, this yields up to 250 × 6 × 20 = 30,000 synthesized foreground images. Since the foreground mask extraction process drops some bad examples by design, the number of final extracted foregrounds is somewhat smaller. We repeat cut-and-paste such that the final pasted dataset contains 60,000 images. Specifically, in each step, we first randomly select a background image, and then 4 randomly selected foreground masks are pasted at random locations in that background image. Note that our generated dataset contains no duplicates.
Fig. 6 shows more example training images generated by our pipeline on PASCAL VOC dataset: both foreground object and background context images are generated by our method with Stable Diffusion.
A photo of <object> | A realistic photo of <object> | A photo of <object> in pure background |
<object> in a white background | <object> without background | <object> isolated on white background |
6.1 CLIP as variance reduction in quality control
As shown in Fig. 2 of the main paper, to control the quality and correctness of the images synthesized with Stable Diffusion, we use CLIP [36] to filter and rank the quality of the Stable Diffusion synthesized context background images. This ensures that the generated background images contain no interest classes. Specifically, CLIP ranks images with two rules: 1) images should be semantically similar to the input caption given to Stable Diffusion, and 2) images should be semantically dissimilar to any of the interest class labels that we explicitly detected and substituted in the previous steps. Among the images generated for each caption of every CDI, we keep only the top 30 images.
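A sketch of this two-rule CLIP ranking is given below, assuming the Hugging Face transformers CLIP implementation; the model checkpoint and the simple score (caption similarity minus maximum class similarity) are illustrative choices.

```python
# Sketch: rank generated backgrounds by caption similarity and penalize
# similarity to any interest-class label; keep the top-30 per caption.
import torch
from transformers import CLIPModel, CLIPProcessor

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def rank_context_images(images, caption, class_names, keep=30):
    """Keep images most similar to the caption and least similar to any interest class."""
    texts = [caption] + [f"a photo of a {c}" for c in class_names]
    inputs = proc(text=texts, images=images, return_tensors="pt", padding=True)
    sims = clip(**inputs).logits_per_image   # shape: (num_images, num_texts)
    caption_sim = sims[:, 0]
    class_sim = sims[:, 1:].max(dim=1).values
    score = caption_sim - class_sim           # rule 1 minus rule 2
    order = torch.argsort(score, descending=True)[:keep].tolist()
    return [images[i] for i in order]
```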
EXP id | Dataset | #CDI | Foreground | Background | mAP@50 | mAP@75 | mAP |
---|---|---|---|---|---|---|---|
G-4 EXP-3 | use syn fg | 200 | Syn + Real | Real | 48.14 | 16.15 | 21.62 |
G-4 EXP-3-50% | use 50% syn fg | 200 | Syn (50%) + Real | Real | 29.17 | 10.10 | 13.06 |
G-4 EXP-3-30% | use 30% syn fg | 200 | Syn (30%) + Real | Real | 23.90 | 2.90 | 7.67 |
G-4 EXP-5 | syn + real | 200 | Syn + Real | Syn + Real | 51.82 | 22.58 | 25.87 |
G-4 EXP-5-50% | syn (50%) + real | 200 | Syn (50%) + Real | Syn + Real | 45.96 | 16.81 | 21.32 |
G-4 EXP-5-30% | syn (30%) + real | 200 | Syn (30%) + Real | Syn + Real | 44.43 | 14.04 | 19.25 |
6.2 High quality foreground masks are crucial
We observe that Stable Diffusion is able to synthesize high-quality foregrounds as well. Given the six manually designed templates provided in Table 7, we replace each <object> with the desired class label, e.g., "Bus isolated on white background". Our assumption is that, by the design of our templates, Stable Diffusion will generate easy-to-recognize (e.g., centered and clean) target foregrounds against a background that is easy to separate from the foreground. The first row of Fig. 7 shows the generated images with foreground objects.
Foreground object mask extraction. Once high-quality foreground images with easy-to-separate backgrounds are obtained, we use an off-the-shelf unsupervised segmentation method such as entity segmentation [34] to produce image segments. Since the images have known categories, we can train an image classifier effortlessly and use it to select the foreground segment. Fig. 7 shows the extracted foreground masks.
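The sketch below illustrates selecting the foreground segment among candidate masks; as a simplification, a zero-shot CLIP classifier stands in for the trained image classifier described above, and the helper is our own illustration rather than the exact pipeline code.

```python
# Sketch: pick the segment whose crop best matches the class name.
import numpy as np
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def select_foreground_mask(image: Image.Image, masks: list, class_name: str):
    """masks: list of boolean HxW arrays from an unsupervised segmenter."""
    arr = np.array(image)
    crops = []
    for m in masks:
        ys, xs = np.where(m)
        crop = arr[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
        crops.append(Image.fromarray(crop))
    inputs = proc(text=[f"a photo of a {class_name}"], images=crops,
                  return_tensors="pt", padding=True)
    scores = clip(**inputs).logits_per_image[:, 0]  # similarity of each crop to the class
    return masks[int(torch.argmax(scores))]
```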

The more foreground object masks, the better. We note that our foreground generation is zero-shot: only class labels are required. We can potentially generate as many foreground masks as we want, and we observe performance improvements as the number of foreground masks increases. In Table 8, using 50% of the synthetic foregrounds outperforms using 30% in both settings (EXP-3 and EXP-5) of the 10-shot scenario, while 100% is even better than 50%. We emphasize that since this approach is zero-shot and unsupervised, this is almost a free lunch, and we could perhaps further improve performance by increasing the number of foregrounds, which we leave to future work. Our method can be viewed as extracting information from a generative model to enhance a discriminative model.
EXP id | Dataset | #CDI | Foreground | Background | mAP@50 | mAP@75 | mAP |
---|---|---|---|---|---|---|---|
G-3 EXP-1 | VOC 1 shot | 20 | Real | Real | 0.00 | 0.00 | 0.00 |
G-3 EXP-3 | use syn fg | | Syn + Real | Real | 42.23 | 20.62 | 21.80 |
G-3 EXP-5 | syn + real | | Syn + Real | Syn + Real | 46.74 | 24.03 | 24.81 |
G-4 EXP-1 | VOC 10 shot | 200 | Real | Real | 24.50 | 4.96 | 9.24 |
G-4 EXP-3 | use syn fg | | Syn + Real | Real | 51.29 | 29.15 | 29.11 |
G-4 EXP-5 | syn + real | | Syn + Real | Syn + Real | 55.19 | 30.58 | 30.77 |
7 Our method enhances low resource instance segmentation
Although we mainly discuss the object detection task in the main paper, our method is not restricted to detection. In this section we demonstrate the effectiveness of our pipeline for instance segmentation. This is a simple extension, since during the paste process we obtain not only bounding boxes but also segmentation masks.
We report the performance in Table 9. Similar to the experiments in the main paper, we apply our method in the low-resource setting on the PASCAL VOC [13] dataset, conducting 10-shot and 1-shot experiments. We observe strong performance in both low-resource settings, where our method significantly outperforms direct supervision (+30.69% mAP@50 in the 10-shot setting). In the 1-shot scenario, in which direct supervision cannot make any correct prediction, our method achieves 46.74 mAP@50.
Dataset | CC | CM | HB | HS | MR | NV1 | NV2 | PO | PS | Pbbq | RB | mAP |
---|---|---|---|---|---|---|---|---|---|---|---|---|
DALL-E (ours) | 79.0 | 92.9 | 90.4 | 44.9 | 77.0 | 92.1 | 88.0 | 77.5 | 64.1 | 75.7 | 80.2 | 78.3 |
100% Real | 81.9 | 95.3 | 92.0 | 87.3 | 86.5 | 96.8 | 88.9 | 80.5 | 92.3 | 88.9 | 58.6 | 86.3 |
DALL-E (ours) + 10% Real | 90.5 | 96.9 | 93.2 | 74.0 | 60.4 | 90.7 | 86.5 | 48.7 | 97.7 | 86.4 | 72.1 | 81.6 |
DALL-E (ours) + 40% Real | 91.8 | 97.4 | 94.5 | 84.9 | 75.1 | 90.7 | 78.6 | 52.1 | 96.9 | 87.6 | 77.9 | 84.3 |
DALL-E (ours) + 70% Real | 92.7 | 98.2 | 95.2 | 90.9 | 88.0 | 93.1 | 89.7 | 50.3 | 97.6 | 92.2 | 78.3 | 87.9 |
DALL-E (ours) + 100% Real | 94.4 | 98.2 | 95.2 | 90.7 | 92.5 | 94.1 | 93.0 | 72.8 | 98.3 | 98.7 | 79.8 | 91.4 |
8 Blending real data for further improvement
We first demonstrate the effect of incorporating different percentages of real-world training images together with our synthesized images for training object detection models. We conduct experiments on the GMU kitchen [17] test set and the real-world training images are from the GMU kitchen training dataset (100% set contains 3837 images). This experimental setup is similar to the one followed in the Object cut-and-paste paper [12]. All the results are provided in the Table 10. We highlight the mAP accuracy of training with all synthetic data plus 10%, 40%, 70% and 100% real data.
Observe that using only a subset (70%) of the real-world data together with our synthesized images achieves better performance than using the full (100%) real-world data alone. This suggests that our data generation approach can significantly reduce the amount of human effort required to label real-world data. Further, we also observe that accuracy gradually improves from 81.6 to 91.4 mAP as we increase the amount of real-world data from 10% to 100%.
Dataset | #CDI | CC | CM | HB | HS | MR | NV1 | NV2 | PO | PS | Pbbq | RB | mAP |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Real GMU train | - | 81.9 | 95.3 | 92.0 | 87.3 | 86.5 | 96.8 | 88.9 | 80.5 | 92.3 | 88.9 | 58.6 | 86.3 |
Black | 1500 | 42.3 | 62.4 | 64.7 | 5.3 | 23.3 | 61.1 | 56.5 | 75.3 | 1.6 | 26.7 | 33.9 | 41.2 |
CDI | 10 | 51.4 | 26.4 | 2.1 | 12.2 | 12.1 | 0.4 | 0.1 | 1.0 | 0.1 | 29.8 | 30.0 | 15.0 |
Random (COCO) | 1500 | 50.7 | 80.1 | 77.5 | 15.3 | 32.2 | 81.7 | 87.9 | 71.7 | 66.8 | 59.0 | 68.5 | 62.8 |
Random (DALL-E) | 1500 | 64.8 | 86.9 | 78.7 | 49.2 | 62.2 | 84.8 | 83.8 | 72.6 | 70.9 | 57.2 | 24.1 | 66.8 |
UW-Kitchen | 1500 | 75.7 | 91.1 | 87.7 | 51.6 | 66.5 | 91.5 | 88.7 | 76.2 | 63.2 | 70.5 | 75.2 | 76.1 |
DALL-E (ours) | 1500 | 79.0 | 92.9 | 90.4 | 44.9 | 77.0 | 92.1 | 88.0 | 77.5 | 64.1 | 75.7 | 80.2 | 78.3 |
DALL-E (ours) | 2400 | 79.5 | 93.4 | 88.5 | 59.0 | 71.5 | 91.4 | 88.1 | 76.1 | 78.7 | 75.7 | 80.6 | 80.1 |
DALL-E (ours)+Real | 1500 | 94.4 | 98.2 | 95.2 | 90.7 | 92.5 | 94.1 | 93.0 | 72.8 | 98.3 | 98.7 | 79.8 | 91.4 |
9 Our method enables compositional manipulation
Context image generation from only one CDI
Here we provide some visualization examples of context images generated from just one given context description image (CDI). To demonstrate that our model is generic and can generate a large and diverse set of context images from as little as one input image, we include examples of images generated by our pipeline in Fig. 5.
The compositional property of language can help to correct context descriptions by removing or adding words or changing the style.
In this section, we demonstrate more results for the compositional experiments present in section 4.3 of the main paper.
In Table 6 of the main paper, we evaluate models trained on synthetic data generated before and after intervention. Here, in Table 12, we provide additional results of training models on just the CDIs. We observe that such a model cannot yield good results due to the insufficient number of training instances and the large domain gap. However, if we apply our method without intervention, we get a significant performance boost by providing diverse training instances. Furthermore, by applying intervention, we can narrow the domain gap and obtain an even larger performance gain, reinforcing the effectiveness of our approach. For instance, Fig. 8 visualizes the context images generated by DALL-E for noisy and out-of-distribution CDIs: in the left column, the input CDIs are cartoon kitchens, which provide noisy information about the environment; the compositional property allows us to change the style from cartoon to real and generate high-quality context images (right column).
Here we show qualitative examples of out-of-distribution images with their corresponding captions and how the compositional property of our method can help to correct context descriptions by removing or adding words and changing the style.
Specifically, we include figures that exemplify the images generated in the compositional experiments. Figs. 8, 9, 10, and 11 show generated results before and after intervention for the Cartoon Kitchen, Skeleton Kitchen, Objects in Kitchen, and Kitchens with Human experiments in Table 12, respectively.
Dataset | only CDI | No Intervention | After Intervention |
---|---|---|---|
Cartoon Kitchen | 11.2 | 70.0 | 76.7 |
Skeleton Kitchen | 10.3 | 64.6 | 74.8 |
Objects in Kitchen | 9.4 | 71.8 | 77.0 |
Kitchens with Human | 10.2 | 70.9 | 76.9 |






Details of GMU kitchen results
We provide per-class details of the performance on the GMU Kitchen dataset in Table 11.
10 Model Inference Visualization
Fig. 12 shows predictions of the Faster R-CNN [38] model on PASCAL VOC [13]; the model was trained with the setting of Table 2, G-2 EXP-5, in the main paper. Fig. 13 shows predictions of Faster R-CNN on the GMU-Kitchen dataset; the model was trained with #CDI = 1500 from the UW dataset [17]. Although the model is trained on synthetic data, it yields good predictions without access to GMU-Kitchen's training data.

