
Text2Street: Controllable Text-to-image Generation for Street Views

Jinming Su, Songen Gu, Yiting Duan, Xingyue Chen and Junfeng Luo
Meituan
[email protected]
Abstract

Text-to-image generation has made remarkable progress with the emergence of diffusion models. However, generating street-view images from text remains a difficult task, mainly because the road topology of street scenes is complex, the traffic status is diverse, and the weather conditions vary widely, which makes conventional text-to-image models difficult to apply. To address these challenges, we propose a novel controllable text-to-image framework, named Text2Street. In this framework, we first introduce the lane-aware road topology generator, which achieves text-to-map generation with accurate road structures and lane lines with the aid of a counting adapter, realizing controllable road topology generation. Then, the position-based object layout generator is proposed to achieve text-to-layout generation through an object-level bounding box diffusion strategy, realizing controllable traffic object layout generation. Finally, the multiple control image generator is designed to integrate the road topology, object layout, and weather description to realize controllable street-view image generation. Extensive experiments show that the proposed approach achieves controllable street-view text-to-image generation and validate the effectiveness of the Text2Street framework.

Refer to caption
Figure 1: Challenges of text-to-image generation for street views. There are three primary challenges: (1) complex road topology, including the road structure in the first row and topological marks in the second row, (2) diverse traffic status, e.g., varying traffic objects in the third row, and (3) various weather conditions, such as the rainy day in the last row. Note that Reference shows original images from nuScenes [3], Stable Diffusion [25]/Midjourney [19]/DALLE3 [2] are tested through their official APIs, and Stable Diffusion and Ours are finetuned on nuScenes.
Equal contribution.

1 Introduction

Refer to caption
Figure 2: Framework of Text2Street. We begin by introducing the lane-aware road topology generator, which utilizes textual input to create a local semantic map representing the intricate road topology with lane information. Next, we present the position-based object layout generator, which captures the diversity of traffic status and generates the traffic object layout. Subsequently, the road topology and object layout are projected into the camera's perspective through pose sampling. Finally, the projected road topology, object layout, and textual weather description are integrated through the multiple control image generator to produce the ultimate street-view image.

Text-to-image generation [17, 10, 24] is an essential computer vision task that aims to synthesize coherent images solely from textual descriptions. In recent years, great efforts [22, 23] have been dedicated to text-to-image generation for common scenarios, such as people and objects. Remarkable progress has been achieved, especially with the advent of diffusion models [14, 25]. However, it is equally valuable to generate images in specialized domains, including autonomous driving [18], medical image analysis [4], and robot perception [28], among others. Text-to-image generation for street views holds particular importance for data generation in the context of autonomous driving perception and map construction, yet it remains relatively unexplored.

Street-view text-to-image generation, as an underdeveloped task, faces several serious challenges, which can be categorized into three main aspects. Firstly, generating road topologies that adhere to traffic regulations is challenging. On one hand, as depicted in Fig. 1 (a), learning the road structure from text-image pairs is hindered by incomplete road structure information in the image, arising from limited imaging angles and frequent occlusions. This complexity makes it difficult for Stable Diffusion [25], fine-tuned on the nuScenes dataset [3], to generate the expected images. On the other hand, as illustrated in Fig. 1 (b), generating lane lines that both comply with traffic regulations and match the count specified in the text poses a formidable challenge. Secondly, the representation of traffic status, a crucial element in street-view images, is often conveyed through the number of traffic objects present. However, generating a specified number of traffic objects while adhering to motion rules using current models frequently falls short of expectations. As demonstrated in Fig. 1 (c), existing methods tend to lack sensitivity to precise numerical requirements. For instance, while the goal is to generate a road scene with two cars, the actual output of Stable Diffusion often contains a significantly higher number of cars. Lastly, weather conditions are typically contingent upon the scene content, and direct image generation based on these conditions often yields vague or suboptimal outcomes, as depicted in Fig. 1 (d). Due to these three challenges, street-view text-to-image generation is a demanding task in computer vision.

To address the aforementioned challenges, we propose a novel controllable text-to-image framework for street views, termed Text2Street, illustrated in Fig. 2. Within this framework, we first introduce the lane-aware road topology generator, which utilizes text descriptions to create a local semantic map representing the intricate road topology. This generator also produces lane lines within the semantic map that conform to specified quantities and traffic regulations through a counting adapter. Subsequently, we introduce the position-based object layout generator to capture the diverse traffic status. By employing an object-level bounding box diffusion strategy, it generates a traffic object layout from textual descriptions that adheres to specified quantities and traffic rules. Finally, the road topology and object layout are projected into the camera's imaging perspective through pose sampling. The projected road topology, object layout, and textual weather description are then integrated by the multiple control image generator to produce the final street-view image. Experimental validation confirms the effectiveness of our proposed method in generating street-view images from textual inputs.

The main contributions of this paper are as follows: 1) We propose a novel controllable text-to-image framework for street views, enabling the controls of road topology, traffic status, and weather conditions based solely on text descriptions. 2) We introduce the lane-aware road topology generator that generates specific road structures as well as lane topologies. 3) We propose the position-based object layout generator, capable of generating a specific number of traffic objects that comply with traffic rules. 4) We propose the multiple control image generator that can integrate road topology, traffic status, and weather conditions to achieve multi-condition image generation.

2 Related Work

In this section, we review related works in two aspects.

2.1 Text-to-image Generation

In recent years, many methods [17, 24, 22, 23, 14, 25, 27, 2, 7, 34] have been dedicated to general text-to-image generation. For example, AlignDRAW [17] iteratively draws patches on a canvas while attending to the relevant words in the description. GAWWN [24] synthesizes images given instructions describing what content to draw in which location, based on generative adversarial networks [9]. DALLE [22] describes a simple approach for the text-to-image task based on a transformer that autoregressively models the text and image tokens as a single stream of data. DALLE2 [23] proposes a two-stage model: a prior that generates a CLIP [21] image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. DDPM [14] presents high-quality image synthesis results using diffusion models [29], a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Stable Diffusion [25] applies diffusion model training in the latent space of pretrained autoencoders and turns diffusion models into powerful and flexible generators for general conditioning inputs by introducing cross-attention layers into the model architecture. These methods have garnered remarkable results in general text-to-image generation. However, their performance on street-view text-to-image tasks is not as strong.

2.2 Street-view Image Generation

There has been a recent surge in the study of methods for street-view image generation. For example, SDM [31] processes the semantic layout and the noisy image differently: it feeds the noisy image to the encoder of the U-Net [26] structure and the semantic layout to the decoder via multi-layer spatially-adaptive normalization operators. BEVGen [30] synthesizes a set of realistic and spatially consistent surrounding images that match the bird's-eye view (BEV) layout of a traffic scenario; it incorporates a novel cross-view transformation with a spatial attention design that learns the relationship between cameras and map views to ensure their consistency. GeoDiffusion [6] translates various geometric conditions into text prompts and empowers pre-trained text-to-image diffusion models to generate high-quality detection data; it encodes not only bounding boxes but also extra geometric conditions such as camera views in self-driving scenes. BEVControl [32] proposes a two-stage generative method that can generate accurate foreground and background contents. These methods typically require BEV maps, object bounding boxes, or semantic masks as input to generate images. However, there is almost no research on generating street-view images relying solely on text. In this paper, we primarily focus on resolving the issue of street-view text-to-image generation.

3 The Proposed Approach

To address these challenges (i.e., complex road topology, diverse traffic status, various weather conditions) in street-view text-to-image generation, we introduce Text2Street, a novel controllable framework illustrated in Fig. 2. In this section, details of the approach are described as follows.

3.1 Overview

Text2Street takes a street-view description prompt (e.g., "a street-view image with the crossing, 3 lanes, 4 cars and 1 truck driving on a sunny day") as input and generates a corresponding street-view image. Prior to the main process, the input prompt is parsed by a large language model (e.g., GPT-4 [20]) to extract descriptions of road topology, traffic status, and weather conditions, which are then fed into three main components. The first component is the lane-aware road topology generator, which takes the road topology description ("crossing, 3 lanes") as input and produces a local semantic map. The second component is the position-based object layout generator, which takes the traffic object description from the traffic status ("4 cars and 1 truck") as input and generates the traffic object layout. The third component is the multiple control image generator, which takes the road topology, object layout, and weather condition description ("a sunny day") as input and outputs an image that matches the original street-view description prompt. A minimal sketch of this parsing interface is given below.
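To make the parsing interface concrete, the following toy sketch (our illustration, not the authors' code) approximates the LLM parsing step with simple keyword rules; the StreetPrompt structure and parse_prompt function are hypothetical stand-ins for the GPT-4 output that is routed to the three generators.

```python
import re
from dataclasses import dataclass


@dataclass
class StreetPrompt:
    road_topology: str   # e.g. "crossing, 3 lanes"
    traffic_status: str  # e.g. "4 cars and 1 truck"
    weather: str         # e.g. "sunny day"


def parse_prompt(prompt: str) -> StreetPrompt:
    """Toy rule-based parser standing in for the LLM (GPT-4) parsing step."""
    road = re.search(r"(crossing|intersection|straight road)[^,]*(,\s*\d+\s*lanes?)?", prompt)
    objects = re.findall(r"\d+\s+(?:cars?|trucks?|pedestrians?)", prompt)
    weather = re.search(r"(sunny|rainy)\s+(day|night)", prompt)
    return StreetPrompt(
        road_topology=road.group(0) if road else "",
        traffic_status=" and ".join(objects),
        weather=weather.group(0) if weather else "",
    )


print(parse_prompt("a street-view image with the crossing, 3 lanes, "
                   "4 cars and 1 truck driving on a sunny day"))
```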

3.2 Lane-aware Road Topology Generator

Refer to caption
Figure 3: Architecture of the lane-aware road topology generator.

For Stable Diffusion [25], directly generating images that comply with road topology, including road structure and lane topologies, is difficult. To address this, we introduce the lane-aware road topology generator (LRTG), as shown in Fig. 3. This generator does not directly produce road images; instead, it first creates a local semantic map describing the road structure, representing a complete regional-level road structure, including drivable areas, intersections, sidewalks, zebra crossings, etc. Simultaneously, to ensure the generated lane lines adhere to traffic regulations (i.e., equidistant and parallel lanes), we characterize and generate lane lines on the semantic map, which is easier and more controllable than generating lane lines directly on perspective-view images. Furthermore, to ensure the number of lane lines aligns with the provided text, we incorporate a counting adapter for the precise generation of a specified number of lane lines. In LRTG, we only generate the semantic map, which serves as a crucial intermediary for street-view images, as further detailed in Section 3.4.

When generating local semantic maps, we utilize Stable Diffusion to encode the road topology description with the CLIP [21] text encoder. The encoded text is then fed into the cross-attention layers of the U-Net [26] to denoise image latents, ultimately outputting the corresponding semantic map. Consistent with Stable Diffusion, the learning objective is as follows:

\mathcal{L}_{SD}=\mathbb{E}_{\mathcal{E}(x),y,\epsilon\sim\mathcal{N}(0,1),t}\left[\left\|\epsilon-\epsilon_{\theta}(z_{t},t,\tau(y))\right\|^{2}_{2}\right], \quad (1)

where $x\in\mathbb{R}^{H\times W\times 3}$ is an image cropped from the labeled semantic maps in RGB space, $\mathcal{E}(\cdot)$ refers to the encoder of the pretrained autoencoder [8], $z=\mathcal{E}(x)$ represents the encoded image latent, $z_{t}$ is obtained from the forward diffusion process at timestep $t$, $y$ is the text prompt, $\tau(\cdot)$ represents the pretrained CLIP text encoder, $\epsilon$ denotes the target noise, and $\epsilon_{\theta}(\cdot)$ is the time-conditional U-Net used for predicting the noise. This ensures the reasonable generation of road structures and lane line shapes within the semantic map.
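For reference, a schematic PyTorch sketch of the training step implied by Eq. 1 is shown below, assuming the pretrained Stable Diffusion components (VAE encoder, CLIP text encoder, U-Net, and noise scheduler) are passed in as placeholder modules, e.g., in the style of the diffusers library; the exact wiring is our assumption, not the authors' released code.

```python
import torch
import torch.nn.functional as F


def lrtg_map_loss(vae_encoder, text_encoder, unet, scheduler, maps, prompts):
    """One training step of Eq. (1): predict the noise added to semantic-map latents.

    The arguments are placeholders for pretrained Stable Diffusion components
    (e.g., AutoencoderKL, CLIPTextModel, UNet2DConditionModel, DDPMScheduler).
    """
    latents = vae_encoder(maps)                      # z = E(x)
    text_emb = text_encoder(prompts)                 # tau(y)
    t = torch.randint(0, scheduler.config.num_train_timesteps,
                      (latents.shape[0],), device=latents.device)
    noise = torch.randn_like(latents)                # eps ~ N(0, I)
    z_t = scheduler.add_noise(latents, noise, t)     # forward diffusion to timestep t
    noise_pred = unet(z_t, t, encoder_hidden_states=text_emb).sample  # eps_theta(z_t, t, tau(y))
    return F.mse_loss(noise_pred, noise)             # || eps - eps_theta ||_2^2
```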

For achieving precise control over the number of lane lines, the counting adapter $f_{CA}$ gathers attention scores from all cross-attention layers of the U-Net. These scores are reshaped to the same resolution and then averaged to yield attention features for all tokens. From these attention features, the ones corresponding to the "lane lines" tokens are selected. The selected features $\mathcal{F}_{l}$ undergo further processing through two convolutional layers with a $3\times 3$ kernel, followed by one fully connected layer, which serves to predict the number of lane lines $N_{l}$. The learning objective for achieving precise control over the number of lane lines is as follows:

\mathcal{L}_{CA}=\left\|N_{l}-f_{CA}(\mathcal{F}_{l})\right\|^{2}_{2}. \quad (2)

Based on Eqs. 1 and 2, LRTG can be jointly optimized to generate the local semantic map, encompassing both the road structure and the required lane lines.
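The sketch below illustrates one possible form of the counting adapter $f_{CA}$ described above; the channel widths and the global average pooling used before the fully connected layer are our assumptions, since the paper only specifies two $3\times 3$ convolutions and one fully connected layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CountingAdapter(nn.Module):
    """Counting adapter f_CA of Eq. (2) (channel widths are our assumption)."""

    def __init__(self, hidden: int = 32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.fc = nn.Linear(hidden, 1)  # regresses the number of lane lines

    def forward(self, attn_maps: list[torch.Tensor]) -> torch.Tensor:
        # attn_maps: per-layer cross-attention scores for the "lane lines" tokens,
        # each of shape (B, h_i, w_i); resize to a common resolution and average.
        common = attn_maps[0].shape[-2:]
        feats = torch.stack(
            [F.interpolate(a.unsqueeze(1), size=common, mode="bilinear",
                           align_corners=False) for a in attn_maps]
        ).mean(dim=0)                          # (B, 1, H, W) averaged attention features F_l
        x = self.conv(feats).mean(dim=(2, 3))  # global average pooling -> (B, hidden)
        return self.fc(x).squeeze(-1)          # predicted lane count N_l


def counting_loss(pred_count: torch.Tensor, gt_count: torch.Tensor) -> torch.Tensor:
    """L_CA = || N_l - f_CA(F_l) ||_2^2."""
    return F.mse_loss(pred_count, gt_count.float())
```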

3.3 Position-based Object Layout Generator

Refer to caption
Figure 4: Architecture of the position-based object layout generator. Note that \oplus denotes element-wise addition.

To ensure that the generated images can depict diverse traffic conditions, we utilize the large language model to convert the traffic status into numbers of traffic objects (e.g., cars, trucks, and pedestrians). Then, the position-based object layout generator (POLG) is proposed to create an object layout based on the text description of object quantities, as demonstrated in Fig. 4. To guarantee that a specified number of objects is generated, we incorporate an object-level bounding box diffusion strategy to generate the positions of object bounding boxes. Simultaneously, to ensure the generated traffic objects comply with traffic rules, we incorporate the local semantic map from the LRTG into the box diffusion process. With POLG, we generate layout information for traffic objects, which also serves as an intermediary for generating the final street-view images, as introduced in Section 3.4.

In the bounding box diffusion strategy, we first represent each traffic object as a position vector $\mathcal{O}_{i}=[x_{i},y_{i},z_{i},l_{i},w_{i},h_{i},\zeta_{i},c_{i}]$ ($i=1,2,\ldots,N_{o}$, where $N_{o}$ is the number of objects), in which $x_{i},y_{i},z_{i}$ denote the coordinates of the object position, $l_{i},w_{i},h_{i}$ represent the object's length/width/height, $\zeta_{i}$ signifies the object's yaw angle, and $c_{i}$ indicates the object's category. Subsequently, the position vector is diffused based on the diffusion model DDPM [14]. Furthermore, to ensure objects adhere to traffic regulations (e.g., cars must be driven on the road and not against traffic), we use ControlNet [33], incorporating the local semantic map from LRTG as a control into the POLG. Ultimately, the learning objective is as follows:

\mathcal{L}_{POLG}=\mathbb{E}_{o,m,\epsilon,t}\left[\left\|\epsilon-\epsilon_{\theta}(o_{t},t,\mathcal{C}(m))\right\|^{2}_{2}\right], \quad (3)

where $o$ represents the position vectors of the objects, $o_{t}$ is obtained from the forward diffusion process at timestep $t$, $m$ denotes the local semantic map, and $\mathcal{C}(\cdot)$ signifies the ControlNet. Other symbols are consistent with those in Eq. 1.

Based on Eq. 3, POLG can be optimized to generate traffic object layouts that match the described traffic status from textual descriptions.
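The following is a minimal sketch of how Eq. 3 could be realized: a small denoising network predicts the noise on the 8-dimensional position vectors, conditioned on an encoding of the local semantic map. The network widths, the map-encoder design, and the scheduler interface are our assumptions; the paper builds this component on DDPM modified with ControlNet.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BoxDenoiser(nn.Module):
    """Toy eps_theta for Eq. (3): denoises object position vectors
    [x, y, z, l, w, h, yaw, class] conditioned on the local semantic map.
    The MLP/CNN widths are our assumptions, not the paper's architecture."""

    def __init__(self, dim: int = 8, hidden: int = 256):
        super().__init__()
        self.map_enc = nn.Sequential(              # C(m): semantic-map control branch
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, hidden),
        )
        self.t_emb = nn.Embedding(1000, hidden)    # timestep embedding
        self.mlp = nn.Sequential(
            nn.Linear(dim + 2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, o_t, t, sem_map):
        cond = self.map_enc(sem_map)               # (B, hidden) map conditioning
        h = torch.cat([o_t, self.t_emb(t), cond], dim=-1)
        return self.mlp(h)                         # predicted noise on o_t


def polg_loss(model, o_0, sem_map, scheduler):
    """L_POLG = || eps - eps_theta(o_t, t, C(m)) ||_2^2 (per-object form)."""
    t = torch.randint(0, 1000, (o_0.shape[0],), device=o_0.device)
    noise = torch.randn_like(o_0)
    o_t = scheduler.add_noise(o_0, noise, t)       # forward diffusion on position vectors
    return F.mse_loss(model(o_t, t, sem_map), noise)
```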

3.4 Multiple Control Image Generator

Refer to caption
Figure 5: Architecture of the multiple control image generator. Note that \otimes and \oplus denote concatenation and element-wise addition, respectively.

To produce images with realistic weather that align with road topology and traffic status, we introduce the multiple control image generator (MCIG), as depicted in Fig. 5.

To effectively utilize the previously generated local semantic map and traffic object layout, camera pose sampling and image projection are conducted before these two pieces of information enter MCIG. This results in a 2D road semantic mask $\mathcal{M}_{r}$ and a traffic object layout map $\mathcal{M}_{o}$ from a perspective view, as shown in Fig. 2. The 2D traffic object layout maps are also represented as 2D traffic object position vectors $\mathcal{P}=\{\mathcal{P}_{i}\}_{i=1}^{N_{o}}=\{[x^{1}_{i},y^{1}_{i},x^{2}_{i},y^{2}_{i},c_{i}]\}_{i=1}^{N_{o}}$. The projection uses a conventional method based on intrinsic and extrinsic transformations, where the intrinsic parameters use fixed camera parameters and the extrinsic parameters are sampled near the prior camera height.
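As a reference for the projection step, the sketch below applies a standard pinhole model to map world-frame object positions into pixel coordinates; the intrinsic matrix, the extrinsic pose, and the example point are illustrative assumptions, not the nuScenes calibration, and the pose sampling itself is omitted.

```python
import numpy as np


def project_to_image(points_world: np.ndarray,
                     K: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Project Nx3 world points into pixel coordinates with a pinhole camera.

    K is the fixed 3x3 intrinsic matrix; [R | t] is an extrinsic sampled near
    the prior camera height.
    """
    pts_cam = R @ points_world.T + t.reshape(3, 1)   # world -> camera frame
    pix = K @ pts_cam                                # camera -> image plane
    pix = pix[:2] / pix[2:3]                         # perspective divide
    return pix.T                                     # (N, 2) pixel coordinates


# Example with assumed parameters (not the actual nuScenes calibration):
K = np.array([[1266.0, 0.0, 800.0],
              [0.0, 1266.0, 450.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, -1.5, 0.0])      # camera roughly 1.5 m above the ground plane
print(project_to_image(np.array([[2.0, 0.0, 20.0]]), K, R, t))
```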

As depicted in Fig. 5, MCIG comprises five modules: object-level position encoder, text encoder, semantic mask ControlNet, object layout ControlNet, and naive Stable Diffusion. The first four modules control image generation based on four different types of information, i.e., 2D traffic position vectors, text describing the weather, 2D road semantic masks, and 2D traffic object layout maps.

The object-level position encoder encodes the 2D traffic object position vectors, including 2D bounding boxes and object categories, represented as:

\mathcal{PE}(\mathcal{P})=f_{\mathcal{PE}}(\mathcal{BE}\otimes\mathcal{CE}). \quad (4)

The box encoder maps object bounding boxes to a higher-dimensional space, ensuring that the network can learn higher-frequency mapping functions and focus on the positions of each object. Specifically, the box encoder is an encoding function based on sine and cosine. The mathematical form of the encoding function is as follows:

\mathcal{BE}(p)=\left[\cdots,\sin(2^{l}\pi p),\cos(2^{l}\pi p),\cdots\right]^{L-1}_{l=0}, \quad (5)

where $\mathcal{BE}(\cdot)$ is applied to each component of the box (i.e., $x^{1}_{i},y^{1}_{i},x^{2}_{i},y^{2}_{i}$) of each object $\mathcal{P}_{i}$, and $L$ is empirically set to 10. Simultaneously, the category encoder $\mathcal{CE}$ employs the CLIP text encoder to encode the object category (e.g., "car"). Subsequently, the box encoding and category encoding are concatenated along the feature embedding dimension of each object. The concatenated features are then mapped to features with the same dimension as the original text encoder's embeddings through a two-layer fully connected network $f_{\mathcal{PE}}(\cdot)$, serving as position embeddings. The text encoder, based on the CLIP text encoder, encodes the weather description text $\mathcal{T}$, resulting in text embeddings.
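A compact sketch of this object-level position encoder is given below; the sinusoidal encoding follows Eq. 5 with $L=10$, while the learned embedding table standing in for the CLIP category encoder and the hidden widths are our assumptions.

```python
import math
import torch
import torch.nn as nn

L_FREQS = 10  # L in Eq. (5)


def box_encode(p: torch.Tensor) -> torch.Tensor:
    """Sinusoidal encoding BE(p) of Eq. (5), applied element-wise to
    (x1, y1, x2, y2). Input (..., 4) -> output (..., 4 * 2L)."""
    freqs = (2.0 ** torch.arange(L_FREQS, device=p.device)) * math.pi
    angles = p.unsqueeze(-1) * freqs                        # (..., 4, L)
    enc = torch.cat([angles.sin(), angles.cos()], dim=-1)   # (..., 4, 2L)
    return enc.flatten(-2)                                  # (..., 4 * 2L)


class ObjectPositionEncoder(nn.Module):
    """PE(P) of Eq. (4): box encoding concat category encoding -> 2-layer MLP.
    Here a learned embedding table stands in for the CLIP category encoder."""

    def __init__(self, num_classes: int = 10, cat_dim: int = 64, out_dim: int = 768):
        super().__init__()
        self.cat_emb = nn.Embedding(num_classes, cat_dim)
        self.mlp = nn.Sequential(
            nn.Linear(4 * 2 * L_FREQS + cat_dim, out_dim), nn.ReLU(),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, boxes: torch.Tensor, classes: torch.Tensor) -> torch.Tensor:
        # boxes: (B, N_o, 4) normalized 2D boxes; classes: (B, N_o) category ids
        feats = torch.cat([box_encode(boxes), self.cat_emb(classes)], dim=-1)
        return self.mlp(feats)          # (B, N_o, out_dim) position embeddings
```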

The object position encodings and weather text embeddings, upon concatenation at the token dimension, are fed into the cross-attention layer of Stable Diffusion, individually controlling the object position and weather during the image generation. Simultaneously, the semantic mask ControlNet and object layout ControlNet employ two similar ControlNets, utilizing images (i.e., semantic masks and layout maps) as inputs to control the road topology and object layout during the street-view image generation. The learning objective function of MCIG is as follows:

\mathcal{L}_{MCIG}=\mathbb{E}_{\mathbb{P}}\left[\left\|\epsilon-\epsilon_{\theta}(z_{t},t,\mathcal{PE}(\mathcal{P}),\tau(\mathcal{T}),\mathcal{C}(\mathcal{M}_{r}),\mathcal{C}(\mathcal{M}_{o}))\right\|^{2}_{2}\right], \quad (6)

where $\mathbb{P}$ denotes the set $\{\mathcal{E}(x),\mathcal{P},\mathcal{T},\mathcal{M}_{r},\mathcal{M}_{o},\epsilon,t\}$ for convenience of presentation.

Through the optimization of MCIG using Eq. 6, we obtain street-view images that conform to the initial prompt about road topology, traffic status, and weather conditions.
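To illustrate how the four controls could be wired together during training, the schematic below concatenates the object position embeddings and weather text embeddings along the token dimension and sums the residuals of the two ControlNets before feeding them to the U-Net. All modules are placeholders (written against a diffusers-style ControlNet/U-Net interface), and the residual-summation choice is our assumption about how the two ControlNets are combined.

```python
import torch
import torch.nn.functional as F


def mcig_training_step(unet, controlnet_sem, controlnet_obj, pos_encoder,
                       text_encoder, scheduler, latents, boxes, classes,
                       weather_prompts, sem_mask, obj_layout):
    """Schematic training step for Eq. (6) with placeholder SD/ControlNet modules."""
    t = torch.randint(0, scheduler.config.num_train_timesteps,
                      (latents.shape[0],), device=latents.device)
    noise = torch.randn_like(latents)
    z_t = scheduler.add_noise(latents, noise, t)

    # PE(P) and tau(T): concatenate along the token dimension for cross-attention.
    cond = torch.cat([pos_encoder(boxes, classes),
                      text_encoder(weather_prompts)], dim=1)

    # C(M_r) and C(M_o): residual features from the two ControlNets, summed.
    down_r, mid_r = controlnet_sem(z_t, t, cond, controlnet_cond=sem_mask,
                                   return_dict=False)
    down_o, mid_o = controlnet_obj(z_t, t, cond, controlnet_cond=obj_layout,
                                   return_dict=False)
    down = [a + b for a, b in zip(down_r, down_o)]
    mid = mid_r + mid_o

    noise_pred = unet(z_t, t, encoder_hidden_states=cond,
                      down_block_additional_residuals=down,
                      mid_block_additional_residual=mid).sample
    return F.mse_loss(noise_pred, noise)   # || eps - eps_theta(...) ||_2^2
```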

4 Experiments and Results

4.1 Experimental Setup

Datasets. To validate the performance of the proposed approach, we conduct all experiments on the public autonomous driving dataset nuScenes [3]. The nuScenes dataset contains 1,000 street-view scenes (700/150/150 for training/validation/testing, respectively). Each scene comprises approximately 40 frames, with each frame encompassing six RGB images captured by six cameras mounted on the ego vehicle for a panoramic view. Additionally, each frame is paired with a labeled semantic map with 32 semantic categories. For the sake of simplicity and clarity, we solely use images captured by the $\mathtt{FRONT}$ camera in all experiments.

Table 1: Comparisons with state-of-the-art methods on nuScenes validation dataset. The best result is in bold fonts.
Method                     $S_{\text{FID}}$ \downarrow   $S_{\text{CLIP}}$ \uparrow   $S_{\text{road}}$ \uparrow   $S_{\text{lane}}$ \uparrow   $S_{\text{obj}}$ \uparrow   $S_{\text{wea}}$ \uparrow
Reference                  0       18.00   89.67   48.30   52.33   97.00
Stable Diffusion [25]      67.91   16.38   83.50   21.33   29.83   98.50
Stable Diffusion 2.1 [1]   63.83   16.40   83.57   29.81   31.42   98.55
Attend-and-Excite [5]      69.63   16.40   83.00   30.67   31.00   98.83
Text2Street (Ours)         53.92   17.81   84.17   35.17   46.33   99.67

Evaluation Metrics. To comprehensively evaluate the text-to-image generation for street views, we assess the generation results from the image level and attribute level.

For the image-level evaluation, we use the Fréchet Inception Distance (FID) $S_{\text{FID}}$ [13] to measure image fidelity and the CLIP score $S_{\text{CLIP}}$ [12] to measure image-text alignment. Please refer to related works [13, 12] for computation details.

In the attribute-level evaluation, we primarily measure the accuracy of text-to-image street-view generation in four aspects: road structure, lane line counting, traffic object counting, and weather conditions. For these four metrics, we train four neural networks on the nuScenes dataset to score the generated images. Specifically, a two-class classifier based on ResNet-50 [11] is trained for the road structure accuracy $S_{\text{road}}$ to distinguish whether the road structure in a street-view RGB image is an "intersection" or "non-intersection". For the lane line counting accuracy $S_{\text{lane}}$, a six-class classifier is similarly trained on ResNet-50 to distinguish whether the number of lane lines in a street-view RGB image is 0, 1, 2, 3, 4, or $\geq 5$. For the traffic object counting accuracy $S_{\text{obj}}$, an object detector based on YOLOv5 [15] is trained to evaluate the number of traffic objects in street-view RGB images. For the weather condition accuracy $S_{\text{wea}}$, a four-class classifier is also trained on ResNet-50 to distinguish whether the weather condition in a street-view RGB image is sunny day, sunny night, rainy day, or rainy night. All models are trained on the nuScenes training set and used as evaluation metrics for the attribute-level evaluation of street-view image generation.
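As an example of the attribute-level evaluation, the sketch below computes a road-structure accuracy with a two-class ResNet-50 classifier; the checkpoint path, preprocessing, and label convention are assumptions used only for illustration.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image


def road_structure_accuracy(image_paths, labels, ckpt_path="road_cls.pth"):
    """Score S_road with a two-class ResNet-50 ("intersection" vs "non-intersection").

    The checkpoint path and preprocessing are assumptions for illustration.
    """
    model = models.resnet50(weights=None)
    model.fc = torch.nn.Linear(model.fc.in_features, 2)
    model.load_state_dict(torch.load(ckpt_path, map_location="cpu"))
    model.eval()

    preprocess = T.Compose([
        T.Resize((224, 224)), T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])
    correct = 0
    with torch.no_grad():
        for path, label in zip(image_paths, labels):
            x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            correct += int(model(x).argmax(dim=1).item() == label)
    return 100.0 * correct / len(image_paths)
```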

Training and Inference. During the training phase, we separately train the three generators, i.e., the lane-aware road topology generator (LRTG), the position-based object layout generator (POLG), and the multiple control image generator (MCIG). LRTG and MCIG are initialized with Stable Diffusion (https://huggingface.co/runwayml/stable-diffusion-v1-5) [25], POLG is randomly initialized based on DDPM (https://huggingface.co/docs/diffusers/api/pipelines/ddpm) [14] modified with ControlNet [33], and the CLIP [21] text encoder is fixed with pretrained weights. All three generators are trained with the AdamW [16] optimizer for 10 epochs with a learning rate of $1e^{-4}$ and a batch size of 32. In addition, semantic maps in LRTG are resized to a resolution of $512\times 512$, and RGB images in MCIG are resized to a resolution of $895\times 512$. In the inference phase, the three generators perform inference sequentially, with the number of denoising steps all set to 30.

4.2 Comparisons with state-of-the-art methods

Refer to caption
Figure 6: Qualitative comparisons of Stable Diffusion [25] and our approach Text2Street. Both methods are finetuned on the nuScenes [3] dataset. Note that in nuScenes, a double yellow/white lane line counts as one lane line, not two.

We compare our approach with several state-of-the-art text-to-image generation algorithms, including Stable Diffusion [25], Stable Diffusion 2.1 [1] and Attend-and-Excite [5], on the nuScenes validation dataset, as listed in Tab. 1. These methods are all finetuned on the nuScenes training set. Note that we also list the scores of the original nuScenes validation images as the "Reference".

Comparing the proposed method with state-of-the-art methods in Tab. 1, we can see that our method consistently outperforms the others across almost all metrics. Notably, our method outperforms all others on the attribute-level metrics (i.e., $S_{\text{road}}$, $S_{\text{lane}}$, $S_{\text{obj}}$ and $S_{\text{wea}}$), demonstrating its superior controllability for fine-grained street-view text-to-image generation. Specifically, our method yields improvements of 4.50% and 14.91% on $S_{\text{lane}}$ and $S_{\text{obj}}$, respectively, over the second-best results. Additionally, our method also performs better on the image-level metrics (i.e., $S_{\text{FID}}$ and $S_{\text{CLIP}}$), reflecting its superior overall generation quality and image-text consistency. Overall, these observations validate the effectiveness of our proposed method for controllable street-view image generation.

Visual examples generated by our method are illustrated in Fig. 6. From Fig. 6, it is evident that our method yields superior results in dealing with varying road structures (1st and 4th rows), different numbers of lane lines (1st and 3rd rows), diverse numbers of traffic objects (1st and 2nd rows), and various weather conditions (2nd and 3rd rows) compared to other methods. This indicates that our method can effectively generate street-view images based only on text, and also demonstrates its controllability and superiority in street-view text-to-image generation.

Refer to caption
Figure 7: Examples of different image editing operations by our approach.
Table 2: Performance of different settings of the proposed method on nuScenes validation dataset.
Setting           LRTG   POLG   MCIG   $S_{\text{FID}}$ \downarrow   $S_{\text{CLIP}}$ \uparrow   $S_{\text{road}}$ \uparrow   $S_{\text{lane}}$ \uparrow   $S_{\text{obj}}$ \uparrow   $S_{\text{wea}}$ \uparrow
Baseline                         ✓      67.91   16.38   83.50   21.33   29.83   98.50
A1                ✓              ✓      60.45   17.82   85.17   33.16   30.00   98.50
A2                ✓              ✓      59.85   17.82   84.67   34.01   31.50   98.53
B                        ✓       ✓      59.60   17.81   84.40   29.33   41.50   98.67
C (Text2Street)   ✓      ✓       ✓      53.92   17.81   84.17   35.17   46.33   99.67

4.3 Ablation Analysis

To assess the effectiveness of the individual components, we carry out ablation experiments on the nuScenes validation dataset, comparing the performance variations within the proposed approach.

Firstly, to validate the effectiveness of the lane-aware road topology generator (LRTG), we introduce three models for ablation comparison. The first model, termed "Baseline", is a naive multiple control image generator (MCIG) with only the text encoder, which is effectively a Stable Diffusion model. The second model, named "A1", is based on the "Baseline" with the addition of LRTG excluding lane line control. The third model, "A2", adds LRTG with lane line control to the "Baseline". The comparison of these three models is presented in the first three rows of Tab. 2. It can be observed that the introduction of road structure control ("A1") significantly improves the $S_{\text{road}}$ metric, and the addition of both road structure and lane line controls ("A2") enhances both the $S_{\text{road}}$ and $S_{\text{lane}}$ metrics over the Baseline. This confirms the effectiveness of LRTG in controlling road topology.

Secondly, to validate the effectiveness of the position-based object layout generator (POLG), we add POLG to the "Baseline", termed "B". Comparing the first and fourth rows of Tab. 2, it is evident that the inclusion of POLG significantly improves the $S_{\text{obj}}$ metric, demonstrating the control capability of POLG in traffic object generation.

Thirdly, to verify the compatibility of the different modules, we also list the model "C" (i.e., Text2Street), which combines all three modules. As can be seen from the last row of Tab. 2, "C" achieves the best overall performance, confirming the compatibility among the different modules.

4.4 Text-to-image Generation for Object Detection

Table 3: Performance of YOLOv5 without/with the data augmentation of our method on nuScenes validation dataset.
Method                     mAP \uparrow   Precision \uparrow   Recall \uparrow
YOLOv5                     46.30          81.77                70.91
YOLOv5 with Text2Street    47.83          84.45                72.37

To demonstrate the utility of street-view text-to-image generation for downstream tasks, we select object detection as a representative task. We use the proposed Text2Street to generate 30,000 images from random prompts as a supplement to the original training data and train YOLOv5 on the nuScenes dataset, as listed in Tab. 3. The results indicate that the images generated by our method are beneficial for downstream street-view tasks, highlighting the potential of street-view text-to-image generation.

4.5 Image Editing

In addition to street-view text-to-image generation, our approach also allows for modifications to local semantic maps, object layouts, or text, enabling the editing of road structures, lane lines, object layouts, and weather conditions in the originally generated RGB images, as depicted in Fig. 7.

5 Conclusion

In this paper, we propose a novel controllable text-to-image generation framework for street views. In this framework, we design the lane-aware road topology generator to exert control over the road topology in a text-to-map manner. Additionally, the position-based object layout generator is proposed to control the layout of traffic objects in a text-to-layout manner. Moreover, the multiple control image generator is built to integrate multiple controls to generate street-view images. Empirical results substantiate the effectiveness of our proposed approach.

References

  • [1] Stability AI. Stable diffusion 2.1. https://huggingface.co/stabilityai/stable-diffusion-2-1.
  • Betker et al. [2023] James Betker, Gabriel Goh, Li Jing, Tim Brooks, Jianfeng Wang, Linjie Li, Long Ouyang, Juntang Zhuang, Joyce Lee, Yufei Guo, Wesam Manassra, Prafulla Dhariwal, Casey Chu, Yunxin Jiao, and Aditya Ramesh. Improving image generation with better captions, 2023.
  • Caesar et al. [2020] Holger Caesar, Varun Bankiti, Alex H Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom. nuscenes: A multimodal dataset for autonomous driving. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 11621–11631, 2020.
  • Chan et al. [2020] Heang-Ping Chan, Ravi K Samala, Lubomir M Hadjiiski, and Chuan Zhou. Deep learning in medical image analysis. Deep Learning in Medical Image Analysis: Challenges and Applications, pages 3–21, 2020.
  • Chefer et al. [2023] Hila Chefer, Yuval Alaluf, Yael Vinker, Lior Wolf, and Daniel Cohen-Or. Attend-and-excite: Attention-based semantic guidance for text-to-image diffusion models. ACM Transactions on Graphics (TOG), 42(4):1–10, 2023.
  • Chen et al. [2023] Kai Chen, Enze Xie, Zhe Chen, Lanqing Hong, Zhenguo Li, and Dit-Yan Yeung. Integrating geometric control into text-to-image diffusion models for high-quality detection data generation via text prompt. arXiv: 2306.04607, 2023.
  • Ding et al. [2021] Ming Ding, Zhuoyi Yang, Wenyi Hong, Wendi Zheng, Chang Zhou, Da Yin, Junyang Lin, Xu Zou, Zhou Shao, Hongxia Yang, and Jie Tang. Cogview: Mastering text-to-image generation via transformers. arXiv preprint arXiv:2105.13290, 2021.
  • Esser et al. [2021] Patrick Esser, Robin Rombach, and Bjorn Ommer. Taming transformers for high-resolution image synthesis. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12873–12883, 2021.
  • Goodfellow et al. [2014] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Advances in neural information processing systems, 27, 2014.
  • Gregor et al. [2015] Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Rezende, and Daan Wierstra. Draw: A recurrent neural network for image generation. In International conference on machine learning, pages 1462–1471. PMLR, 2015.
  • He et al. [2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
  • Hessel et al. [2021] Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. Clipscore: A reference-free evaluation metric for image captioning. arXiv preprint arXiv:2104.08718, 2021.
  • Heusel et al. [2017] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017.
  • Ho et al. [2020] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33:6840–6851, 2020.
  • Jocher [2020] Glenn Jocher. YOLOv5 by Ultralytics, 2020.
  • Loshchilov and Hutter [2017] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
  • Mansimov et al. [2015] Elman Mansimov, Emilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov. Generating images from captions with attention. arXiv preprint arXiv:1511.02793, 2015.
  • Marti et al. [2019] Enrique Marti, Miguel Angel De Miguel, Fernando Garcia, and Joshue Perez. A review of sensor technologies for perception in automated driving. IEEE Intelligent Transportation Systems Magazine, 11(4):94–108, 2019.
  • [19] Midjourney. Midjourney. https://www.midjourney.com.
  • OpenAI [2023] OpenAI. Gpt-4 technical report, 2023.
  • Radford et al. [2021] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748–8763. PMLR, 2021.
  • Ramesh et al. [2021] Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. In International Conference on Machine Learning, pages 8821–8831. PMLR, 2021.
  • Ramesh et al. [2022] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125, 2022.
  • Reed et al. [2016] Scott E Reed, Zeynep Akata, Santosh Mohan, Samuel Tenka, Bernt Schiele, and Honglak Lee. Learning what and where to draw. Advances in neural information processing systems, 29, 2016.
  • Rombach et al. [2022] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684–10695, 2022.
  • Ronneberger et al. [2015] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, pages 234–241. Springer, 2015.
  • Saharia et al. [2022] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. Photorealistic text-to-image diffusion models with deep language understanding. Advances in Neural Information Processing Systems, 35:36479–36494, 2022.
  • Shi et al. [2017] Qing Shi, Chang Li, Chunbao Wang, Haibo Luo, Qiang Huang, and Toshio Fukuda. Design and implementation of an omnidirectional vision system for robot perception. Mechatronics, 41:58–66, 2017.
  • Sohl-Dickstein et al. [2015] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International conference on machine learning, pages 2256–2265. PMLR, 2015.
  • Swerdlow et al. [2023] Alexander Swerdlow, Runsheng Xu, and Bolei Zhou. Street-view image generation from a bird’s-eye view layout. arXiv preprint arXiv:2301.04634, 2023.
  • Wang et al. [2022] Weilun Wang, Jianmin Bao, Wengang Zhou, Dongdong Chen, Dong Chen, Lu Yuan, and Houqiang Li. Semantic image synthesis via diffusion models. arXiv preprint arXiv:2207.00050, 2022.
  • Yang et al. [2023] Kairui Yang, Enhui Ma, Jibin Peng, Qing Guo, Di Lin, and Kaicheng Yu. Bevcontrol: Accurately controlling street-view elements with multi-perspective consistency via bev sketch layout. arXiv preprint arXiv:2308.01661, 2023.
  • Zhang et al. [2023] Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3836–3847, 2023.
  • Zhou et al. [2021] Yufan Zhou, Ruiyi Zhang, Changyou Chen, Chunyuan Li, Chris Tensmeyer, Tong Yu, Jiuxiang Gu, Jinhui Xu, and Tong Sun. Lafite: Towards language-free training for text-to-image generation. arXiv preprint arXiv:2111.13792, 2021.