Enhancing Image Layout Control with Loss-Guided Diffusion Models
Abstract
Diffusion models are a powerful class of generative models capable of producing high-quality images from pure noise using a simple text prompt. While most methods which introduce additional spatial constraints into the generated images (e.g., bounding boxes) require fine-tuning, a smaller and more recent subset of these methods take advantage of the models’ attention mechanism, and are training-free. These methods generally fall into one of two categories. The first entails modifying the cross-attention maps of specific tokens directly to enhance the signal in certain regions of the image. The second works by defining a loss function over the cross-attention maps, and using the gradient of this loss to guide the latent. While previous work explores these as alternative strategies, we provide an interpretation for these methods which highlights their complementary features, and demonstrate that it is possible to obtain superior performance when both methods are used in concert.
1 Introduction
Recently, diffusion models Sohl-Dickstein et al. (2015); Ho et al. (2020); Song and Ermon (2019) have emerged as a powerful class of generative models capable of producing high quality samples with superior mode coverage. These models are trained to approximate a data distribution, and sampling from this learned distribution can subsequently generate very realistic and diverse images. Incorporating conditioning Saharia et al. (2022); Ramesh et al. (2022); Rombach et al. (2022) further extends the utility of diffusion models by allowing one to specify the contents of the desired image using a simple text prompt, leveraging the models’ impressive compositional capabilities to combine concepts in ways that may not have been present in the training set. Conditioning on a text prompt does not, however, allow one fine-grained control over the layout of the final image, which is instead highly dependent on the initial noise sample.
An especially simple and intuitive way of describing a layout is to provide bounding boxes for various tokens in the text prompt. One way to realize a model taking such an input is to resort to training-based methods, wherein a pretrained model undergoes additional finetuning using training data where the images have been supplemented by their layouts. While such methods can achieve impressive performance Zheng et al. (2023); Zhang et al. (2023), this often involves the introduction of additional model complexity and training cost, in addition to the difficult task of compiling the training data. A recently proposed alternative approach Xie et al. (2023); Chen et al. (2024) uses the cross-attention module to achieve training-free layout control.
In this paper, we propose injection loss guidance (iLGD), a training-free framework in which the model’s denoising process works in synergy with loss guidance, such that sampled images simultaneously present the appropriate layout and maintain good image quality. First, we bias the latent towards the desired layout by altering the model’s attention maps directly, a process which we refer to as attention injection. The goal of injection is not to produce a latent which perfectly adheres to the desired layout, but rather to provide a comfortable starting point for loss guidance. The resulting latent is sufficiently close to the desired layout that we can afford to use a smaller amount of loss guidance. In doing so, we reimagine the role of injection as a coarse biasing of the diffusion process, and loss guidance as a refiner. We show that such a framework is capable of controlling the layout, while achieving superior image quality in many cases.
2 Related Work
2.1 Generative Models
Generative models learn to estimate a data distribution with the goal of generating samples from this distribution. Recently, a new family of generative models, known as diffusion models Sohl-Dickstein et al. (2015); Ho et al. (2020); Song and Ermon (2019), has achieved superior results on image synthesis compared to the previous state of the art Dhariwal and Nichol (2021), with samples exhibiting incredible diversity and image quality. While early diffusion models were formulated as Markov chains and relied on a large number of transitions to generate samples, Song et al. Song et al. (2021) showed that such a sampling procedure can be viewed as a discretization of a certain stochastic differential equation (SDE). In particular, Song et al. Song et al. (2021) showed that there exists a family of SDEs whose solutions are sampling trajectories from the diffusion model. One such SDE, called the probability flow ODE, is completely deterministic and contains no noise. This enables the use of various ordinary differential equation (ODE) solvers for efficient sampling Karras et al. (2022); Lu et al. (2022).
2.2 Controllable Generation
Diffusion models can use a wide range of techniques for controllable generation. SDEdit Meng et al. (2022) allows the user to specify a layout by using paint strokes, which are noised to an intermediate time to provide an initialization for solving the reverse-time SDE. The realism of the final image is, however, sensitive to the chosen initial noise level, and guiding the generation of new images requires all high-level features to be specified in the stroke image. A more user-friendly method by Voynov et al. Voynov et al. (2023) requires only a simple sketch to guide the denoising process. The user-provided sketch is compared with the edges extracted by a latent edge predictor in order to compute a loss, which is used to iteratively refine the latent. In this case, the latent edge predictor must be trained, and sketches of more complicated scenes may be tedious. Zhang et al. Zhang et al. (2023) propose a more general method in which a separate encoder network takes as input a conditioning control image, such as a sketch, depth map, or scribble, to guide the generation process. Unfortunately, this requires finetuning a large pretrained encoder. When the inputs are specified as bounding boxes, smaller trainable modules can be used between the layers of the denoising UNet to encode layout information Zheng et al. (2023). Cheng et al. Cheng et al. (2023) also use an additional module which takes in bounding box inputs, injecting it directly after the self-attention layers. Once again, however, neither method can be adapted immediately, as the additional parameters necessitate further training.
A number of other methods have been proposed that are training-free. Bansal et al. Bansal et al. (2024) define a generic loss on the noiseless latent predicted from the current latent by Tweedie’s formula, and subsequently perform loss guidance on the latent at each time step. One downside of this method is that, while the loss expects clean images, the predicted latent is an approximation to only an average of possible generated images, and so can be blurry for large times $t$. An alternative approach called MultiDiffusion was proposed by Bar-Tal et al. Bar-Tal et al. (2023), who used separate score functions on various regions in a latent diffusion model, with an optimization step at each iteration designed to fuse the separate diffusion paths. While this method can be effective in many cases, it can nonetheless exhibit patchwork artifacts, where the final image appears to be composed of several images rather than depicting a single scene.
Alternatively, several works have explored the use of cross-attention to achieve training-free layout control. Hertz et al. Hertz et al. (2023) demonstrate how attention maps can be injected from one diffusion process to another, and reweighted to control the influence of specific tokens. Subsequent work by Balaji et al. Balaji et al. (2023) builds upon this idea by directly manipulating the values in the attention map to obtain the desired layout, although it is difficult to precisely localize objects appearing in an image with this method alone. Singh et al. Singh et al. (2023) also use this technique to improve semantic control in stroke-guided image synthesis, which they combine with loss guidance based on the stroke image to improve the realism of generated images. Instead of using strokes, Chen et al. Chen et al. (2024) show that controlling layout with bounding boxes is possible by using loss guidance, where the loss is defined on the attention maps, although this method requires searching for a suitable noise initialization. Concurrent work by Xie et al. Xie et al. (2023) and Couairon et al. Couairon et al. (2023) also use attention-based loss guidance; the former adds spatial constraints to control the scale of the generated content, while the latter uses segmentation maps instead of bounding boxes. Epstein et al. Epstein et al. (2023) show that it is even possible to control properties of objects in an image, such as their shape, size, and appearance, through their attention maps, and subsequently manipulate these properties through loss guidance.
These works demonstrate the utility of injection, loss guidance, and the general role of cross-attention in layout control. We take a joint approach where we use cross-attention injection to assist loss guidance in producing the desired layout from simple bounding box inputs. In analyzing the role of each technique in layout control, we offer justification for their complementary use. The result is a powerful and intuitive method for layout control which maintains the quality of the generated images.
3 Preliminaries
3.1 Cross-Attention
To perform conditional image synthesis with text, Stable Diffusion leverages a cross-attention mechanism Vaswani et al. (2017). Cross-attention enables the modelling of complex dependencies between two sequences $x$ and $y$, whose elements are projected to query, key, and value vectors using projection matrices $W_Q$, $W_K$, and $W_V$,
$$ Q = x W_Q, \qquad K = y W_K, \qquad V = y W_V, $$
where $d_k$ denotes the dimension of the query and key vectors, and $d_v$ denotes the dimension of the value vectors. Subsequently, the attention weights are computed as
$$ A = \operatorname{softmax}\!\left( \frac{Q K^\top}{\sqrt{d_k}} \right). $$
The new representation for the sequence $x$ is
$$ \hat{x} = A V. $$
In diffusion models, the sequence $x$ represents the image, where each $x_i$ represents a pixel, and $y$ is a sequence of token embeddings. The attention weights $A$, also called the attention or cross-attention map, follow the same spatial arrangement as the image, and a unique map $A^{(j)}$ is produced for each token $y_j$ in $y$. Each entry $A_{ij}$ describes how strongly related a spatial location $x_i$ is to the token $y_j$. We leverage this feature of cross-attention to guide the image generation process.
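As a concrete illustration, the following minimal sketch computes a cross-attention map for flattened image features and token embeddings; the tensor names and shapes are ours, not those of any particular Stable Diffusion implementation:

```python
import torch
import torch.nn.functional as F

def cross_attention(x, y, W_Q, W_K, W_V):
    """Minimal scaled dot-product cross-attention.

    x: (N, d_x) flattened image features (one row per pixel)
    y: (M, d_y) token embeddings
    Returns the updated image features and the attention map A of shape (N, M),
    where column j is the (flattened) cross-attention map of token j.
    """
    Q = x @ W_Q                                  # (N, d_k)
    K = y @ W_K                                  # (M, d_k)
    V = y @ W_V                                  # (M, d_v)
    d_k = Q.shape[-1]
    A = F.softmax(Q @ K.T / d_k**0.5, dim=-1)    # (N, M) attention weights
    return A @ V, A
```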
3.2 Score Matching
The forward and reverse processes can be modelled by solutions of stochastic differential equations (SDEs) Song et al. (2021). While determining the coefficients of the forward process SDE is straightforward, the reverse process corresponds to a solution of the reverse-time SDE, which requires learning the score $\nabla_{x_t} \log p_t(x_t)$ of the intractable marginal distribution $p_t$. Instead, Song et al. Song et al. (2021) use the score-matching objective
$$ \mathbb{E}_{t}\, \mathbb{E}_{x_0}\, \mathbb{E}_{x_t \mid x_0} \Big[ \lambda(t)\, \big\| s_\theta(x_t, t) - \nabla_{x_t} \log p_{0t}(x_t \mid x_0) \big\|_2^2 \Big] \qquad (1) $$
for some positive weighting function $\lambda(t)$.
While Eq. (1) does not directly enforce learning the score of $p_t(x_t)$, it is nonetheless minimized when $s_\theta(x_t, t) = \nabla_{x_t} \log p_t(x_t)$ Vincent (2011). Conditioning on $x_0$ provides a tractable way to obtain a neural network which, given enough parameters, matches $\nabla_{x_t} \log p_t(x_t)$ almost everywhere. Because the transition density $p_{0t}(x_t \mid x_0)$ of the forward process is available in closed form, it can be shown that the neural network which minimizes this loss is
$$ s_\theta(x_t, t) = -\frac{\epsilon_\theta(x_t, t)}{\sigma_t}, \qquad (2) $$
where $\sigma_t$ is the standard deviation of the forward process at time $t$, and $\epsilon_\theta(x_t, t)$ predicts the scaled noise in $x_t$.
When the predicted score function is available, any one of a family of reverse-time SDEs, all with the same marginal distributions $p_t$, can be solved to sample from $p_0$. One of these SDEs is noise-free, and is known as the probability flow ODE. A highly efficient way to sample from $p_0$ is to solve the probability flow ODE using a small number of large timesteps Song et al. (2021). An important benefit of sampling using an ODE is that the sampling process is deterministic, in the sense that it associates each noisy sample $x_T$ with a unique noise-free sample $x_0$ Song et al. (2021).
In practice, the denoising process usually operates on a latent variable $z_t$, where an autoencoder is used to project to and from image space. In order to generate images which correspond to a user-supplied text prompt, the noise predictor is conditioned on a text prompt $y$, and samples are computed using this noise predictor by classifier-free guidance (CFG). In this paper, we denote the CFG noise prediction by $\hat{\epsilon}_\theta(z_t, t, y)$, or sometimes just $\hat{\epsilon}_\theta(z_t)$, when the dependence on $t$ and $y$ is clear. We provide more details about diffusion models in Appendix A.
3.3 Controllable Layout Generation
We use BoxDiff Xie et al. (2023), the method of Chen et al. Chen et al. (2024), and MultiDiffusion Bar-Tal et al. (2023) as our primary points of comparison. In Xie et al. (2023) and Chen et al. (2024), the authors apply spatial constraints on the attention maps of a latent diffusion model to derive a loss, and directly update the latent $z_t$ at time step $t$ by replacing it with
$$ z_t \;\leftarrow\; z_t - \eta_t\, \nabla_{z_t} \mathcal{L}(z_t), \qquad (3) $$
where $\mathcal{L}$ is a loss function which depends on the neural network, the prompt $y$, and a set of bounding boxes. The parameter $\eta_t$ controls the strength of the loss guidance in each iteration. In BoxDiff, $\eta_t$ decays linearly over the course of sampling, while in Chen et al. it is determined by a fixed scale factor. In MultiDiffusion, letting $\Phi$ denote the function which computes $z_{t-1}$ from $z_t$ using the predicted scaled noise, a new latent is produced by solving the optimization problem
$$ z_{t-1} \;=\; \operatorname*{arg\,min}_{z}\; \sum_{i} \big\| M_i \odot \big( z - \Phi(z_t \mid y_i) \big) \big\|^2, $$
where each $M_i$ is a mask corresponding to the bounding box of the prompt $y_i$.
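As a rough sketch, one iteration of the update in Eq. (3) can be written as follows; the loss function here is a placeholder standing in for the attention-based losses of BoxDiff and Chen et al., and the step-size schedule is an assumption:

```python
import torch

def loss_guided_update(z_t, loss_fn, eta_t):
    """One latent update of the form z_t <- z_t - eta_t * grad_z L(z_t), as in Eq. (3).

    loss_fn is assumed to run the denoising UNet internally and return a scalar loss
    computed from the cross-attention maps.
    """
    z_t = z_t.detach().requires_grad_(True)
    loss = loss_fn(z_t)
    grad = torch.autograd.grad(loss, z_t)[0]
    return (z_t - eta_t * grad).detach()
```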
4 Method
[Figure 1: Images sampled with increasing loss-guidance strength (top row) and increasing attention-injection strength (bottom row); images/inject_vs_boxdiff_sweep.pdf]
4.1 Attention Injection
Hertz et al. Hertz et al. (2023) observed that, by extracting the attention maps from a latent diffusion process and applying them to another one with a modified token sequence, it is possible to transfer the composition of an image. This technique is very effective, but it requires an original set of attention maps which produce the desired layout, and it may not be feasible to repeatedly generate images until such a layout is obtained.
Instead, we rely on the observation that the attention maps early in the diffusion process are strong indicators of the generated image’s composition. For early timesteps, both the latents and the attention maps are relatively diffuse, and do not suggest any fine details about objects in the image. Motivated by this, we manipulate the attention maps by artificially enhancing the signal in certain regions. Given a list of target tokens $y_{j_1}, \ldots, y_{j_m}$, we define a set of masks by setting each mask $M_{j_k}$, $k = 1, \ldots, m$, equal to one over the region that the text token $y_{j_k}$ should correspond to, and zero otherwise, and perform injection by replacing the cross-attention map at time $t$ with
$$ A_t \;\leftarrow\; \operatorname{softmax}\!\left( \frac{Q K^\top + w_t\, M}{\sqrt{d_k}} \right), \qquad (4) $$
where $M$ is assembled from the token masks $M_{j_k}$ (with zeros in the columns of all other tokens). We follow Balaji et al. Balaji et al. (2023) and use the scaling
$$ w_t = w \cdot \log(1 + \sigma_t) \cdot \max\big( Q K^\top \big). \qquad (5) $$
Scaling by $\log(1 + \sigma_t) \cdot \max(Q K^\top)$ ensures the injection strength is appropriate for a given timestep, and $w$ is a constant which controls the overall strength of injection.
This way, we directly bias the model’s predicted score so that each latent more closely corresponds to the desired layout. We denote the corresponding modified noise prediction by $\hat{\epsilon}_\theta^{\mathrm{inj}}(z_t)$.
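The following sketch shows how Eqs. (4) and (5) could be applied inside a single cross-attention layer; the mask layout, function signature, and variable names are our assumptions rather than the exact implementation:

```python
import math
import torch
import torch.nn.functional as F

def injected_attention(Q, K, masks, sigma_t, w):
    """Cross-attention with mask injection, in the spirit of Eqs. (4)-(5).

    Q: (N, d_k) pixel queries; K: (M, d_k) token keys.
    masks: (N, M) tensor equal to 1 where a target token should attend and 0 elsewhere
           (columns of non-target tokens are all zeros).
    """
    logits = Q @ K.T                                            # (N, M)
    w_t = w * math.log(1.0 + float(sigma_t)) * logits.max()     # Eq. (5) scaling
    d_k = Q.shape[-1]
    return F.softmax((logits + w_t * masks) / d_k**0.5, dim=-1) # Eq. (4)
```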
4.2 Loss Guidance
Conditional latent diffusion models predict the time-dependent conditional score $\nabla_{z_t} \log p_t(z_t \mid y)$, so that the resulting latent $z_0$ at the end of the denoising process is sampled from $p_0(z_0 \mid y)$. We can modify the conditional score at time $t$ by introducing a loss term $\mathcal{L}(z_t)$,
$$ \nabla_{z_t} \log p_t(z_t \mid y) \;-\; \gamma\, \nabla_{z_t} \mathcal{L}(z_t), \qquad (6) $$
which corresponds to the marginal distribution
$$ \tilde{p}_t(z_t \mid y) \;\propto\; p_t(z_t \mid y)\, \exp\!\big( -\gamma\, \mathcal{L}(z_t) \big). \qquad (7) $$
The scaling constant $\gamma$ controls the relative strength of loss guidance. By using annealed Langevin dynamics Song and Ermon (2019), it is possible to use the predicted score function together with the loss term to sample from an approximation to $\tilde{p}_0$. While this provides a clear interpretation for the effect of loss guidance, the cost of annealed Langevin dynamics can be fairly large. Instead, we solve the probability flow ODE using the score function
$$ \hat{s}(z_t, t) \;=\; -\frac{\hat{\epsilon}_\theta(z_t)}{\sigma_t} \;-\; \gamma\, \nabla_{z_t} \mathcal{L}(z_t). \qquad (8) $$
This no longer corresponds to sampling from an approximation to $\tilde{p}_0$ (see Song et al. (2023)); however, so long as the latents are not too far out-of-distribution with respect to the marginals $p_t$, this process influences the trajectory to favor samples from $p_0(\,\cdot \mid y)$ for which the loss term is small.
For layout control, given a list of target tokens $y_{j_1}, \ldots, y_{j_m}$, we choose the simple loss function
$$ \mathcal{L}(z_t) \;=\; -\sum_{k=1}^{m} \sum_{i} \big( \widetilde{M}_{j_k} \big)_i \big( A_t^{(j_k)} \big)_i, \qquad (9) $$
where $\widetilde{M}_{j_k}$ is a mask whose value is $1$ over the region where token $y_{j_k}$ should appear, and $-1$ otherwise, and $A_t^{(j_k)}$ is the cross-attention map of token $y_{j_k}$ at time $t$. Intuitively, this simple loss encourages the sampling of latents whose attention maps for each target token take their largest values within the masked regions.
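A minimal sketch of this loss, matching the signed-mask form of Eq. (9) above (the data structures are ours):

```python
import torch

def layout_loss(attn_maps, signed_masks):
    """Layout loss of Eq. (9) over per-token cross-attention maps.

    attn_maps:    dict mapping each target token index to its (H, W) attention map.
    signed_masks: dict mapping the same indices to (H, W) masks equal to +1 inside the
                  token's bounding box and -1 outside.
    Attention inside each box lowers the loss; attention outside raises it.
    """
    loss = torch.zeros(())
    for j, attn in attn_maps.items():
        loss = loss - (signed_masks[j] * attn).sum()
    return loss
```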
When high levels of loss guidance are used, the specific choice of the loss function heavily influences the behavior of the denoising process. In BoxDiff, Xie et al. Xie et al. (2023) use sums over the most attended-to pixels in the masked attention maps, while in Chen et al. Chen et al. (2024) the authors use a normalized loss which depends on the ratio of the attention inside the target region to the total attention. These choices seem to be essential for enabling those methods to maintain good image quality at very high levels of loss guidance. Since we will use much lower levels of loss guidance, we find that our method is significantly less sensitive to the choice of loss function, with the result that our simple loss function works well to contain the attentions within the regions defined by the masks.
In practice, diffusion models are trained to predict the scaled noise in $z_t$. Revisiting Eq. (2), we observe that we can define the corresponding modified noise prediction by scaling the loss-guidance term appropriately Dhariwal and Nichol (2021):
$$ \hat{\epsilon}_\theta^{\mathrm{lg}}(z_t) \;=\; \hat{\epsilon}_\theta(z_t) \;+\; \gamma\, \sigma_t\, \nabla_{z_t} \mathcal{L}(z_t). \qquad (10) $$
Since the loss function is a scalar function, computing its gradient with respect to by reverse accumulation requires only a single sweep through the computational graph. As a result, the runtime and memory requirements of sampling using loss guidance are approximately twice those of sampling without loss guidance. This can be seen in, for example, Table 1 of Chen et al. Chen et al. (2024).
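Combining Eqs. (9) and (10), a single loss-guided noise prediction can be sketched as below; the assumption that the UNet call returns both the CFG noise prediction and the cross-attention maps is ours:

```python
import torch

def loss_guided_noise(unet, z_t, t, prompt_emb, loss_fn, gamma, sigma_t):
    """Noise prediction modified by loss guidance, as in Eq. (10)."""
    z_t = z_t.detach().requires_grad_(True)
    eps, attn_maps = unet(z_t, t, prompt_emb)   # assumed to return (noise, attention maps)
    grad = torch.autograd.grad(loss_fn(attn_maps), z_t)[0]   # single reverse sweep
    return (eps + gamma * sigma_t * grad).detach()
```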
4.3 iLGD
[Figure 2: Samples and averaged attention maps for a ball on grass using Stable Diffusion alone (first row), with attention injection (second row), and with injection and loss guidance (third row); images/balls.pdf]
One weakness of attention injection is that, by directly modifying the attention maps, we disrupt the agreement between the predicted score function and the true conditional score $\nabla_{z_t} \log p_t(z_t \mid y)$. The discrepancy between the actual and predicted score functions becomes especially pronounced at smaller noise levels, where fine details in the cross-attention maps are destroyed by the injection process. This results in a cartoon-like appearance in the final sampled images, as seen in Figure 17 of Balaji et al. (2023) and in the bottom row of Figure 1. The sensitivity of the cross-attention maps at low noise levels makes it difficult to use injection to precisely control the final layout without negatively influencing image quality. One major advantage of attention injection, however, is that at high noise levels it is possible to strongly influence the cross-attention maps while still producing latents that are in-distribution. Interestingly, even the cartoon-like images resulting from excessive attention injection at low noise levels are not actually out-of-distribution, but are only in an unwanted style.
A weakness of loss guidance is that the ad-hoc choice of the loss function may not compete well with the predicted score. In each denoising step, the latent $z_t$ must fall in a high-probability region of $p_t$; otherwise, the predicted latent will be out-of-distribution, which results in degraded image quality in the final sampled latent $z_0$. This means that the guidance term’s influence in Eq. (6) should be small enough to avoid moving the latents into low-probability regions. On the other hand, small loss-guidance strengths may exert too little influence on the sampling trajectory. In this case, the model produces in-distribution samples, but ones which do not fully agree with the desired layout. This tradeoff is illustrated in the first row of Figure 1, which shows images with unnaturally high contrast and saturation appearing at high loss-guidance strengths $\gamma$. One advantage of loss guidance is that, even at low strengths, it is able to exert some influence over sampling trajectories without biasing the style of the images.
We take the point of view that injection is better suited as a coarse control over the predicted latents, and cannot fully replace loss guidance. Instead, we propose using injection together with loss guidance in a complementary fashion, in such a way that they compensate for each other’s weaknesses. We call this approach to layout control injection loss guidance (iLGD). Instead of delegating the layout generation task entirely to loss guidance, we rely on injection to first bias the latent, as illustrated in the second row of Figure 2. The first row of this figure shows that, when using Stable Diffusion alone, the ball appears in random locations near the center of the image, while the second row shows that injection encourages it to appear more frequently inside of the bounding box, towards the top right. When we also perform loss guidance, the third row of the figure shows that the averaged attention map is significantly more concentrated inside of the bounding box. Since the original score function is modified additively, the amount of loss guidance can be made sufficiently small so that the latents stay in-distribution, while still being large enough to influence the sampling trajectory. This is true even at small noise levels, when fine details are present in the images.
We also observe that, when using both injection and loss guidance, the details of the objects better reflect the context of the scene due to higher levels of attention on those objects. In the third row, many of the balls appear in the style of a soccer ball, which is likely the most common type of ball to appear together with grass.
We outline our algorithm in Algorithm 1 and depict it visually in Figure 3. We perform attention injection during an initial range of timesteps, up to a cutoff step $t_{\mathrm{inj}}$, in order to obtain the modified predicted noise $\hat{\epsilon}_\theta^{\mathrm{inj}}$. In each of these timesteps, we further refine the latent by performing loss guidance simultaneously, up to a cutoff step $t_{\mathrm{lg}}$. In practice, we find it useful to perform loss guidance for several more steps after we stop injection, i.e., to place $t_{\mathrm{lg}}$ later in the denoising process than $t_{\mathrm{inj}}$.
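The following schematic sketch summarizes the sampling loop of Algorithm 1; the scheduler interface, the cutoff steps `t_inj` and `t_lg`, and the assumption that the UNet applies Eq. (4) internally when given injection masks are ours:

```python
import torch

def ilgd_sample(unet, scheduler, z_T, prompt_emb, masks, loss_fn, w, gamma, t_inj, t_lg):
    """Schematic iLGD sampling loop.

    Timesteps run from noisy (large t) to clean (small t). Injection is applied while
    t > t_inj and loss guidance while t > t_lg, with t_lg < t_inj so that guidance
    continues for several steps after injection stops.
    """
    z = z_T
    for t in scheduler.timesteps:
        inject = t > t_inj                      # coarse biasing via attention injection
        guide = t > t_lg                        # refinement via loss guidance
        z = z.detach().requires_grad_(guide)
        eps, attn_maps = unet(z, t, prompt_emb,
                              inject_masks=masks if inject else None,
                              injection_strength=w)
        if guide:
            grad = torch.autograd.grad(loss_fn(attn_maps), z)[0]
            eps = eps.detach() + gamma * scheduler.sigmas[t] * grad   # Eq. (10)
        z = scheduler.step(eps, t, z.detach()).prev_sample            # one denoising step
    return z.detach()
```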
[Figure 3: Schematic of a single iLGD denoising step combining attention injection and loss guidance; images/unet-ilgd.pdf]
5 Experiments
5.1 Experimental Setup
Datasets
We follow Xie et al. Xie et al. (2023) and evaluate performance on a dataset consisting of 200 prompt and bounding-box pairs, spanning 20 different prompts and 27 object categories. Each prompt reflects either of the following prompt structures: “a {} …,” or “a {} and a {} ….” MultiDiffusion requires separate prompts for the foreground elements and for the background, which we manually created from our original prompts. For example, the prompt “a red book and a clock” became the two foreground prompts “a red book” and “a clock,” with a null background prompt, while “a giraffe on a field” became the foreground prompt “a giraffe” and the background prompt “a field.”
Methods
In this section, we compare our proposed method (iLGD), BoxDiff Xie et al. (2023), Chen et al. Chen et al. (2024), MultiDiffusion Bar-Tal et al. (2023), and Stable Diffusion Rombach et al. (2022). We also include the results of an ablation study of our method, in which we use just injection (SD + I) or just loss guidance (SD + LG) alone. In order to allow loss guidance to exert sufficient influence over the final image layouts when it is used in isolation, we increase the loss-guidance strength above the level used in iLGD (see Appendix B for more details). All of the methods compared in this section used the official Stable Diffusion v1.4 model Rombach et al. (2022) from HuggingFace.
Evaluation Metrics
We employ a variety of metrics to measure performance along various aspects. First, we use the T2I-Sim metric Xie et al. (2023) to measure text-to-image similarity between prompts and their corresponding generated images. This metric measures the cosine similarity between text and images in CLIP feature space Radford et al. (2021) to evaluate how well the generated images reflect the semantics of the prompt.
We also use CLIP-IQA Wang et al. (2023) to assess the quality of the generated images. Given a pair of descriptors which are opposite in meaning (e.g., high quality, low quality), CLIP-IQA compares the CLIP features of these descriptors with the CLIP features of the generated image. The final score reflects how well the first descriptor, as opposed to the second, describes the image. We evaluate the overall quality of images using the pair {high quality, low quality}, and use analogous opposing descriptor pairs to evaluate blurriness (reported as Clear) and naturalness (reported as Natural).
To evaluate each method’s faithfulness to the prescribed bounding boxes, we use YOLOv4 Bochkovskiy et al. (2020) to compare the bounding boxes predicted over the set of generated images to the ground truth bounding boxes, and report the average precision at an IOU threshold of 0.5 (AP@0.5).
Finally, we report the average contrast and saturation of generated images. We observe that guidance-based methods often lead to high contrast and high saturation, particularly when the guidance strength is high.
More details about these evaluation metrics can be found in Appendix B.
5.2 Comparisons
[Figure 4: Qualitative comparison between iLGD, BoxDiff, Chen et al., MultiDiffusion, and Stable Diffusion; images/ilgd_vs_boxdiff.pdf]
[Figure 5: Qualitative comparison between iLGD and attention injection alone; images/ilgd_vs_injection.pdf]
Table 1: Average contrast and saturation of generated images.

| Method | Average Contrast | Average Saturation |
|---|---|---|
| SD @ CFG 0.0 | 52.96 | 84.49 |
| SD @ CFG 7.5 | 58.53 | 110.14 |
| SD @ CFG 12.5 | 65.87 | 123.32 |
| BoxDiff | 65.68 | 115.67 |
| Chen et al. | 62.12 | 98.56 |
| MultiDiffusion | 48.03 | 75.01 |
| SD + I | 46.07 | 105.63 |
| SD + LG | 58.51 | 102.62 |
| Ours (iLGD) | 47.53 | 105.81 |
Table 2: Text-to-image similarity (T2I-Sim), image quality (CLIP-IQA), and layout accuracy (AP@0.5).

| Method | T2I-Sim (↑) | CLIP-IQA Quality (↑) | CLIP-IQA Natural (↑) | CLIP-IQA Clear (↑) | AP@0.5 (↑) |
|---|---|---|---|---|---|
| SD | 0.303 | 0.928 | 0.705 | 0.736 | – |
| BoxDiff | 0.305 | 0.922 | 0.613 | 0.6945 | 0.192 |
| Chen et al. | 0.301 | 0.936 | 0.640 | 0.814 | 0.118 |
| MultiDiffusion | 0.295 | 0.920 | 0.547 | 0.792 | 0.411 |
| SD + I | 0.305 | 0.958 | 0.647 | 0.808 | 0.136 |
| SD + LG | 0.302 | 0.932 | 0.684 | 0.737 | 0.055 |
| Ours (iLGD) | 0.309 | 0.961 | 0.654 | 0.817 | 0.202 |
We present a comparison between our proposed method (iLGD), BoxDiff Xie et al. (2023), Chen et al. Chen et al. (2024), MultiDiffusion Bar-Tal et al. (2023), and Stable Diffusion Rombach et al. (2022) in Figure 4. Qualitatively, we observe that BoxDiff and Chen et al. both produce images with higher contrast, and BoxDiff produces especially saturated images. For instance, for the prompt “a donut and a carrot,” both objects in BoxDiff’s image appear unnaturally bright or dark in certain regions. Chen et al. produces high contrast images for the prompts “a dog on a bench” and “a banana and broccoli,” though with visibly less saturation than BoxDiff. High contrast and saturation are even visible in some images when just Stable Diffusion is used, e.g., in the prompts “a balloon and a cake and a frame…” and “a suitcase and a handbag” in Figure 5.
These observations are reflected quantitatively in Table 1, which reports the average contrast and saturation across all generated images. The same high-contrast phenomenon observed in BoxDiff and Chen et al. is seen to occur in Stable Diffusion when increasing the classifier-free guidance (CFG) scale. Furthermore, high saturation can also be seen in both BoxDiff and Stable Diffusion at high CFG scales. Both loss guidance and CFG appear to move the predicted latent into similar low-probability regions if the strength is too large, which may nonetheless be necessary to obtain either the appropriate layout or the appropriate agreement with the semantics of the text prompt. Neither attention injection nor MultiDiffusion appear to have this biasing effect, and the contrast and saturation for iLGD are similar to those of attention injection.
We note that both attention injection and iLGD produce images of lower contrast than even Stable Diffusion without classifier-free guidance, although visual inspection suggests that this does not manifest as any obvious abnormality in the images. We hypothesize that injection favours scenes that naturally contain lower contrast. When the objects appearing in the images receive high levels of attention as a result of injection, they tend to be clear, in-focus, and easily discernible, and do not have either very bright reflections or very dark shadows.
Biasing the latents using injection also clearly helps to preserve image quality and achieve stronger layout control, particularly for layouts which are difficult for the model to generate. For the prompt “a red book and a clock,” BoxDiff struggles to move the clock into the correct position, instead generating it as a lower quality, shadow-like figure, while Chen et al. omits the clock entirely, and MultiDiffusion generates a chaotic scene with many artifacts. Using iLGD, the clock and book appear in the correct positions, and the clock maintains its fine details, e.g., the numbering on the face, as well as its texture. For the prompt “a bowl and a spoon,” the image produced by iLGD is more faithful to the bounding boxes than the image produced by BoxDiff, in which it is also difficult to distinguish the spoon from its shadow as dark colours are exaggerated, as well as the image of Chen et al., which struggles to reposition the objects. The layout produced by MultiDiffusion is more accurate, but the image itself is not sensible. In general, it appears that BoxDiff performs very well, but produces images of unnaturally high contrast and saturation. The method of Chen et al. produces images of reasonable quality, but struggles to move the objects to the desired bounding boxes. MultiDiffusion appears to consistently make objects appear in the bounding boxes, but often produces chaotic scenes or scenes with cut-and-paste artifacts, in which the foreground images seem glued to an incompatible background (see the prompt “a banana and broccoli” for one such example).
In Table 2, we report various metrics related to image quality (CLIP-IQA), bounding box precision (YOLOv4) and text-to-image similarity (T2I-Sim). While both BoxDiff and iLGD achieve similar T2I-Sim and AP@0.5 scores, the latter achieves better performance on all CLIP-IQA metrics. While the method of Chen et al. achieves T2I-Sim and CLIP-IQA scores that are almost as good as those of iLGD, it achieves a much lower AP@0.5 score. MultiDiffusion has the highest AP@0.5 score of all methods considered, but it also has the lowest T2I-Sim and CLIP-IQA metrics (all except for Clear), showing that MultiDiffusion sacrifices an unacceptable amount of image quality in exchange for layout control. The ablation study (SD + I) using just injection shows good image quality, but substantially worse layout control than iLGD, while the ablation study (SD + LG) using just loss guidance shows slightly worse image quality and much worse layout control than iLGD. These results indicate that iLGD is able to achieve superior image quality without sacrificing layout control, qualifying it as a meaningful improvement over BoxDiff, Chen et al., and MultiDiffusion.
We provide additional comparisons in Appendix C.
5.3 Ablation Studies
A visual comparison between iLGD and injection alone is presented in Figure 5. We observe that, while injection typically does generate an object in each bounding box, the object itself may be incorrect. To illustrate, for the prompt “a balloon, a cake, and a frame, …,” using injection leads to a frame appearing where the cake should be; for “a donut and a carrot,” it generates a hand where the donut should be; for “a suitcase and a handbag,” a second suitcase is generated instead of a handbag; and for “a cat sitting outside,” something akin to a tree stump appears instead of a cat. In the example for “a castle in the middle of a marsh,” the castle does not fill up the bounding box, and for “a cat with a tie” the resulting image has the correct layout, but begins to appear like a cartoon cutout. When loss guidance is added using iLGD, all of these images appear correctly. These observations are reflected in Table 2, where injection achieves a noticeably lower AP@0.5 score compared to iLGD.
We note that the results in Figure 5 provide some additional evidence that injection is able to successfully bias the image according to the desired layout. For instance, revisiting the example of the prompt “a cat sitting outside,” injection causes a tree stump to appear in the region where the attention was enhanced. When loss guidance is also applied in each step, this stump is instead denoised into a cat. In this case, a small amount of loss guidance suffices to generate the appropriate layout, as it is augmented by the biasing effect of injection.
6 Conclusions
In this work, we introduce a framework which combines both attention injection and loss guidance to produce samples conforming to a desired layout. We show, both qualitatively and quantitatively, that our proposed method can produce such samples with fewer visual artifacts compared to training-free methods using loss guidance alone. Our method uses only existing components of the diffusion model, avoiding any additional model complexity. One of the method’s limitations is that it remains somewhat sensitive to the initial random seed.
Appendix A Diffusion Models
A.1 Denoising Diffusion Probabilistic Models
Diffusion models Ho et al. (2020) are characterized by two principal algorithms. The first is the forward process, wherein the data is gradually corrupted by Gaussian noise until it becomes pure noise, which we denote by $x_T$. The reverse process moves in the opposite direction, attempting to recover the data by iteratively removing noise. The denoiser is typically a UNet Ronneberger et al. (2015) which accepts a noisy image $x_t$, and predicts its noise content $\epsilon_\theta(x_t, t)$. Removing a fraction of this noise yields a slightly denoised image $x_{t-1}$. Repeating this process over $T$ steps produces a noise-free image $x_0$.
Operating directly on the image in pixel space is computationally expensive. As an alternative, latent diffusion models have been proposed to curtail this high cost, in which the denoising procedure is performed in latent space, whose dimensionality is typically much lower than that of pixel space. Stable Diffusion Rombach et al. (2022) is one example of a latent diffusion model which achieves state-of-the-art performance on various image synthesis tasks. It leverages a powerful autoencoder to project to and from latent space, where the standard denoising procedure is performed. Images in latent space are typically denoted by $z$, and the encoder and decoder are denoted by $\mathcal{E}$ and $\mathcal{D}$, respectively, so that $z = \mathcal{E}(x)$ and $x \approx \mathcal{D}(z)$.
During training, samples from the true data distribution are corrupted via the forward process. By training a diffusion model to learn a reverse process in which it iteratively reconstructs these noisy samples into noise-free samples, it is possible to generate images from pure noise at inference time. This corresponds to sampling from an approximation to the data distribution $p(x)$. This generation process can be guided by introducing an additional conditioning input $y$, which is often a text prompt. In this case, the model produces samples from an approximation to the conditional distribution $p(x \mid y)$.
In denoising diffusion probabilistic models (DDPM) Ho et al. (2020), the forward process is characterized by the Markov chain $q(x_t \mid x_{t-1}) = \mathcal{N}\big(x_t; \sqrt{1 - \beta_t}\, x_{t-1}, \beta_t I\big)$, for some noise schedule $\beta_1, \ldots, \beta_T$. In this case, $q(x_t \mid x_0) = \mathcal{N}\big(x_t; \sqrt{\bar{\alpha}_t}\, x_0, (1 - \bar{\alpha}_t) I\big)$, where $\alpha_t = 1 - \beta_t$ and $\bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s$. The reverse process is typically modeled by a learned Markov chain $p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\big(x_{t-1}; \mu_\theta(x_t, t), \sigma_t^2 I\big)$, where $\sigma_t^2$ is an untrained time-dependent constant, usually with $\sigma_t^2 = \tilde{\beta}_t = \frac{1 - \bar{\alpha}_{t-1}}{1 - \bar{\alpha}_t}\beta_t$, or with $\sigma_t^2$ simply chosen equal to $\beta_t$.
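For concreteness, the closed-form forward process can be sampled directly, as in the following short sketch (the linear beta schedule and tensor shapes are illustrative choices, not the training configuration used here):

```python
import torch

def forward_diffuse(x0, t, betas):
    """Sample x_t ~ q(x_t | x_0) in closed form for a DDPM forward process."""
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)[t]
    eps = torch.randn_like(x0)
    x_t = alpha_bar.sqrt() * x0 + (1.0 - alpha_bar).sqrt() * eps
    return x_t, eps

betas = torch.linspace(1e-4, 0.02, 1000)   # illustrative linear schedule with T = 1000
x0 = torch.randn(1, 3, 64, 64)             # stand-in for a (latent) image
x_500, noise = forward_diffuse(x0, 500, betas)
```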
It is not efficient to optimize the log-likelihood directly, since computing $p_\theta(x_0)$ requires marginalizing over $x_{1:T}$. Instead, one can use importance sampling to write
$$ p_\theta(x_0) \;=\; \int p_\theta(x_{0:T})\, dx_{1:T} \;=\; \mathbb{E}_{q(x_{1:T} \mid x_0)}\!\left[ \frac{p_\theta(x_{0:T})}{q(x_{1:T} \mid x_0)} \right]. \qquad (11) $$
Then, by Jensen’s inequality,
$$ \log p_\theta(x_0) \;\geq\; \mathbb{E}_{q(x_{1:T} \mid x_0)}\!\left[ \log \frac{p_\theta(x_{0:T})}{q(x_{1:T} \mid x_0)} \right]. \qquad (12) $$
The right-hand side is the usual evidence lower bound (ELBO), which is maximized in place of the log-likelihood. Ho et al. Ho et al. (2020) show that maximizing the ELBO is equivalent to minimizing
$$ \mathbb{E}_{t, x_0, \epsilon}\Big[ w(t)\, \big\| \epsilon - \epsilon_\theta(x_t, t) \big\|^2 \Big] \qquad (13) $$
for some positive function $w(t)$, where $\epsilon \sim \mathcal{N}(0, I)$, $t$ is sampled uniformly from $\{1, \ldots, T\}$, and
$$ x_t \;=\; \sqrt{\bar{\alpha}_t}\, x_0 \;+\; \sqrt{1 - \bar{\alpha}_t}\, \epsilon. \qquad (14) $$
A.2 Score Matching
Since $q(x_t \mid x_0)$ is a normal distribution, we know that
$$ \nabla_{x_t} \log q(x_t \mid x_0) \;=\; -\frac{x_t - \sqrt{\bar{\alpha}_t}\, x_0}{1 - \bar{\alpha}_t} \;=\; -\frac{\epsilon}{\sqrt{1 - \bar{\alpha}_t}}. \qquad (15) $$
Thus, minimizing (13) is equivalent to minimizing
$$ \mathbb{E}_{t, x_0, x_t}\Big[ \lambda(t)\, \big\| s_\theta(x_t, t) - \nabla_{x_t} \log q(x_t \mid x_0) \big\|^2 \Big] \qquad (16) $$
for some positive function $\lambda(t)$, where
$$ s_\theta(x_t, t) \;=\; -\frac{\epsilon_\theta(x_t, t)}{\sqrt{1 - \bar{\alpha}_t}}. \qquad (17) $$
It is known that this loss is minimized when $s_\theta(x_t, t) = \nabla_{x_t} \log q_t(x_t)$ Vincent (2011), so given enough parameters, $s_\theta$ will converge to the true score almost everywhere. Given an approximation to the score function, it is possible to sample from the data distribution using annealed Langevin dynamics Song and Ermon (2019).
A.3 Stochastic Differential Equations
Song et al. Song et al. (2021) showed that the forward process of DDPM can be viewed as a discretization of the stochastic differential equation (SDE)
$$ dx \;=\; -\tfrac{1}{2}\beta(t)\, x\, dt \;+\; \sqrt{\beta(t)}\, dw, \qquad (22) $$
where $w$ denotes the Wiener process. There, the authors point out that any SDE of the form $dx = f(x, t)\, dt + g(t)\, dw$ can be reversed by the SDE $dx = \big[ f(x, t) - g(t)^2 \nabla_x \log p_t(x) \big]\, dt + g(t)\, d\bar{w}$, where $\bar{w}$ is the standard Wiener process in the reverse time direction, and where $p_t$ denotes the marginal distribution of $x_t$ under the forward process. Furthermore, each such SDE admits a family of related SDEs that share the same marginal distributions $p_t$. One of these SDEs is purely deterministic, and is known as the probability flow ordinary differential equation (ODE).
If the score function is available, then it is possible to sample from the data distribution by solving the probability flow ODE, starting with samples from the prior distribution $p_T$. This results in a deterministic mapping from noisy samples $x_T$ to clean samples $x_0$. This sampling process can be performed quickly with the aid of ODE solvers Lu et al. (2022).
A.4 Classifier-Free Guidance
In order to generate images following a user-supplied text prompt, the denoiser of a latent diffusion model is trained with an additional input given by a sequence of token embeddings $y$. A single denoiser, usually a UNet, is trained over a variety of text prompts, and the token embeddings influence the denoiser by a cross-attention mechanism in both the contracting and expansive layers. Ho and Salimans Ho and Salimans (2021) found that, rather than sampling images using the conditional denoiser alone, better results can be obtained by taking a combination of conditional and unconditional noise estimates,
$$ \hat{\epsilon}_\theta(z_t, t, y) \;=\; \epsilon_\theta(z_t, t, \varnothing) \;+\; s\,\big( \epsilon_\theta(z_t, t, y) - \epsilon_\theta(z_t, t, \varnothing) \big), \qquad (23) $$
where $s$ represents the intensity of the additive term $\epsilon_\theta(z_t, t, y) - \epsilon_\theta(z_t, t, \varnothing)$. For $s = 1$, this noise prediction can be viewed as an approximation to ($-\sigma_t$ times) the score function of the marginal distribution $p_t(z_t \mid y)$. In classifier-free guidance (CFG), $s > 1$, which does not have a simple interpretation in terms of the marginal distributions of the new denoising process.
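A minimal sketch of the combination in Eq. (23); in practice both predictions come from the same UNet, called once with the text embeddings and once with a null (empty-prompt) embedding:

```python
def cfg_noise(eps_uncond, eps_cond, s):
    """Classifier-free guidance: combine unconditional and conditional noise predictions.

    s = 1 recovers the plain conditional prediction; s > 1 is the usual CFG setting,
    which amplifies the conditional direction.
    """
    return eps_uncond + s * (eps_cond - eps_uncond)
```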
Appendix B Detailed Methods
Implementation Details
We implement our method on the official Stable Diffusion v1.4 model Rombach et al. (2022) from HuggingFace. All images are generated using 50 denoising steps and a classifier-free guidance scale of 7.5, unless otherwise noted. We use the noise scheduler LMSDiscreteScheduler Karras et al. (2022) provided by HuggingFace. Experiments are conducted on an NVIDIA TESLA V100 GPU.
We perform attention injection over all attention maps. When performing injection, we resize the masks to the appropriate resolution, depending on which layer of the UNet the attention maps are taken from. For loss guidance, we again use all of the model’s attention maps, but resize them to a common resolution, and compute the mean of each map over all pixels. We apply the softmax function over these means to obtain a weight vector, where each entry is the scalar weight associated with the corresponding resized attention map. Finally, we obtain the attention map of each token by taking a weighted average over all resized attention maps at time $t$, using the appropriate weight for each map.
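A sketch of this weighted averaging for the attention maps of a single token (the common resolution and interface are our assumptions):

```python
import torch
import torch.nn.functional as F

def fuse_attention_maps(per_layer_maps, size=(16, 16)):
    """Fuse one token's cross-attention maps across layers into a single map.

    per_layer_maps: list of (H_l, W_l) attention maps for the same token, one per layer.
    Each map is resized to a common resolution; the softmax of the per-map means gives
    the weight of that map in the final weighted average.
    """
    resized = [F.interpolate(m[None, None], size=size, mode="bilinear",
                             align_corners=False)[0, 0] for m in per_layer_maps]
    stacked = torch.stack(resized)                        # (L, H, W)
    weights = F.softmax(stacked.mean(dim=(1, 2)), dim=0)  # softmax over per-map means
    return (weights[:, None, None] * stacked).sum(dim=0)  # weighted average, (H, W)
```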
When attempting to control the layout of a generated image, we find that skipping the first step, so that it remains a standard denoising step, leads to better results. We do this for all experiments conducted in this paper which use either injection or loss guidance or both. In iLGD, we use fixed default values for the injection strength $w$, the loss-guidance strength $\gamma$, and the injection and loss-guidance cutoff steps $t_{\mathrm{inj}}$ and $t_{\mathrm{lg}}$, unless otherwise noted. In our ablation experiments, we keep the injection strength at its iLGD value when performing just attention injection. When performing just loss guidance, we increase the loss-guidance strength above its iLGD value, in order to make loss guidance alone exert sufficient influence over the final image layouts.
In our comparisons with BoxDiff, we maintain the default parameters the authors provide in their implementation. We start with the default initial loss-guidance strength, which decays linearly over the course of sampling, and perform guidance for 25 iterations out of a total of 50 denoising steps.
In our comparisons with the method of Chen et al., we also maintain the default parameters the authors provide in their implementation, including the default loss scale factor.
Evaluation with YOLOv4
In this section, we describe in detail how we obtain the AP@0.5 scores in Table 2. In classical object detection, a model is trained to detect and localize objects of certain classes in an image, typically by predicting a bounding box which fully encloses the object. The accuracy of the model’s predicted bounding box, $B_p$, is evaluated by comparison to the corresponding ground truth bounding box, $B_{gt}$. More specifically, we compute the intersection over union (IOU) over the pair of bounding boxes:
$$ \mathrm{IOU}(B_p, B_{gt}) \;=\; \frac{\mathrm{area}(B_p \cap B_{gt})}{\mathrm{area}(B_p \cup B_{gt})}. \qquad (24) $$
The IOU is then compared to a threshold $\tau$, such that, if $\mathrm{IOU} \geq \tau$, then the detection is classified as correct. If not, then the detection is classified as incorrect. In our case, we follow Li et al. Li et al. (2021) and treat the object detection model as an oracle, where we assume that it provides the bounding boxes of objects in a given image with perfect accuracy. In particular, we first define a layout through a set of ground truth bounding boxes, describing the desired positions of each object. We then generate an image according to this layout, and subsequently apply the object detection model to the generated image to obtain a set of predicted bounding boxes. Finally, to evaluate how similar the layout of the generated image is to the desired layout, we compare each predicted bounding box, $B_p$, to the corresponding ground truth bounding box, $B_{gt}$, by computing their IOU. We use an IOU threshold of $\tau = 0.5$.
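For reference, the IOU of Eq. (24) for two axis-aligned boxes in (x1, y1, x2, y2) format can be computed as follows:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A detection counts as correct when iou(predicted_box, ground_truth_box) >= 0.5.
```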
To calculate the average precision, we first need to compute the number of true positives (TP), false positives (FP), and false negatives (FN). We count a false negative when no detection is made on the image, even though a ground truth object exists, or when the detected class is not among the ground truth classes. We also count a false negative as well as a false positive when the correct class is detected but $\mathrm{IOU} < 0.5$, and a true positive when the correct class is detected and $\mathrm{IOU} \geq 0.5$. Using these quantities, we compute the precision and recall as:
$$ \mathrm{Precision} \;=\; \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FP}}, \qquad (25) $$
$$ \mathrm{Recall} \;=\; \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}}. \qquad (26) $$
We repeat this for classifier confidence thresholds of 0.15 to 0.95, in steps of 0.05, so that we end up with 17 values for precision and recall, respectively. We then construct a precision-recall curve, and compute the average precision using 11-point interpolation Padilla et al. (2020):
$$ \mathrm{AP} \;=\; \frac{1}{11} \sum_{r \in \{0,\, 0.1,\, \ldots,\, 1\}} p_{\mathrm{interp}}(r), \qquad (27) $$
where
$$ p_{\mathrm{interp}}(r) \;=\; \max_{\tilde{r} \geq r} p(\tilde{r}), \qquad (28) $$
and $p(\tilde{r})$ denotes the measured precision at recall $\tilde{r}$.
Image Quality Assessment
Wang et al. Wang et al. (2023) suggest using the pair {good photo, bad photo} instead of {high quality, low quality} to measure quality, as they find that it corresponds better to human preferences. However, we choose the latter to remain agnostic to the image’s style, as we believe the former carries with it a stylistic bias, due to the word “photo.”
Contrast Calculation
We calculate the RMS contrast by converting each image to greyscale with OpenCV and taking the standard deviation of its pixel intensities (the array’s .std() method).
Saturation Calculation
We calculate the saturation by converting each image to HSV space with OpenCV and taking the mean of its saturation channel (the array’s .mean() method).
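Both statistics can be computed with OpenCV and NumPy as in the following sketch (the file path is illustrative):

```python
import cv2

def contrast_and_saturation(path):
    """RMS contrast (std of greyscale intensities) and mean HSV saturation of an image."""
    img = cv2.imread(path)                           # BGR, uint8
    grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    contrast = grey.std()                            # RMS contrast
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    saturation = hsv[:, :, 1].mean()                 # mean of the saturation channel
    return contrast, saturation

print(contrast_and_saturation("generated_image.png"))
```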
Appendix C Additional Experiments
We provide two additional sets of comparisons between our proposed method (iLGD), BoxDiff Xie et al. (2023), Chen et al. Chen et al. (2024), MultiDiffusion Bar-Tal et al. (2023), and Stable Diffusion Rombach et al. (2022). In Figure 1, we compare these methods using the same prompts and bounding boxes as in Figure 3, but using a different random seed for each set of images. In Figure 2, we compare the methods using an entirely new set of prompts and bounding boxes.
[Additional qualitative comparison figures; images/ilgd_vs_boxdiff_appendix.pdf and images/ilgd_vs_boxdiff_appendix_2.pdf]
References
- Balaji et al. (2023) Yogesh Balaji, Seungjun Nah, Xun Huang, Arash Vahdat, Jiaming Song, Qinsheng Zhang, Karsten Kreis, Miika Aittala, Timo Aila, Samuli Laine, Bryan Catanzaro, Tero Karras, and Ming-Yu Liu. 2023. eDiff-I: Text-to-Image Diffusion Models with an Ensemble of Expert Denoisers. arXiv preprint arXiv:2211.01324 (2023).
- Bansal et al. (2024) Arpit Bansal, Hong-Min Chu, Avi Schwarzschild, Soumyadip Sengupta, Micah Goldblum, Jonas Geiping, and Tom Goldstein. 2024. Universal Guidance for Diffusion Models. In ICLR.
- Bar-Tal et al. (2023) Omer Bar-Tal, Lior Yariv, Yaron Lipman, and Tali Dekel. 2023. MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation. In ICML, Vol. 202. PMLR, 1737–1752.
- Bochkovskiy et al. (2020) Alexey Bochkovskiy, Chien-Yao Wang, and Hong-Yuan Mark Liao. 2020. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv preprint arXiv:2004.10934 (2020).
- Chen et al. (2024) Minghao Chen, Iro Laina, and Andrea Vedaldi. 2024. Training-Free Layout Control with Cross-Attention Guidance. In IEEE Wint. Conf. Appl. 5343–5353.
- Cheng et al. (2023) Jiaxin Cheng, Xiao Liang, Xingjian Shi, Tong He, Tianjun Xiao, and Mu Li. 2023. Layoutdiffuse: Adapting foundational diffusion models for layout-to-image generation. arXiv preprint arXiv:2302.08908 (2023).
- Couairon et al. (2023) Guillaume Couairon, Marlène Careil, Matthieu Cord, Stéphane Lathuilière, and Jakob Verbeek. 2023. Zero-shot Spatial Layout Conditioning for Text-to-Image Diffusion Models. In ICCV. 2174–2183.
- Dhariwal and Nichol (2021) Prafulla Dhariwal and Alexander Nichol. 2021. Diffusion Models Beat GANs on Image Synthesis. NeurIPS 34 (2021), 8780–8794.
- Efron (2011) Bradley Efron. 2011. Tweedie’s Formula and Selection Bias. J. Am. Stat. Assoc. 106, 496 (2011), 1602–1614.
- Epstein et al. (2023) Dave Epstein, Allan Jabri, Ben Poole, Alexei Efros, and Aleksander Holynski. 2023. Diffusion Self-Guidance for Controllable Image Generation. In NeurIPS, Vol. 36. 16222–16239.
- Hertz et al. (2023) Amir Hertz, Ron Mokady, Jay Tenenbaum, Kfir Aberman, Yael Pritch, and Daniel Cohen-or. 2023. Prompt-to-Prompt Image Editing with Cross-Attention Control. In ICLR.
- Ho et al. (2020) Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising Diffusion Probabilistic Models. NeurIPS 33, 6840–6851.
- Ho and Salimans (2021) Jonathan Ho and Tim Salimans. 2021. Classifier-Free Diffusion Guidance. In NeurIPS (Workshop: Deep Generative Models and Downstream Applications).
- Karras et al. (2022) Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine. 2022. Elucidating the Design Space of Diffusion-Based Generative Models. In NeurIPS, Vol. 35.
- Li et al. (2021) Zejian Li, Jingyu Wu, Immanuel Koh, Yongchuan Tang, and Lingyun Sun. 2021. Image Synthesis From Layout with Locality-Aware Mask Adaption. In ICCV. 13819–13828.
- Lu et al. (2022) Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. 2022. DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps. In NeurIPS, Vol. 35.
- Meng et al. (2022) Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, and Stefano Ermon. 2022. SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations. In ICLR.
- Padilla et al. (2020) Rafael Padilla, Sergio L Netto, and Eduardo AB Da Silva. 2020. A Survey on Performance Metrics for Object-Detection Algorithms. In Int. Conf. Syst. Signal. IEEE, 237–242.
- Radford et al. (2021) Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In ICML. PMLR, 8748–8763.
- Ramesh et al. (2022) Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 2022. Hierarchical Text-Conditional Image Generation with Clip Latents. arXiv preprint arXiv:2204.06125 (2022).
- Rombach et al. (2022) Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. High-resolution Image Synthesis with Latent Diffusion Models. In CVPR. 10684–10695.
- Ronneberger et al. (2015) Olaf Ronneberger, Philipp Fischer, and Thomas Brox. 2015. U-Net: Convolutional Networks for Biomedical Image Segmentation. In MICCAI. 234–241.
- Saharia et al. (2022) Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. 2022. Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding. NeurIPS 35 (2022), 36479–36494.
- Singh et al. (2023) Jaskirat Singh, Stephen Gould, and Liang Zheng. 2023. High-Fidelity Guided Image Synthesis with Latent Diffusion Models. In CVPR. IEEE, 5997–6006.
- Sohl-Dickstein et al. (2015) Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. 2015. Deep Unsupervised Learning Using Nonequilibrium Thermodynamics. In ICML, Vol. 37. PMLR, 2256–2265.
- Song et al. (2023) Jiaming Song, Qinsheng Zhang, Hongxu Yin, Morteza Mardani, Ming-Yu Liu, Jan Kautz, Yongxin Chen, and Arash Vahdat. 2023. Loss-Guided Diffusion Models for Plug-and-Play Controllable Generation. In ICML, Vol. 202. PMLR, 32483–32498.
- Song and Ermon (2019) Yang Song and Stefano Ermon. 2019. Generative Modeling by Estimating Gradients of the Data Distribution. NeurIPS 32 (2019).
- Song et al. (2021) Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. 2021. Score-Based Generative Modeling through Stochastic Differential Equations. In ICLR.
- Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is All You Need. NeurIPS 30, 5998–6008.
- Vincent (2011) Pascal Vincent. 2011. A Connection between Score Matching and Denoising Autoencoders. Neural Comput. 23, 7 (2011), 1661–1674.
- Voynov et al. (2023) Andrey Voynov, Kfir Aberman, and Daniel Cohen-Or. 2023. Sketch-Guided Text-to-Image Diffusion Models. In SIGGRAPH. 1–11.
- Wang et al. (2023) Jianyi Wang, Kelvin CK Chan, and Chen Change Loy. 2023. Exploring CLIP for Assessing the Look and Feel of Images. In AAAI, Vol. 37. 2555–2563.
- Xie et al. (2023) Jinheng Xie, Yuexiang Li, Yawen Huang, Haozhe Liu, Wentian Zhang, Yefeng Zheng, and Mike Zheng Shou. 2023. Boxdiff: Text-to-Image Synthesis with Training-Free Box-Constrained Diffusion. In ICCV. 7452–7461.
- Zhang et al. (2023) Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. 2023. Adding Conditional Control to Text-to-Image Diffusion Models. In ICCV. 3836–3847.
- Zheng et al. (2023) Guangcong Zheng, Xianpan Zhou, Xuewei Li, Zhongang Qi, Ying Shan, and Xi Li. 2023. Layoutdiffusion: Controllable Diffusion Model for Layout-to-Image Generation. In CVPR. 22490–22499.