
Enhancing Image Layout Control with Loss-Guided Diffusion Models

Zakaria Patel Ecomtent & Department of Computer Science, University of Toronto
      Corresponding author. Email: [email protected] 0009-0003-1752-1910
   Kirill Serkh Departments of Mathematics and Computer Science, University of Toronto
      Email: [email protected] 0000-0003-4751-305X
Abstract

Diffusion models are a powerful class of generative models capable of producing high-quality images from pure noise using a simple text prompt. While most methods which introduce additional spatial constraints into the generated images (e.g., bounding boxes) require fine-tuning, a smaller and more recent subset of these methods takes advantage of the models’ attention mechanism and is training-free. These methods generally fall into one of two categories. The first entails modifying the cross-attention maps of specific tokens directly to enhance the signal in certain regions of the image. The second works by defining a loss function over the cross-attention maps, and using the gradient of this loss to guide the latent. While previous work explores these as alternative strategies, we provide an interpretation of these methods which highlights their complementary features, and demonstrate that it is possible to obtain superior performance when both methods are used in concert.

1 Introduction

Recently, diffusion models Sohl-Dickstein et al. (2015); Ho et al. (2020); Song and Ermon (2019) have emerged as a powerful class of generative models capable of producing high-quality samples with superior mode coverage. These models are trained to approximate a data distribution, and sampling from this learned distribution can subsequently generate very realistic and diverse images. Incorporating conditioning Saharia et al. (2022); Ramesh et al. (2022); Rombach et al. (2022) further extends the utility of diffusion models by allowing one to specify the contents of the desired image using a simple text prompt, leveraging the models’ impressive compositional capabilities to combine concepts in ways that may not have been present in the training set. Conditioning on a text prompt does not, however, give one fine-grained control over the layout of the final image, which is instead highly dependent on the initial noise sample.

An especially simple and intuitive way of describing a layout is to provide bounding boxes for various tokens in the text prompt. One way to realize a model taking such an input is to resort to training-based methods, wherein a pretrained model undergoes additional finetuning using training data where the images have been supplemented by their layouts. While such methods can achieve impressive performance Zheng et al. (2023); Zhang et al. (2023), this often involves the introduction of additional model complexity and training cost, in addition to the difficult task of compiling the training data. A recently proposed alternative approach Xie et al. (2023); Chen et al. (2024) uses the cross-attention module to achieve training-free layout control.

In this paper, we propose injection loss guidance (iLGD), a training-free framework in which the model’s denoising process works in synergy with loss guidance, such that sampled images simultaneously present the appropriate layout and maintain good image quality. First, we bias the latent towards the desired layout by altering the model’s attention maps directly, a process which we refer to as attention injection. The goal of injection is not to produce a latent which perfectly adheres to the desired layout, but rather to provide a comfortable starting point for loss guidance. The resulting latent is sufficiently close to the desired layout that we can afford to use a smaller amount of loss guidance. In doing so, we reimagine the role of injection as a coarse biasing of the diffusion process, and loss guidance as a refiner. We show that such a framework is capable of controlling the layout, while achieving superior image quality in many cases.

2 Related Work

2.1 Generative Models

Generative models learn to estimate a data distribution with the goal of generating samples from this distribution. Recently, a new family of generative models, known as diffusion models Sohl-Dickstein et al. (2015); Ho et al. (2020); Song and Ermon (2019), has achieved superior results on image synthesis compared to the previous state of the art Dhariwal and Nichol (2021), with samples exhibiting remarkable diversity and image quality. While early diffusion models were formulated as Markov chains and relied on a large number of transitions to generate samples, Song et al. (2021) showed that such a sampling procedure can be viewed as a discretization of a certain stochastic differential equation (SDE). In particular, they showed that there exists a family of SDEs whose solutions are sampling trajectories of the diffusion model. One member of this family, called the probability flow ODE, is completely deterministic and contains no noise. This enables the use of various ordinary differential equation (ODE) solvers for efficient sampling Karras et al. (2022); Lu et al. (2022).

2.2 Controllable Generation

Diffusion models can use a wide range of techniques for controllable generation. SDEdit Meng et al. (2022) allows the user to specify a layout using paint strokes, which are noised to time $t<T$ to provide an initialization for solving the reverse-time SDE. The realism of the final image is, however, sensitive to the initial noise level $\sigma_t$, and guiding the generation of new images requires all high-level features to be specified in the stroke image. A more user-friendly method by Voynov et al. (2023) requires only a simple sketch to guide the denoising process. The user-provided sketch is compared with the edges extracted by a latent edge predictor in order to compute a loss, which is used to iteratively refine the latent. In this case, the latent edge predictor must be trained, and sketches of more complicated scenes may be tedious. Zhang et al. (2023) propose a more general method in which a separate encoder network takes as input a conditioning control image, such as a sketch, depth map, or scribble, to guide the generation process. Unfortunately, this requires finetuning a large pretrained encoder. When the inputs are specified as bounding boxes, smaller trainable modules can be used between the layers of the denoising UNet to encode layout information Zheng et al. (2023). Cheng et al. (2023) also use an additional module which takes in bounding box inputs, injecting its output directly after the self-attention layers. Once again, however, neither method can be adapted immediately, as the additional parameters necessitate further training.

A number of other methods have been proposed that are training-free. Bansal et al. (2024) define a generic loss on the noiseless latent $\hat{\mathbf{z}}_{0}$ predicted from $\mathbf{z}_{t}$ by Tweedie’s formula, and subsequently perform loss guidance on the latent $\mathbf{z}_{t}$ at each time step. One downside of this method is that, while the loss expects clean images, the predicted latent $\hat{\mathbf{z}}_{0}$ is an approximation to only an average of possible generated images, and so can be blurry for large times $t$. An alternative approach called MultiDiffusion was proposed by Bar-Tal et al. (2023), who used separate score functions on various regions in a latent diffusion model, with an optimization step at each iteration designed to fuse the separate diffusion paths. While this method can be effective in many cases, it can nonetheless exhibit patchwork artifacts, where the final image appears to be composed of several images rather than depicting a single scene.

Alternatively, several works have explored the use of cross-attention to achieve training-free layout control. Hertz et al. (2023) demonstrate how attention maps can be injected from one diffusion process into another, and reweighted to control the influence of specific tokens. Subsequent work by Balaji et al. (2023) builds upon this idea by directly manipulating the values in the attention map to obtain the desired layout, although it is difficult to precisely localize objects appearing in an image with this method alone. Singh et al. (2023) also use this technique to improve semantic control in stroke-guided image synthesis, which they combine with loss guidance based on the stroke image to improve the realism of generated images. Instead of using strokes, Chen et al. (2024) show that controlling layout with bounding boxes is possible by using loss guidance, where the loss is defined on the attention maps, although this method requires searching for a suitable noise initialization. Concurrent work by Xie et al. (2023) and Couairon et al. (2023) also uses attention-based loss guidance; the former adds spatial constraints to control the scale of the generated content, while the latter uses segmentation maps instead of bounding boxes. Epstein et al. (2023) show that it is even possible to control properties of objects in an image, such as their shape, size, and appearance, through their attention maps, and subsequently manipulate these properties through loss guidance.

These works demonstrate the utility of injection, loss guidance, and the general role of cross-attention in layout control. We take a joint approach where we use cross-attention injection to assist loss guidance in producing the desired layout from simple bounding box inputs. In analyzing the role of each technique in layout control, we offer justification for their complementary use. The result is a powerful and intuitive method for layout control which maintains the quality of the generated images.

3 Preliminaries

3.1 Cross-Attention

To perform conditional image synthesis with text, Stable Diffusion leverages a cross-attention mechanism Vaswani et al. (2017). Cross-attention enables the modelling of complex dependencies between two sequences $\mathbf{X}^{T}=(\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{n})$ and $\mathbf{Y}^{T}=(\mathbf{y}_{1},\mathbf{y}_{2},\ldots,\mathbf{y}_{k})$, whose elements are projected to query, key and value vectors using projection matrices

\mathbf{X}\mathbf{W_{q}}=\mathbf{Q}\in\mathbbm{R}^{n\times d_{k}},
\mathbf{Y}\mathbf{W_{k}}=\mathbf{K}\in\mathbbm{R}^{k\times d_{k}},
\mathbf{Y}\mathbf{W_{v}}=\mathbf{V}\in\mathbbm{R}^{k\times d_{v}},

where $d_{k}$ denotes the dimension of the query and key vectors, and $d_{v}$ denotes the dimension of the value vectors. Subsequently, the attention weights are computed as

A=\text{Softmax}\left(\frac{\mathbf{Q}\mathbf{K}^{T}}{\sqrt{d_{k}}}\right)\in\mathbbm{R}^{n\times k}.

The new representation for the sequence $\mathbf{X}$ is

\mathbf{Z}=A\mathbf{V}\in\mathbbm{R}^{n\times d_{v}}.

In diffusion models, the sequence $\mathbf{X}$ represents the image, where each $\mathbf{x}_{i}$ represents a pixel, and $\mathbf{Y}$ is a sequence of token embeddings. The attention weights $A$, also called the attention or cross-attention map, follow the same spatial arrangement as the image, and a unique map $A_{j}$ is produced for each token $\mathbf{y}_{j}$ in $\mathbf{Y}$. Each entry $A_{ij}$ describes how strongly related a spatial location $\mathbf{x}_{i}$ is to the token $\mathbf{y}_{j}$. We leverage this feature of cross-attention to guide the image generation process.
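For concreteness, the following is a minimal PyTorch sketch of the cross-attention computation described above; the tensor names and shapes are our own illustrative choices, not the Stable Diffusion implementation itself.

import torch

def cross_attention(X, Y, W_q, W_k, W_v):
    # X: (n, d_x) image features, Y: (k, d_y) token embeddings.
    Q = X @ W_q                                     # (n, d_k) queries
    K = Y @ W_k                                     # (k, d_k) keys
    V = Y @ W_v                                     # (k, d_v) values
    d_k = Q.shape[-1]
    A = torch.softmax(Q @ K.T / d_k**0.5, dim=-1)   # (n, k) cross-attention map
    Z = A @ V                                       # (n, d_v) new representation of X
    return Z, A                                     # column A[:, j] is the map for token y_j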

3.2 Score Matching

The forward and reverse processes can be modelled by solutions of stochastic differential equations (SDEs) Song et al. (2021). While determining the coefficients of the forward process SDE is straightforward, the reverse process corresponds to a solution of the reverse-time SDE, which requires learning the score of the intractable marginal distribution $q(\mathbf{x}_{t})$. Instead, Song et al. (2021) use the score-matching objective

\mathbbm{E}_{t}[\lambda(t)\mathbbm{E}_{q(\mathbf{x}_{0})}\mathbbm{E}_{q(\mathbf{x}_{t}|\mathbf{x}_{0})}[\lVert\mathbf{s}_{\theta}(\mathbf{x}_{t},t)-\nabla_{\mathbf{x}_{t}}\log q(\mathbf{x}_{t}|\mathbf{x}_{0})\rVert^{2}_{2}]], (1)

for some positive function $\lambda\colon[0,T]\to\mathbbm{R}$.

While Eq. (1) does not directly enforce learning the score of $q(\mathbf{x}_{t})$, it is nonetheless minimized when $\mathbf{s}_{\theta}(\mathbf{x}_{t},t)=\nabla_{\mathbf{x}_{t}}\log q(\mathbf{x}_{t})$ Vincent (2011). Conditioning on $\mathbf{x}_{0}$ provides a tractable way to obtain a neural network $\mathbf{s}_{\theta}(\mathbf{x}_{t},t)$ which, given enough parameters, matches $\nabla_{\mathbf{x}_{t}}\log q(\mathbf{x}_{t})$ almost everywhere. Because the forward process $q(\mathbf{x}_{t}|\mathbf{x}_{0})$ is available in closed form, it can be shown that the neural network which minimizes this loss is

\mathbf{s}_{\theta}(\mathbf{x}_{t},t)=-\frac{\bm{\epsilon}_{\theta}(\mathbf{x}_{t},t)}{\sigma_{t}}, (2)

where $\sigma_{t}$ is the standard deviation of the forward process at time $t$, and $\bm{\epsilon}_{\theta}(\mathbf{x}_{t},t)$ predicts the scaled noise $\bm{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{I})$ in $\mathbf{x}_{t}$.

When the predicted score function is available, any one of a family of reverse-time SDEs, all with the same marginal distributions $p_{\theta}(\mathbf{x}_{t})\approx q(\mathbf{x}_{t})$, can be solved to sample from $p_{\theta}(\mathbf{x}_{0})\approx q(\mathbf{x}_{0})$. One of these SDEs is noise-free, and is known as the probability flow ODE. A highly efficient way to sample from $p_{\theta}(\mathbf{x}_{0})$ is to solve the probability flow ODE using a small number of large timesteps Song et al. (2021). An important benefit of sampling using an ODE is that the sampling process is deterministic, in the sense that it associates each noisy image $\mathbf{x}_{T}$ with a unique noise-free sample $\mathbf{x}_{0}$ Song et al. (2021).

In practice, the denoising process usually operates on a latent variable $\mathbf{z}_{t}$, where an autoencoder is used to project to and from image space $\mathbf{x}_{t}$. In order to generate images which correspond to a user-supplied text prompt, the noise predictor $\bm{\epsilon}_{\theta}(\mathbf{z}_{t},t,\mathbf{y})$ is conditioned on text prompts $\mathbf{y}$, and samples are computed using this noise predictor by classifier-free guidance (CFG). In this paper, we denote the CFG noise prediction by $\bm{\epsilon}_{\theta}(\mathbf{z}_{t},t,\mathbf{y})$, or sometimes just $\bm{\epsilon}_{\theta}(\mathbf{z}_{t},t)$, when the dependence on $\mathbf{y}$ is clear. We provide more details about diffusion models in Appendix A.

3.3 Controllable Layout Generation

We use BoxDiff Xie et al. (2023), the method of Chen et al. (2024), and MultiDiffusion Bar-Tal et al. (2023) as our primary points of comparison. In Xie et al. (2023) and Chen et al. (2024), the authors apply spatial constraints on the attention maps of a latent diffusion model to derive a loss, and directly update the latent at time step $t$ by replacing it with

\mathbf{z}^{\prime}_{t}=\mathbf{z}_{t}-\alpha_{t}\cdot\nabla_{\mathbf{z}_{t}}\mathcal{L}_{\mathbf{y}}(\mathbf{z}_{t}), (3)

where $\mathcal{L}_{\mathbf{y}}$ is a loss function which depends on the neural network, the prompt $\mathbf{y}$, and a set of bounding boxes. The parameter $\alpha_{t}$ controls the strength of the loss guidance in each iteration. In BoxDiff, $\alpha_{t}$ decays linearly from $t=T$ to $t=0$, while in Chen et al., $\alpha_{t}=\eta\cdot\sigma_{t}^{2}$ for some scale factor $\eta>0$. In MultiDiffusion, letting $\Phi_{\theta}(\mathbf{z}_{t},t,\mathbf{y})$ denote the function which computes $\mathbf{z}_{t-1}$ from $\mathbf{z}_{t}$ using the predicted scaled noise $\bm{\epsilon}_{\theta}(\mathbf{z}_{t},t,\mathbf{y})$, a new latent $\mathbf{z}^{\prime}_{t-1}$ is produced by solving the optimization problem

\mathbf{z}^{\prime}_{t-1}=\mathop{\mathrm{arg\,min}}_{\mathbf{z}}\sum_{i=1}^{k}\bigl\|\mathbf{m}_{i}\otimes\bigl(\mathbf{z}-\Phi_{\theta}(\mathbf{z}_{t},t,\mathbf{y}_{i})\bigr)\bigr\|^{2},

where each $\mathbf{m}_{i}\in\{0,1\}^{n}$ is a mask corresponding to the bounding box of the prompt $\mathbf{y}_{i}$.
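As an illustration, a minimal sketch of the loss-guided update in Eq. (3) is given below, assuming a hypothetical differentiable function layout_loss that evaluates $\mathcal{L}_{\mathbf{y}}$ on the attention maps produced when the UNet is run on the latent.

import torch

def loss_guided_update(z_t, layout_loss, alpha_t):
    # One update of Eq. (3): z'_t = z_t - alpha_t * gradient of the layout loss.
    z_t = z_t.detach().requires_grad_(True)
    loss = layout_loss(z_t)                  # scalar loss defined over the attention maps
    grad, = torch.autograd.grad(loss, z_t)
    return z_t.detach() - alpha_t * grad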

4 Method

[Figure: images/inject_vs_boxdiff_sweep.pdf]

Figure 1: The effects of varying the strengths of BoxDiff and attention injection by tuning their respective parameters. In the top row, we sweep through various choices of $\alpha_{T}$ to tune the guidance strength of BoxDiff. In the bottom row, we sweep through various choices of the injection strength $\nu^{\prime}$. For iLGD, shown in the final column, we use $\nu^{\prime}=0.75$ and $\eta=0.8$.

4.1 Attention Injection

Hertz et al. (2023) observed that, by extracting the attention maps from one latent diffusion process and applying them to another with a modified token sequence, it is possible to transfer the composition of an image. This technique is very effective, but it requires an original set of attention maps which already produce the desired layout, and repeatedly generating images until such a layout appears by chance is not feasible.

Instead, we rely on the observation that the attention maps early in the diffusion process are strong indicators of the generated image’s composition. For early timesteps, both the latents and the attention maps are relatively diffuse, and do not suggest any fine details about objects in the image. Motivated by this, we manipulate the attention maps by artificially enhancing the signal in certain regions. Given a list of target tokens $S\subset\{1,2,\ldots,k\}$, we define $\mathbf{m}=\{\mathbf{m}_{1},\mathbf{m}_{2},\ldots,\mathbf{m}_{k}\}\in\mathbbm{R}^{n\times k}$ by setting each mask $\mathbf{m}_{j}$, $j\in S$, equal to $1$ over the region that the text token $\mathbf{y}_{j}$ should correspond to, and zero otherwise, and perform injection by replacing the cross-attention map $A_{t}$ at time $t$ with

A^{\prime}_{t}=\text{softmax}\left(\frac{\mathbf{Q}_{t}\mathbf{K}_{t}^{T}+\nu_{t}\mathbf{m}}{\sqrt{d_{k}}}\right), (4)

where $\nu_{t}>0$. We follow Balaji et al. (2023) and use the scaling

\nu_{t}=\nu^{\prime}\cdot\log(1+\sigma_{t})\cdot\max(\mathbf{Q}_{t}\mathbf{K}_{t}^{T}). (5)

Scaling by $\sigma_{t}$ ensures the injection strength is appropriate for a given timestep, and $\nu^{\prime}$ is a constant which controls the overall strength of injection.

In this way, we directly bias the model’s predicted score $\mathbf{s}_{\theta}(\mathbf{z}_{t},t)\approx\nabla_{\mathbf{z}_{t}}\log q_{t}(\mathbf{z}_{t}|\mathbf{y})$ so that each latent $\mathbf{z}_{t-1}$ more closely corresponds to the desired layout. We denote the corresponding modified noise prediction by $\bm{\epsilon}_{\theta}(\mathbf{z}_{t},t,A_{t}\xrightarrow{\nu^{\prime},\,\mathbf{m}}A_{t}^{\prime})$.
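The following is a minimal sketch of the injection step in Eqs. (4) and (5), applied to the attention logits of a single cross-attention layer; the variable names are our own, and in practice the operation is repeated inside every cross-attention layer of the UNet.

import math
import torch

def inject_attention(logits, mask, nu_prime, sigma_t, d_k):
    # logits: (n, k) tensor holding Q_t K_t^T; mask: (n, k) bounding-box masks m.
    nu_t = nu_prime * math.log(1.0 + sigma_t) * logits.max()                  # Eq. (5)
    return torch.softmax((logits + nu_t * mask) / math.sqrt(d_k), dim=-1)     # Eq. (4)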

4.2 Loss Guidance

Conditional latent diffusion models predict the time-dependent conditional score $\nabla_{\mathbf{z}_{t}}\log q(\mathbf{z}_{t}|\mathbf{y})$, so that the resulting latent at the end of the denoising process is sampled from $q(\mathbf{z}_{0}|\mathbf{y})$. We can modify the conditional score at time $t$ by introducing a loss term $\ell_{\mathbf{y}}(\mathbf{z}_{t})$,

\nabla_{\mathbf{z}_{t}}\log\hat{q}(\mathbf{z}_{t}|\mathbf{y})=\nabla_{\mathbf{z}_{t}}\log q(\mathbf{z}_{t}|\mathbf{y})-\eta\nabla_{\mathbf{z}_{t}}\ell_{\mathbf{y}}(\mathbf{z}_{t}), (6)

which corresponds to the marginal distribution

\hat{q}(\mathbf{z}_{t}|\mathbf{y})\propto q(\mathbf{z}_{t}|\mathbf{y})e^{-\eta\ell_{\mathbf{y}}(\mathbf{z}_{t})}. (7)

The scaling constant $\eta$ controls the relative strength of loss guidance. By using annealed Langevin dynamics Song and Ermon (2019), it is possible to use the predicted score function together with the loss term to sample from an approximation to $\hat{q}(\mathbf{z}_{0}|\mathbf{y})$. While this provides a clear interpretation for the effect of loss guidance, the cost of annealed Langevin dynamics can be fairly large. Instead, we solve the probability flow ODE using the score function

\hat{\mathbf{s}}_{\theta}(\mathbf{z}_{t},t):=\mathbf{s}_{\theta}(\mathbf{z}_{t},t)-\eta\nabla_{\mathbf{z}_{t}}\ell_{\mathbf{y}}(\mathbf{z}_{t}). (8)

This no longer corresponds to sampling from an approximation to $\hat{q}(\mathbf{z}_{0}|\mathbf{y})$ (see Song et al. (2023)). However, so long as the latents $\mathbf{z}_{t}$ are not too out-of-distribution with respect to the marginals $q(\mathbf{z}_{t}|\mathbf{y})$, this process influences the trajectory to favor samples from $p_{\theta}(\mathbf{z}_{0}|\mathbf{y})$ for which the loss term is small.

For layout control, given a list of target tokens $S\subset\{1,2,\ldots,k\}$, we choose the simple loss function

\ell_{\mathbf{y}}(\mathbf{z}_{t})=\sum_{j\in S}\text{sum}(\mathbf{\bar{m}}_{j}\odot(A_{t})_{j})-\text{sum}(\mathbf{m}_{j}\odot(A_{t})_{j}), (9)

where $\mathbf{m}_{j}$ is a mask whose value is $1$ over the region where token $\mathbf{y}_{j}$ should appear, and $0$ otherwise, and $\mathbf{\bar{m}}_{j}=1-\mathbf{m}_{j}$. Intuitively, this simple loss encourages the sampling of latents whose attention maps for each target token take their largest values within the masked regions.

When high levels of loss guidance are used, the specific choice of the loss function heavily influences the behavior of the denoising process. In BoxDiff, Xie et al. (2023) use sums over the $P\ll n$ most attended-to pixels in the masked attention maps, while Chen et al. (2024) use a normalized loss which depends on the ratio $\text{sum}(\mathbf{m}_{j}\odot(A_{t})_{j})/\text{sum}((A_{t})_{j})$. These choices seem to be essential for enabling those methods to maintain good image quality at very high levels of loss guidance. Since we use much lower levels of loss guidance, we find that our method is significantly less sensitive to the choice of loss function, and our simple loss function works well to contain the attention within the regions defined by $\mathbf{m}$.

In practice, diffusion models are trained to predict the scaled noise $\bm{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{I})$ in $\mathbf{z}_{t}$. Revisiting Eq. (2), we observe that we can define the corresponding modified noise prediction by scaling the loss-guidance term appropriately Dhariwal and Nichol (2021):

\hat{\bm{\epsilon}}_{\theta}(\mathbf{z}_{t},t):=\bm{\epsilon}_{\theta}(\mathbf{z}_{t},t)+\eta\sigma_{t}\nabla_{\mathbf{z}_{t}}\ell_{\mathbf{y}}(\mathbf{z}_{t}). (10)

Since the loss function $\ell_{\mathbf{y}}(\mathbf{z}_{t})$ is a scalar function, computing its gradient with respect to $\mathbf{z}_{t}$ by reverse accumulation requires only a single sweep through the computational graph. As a result, the runtime and memory requirements of sampling using loss guidance are approximately twice those of sampling without loss guidance. This can be seen in, for example, Table 1 of Chen et al. (2024).
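Below is a minimal sketch of the loss in Eq. (9) and the modified noise prediction in Eq. (10). Here attn is an (n, k) aggregated cross-attention map assumed to have been computed from the latent z_t with gradient tracking enabled, masks[j] is the flattened bounding-box mask for token j, and the remaining names are our own illustrative choices.

import torch

def layout_loss(attn, masks, targets):
    # Eq. (9): penalize attention outside each box and reward attention inside it.
    loss = attn.new_zeros(())
    for j in targets:
        loss = loss + ((1 - masks[j]) * attn[:, j]).sum() - (masks[j] * attn[:, j]).sum()
    return loss

def guided_noise(eps, z_t, loss, eta, sigma_t):
    # Eq. (10): add the scaled loss gradient to the predicted noise.
    grad, = torch.autograd.grad(loss, z_t)   # single reverse-mode sweep
    return eps + eta * sigma_t * grad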

4.3 iLGD

[Figure: images/balls.pdf]

Figure 2: Images generated with the prompt “a ball on the grass,” using the bounding boxes shown in the first column. Each row corresponds to a different method. The bounding boxes in the first column are used for injection and iLGD. The attention maps in the second column are averages over the $8\times 8$ resolution attention maps at $t=0$ over 100 random seeds. Each of the 8 columns of images in this figure corresponds to one of these 100 seeds.

One weakness of attention injection is that, by directly modifying the attention maps, we disrupt the agreement between the predicted score function $\mathbf{s}_{\theta}(\mathbf{z}_{t},t)$ and the true conditional score $\nabla_{\mathbf{z}_{t}}\log q(\mathbf{z}_{t}|\mathbf{y})$. The discrepancy between the actual and predicted score functions becomes especially pronounced at smaller noise levels, where fine details in the cross-attention maps are destroyed by the injection process. This results in a cartoon-like appearance in the final sampled images, as seen in Figure 17 of Balaji et al. (2023) and in the bottom row of Figure 1. The sensitivity of the cross-attention maps at low noise levels makes it difficult to use injection to precisely control the final layout without negatively influencing image quality. One big advantage of attention injection, however, is that at high noise levels it is possible to strongly influence the cross-attention maps while still producing latents that are in-distribution. Interestingly, even the cartoon-like images resulting from excessive attention injection at low noise levels are not actually out-of-distribution, but are only in an unwanted style.

A weakness of loss guidance is that the ad-hoc choice of the loss function may not compete well with the predicted score $\mathbf{s}_{\theta}(\mathbf{z}_{t},t)$. In each denoising step, the latent $\mathbf{z}_{t-1}$ must fall in a high-probability region of $q(\mathbf{z}_{t-1}|\mathbf{z}_{t})$, otherwise the predicted latent $\mathbf{z}_{t-1}$ will be out-of-distribution, which results in degraded image quality in the final sampled latent $\mathbf{z}_{0}$. This means that the guidance term’s influence in Eq. (6) should be small enough to avoid moving the latents into low-probability regions. On the other hand, small loss guidance strengths may exert too little influence on the sampling trajectory. In this case, the model produces in-distribution samples, but ones which do not fully agree with the desired layout. This tradeoff is illustrated in the first row of Figure 1, which shows images with unnaturally high contrast and saturation appearing at high loss-guidance strengths. One advantage of loss guidance is that, even at low strengths, it is able to exert some influence over sampling trajectories without biasing the style of the images.

We take the point of view that injection is better suited as a coarse control over the predicted latents, and cannot fully replace loss guidance. Instead, we propose using injection together with loss guidance in a complementary fashion, in such a way that they compensate for each other’s weaknesses. We call this approach to layout control injection loss guidance (iLGD). Instead of delegating the layout generation task entirely to loss guidance, we rely on injection to first bias the latent, as illustrated in the second row of Figure 2. The first row of this figure shows that, when using Stable Diffusion alone, the ball appears in random locations near the center of the image, while the second row shows that injection encourages it to appear more frequently inside of the bounding box, towards the top right. When we also perform loss guidance, the third row of the figure shows that the averaged attention map is significantly more concentrated inside of the bounding box. Since the original score function is modified additively, the amount of loss guidance can be made sufficiently small so that the latents stay in-distribution, while still being large enough to influence the sampling trajectory. This is true even at small noise levels, when fine details are present in the images.

We also observe that, when using both injection and loss guidance, the details of the objects better reflect the context of the scene due to higher levels of attention on those objects. In the third row, many of the balls appear in the style of a soccer ball, which is likely the most common type of ball to appear together with grass.

We outline our algorithm in Algorithm 1 and depict it visually in Figure 3. We perform attention injection from timestep $T$ to $t_{\text{inject}}$ in order to obtain the modified predicted noise $\bm{\epsilon}^{\prime}_{\theta}(\mathbf{z}_{t},t)$. In each timestep, we further refine this latent by simultaneously performing loss guidance, from timestep $T$ to $t_{\text{loss}}$. In practice, we find it useful to perform loss guidance for several more steps after we stop injection, $t_{\text{loss}}>t_{\text{inject}}$.

[Figure: images/unet-ilgd.pdf]

Figure 3: A graphical depiction of injection loss guidance (iLGD).
Input: A prompt $\mathbf{y}$; a list of target tokens $S\subset\{1,2,\ldots,k\}$; a collection of bounding boxes, one for each token in $S$; an injection strength $\nu^{\prime}$; a loss guidance strength $\eta$.
Output: The generated image $\mathbf{x}_{0}=\mathcal{D}(\mathbf{z}_{0})$.
Construct the mask $\mathbf{m}=\{\mathbf{m}_{1},\mathbf{m}_{2},\ldots,\mathbf{m}_{k}\}$ from the bounding boxes;
Initialize $\mathbf{z}_{T}\sim\mathcal{N}(\mathbf{0},\mathbf{I})$;
for $t=T,\ldots,1$ do
      $\bm{\epsilon}^{\prime}_{\theta}(\mathbf{z}_{t},t)=\begin{cases}\bm{\epsilon}_{\theta}(\mathbf{z}_{t},t,A_{t}\xrightarrow{\nu^{\prime},\,\mathbf{m}}A_{t}^{\prime})&\text{if }t>t_{\text{inject}},\\ \bm{\epsilon}_{\theta}(\mathbf{z}_{t},t)&\text{otherwise};\end{cases}$
      $\ell_{\mathbf{y}}(\mathbf{z}_{t})=\begin{cases}\sum_{j\in S}\text{sum}(\mathbf{\bar{m}}_{j}\odot(A_{t})_{j})-\text{sum}(\mathbf{m}_{j}\odot(A_{t})_{j})&\text{if }t>t_{\text{loss}},\\ 0&\text{otherwise};\end{cases}$
      $\hat{\bm{\epsilon}}_{\theta}^{\prime}(\mathbf{z}_{t},t)=\bm{\epsilon}^{\prime}_{\theta}(\mathbf{z}_{t},t)+\eta\sigma_{t}\nabla_{\mathbf{z}_{t}}\ell_{\mathbf{y}}(\mathbf{z}_{t})$;
      Compute $\mathbf{z}_{t-1}$ from $\mathbf{z}_{t}$ using $\hat{\bm{\epsilon}}_{\theta}^{\prime}(\mathbf{z}_{t},t)$;
end for
ALGORITHM 1: Pseudocode for iLGD
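A minimal Python sketch of this loop is given below, under the assumption of two hypothetical helpers: predict_noise, which runs the UNet (applying attention injection when masks are passed) and returns the CFG noise estimate together with the aggregated cross-attention map, and scheduler_step, which performs one solver step from z_t to z_{t-1}; layout_loss refers to the sketch given after Eq. (10).

import torch

def ilgd_sample(prompt, masks, targets, T, t_inject, t_loss,
                nu_prime, eta, sigmas, predict_noise, scheduler_step):
    z = torch.randn(1, 4, 64, 64)                           # z_T ~ N(0, I), SD latent shape
    for t in range(T, 0, -1):
        z = z.detach().requires_grad_(True)
        inject_masks = masks if t > t_inject else None      # inject only while t > t_inject
        eps, attn = predict_noise(z, t, prompt, inject_masks, nu_prime)
        if t > t_loss:                                      # loss guidance only while t > t_loss
            loss = layout_loss(attn, masks, targets)        # Eq. (9)
            grad, = torch.autograd.grad(loss, z)
            eps = eps + eta * sigmas[t] * grad              # Eq. (10)
        z = scheduler_step(eps.detach(), t, z.detach())     # compute z_{t-1} from z_t
    return z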

5 Experiments

5.1 Experimental Setup

Datasets

We follow Xie et al. (2023) and evaluate performance on a dataset consisting of 200 prompt and bounding-box pairs, spanning 20 different prompts and 27 object categories. Each prompt follows one of two structures: “a {} …” or “a {} and a {} ….” MultiDiffusion requires separate prompts for the foreground elements and for the background, which we manually created from our original prompts. For example, the prompt “a red book and a clock” became the two foreground prompts “a red book” and “a clock,” with a null background prompt, while “a giraffe on a field” became the foreground prompt “a giraffe” and the background prompt “a field.”

Methods

In this section, we compare our proposed method (iLGD), BoxDiff Xie et al. (2023), Chen et al. (2024), MultiDiffusion Bar-Tal et al. (2023), and Stable Diffusion Rombach et al. (2022). We also include the results of an ablation study of our method, in which we use just injection (SD + I) or just loss guidance (SD + LG) alone. In order to allow loss guidance to exert sufficient influence over the final image layouts when it is used in isolation, we increase the loss-guidance strength above the level used in iLGD (see Appendix B for more details). All of the methods compared in this section used the official Stable Diffusion v1.4 model Rombach et al. (2022) from HuggingFace.

Evaluation Metrics

We employ a variety of metrics to measure different aspects of performance. First, we use the T2I-Sim metric Xie et al. (2023) to measure text-to-image similarity between prompts and their corresponding generated images. This metric measures the cosine similarity between text and images in CLIP feature space Radford et al. (2021) to evaluate how well the generated images reflect the semantics of the prompt.

We also use CLIP-IQA Wang et al. (2023) to assess the quality of the generated images. Given a pair of descriptors $\{\mathbf{y}_{1},\mathbf{y}_{2}\}$ which are opposite in meaning (e.g., high quality, low quality), CLIP-IQA compares the CLIP features of these prompts with the CLIP features of the generated image. The final score reflects how well $\mathbf{y}_{1}$, as opposed to $\mathbf{y}_{2}$, describes the image. We evaluate the overall quality of images using the pair {high quality, low quality}, blurriness using {clear, blurry}, and naturalness using {natural, synthetic}.

To evaluate each method’s faithfulness to the prescribed bounding boxes, we use YOLOv4 Bochkovskiy et al. (2020) to compare the predicted bounding boxes over the set of generated images to the ground truth bounding boxes, and report the average precision at $\text{IOU}=0.5$.

Finally, we report the average contrast and saturation of generated images. We observe that guidance-based methods often lead to high contrast and high saturation, particularly when the guidance strength is high.

More details about these evaluation metrics can be found in Appendix B.

5.2 Comparisons

[Figure: images/ilgd_vs_boxdiff.pdf]

Figure 4: A comparison of iLGD against BoxDiff, Chen et al., MultiDiffusion, and Stable Diffusion. The random seed is kept the same across each set of images.
[Figure: images/ilgd_vs_injection.pdf]

Figure 5: Images generated using either just injection or iLGD. The random seed is kept the same across each set of images. For the prompt “a castle in the middle of a marsh,” we use $\nu^{\prime}=0.6$ for injection and use $\eta=0.48$ when we additionally introduce loss guidance in iLGD.
Table 1: Comparison of the average contrast and saturation over 200 images for BoxDiff, Chen et al., MultiDiffusion, Stable Diffusion (SD), SD with injection only (SD + I), SD with loss guidance only (SD + LG), and iLGD.
Method            Average Contrast    Average Saturation
SD @ CFG 0.0      52.96               84.49
SD @ CFG 7.5      58.53               110.14
SD @ CFG 12.5     65.87               123.32
BoxDiff           65.68               115.67
Chen et al.       62.12               98.56
MultiDiffusion    48.03               75.01
SD + I            46.07               105.63
SD + LG           58.51               102.62
Ours (iLGD)       47.53               105.81
Table 2: Comparison of various quality metrics for BoxDiff, Chen et al., MultiDiffusion, Stable Diffusion (SD), SD with injection only (SD + I), SD with loss guidance only (SD + LG), and iLGD, averaged over 200 images.
Method            T2I-Sim (↑)    CLIP-IQA Quality (↑)    CLIP-IQA Natural (↑)    CLIP-IQA Clear (↑)    AP@0.5 (↑)
SD                0.303          0.928                   0.705                   0.736                 —
BoxDiff           0.305          0.922                   0.613                   0.6945                0.192
Chen et al.       0.301          0.936                   0.640                   0.814                 0.118
MultiDiffusion    0.295          0.920                   0.547                   0.792                 0.411
SD + I            0.305          0.958                   0.647                   0.808                 0.136
SD + LG           0.302          0.932                   0.684                   0.737                 0.055
Ours (iLGD)       0.309          0.961                   0.654                   0.817                 0.202

We present a comparison between our proposed method (iLGD), BoxDiff Xie et al. (2023), Chen et al. (2024), MultiDiffusion Bar-Tal et al. (2023), and Stable Diffusion Rombach et al. (2022) in Figure 4. Qualitatively, we observe that BoxDiff and Chen et al. both produce images with higher contrast, and BoxDiff produces especially saturated images. For instance, for the prompt “a donut and a carrot,” both objects in BoxDiff’s image appear unnaturally bright or dark in certain regions. Chen et al. produces high-contrast images for the prompts “a dog on a bench” and “a banana and broccoli,” though with visibly less saturation than BoxDiff. High contrast and saturation are even visible in some images when just Stable Diffusion is used, e.g., for the prompts “a balloon and a cake and a frame…” and “a suitcase and a handbag” in Figure 5.

These observations are reflected quantitatively in Table 1, which reports the average contrast and saturation across all generated images. The same high-contrast phenomenon observed in BoxDiff and Chen et al. is seen to occur in Stable Diffusion when increasing the classifier-free guidance (CFG) scale. Furthermore, high saturation can also be seen in both BoxDiff and Stable Diffusion at high CFG scales. Both loss guidance and CFG appear to move the predicted latent into similar low-probability regions if the strength is too large, which may nonetheless be necessary to obtain either the appropriate layout or the appropriate agreement with the semantics of the text prompt. Neither attention injection nor MultiDiffusion appears to have this biasing effect, and the contrast and saturation for iLGD are similar to those of attention injection.

We note that both attention injection and iLGD produce images of lower contrast than even Stable Diffusion without classifier-free guidance, although visual inspection suggests that this does not manifest as any obvious abnormality in the images. We hypothesize that injection favours scenes that naturally contain lower contrast. When the objects appearing in the images receive high levels of attention as a result of injection, they tend to be clear, in-focus, and easily discernible, and do not have either very bright reflections or very dark shadows.

Biasing the latents using injection also clearly helps to preserve image quality and achieve stronger layout control, particularly for layouts which are difficult for the model to generate. For the prompt “a red book and a clock,” BoxDiff struggles to move the clock into the correct position, instead generating it as a lower quality, shadow-like figure, while Chen et al. omits the clock entirely, and MultiDiffusion generates a chaotic scene with many artifacts. Using iLGD, the clock and book appear in the correct positions, and the clock maintains its fine details, e.g., the numbering on the face, as well as its texture. For the prompt “a bowl and a spoon,” the image produced by iLGD is more faithful to the bounding boxes than the image produced by BoxDiff, in which it is also difficult to distinguish the spoon from its shadow as dark colours are exaggerated, as well as the image of Chen et al., which struggles to reposition the objects. The layout produced by MultiDiffusion is more accurate, but the image itself is not sensible. In general, it appears that BoxDiff performs very well, but produces images of unnaturally high contrast and saturation. The method of Chen et al. produces images of reasonable quality, but struggles to move the objects to the desired bounding boxes. MultiDiffusion appears to consistently make objects appear in the bounding boxes, but often produces chaotic scenes or scenes with cut-and-paste artifacts, in which the foreground images seem glued to an incompatible background (see the prompt “a banana and broccoli” for one such example).

In Table 2, we report various metrics related to image quality (CLIP-IQA), bounding box precision (YOLOv4), and text-to-image similarity (T2I-Sim). While both BoxDiff and iLGD achieve similar T2I-Sim and AP@0.5 scores, the latter achieves better performance on all CLIP-IQA metrics. While the method of Chen et al. achieves T2I-Sim and CLIP-IQA scores that are almost as good as those of iLGD, it achieves a much lower AP@0.5 score. MultiDiffusion has the highest AP@0.5 score of all methods considered, but it also has the lowest T2I-Sim and CLIP-IQA metrics (all except for Clear), showing that MultiDiffusion sacrifices an unacceptable amount of image quality in exchange for layout control. The ablation study (SD + I) using just injection shows good image quality, but substantially worse layout control than iLGD, while the ablation study (SD + LG) using just loss guidance shows slightly worse image quality and much worse layout control than iLGD. These results indicate that iLGD is able to achieve superior image quality without sacrificing layout control, qualifying it as a meaningful improvement over BoxDiff, Chen et al., and MultiDiffusion.

We provide additional comparisons in Appendix C.

5.3 Ablation Studies

A visual comparison between iLGD and injection alone is presented in Figure 5. We observe that, while injection typically does generate an object in each bounding box, the object itself may be incorrect. To illustrate, for the prompt “a balloon, a cake, and a frame, …,” using injection leads to a frame appearing where the cake should be; for “a donut and a carrot,” it generates a hand where the donut should be; for “a suitcase and a handbag,” a second suitcase is generated instead of a handbag; and for “a cat sitting outside,” something akin to a tree stump appears instead of a cat. In the example for “a castle in the middle of a marsh,” the castle does not fill up the bounding box, and for “a cat with a tie” the resulting image has the correct layout, but begins to appear like a cartoon cutout. When loss guidance is added using iLGD, all of these images appear correctly. These observations are reflected in Table 2, where injection achieves a noticeably lower AP@0.5 score compared to iLGD.

We note that the results in Figure 5 provide some additional evidence that injection is able to successfully bias the image according to the desired layout. For instance, revisiting the example of the prompt “a cat sitting outside,” injection causes a tree stump to appear in the region where the attention was enhanced. When loss guidance is also applied in each step, this stump is instead denoised into a cat. In this case, a small amount of loss guidance suffices to generate the appropriate layout, as it is augmented by the biasing effect of injection.

6 Conclusions

In this work, we introduce a framework which combines both attention injection and loss guidance to produce samples conforming to a desired layout. We show, both qualitatively and quantitatively, that our proposed method can produce such samples with fewer visual artifacts compared to training-free methods using loss guidance alone. Our method uses only existing components of the diffusion model, avoiding any additional model complexity. One of the method’s limitations is that it remains somewhat sensitive to the initial random seed.

Appendix A Diffusion Models

A.1 Denoising Diffusion Probabilistic Models

Diffusion models Ho et al. (2020) are characterized by two principal processes. The first is the forward process, wherein the data $\mathbf{x}_{0}$ is gradually corrupted by Gaussian noise until it becomes pure noise, which we denote by $\mathbf{x}_{T}$. The reverse process moves in the opposite direction, attempting to recover the data by iteratively removing noise. The denoiser $\bm{\epsilon}_{\theta}(\mathbf{x}_{t},t)$ is typically a UNet Ronneberger et al. (2015) which accepts an image $\mathbf{x}_{t}$ and predicts its noise content $\bm{\epsilon}$. Removing a fraction of this noise yields a slightly denoised image $\mathbf{x}_{t-1}$. Repeating this process over $T$ steps produces a noise-free image $\mathbf{x}_{0}$.

Operating directly on the image $\mathbf{x}_{t}$ in pixel space is computationally expensive. As an alternative, latent diffusion models have been proposed to curtail this high cost, in which the denoising procedure is performed in latent space, whose dimensionality is typically much lower than that of pixel space. Stable Diffusion Rombach et al. (2022) is one example of a latent diffusion model which achieves state-of-the-art performance on various image synthesis tasks. It leverages a powerful autoencoder to project to and from latent space, where the standard denoising procedure is performed. Images in latent space are typically denoted by $\mathbf{z}_{t}$, and the encoder and decoder are denoted by $\mathcal{E}$ and $\mathcal{D}$, respectively, so that $\mathbf{z}_{t}=\mathcal{E}(\mathbf{x}_{t})$ and $\mathbf{x}_{t}=\mathcal{D}(\mathbf{z}_{t})$.

During training, samples from the true data distribution $q(\mathbf{x}_{0})$ are corrupted via the forward process. By training a diffusion model to learn a reverse process in which it iteratively reconstructs these noisy samples into noise-free samples, it is possible to generate images from pure noise at inference time. This corresponds to sampling from an approximation $p_{\theta}(\mathbf{x}_{0})$ to the data distribution $q(\mathbf{x}_{0})$. This generation process can be guided by introducing an additional input vector $\mathbf{y}$, which is often a text prompt. In this case, the model produces samples from an approximation $p_{\theta}(\mathbf{x}_{0}|\mathbf{y})$ to the conditional distribution $q(\mathbf{x}_{0}|\mathbf{y})$.

In denoising diffusion probabilistic models (DDPM) Ho et al. (2020), the forward process is characterized by the Markov chain $q(\mathbf{x}_{t}|\mathbf{x}_{t-1})\sim\mathcal{N}(\mathbf{x}_{t};\sqrt{1-\beta_{t}}\mathbf{x}_{t-1},\beta_{t}\mathbf{I})$, for some noise schedule $\beta_{t}$. In this case, $q(\mathbf{x}_{t}|\mathbf{x}_{0})\sim\mathcal{N}(\mathbf{x}_{t};\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0},(1-\bar{\alpha}_{t})\mathbf{I})$, where $\bar{\alpha}_{t}=\prod_{s=1}^{t}\alpha_{s}$ and $\alpha_{t}=1-\beta_{t}$. The reverse process is typically modeled by a learned Markov chain $p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\sim\mathcal{N}(\mathbf{x}_{t-1};\mu_{\theta}(\mathbf{x}_{t},t),\sigma_{t}^{2}\mathbf{I})$, where $\sigma_{t}$ is an untrained time-dependent constant, usually with $\tilde{\beta}_{t}\leq\sigma_{t}^{2}\leq\beta_{t}$ and $\tilde{\beta}_{t}=\beta_{t}(1-\bar{\alpha}_{t-1})/(1-\bar{\alpha}_{t})$, or with $\sigma_{t}$ simply chosen equal to $\sqrt{\beta_{t}}$.
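As a minimal illustration of the closed-form forward process $q(\mathbf{x}_t|\mathbf{x}_0)$ above, the following sketch draws a noisy sample $\mathbf{x}_t$ from a clean image $\mathbf{x}_0$; the schedule betas is assumed to be given as a 1-D tensor.

import torch

def forward_sample(x0, t, betas):
    # q(x_t | x_0) = N(sqrt(alpha_bar_t) x_0, (1 - alpha_bar_t) I)
    alpha_bars = torch.cumprod(1.0 - betas, dim=0)
    eps = torch.randn_like(x0)
    x_t = alpha_bars[t].sqrt() * x0 + (1.0 - alpha_bars[t]).sqrt() * eps
    return x_t, eps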

It is not efficient to optimize the negative log-likelihood $\mathbb{E}[-\log p_{\theta}(\mathbf{x}_{0})]$ directly, since computing $p_{\theta}(\mathbf{x}_{0})$ requires marginalizing over $\mathbf{x}_{1:T}$. Instead, one can use importance sampling to write

p_{\theta}(\mathbf{x}_{0})=\mathbb{E}_{q(\mathbf{x}_{1:T}|\mathbf{x}_{0})}\left[\frac{p_{\theta}(\mathbf{x}_{0:T})}{q(\mathbf{x}_{1:T}|\mathbf{x}_{0})}\right]. (11)

Then, by Jensen’s inequality,

-\log p_{\theta}(\mathbf{x}_{0})\leq\mathbb{E}_{q(\mathbf{x}_{1:T}|\mathbf{x}_{0})}\left[-\log\frac{p_{\theta}(\mathbf{x}_{0:T})}{q(\mathbf{x}_{1:T}|\mathbf{x}_{0})}\right]. (12)

The right-hand side is the usual evidence lower bound (ELBO), which is minimized instead. Ho et al. (2020) show that minimizing the ELBO is equivalent to minimizing

\mathbb{E}_{t}[\lambda(t)\mathbb{E}_{q(\mathbf{x}_{0}),\epsilon}[\lVert\epsilon_{\theta}(\mathbf{x}_{t},t)-\epsilon\rVert^{2}_{2}]], (13)

for some positive function $\lambda(t)$, where $\epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I})$, $\mathbf{x}_{t}(\mathbf{x}_{0},\epsilon)=\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{t}}\epsilon$, and

\mu_{\theta}(\mathbf{x}_{t},t)=\frac{1}{\sqrt{\alpha_{t}}}\Bigl(\mathbf{x}_{t}-\frac{\beta_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\epsilon_{\theta}(\mathbf{x}_{t},t)\Bigr). (14)
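For illustration, a minimal sketch of one DDPM reverse step using $\mu_\theta$ from Eq. (14) follows; eps_model stands for the trained noise predictor, and the schedule tensors are assumed to be precomputed.

import torch

def ddpm_reverse_step(x_t, t, eps_model, betas, alphas, alpha_bars, sigmas):
    # Sample x_{t-1} ~ N(mu_theta(x_t, t), sigma_t^2 I), with mu_theta from Eq. (14).
    eps = eps_model(x_t, t)
    mu = (x_t - betas[t] / (1.0 - alpha_bars[t]).sqrt() * eps) / alphas[t].sqrt()
    noise = torch.randn_like(x_t) if t > 1 else torch.zeros_like(x_t)  # no noise at the last step
    return mu + sigmas[t] * noise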

A.2 Score Matching

Since $q(\mathbf{x}_{t}|\mathbf{x}_{0})$ is a normal distribution, we know that

\nabla_{\mathbf{x}_{t}}\log q(\mathbf{x}_{t}|\mathbf{x}_{0})=-\frac{\epsilon}{\sqrt{1-\bar{\alpha}_{t}}}. (15)

Thus, minimizing (13) is equivalent to minimizing

\mathbb{E}_{t}[\lambda(t)\mathbb{E}_{q(\mathbf{x}_{0})}\mathbb{E}_{q(\mathbf{x}_{t}|\mathbf{x}_{0})}[\lVert\mathbf{s}_{\theta}(\mathbf{x}_{t},t)-\nabla_{\mathbf{x}_{t}}\log q(\mathbf{x}_{t}|\mathbf{x}_{0})\rVert^{2}_{2}]], (16)

for some positive function $\lambda(t)$, where

\mathbf{s}_{\theta}(\mathbf{x}_{t},t):=-\frac{\epsilon_{\theta}(\mathbf{x}_{t},t)}{\sqrt{1-\bar{\alpha}_{t}}}. (17)

It is known that this loss is minimized when $\mathbf{s}_{\theta}(\mathbf{x}_{t},t)=\nabla_{\mathbf{x}_{t}}\log q(\mathbf{x}_{t})$ Vincent (2011), so given enough parameters, $\mathbf{s}_{\theta}(\mathbf{x}_{t},t)$ will converge to $\nabla_{\mathbf{x}_{t}}\log q(\mathbf{x}_{t})$ almost everywhere. Given an approximation to the score function, it is possible to sample from $p_{\theta}(\mathbf{x}_{0})$ using annealed Langevin dynamics Song and Ermon (2019).

Letting the forward process posterior mean $\tilde{\mu}_{t}$ be defined by $q(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{x}_{0})\sim\mathcal{N}(\mathbf{x}_{t-1};\tilde{\mu}_{t}(\mathbf{x}_{t},\mathbf{x}_{0}),\tilde{\beta}_{t}\mathbf{I})$, we have that

\tilde{\mu}_{t}(\mathbf{x}_{t},\mathbf{x}_{0}):=\frac{\sqrt{\bar{\alpha}_{t-1}}\beta_{t}}{1-\bar{\alpha}_{t}}\mathbf{x}_{0}+\frac{\sqrt{\alpha_{t}}(1-\bar{\alpha}_{t-1})}{1-\bar{\alpha}_{t}}\mathbf{x}_{t} (18)

(see Ho et al. (2020)). With this, the mean $\mu_{\theta}(\mathbf{x}_{t},t)$ of the reverse process can be understood as

\mu_{\theta}(\mathbf{x}_{t},t)=\tilde{\mu}_{t}(\mathbf{x}_{t},D_{\theta}(\mathbf{x}_{t},t)), (19)

where

D_{\theta}(\mathbf{x}_{t},t):=\frac{1}{\sqrt{\bar{\alpha}_{t}}}(\mathbf{x}_{t}+(1-\bar{\alpha}_{t})\mathbf{s}_{\theta}(\mathbf{x}_{t},t)) (20)

is an approximation to Tweedie’s formula

\frac{1}{\sqrt{\bar{\alpha}_{t}}}(\mathbf{x}_{t}+(1-\bar{\alpha}_{t})\nabla_{\mathbf{x}_{t}}\log q(\mathbf{x}_{t}))=\frac{1}{\sqrt{\bar{\alpha}_{t}}}\mathbb{E}_{q}[\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0}|\mathbf{x}_{t}]=\mathbb{E}_{q}[\mathbf{x}_{0}|\mathbf{x}_{t}] (21)

(see Efron (2011)).

A.3 Stochastic Differential Equations

Song et al. (2021) showed that the forward process of DDPM can be viewed as a discretization of the stochastic differential equation (SDE)

d\mathbf{x}=-\frac{1}{2}\beta(t)\mathbf{x}\,dt+\sqrt{\beta(t)}\,d\mathbf{w}, (22)

where $\mathbf{w}$ denotes the Wiener process. There, the authors point out that any SDE of the form $d\mathbf{x}=\mathbf{f}(\mathbf{x},t)\,dt+g(t)\,d\mathbf{w}$, where $\mathbf{x}_{0}\sim p_{0}(\mathbf{x}_{0})$, can be reversed by the SDE $d\mathbf{x}=(\mathbf{f}(\mathbf{x},t)-g(t)^{2}\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x}))\,dt+g(t)\,d\mathbf{\bar{w}}$, where $\mathbf{\bar{w}}$ is the standard Wiener process in the reverse time direction, and where $\mathbf{x}_{T}\sim p_{T}(\mathbf{x}_{T})$. Furthermore, each such SDE admits a family of related SDEs that share the same marginal distributions $p_{t}(\mathbf{x}_{t})$. One member of this family is purely deterministic, and is known as the probability flow ordinary differential equation (ODE).

If the score function $\mathbf{s}_{\theta}(\mathbf{x}_{t},t)$ is available, then it is possible to sample from $p_{\theta}(\mathbf{x}_{0})$ by solving the probability flow ODE, starting with samples from $p_{\theta}(\mathbf{x}_{T})$. This results in a deterministic mapping from noisy images $\mathbf{x}_{T}$ to clean images $\mathbf{x}_{0}$. This sampling process can be performed quickly with the aid of ODE solvers Lu et al. (2022).

A.4 Classifier-Free Guidance

In order to generate images following a user-supplied text prompt, the denoiser $\bm{\epsilon}_{\theta}(\mathbf{z}_{t},t,\mathbf{y})$ of a latent diffusion model is trained with an additional input given by a sequence of token embeddings $\mathbf{y}=\{\mathbf{y}_{1},\mathbf{y}_{2},\ldots,\mathbf{y}_{k}\}$. A single denoiser, usually a UNet, is trained over a variety of text prompts, and the token embeddings influence the denoiser by a cross-attention mechanism in both the contractive and expansive layers. Ho and Salimans (2021) found that, rather than sampling images using the conditional denoiser alone, better results can be obtained by taking a combination of conditional and unconditional noise estimates,

\bm{\tilde{\epsilon}}_{\theta}(\mathbf{z}_{t},t,\mathbf{y})=(1+w)\bm{\epsilon}_{\theta}(\mathbf{z}_{t},t,\mathbf{y})-w\bm{\epsilon}_{\theta}(\mathbf{z}_{t},t,\{\}), (23)

where $w$ represents the intensity of the additive term $\bm{\epsilon}_{\theta}(\mathbf{z}_{t},t,\mathbf{y})-\bm{\epsilon}_{\theta}(\mathbf{z}_{t},t,\{\})$. For $-1\leq w\leq 0$, this noise prediction can be viewed as an approximation to ($-\sigma_{t}$ times) the score function of the marginal distribution $\tilde{p}_{\theta}(\mathbf{z}_{t}|\mathbf{y})\propto p_{\theta}(\mathbf{z}_{t}|\mathbf{y})^{1+w}p_{\theta}(\mathbf{z}_{t}|\{\})^{-w}$. In classifier-free guidance (CFG), $w\gg 0$, which does not have a simple interpretation in terms of the marginal distributions of the new denoising process.
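A minimal sketch of the classifier-free guidance combination in Eq. (23) is shown below; eps_model stands for the conditional denoiser, and null_y for the embedding of the empty prompt.

def cfg_noise(eps_model, z_t, t, y, null_y, w):
    # Eq. (23): combine conditional and unconditional noise estimates.
    eps_cond = eps_model(z_t, t, y)
    eps_uncond = eps_model(z_t, t, null_y)
    return (1.0 + w) * eps_cond - w * eps_uncond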

Appendix B Detailed Methods

Implementation Details

We implement our method on the official Stable Diffusion v1.4 model Rombach et al. (2022) from HuggingFace. All images are generated using 50 denoising steps and a classifier-free guidance scale of 7.5, unless otherwise noted. We use the noise scheduler LMSDiscreteScheduler Karras et al. (2022) provided by HuggingFace. Experiments are conducted on an NVIDIA TESLA V100 GPU.

We perform attention injection over all attention maps. When performing injection, we resize the mask $\mathbf{m}$ to the appropriate resolution, depending on which layer of the UNet the attention maps are taken from. For loss guidance, we again use all of the model’s attention maps, but resize them to $16\times 16$ resolution, and compute the mean of each map over all pixels. We apply the softmax function over these means to obtain a weight vector $\mathbf{w}$, where each entry $w_{j}$ is the scalar weight associated with the $j$-th resized attention map. Finally, we obtain the attention map $A_{t}$ by taking a weighted average over all resized attention maps at time $t$, using the appropriate weight $w_{j}$ for each map.
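The following is a minimal sketch of this aggregation step, assuming the per-layer cross-attention maps have been collected into a list of (height, width, k) tensors; the function name and interpolation mode are our own illustrative choices.

import torch
import torch.nn.functional as F

def aggregate_attention(attn_maps, size=16):
    # Resize every map to size x size, then weight each map by a softmax over its mean.
    resized = [F.interpolate(a.permute(2, 0, 1)[None], size=(size, size),
                             mode="bilinear").squeeze(0) for a in attn_maps]
    stacked = torch.stack(resized)                               # (num_maps, k, size, size)
    weights = torch.softmax(stacked.mean(dim=(1, 2, 3)), dim=0)  # one weight per map
    return (weights[:, None, None, None] * stacked).sum(dim=0)   # (k, size, size)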

When attempting to control the layout of a generated image, we find that skipping the first step, so that it remains a standard denoising step, leads to better results. We do this for all experiments conducted in this paper which use either injection or loss guidance or both. In iLGD, we use $\eta=0.48$, $\nu^{\prime}=0.75$, $t_{\text{loss}}=25$, and $t_{\text{inject}}=10$, unless otherwise noted. In our ablation experiments, we keep the injection strength at $\nu^{\prime}=0.75$ when performing just attention injection. When performing just loss guidance, we increase the loss-guidance strength to $\eta=1$, in order to make loss guidance alone exert sufficient influence over the final image layouts.

In our comparisons with BoxDiff, we maintain the default parameters the authors provide in their implementation. We start with $\alpha_{T}=20$, which decays linearly to $\alpha_{0}=10$, and perform guidance for 25 iterations out of a total of 50 denoising steps.

In our comparisons with the method of Chen et al., we also maintain the default parameters the authors provide in their implementation, setting the loss scale factor to $\eta=30$.

Evaluation with YOLOv4

In this section, we describe in detail how we obtain the AP@0.5 scores in Table 2. In classical object detection, a model is trained to detect and localize objects of certain classes in an image, typically by predicting a bounding box which fully encloses the object. The accuracy of the model’s predicted bounding box, $B_{p}$, is evaluated by comparison to the corresponding ground truth bounding box, $B_{gt}$. More specifically, we compute the intersection over union (IOU) over the pair of bounding boxes:

\text{IOU}=\frac{\text{area}(B_{p}\cap B_{gt})}{\text{area}(B_{p}\cup B_{gt})}. (24)

The IOU is then compared to a threshold $t$, such that, if $\text{IOU}\geq t$, then the detection is classified as correct. If not, then the detection is classified as incorrect. In our case, we follow Li et al. (2021) and treat the object detection model as an oracle, where we assume that it provides the bounding boxes of objects in a given image with perfect accuracy. In particular, we first define a layout through a set of ground truth bounding boxes, describing the desired positions of each object. We then generate an image according to this layout, and subsequently apply the object detection model to the generated image to obtain a set of predicted bounding boxes. Finally, to evaluate how similar the layout of the generated image is to the desired layout, we compare each predicted bounding box, $B_{p}$, to the corresponding ground truth bounding box, $B_{gt}$, by computing their IOU. We use an IOU threshold of 0.5.
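For reference, a minimal sketch of the IOU computation in Eq. (24) for axis-aligned boxes given as (x1, y1, x2, y2) tuples:

def iou(box_p, box_gt):
    # Intersection rectangle of the predicted and ground truth boxes.
    x1 = max(box_p[0], box_gt[0]); y1 = max(box_p[1], box_gt[1])
    x2 = min(box_p[2], box_gt[2]); y2 = min(box_p[3], box_gt[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_p = (box_p[2] - box_p[0]) * (box_p[3] - box_p[1])
    area_gt = (box_gt[2] - box_gt[0]) * (box_gt[3] - box_gt[1])
    return inter / (area_p + area_gt - inter)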

To calculate the average precision, we first need to compute the number of true positives (TP), false positives (FP), and false negatives (FN). We count a false negative when no detection is made on the image even though a ground truth object exists, or when the detected class is not among the ground truth classes. We count both a false negative and a false positive when the correct class is detected but $\text{IOU}<0.5$, and a true positive when $\text{IOU}\geq 0.5$. Using these quantities, we compute the precision $P$ and recall $R$ as:

P=\frac{TP}{TP+FP},  (25)
R=\frac{TP}{TP+FN}.  (26)

We repeat this for classifier confidence thresholds from 0.15 to 0.95, in steps of 0.05, so that we end up with 17 values each for precision and recall. We then construct a precision-recall curve, and compute the average precision using 11-point interpolation Padilla et al. (2020):

\text{AP}_{11}=\frac{1}{11}\sum_{R\in\{0,0.1,\ldots,0.9,1\}}P_{\text{interp}}(R),  (27)

where

P_{\text{interp}}(R)=\max_{\tilde{R}\geq R}P(\tilde{R}).  (28)
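A minimal sketch of the 11-point interpolation, given the per-threshold precision and recall values; the function name and the example numbers are illustrative:

    import numpy as np

    def average_precision_11pt(precisions, recalls):
        precisions = np.asarray(precisions, dtype=float)
        recalls = np.asarray(recalls, dtype=float)
        ap = 0.0
        for r in np.linspace(0.0, 1.0, 11):
            mask = recalls >= r
            # P_interp(r): maximum precision among points with recall >= r.
            p_interp = precisions[mask].max() if mask.any() else 0.0
            ap += p_interp / 11.0
        return ap

    # e.g. average_precision_11pt([1.0, 0.8, 0.6], [0.2, 0.5, 0.9]) ≈ 0.71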

Image Quality Assessment

Wang et al. Wang et al. (2023) suggest using the pair {good photo, bad photo} instead of {high quality, low quality} to measure quality, as they find that it corresponds better to human preferences. However, we choose the latter to remain agnostic to the image’s style, as we believe the former carries with it a stylistic bias, due to the word “photo.”
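For concreteness, the following sketch shows one way such a prompt pair can be used to score an image with CLIP, taking the softmax probability assigned to "high quality"; the model checkpoint and file name are illustrative, and the exact scoring used in our experiments may differ from this sketch:

    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    image = Image.open("sample.png")
    inputs = processor(text=["high quality", "low quality"], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image       # image-text similarities
    score = logits.softmax(dim=-1)[0, 0].item()         # probability of "high quality"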

Contrast Calculation

We calculate the RMS contrast by converting the image to greyscale with OpenCV and taking the standard deviation of its pixel intensities via the array's .std() method.
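A minimal sketch of this computation, assuming an 8-bit image loaded with OpenCV (the file name is illustrative):

    import cv2

    # RMS contrast: standard deviation of the greyscale pixel intensities.
    img = cv2.imread("sample.png")
    grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    rms_contrast = grey.std()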

Saturation Calculation

We calculate the saturation by converting the image to HSV space with OpenCV and taking the mean of its saturation channel via the array's .mean() method.
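Similarly, a minimal sketch of the saturation measurement under the same assumptions:

    import cv2

    # Saturation: mean of the S channel of the image in HSV space.
    img = cv2.imread("sample.png")
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    saturation = hsv[:, :, 1].mean()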

Appendix C Additional Experiments

We provide two additional sets of comparisons between our proposed method (iLGD), BoxDiff Xie et al. (2023), Chen et al. Chen et al. (2024), MultiDiffusion Bar-Tal et al. (2023), and Stable Diffusion Rombach et al. (2022). In Figure 1, we compare these methods using the same prompts and bounding boxes as in Figure 3, but with a different random seed for each set of images. In Figure 2, we compare the methods using an entirely new set of prompts and bounding boxes.

Figure 1: A comparison of iLGD against BoxDiff, Chen et al., MultiDiffusion, and Stable Diffusion, using the same prompts as Figure 3 but different random seeds, with the seed kept the same across each set of images. (Image: images/ilgd_vs_boxdiff_appendix.pdf)
Figure 2: A comparison of iLGD against BoxDiff, Chen et al., MultiDiffusion, and Stable Diffusion. The random seed is kept the same across each set of images. (Image: images/ilgd_vs_boxdiff_appendix_2.pdf)

References

  • Balaji et al. (2023) Yogesh Balaji, Seungjun Nah, Xun Huang, Arash Vahdat, Jiaming Song, Qinsheng Zhang, Karsten Kreis, Miika Aittala, Timo Aila, Samuli Laine, Bryan Catanzaro, Tero Karras, and Ming-Yu Liu. 2023. eDiff-I: Text-to-Image Diffusion Models with an Ensemble of Expert Denoisers. arXiv preprint arXiv:2211.01324 (2023).
  • Bansal et al. (2024) Arpit Bansal, Hong-Min Chu, Avi Schwarzschild, Soumyadip Sengupta, Micah Goldblum, Jonas Geiping, and Tom Goldstein. 2024. Universal Guidance for Diffusion Models. In ICLR.
  • Bar-Tal et al. (2023) Omer Bar-Tal, Lior Yariv, Yaron Lipman, and Tali Dekel. 2023. MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation. In ICML, Vol. 202. PMLR, 1737–1752.
  • Bochkovskiy et al. (2020) Alexey Bochkovskiy, Chien-Yao Wang, and Hong-Yuan Mark Liao. 2020. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv preprint arXiv:2004.10934 (2020).
  • Chen et al. (2024) Minghao Chen, Iro Laina, and Andrea Vedaldi. 2024. Training-Free Layout Control with Cross-Attention Guidance. In WACV. 5343–5353.
  • Cheng et al. (2023) Jiaxin Cheng, Xiao Liang, Xingjian Shi, Tong He, Tianjun Xiao, and Mu Li. 2023. LayoutDiffuse: Adapting Foundational Diffusion Models for Layout-to-Image Generation. arXiv preprint arXiv:2302.08908 (2023).
  • Couairon et al. (2023) Guillaume Couairon, Marlène Careil, Matthieu Cord, Stéphane Lathuilière, and Jakob Verbeek. 2023. Zero-shot Spatial Layout Conditioning for Text-to-Image Diffusion Models. In ICCV. 2174–2183.
  • Dhariwal and Nichol (2021) Prafulla Dhariwal and Alexander Nichol. 2021. Diffusion Models Beat GANs on Image Synthesis. NeurIPS 34 (2021), 8780–8794.
  • Efron (2011) Bradley Efron. 2011. Tweedie’s Formula and Selection Bias. J. Am. Stat. Assoc. 106, 496 (2011), 1602–1614.
  • Epstein et al. (2023) Dave Epstein, Allan Jabri, Ben Poole, Alexei Efros, and Aleksander Holynski. 2023. Diffusion Self-Guidance for Controllable Image Generation. In NeurIPS, Vol. 36. 16222–16239.
  • Hertz et al. (2023) Amir Hertz, Ron Mokady, Jay Tenenbaum, Kfir Aberman, Yael Pritch, and Daniel Cohen-or. 2023. Prompt-to-Prompt Image Editing with Cross-Attention Control. In ICLR.
  • Ho et al. (2020) Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising Diffusion Probabilistic Models. NeurIPS 33 (2020), 6840–6851.
  • Ho and Salimans (2021) Jonathan Ho and Tim Salimans. 2021. Classifier-Free Diffusion Guidance. In NeurIPS (Workshop: Deep Generative Models and Downstream Applications).
  • Karras et al. (2022) Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine. 2022. Elucidating the Design Space of Diffusion-Based Generative Models. In NeurIPS, Vol. 35.
  • Li et al. (2021) Zejian Li, Jingyu Wu, Immanuel Koh, Yongchuan Tang, and Lingyun Sun. 2021. Image Synthesis From Layout with Locality-Aware Mask Adaption. In ICCV. 13819–13828.
  • Lu et al. (2022) Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. 2022. DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps. In NeurIPS, Vol. 35.
  • Meng et al. (2022) Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, and Stefano Ermon. 2022. SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations. In ICLR.
  • Padilla et al. (2020) Rafael Padilla, Sergio L Netto, and Eduardo AB Da Silva. 2020. A Survey on Performance Metrics for Object-Detection Algorithms. In Int. Conf. Syst. Signal. IEEE, 237–242.
  • Radford et al. (2021) Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning Transferable Visual Models from Natural Language Supervision. In ICML. PMLR, 8748–8763.
  • Ramesh et al. (2022) Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 2022. Hierarchical Text-Conditional Image Generation with CLIP Latents. arXiv preprint arXiv:2204.06125 (2022).
  • Rombach et al. (2022) Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. High-resolution Image Synthesis with Latent Diffusion Models. In CVPR. 10684–10695.
  • Ronneberger et al. (2015) Olaf Ronneberger, Philipp Fischer, and Thomas Brox. 2015. U-Net: Convolutional Networks for Biomedical Image Segmentation. In MICCAI. 234–241.
  • Saharia et al. (2022) Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. 2022. Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding. NeurIPS 35 (2022), 36479–36494.
  • Singh et al. (2023) Jaskirat Singh, Stephen Gould, and Liang Zheng. 2023. High-Fidelity Guided Image Synthesis with Latent Diffusion Models. In CVPR. IEEE, 5997–6006.
  • Sohl-Dickstein et al. (2015) Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. 2015. Deep Unsupervised Learning Using Nonequilibrium Thermodynamics. In ICML, Vol. 37. PMLR, 2256–2265.
  • Song et al. (2023) Jiaming Song, Qinsheng Zhang, Hongxu Yin, Morteza Mardani, Ming-Yu Liu, Jan Kautz, Yongxin Chen, and Arash Vahdat. 2023. Loss-Guided Diffusion Models for Plug-and-Play Controllable Generation. In ICML, Vol. 202. PMLR, 32483–32498.
  • Song and Ermon (2019) Yang Song and Stefano Ermon. 2019. Generative Modeling by Estimating Gradients of the Data Distribution. NeurIPS 32 (2019).
  • Song et al. (2021) Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. 2021. Score-Based Generative Modeling through Stochastic Differential Equations. In ICLR.
  • Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is All You Need. NeurIPS 30 (2017), 5998–6008.
  • Vincent (2011) Pascal Vincent. 2011. A Connection between Score Matching and Denoising Autoencoders. Neural Comput. 23, 7 (2011), 1661–1674.
  • Voynov et al. (2023) Andrey Voynov, Kfir Aberman, and Daniel Cohen-Or. 2023. Sketch-Guided Text-to-Image Diffusion Models. In SIGGRAPH. 1–11.
  • Wang et al. (2023) Jianyi Wang, Kelvin CK Chan, and Chen Change Loy. 2023. Exploring CLIP for Assessing the Look and Feel of Images. In AAAI, Vol. 37. 2555–2563.
  • Xie et al. (2023) Jinheng Xie, Yuexiang Li, Yawen Huang, Haozhe Liu, Wentian Zhang, Yefeng Zheng, and Mike Zheng Shou. 2023. Boxdiff: Text-to-Image Synthesis with Training-Free Box-Constrained Diffusion. In ICCV. 7452–7461.
  • Zhang et al. (2023) Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. 2023. Adding Conditional Control to Text-to-Image Diffusion Models. In ICCV. 3836–3847.
  • Zheng et al. (2023) Guangcong Zheng, Xianpan Zhou, Xuewei Li, Zhongang Qi, Ying Shan, and Xi Li. 2023. LayoutDiffusion: Controllable Diffusion Model for Layout-to-Image Generation. In CVPR. 22490–22499.