S2-DMs: Skip-Step Diffusion Models
Abstract
Diffusion models have emerged as powerful generative tools, rivaling GANs in sample quality and matching the likelihood scores of autoregressive models. A subset of these models, exemplified by DDIMs, exhibits an inherent asymmetry: they are trained over $T$ steps but sample from only a subset of those steps during generation. This selective sampling approach, though optimized for speed, inadvertently misses vital information from the unsampled steps, leading to potential compromises in sample quality. To address this issue, we present S2-DMs, a new training method built around an innovative skip-step loss, meticulously designed to reintegrate the information omitted during the selective sampling phase. The benefits of this approach are manifold: it notably enhances sample quality, is exceptionally simple to implement, requires minimal code modifications, and is flexible enough to be compatible with various sampling algorithms. On the CIFAR10 dataset, models trained with our algorithm improved by 3.27% to 14.06% over models trained with traditional methods across various sampling algorithms (DDIMs, PNDMs, DEIS) and different numbers of sampling steps (10, 20, …, 1000). On the CELEBA dataset, the improvement ranged from 8.97% to 27.08%.

1 Introduction
Generative models, especially deep generative models, play a foundational role in machine learning (Karras et al., 2020; Oord et al., 2016). Architectures such as Variational Autoencoders (VAEs; Kingma & Welling, 2013), autoregressive models (Van den Oord et al., 2016; Brown et al., 2020; Salimans et al., 2017), Generative Adversarial Networks (GANs; Goodfellow et al., 2014; Yu et al., 2017; Hjelm et al., 2017; Fedus et al., 2018), and Restricted Boltzmann Machines (RBMs; Hinton, 2012) have been at the forefront. VAEs, while providing a structured probabilistic framework, occasionally yield blurry samples. GANs, acclaimed for their prowess in generating high-resolution images, can face training instabilities (Adler & Lunz, 2018; Gulrajani et al., 2017; Karras et al., 2019). RBMs, though seminal, are overshadowed by more recent architectures in scalability and performance. Against this backdrop, diffusion models, exemplified by denoising diffusion probabilistic models (DDPMs; Ho et al., 2020; Sohl-Dickstein et al., 2015; Song et al., 2020b), have emerged as a compelling alternative, exhibiting unmatched capabilities in generating superior samples in diverse domains, from image synthesis to molecule design (Bengio et al., 2014).
However, diffusion models do come with challenges. Their inherently slow sampling speed, driven by the multitude of necessary sampling steps, remains a significant concern. Recent research has focused on this computational bottleneck, with the goal of optimizing the sampling process (Jolicoeur-Martineau et al., 2021; Nichol & Dhariwal, 2021). A significant breakthrough in this area is the Denoising Diffusion Implicit Models (DDIMs; Song et al., 2020a). DDIMs utilize a subset sampling strategy, achieving faster performance by sampling from a smaller subset instead of the entire set of steps. This method, because it omits certain steps, is coined "skip-step sampling." Yet, this acceleration introduces an inconsistency between training and sampling. During training, the model undergoes every step, but during sampling, some steps are selectively skipped, posing a risk of information loss. Although DDIMs, through adjustments to the sampling algorithm, have lessened the adverse effects of this approach compared to DDPMs, they have not specifically addressed or optimized for the missing intermediate information. Consequently, this challenge persists, leading to suboptimal performance of diffusion models during expedited sampling and hindering the generation of high-quality samples.
Driven by these observations, our research proposes a method of integrating skip-step sampling as an optimization target into the training process. The model trained in this way can consider and adapt to the information lost during the sampling period due to skip-step sampling. Our model not only maintains efficient generation when using other accelerated sampling algorithms but also improves the quality of the generated samples, thereby achieving a balance of efficiency and performance. Specifically, during the training process, the original loss function is retained, and at the same time, a skip-step loss is introduced, measuring the difference between the model’s current step prediction and the skip-step result. This skip-step loss is combined with the original loss using a weighted mechanism. Figure 1 shows an overview of the diffusion models.
Empirical results demonstrate that when the skip-step loss is incorporated into the loss objective, S2-DMs achieve superior performance in unconditional generation on the CIFAR10 (Krizhevsky et al., 2009) and CELEBA datasets, under the same sampling algorithms (DDIMs, PNDMs (Liu et al., 2022a), and DEIS (Zhang & Chen, 2022)), compared to models trained with the original loss objective. Importantly, our study shows that our model achieves better results than the original model across different numbers of sampling steps, with improvements ranging from 3.27%-14.06% (CIFAR10) and 8.97%-27.08% (CELEBA), under the same sampling algorithms. We also conducted qualitative experiments and ablation studies, along with related analyses.
The key contributions of this work are:
1. Modeling for Training-Sampling Discrepancy: To our knowledge, our study proposes the first method specifically designed to mitigate the inherent mismatch between training and sampling in diffusion models. This strategy consistently demonstrates superior performance.
2. Innovative Skip-Step Loss: We introduce a trailblazing skip-step loss, embedding the selective sampling modality directly within the training process. This method empowers models to proactively navigate potential sampling information deficits, enhancing the quality of the samples.
3. Simplicity of Implementation: The S2-DMs approach stands out not just for its efficacy but also for its simplicity. With minimal code alterations required, it offers a convenient solution for both researchers and practitioners. Crucially, it is adaptable to a range of sampling algorithms.
2 Background
This study is based on DDPMs (Ho et al., 2020) and DDIMs (Song et al., 2020a), so a brief review is in order. DDPMs specify a fixed Markovian forward diffusion process, which gradually adds noise to the data over $T$ steps. We follow the background description of Watson et al. (2021) and the notation of Ho et al. (2020),
$$q(x_{1:T} \mid x_0) = \prod_{t=1}^{T} q(x_t \mid x_{t-1}), \tag{1}$$

$$q(x_t \mid x_{t-1}) = \mathcal{N}\!\left(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t I\right), \tag{2}$$
where $x_0 \sim q(x_0)$ represents the data distribution and $\beta_t$ signifies the variance of the Gaussian noise added at step $t$. For each $t$, we have $\alpha_t = 1 - \beta_t$ and $\bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s$. To facilitate the transformation of noise back into data, DDPMs are trained to invert Equation 1 with a model $p_\theta(x_{t-1} \mid x_t)$. This model is trained by optimizing a (possibly reweighted) evidence lower bound (ELBO),
$$L_{\mathrm{ELBO}}(\theta) = \mathbb{E}_{q}\!\left[-\log \frac{p_\theta(x_{0:T})}{q(x_{1:T} \mid x_0)}\right] \tag{3}$$

$$= \mathbb{E}_{q}\!\left[-\log p(x_T) - \sum_{t=1}^{T} \log \frac{p_\theta(x_{t-1} \mid x_t)}{q(x_t \mid x_{t-1})}\right]. \tag{4}$$
DDPMs explicitly select the model for parameterization as
$$p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\!\left(x_{t-1};\ \mu_\theta(x_t, t),\ \sigma_t^2 I\right). \tag{5}$$
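For concreteness, these background quantities can be written out in a few lines of code. The sketch below is a generic PyTorch rendering of the standard DDPM setup, not code from the paper's repository; the linear schedule endpoints are the usual DDPM defaults and are assumptions here.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)        # assumed linear schedule (common DDPM defaults)
alphas = 1.0 - betas                         # alpha_t = 1 - beta_t
alpha_bars = torch.cumprod(alphas, dim=0)    # \bar{alpha}_t = prod_{s<=t} alpha_s

def q_sample(x0, t, eps):
    """Closed-form forward sample: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps."""
    ab = alpha_bars[t].view(-1, 1, 1, 1)
    return ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps

def posterior_mean(x_t, t, eps_pred):
    """Epsilon-parameterized mean of p_theta(x_{t-1} | x_t) used by DDPMs."""
    a = alphas[t].view(-1, 1, 1, 1)
    ab = alpha_bars[t].view(-1, 1, 1, 1)
    return (x_t - (1.0 - a) / (1.0 - ab).sqrt() * eps_pred) / a.sqrt()
```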
In this framework, optimizing the ELBO corresponds to minimizing denoising score matching objectives, as explained by Vincent (2011). Song et al. (2020a) introduced the DDIMs concept, a set of ELBOs complemented by forward diffusion processes and sampling mechanisms. These ELBOs, having the same marginals as DDPMs, offer flexibility in determining posterior variances (Chen et al., 2020). Song et al. (2020a) emphasized crafting alternative ELBOs over a subset $\tau \subset \{1, \dots, T\}$ of the original timesteps with consistent marginals. This yields $q(x_{\tau_i} \mid x_0) = \mathcal{N}\!\left(\sqrt{\bar{\alpha}_{\tau_i}}\,x_0,\ (1-\bar{\alpha}_{\tau_i})\,I\right)$ for every $\tau_i$ in $\tau$, permitting faster sampling processes compatible with pre-trained models by integrating new timesteps. Their work also suggests the feasibility of creating a vast range of non-Markovian processes, denoted $q_\sigma$, each maintaining marginals aligned with the original progression,
$$q_\sigma(x_{1:T} \mid x_0) = q_\sigma(x_T \mid x_0) \prod_{t=2}^{T} q_\sigma(x_{t-1} \mid x_t, x_0), \tag{6}$$

and where the posteriors are defined as

$$q_\sigma(x_{t-1} \mid x_t, x_0) = \mathcal{N}\!\left(\sqrt{\bar{\alpha}_{t-1}}\,x_0 + \sqrt{1-\bar{\alpha}_{t-1}-\sigma_t^2}\cdot \frac{x_t - \sqrt{\bar{\alpha}_t}\,x_0}{\sqrt{1-\bar{\alpha}_t}},\ \sigma_t^2 I\right). \tag{7}$$
In their research, Song et al. (2020a) observed that the special case of employing all-zero variances ($\sigma_t = 0$), referred to simply as DDIMs, persistently enhances the quality of samples in the short-step regime. When combined with an apt choice of timesteps for evaluating the modeled score function, known as strides, this setting sets a new benchmark in few-step diffusion model sampling, especially under minimal inference-step budgets. A pivotal advancement we bring is the enhancement of sample quality by introducing skip information (i.e., the aforementioned subset) during the training phase, ultimately establishing a novel training paradigm for diffusion models. For a more comprehensive discussion of the S2-DMs, please refer to Section 3.
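To make the strided, all-zero-variance sampler concrete, the sketch below implements the deterministic DDIM update of Song et al. (2020a) over a given increasing subsequence of timesteps. The function and argument names are illustrative assumptions, not the paper's code, and the model is assumed to predict the noise.

```python
import torch

@torch.no_grad()
def ddim_sample(model, x_T, taus, alpha_bars):
    """Deterministic DDIM (sigma_t = 0) sampling over a timestep subsequence `taus`.

    model(x, t) is assumed to predict the noise; `taus` is an increasing list of timesteps,
    e.g. [0, 10, 20, ..., 990]; alpha_bars[t] holds \bar{alpha}_t.
    """
    x = x_T
    for i in reversed(range(1, len(taus))):
        t, t_prev = taus[i], taus[i - 1]
        ab_t, ab_prev = alpha_bars[t], alpha_bars[t_prev]
        t_batch = torch.full((x.shape[0],), t, device=x.device, dtype=torch.long)
        eps = model(x, t_batch)
        x0_pred = (x - (1 - ab_t).sqrt() * eps) / ab_t.sqrt()       # predicted x_0
        x = ab_prev.sqrt() * x0_pred + (1 - ab_prev).sqrt() * eps   # sigma_t = 0 update
    return x
```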
3 Skip-Step Diffusion Models
Acceleration approaches often employ skip-step sampling. However, this strategy inherently introduces non-smooth denoising, leading to a potential decline in performance. This observation prompted us to re-evaluate the entire training and sampling workflow. Intriguingly, we identified an asymmetry between the training and sampling phases: the former proceeds in single steps, while the latter proceeds in skipped steps.
To improve the quality of skip-step sampling, we devised a novel yet straightforward objective function for the training phase. By incorporating the skip-step objective into our original loss function (Section 3.2), we achieved a symmetrical training effect. Ultimately, the performance of the model trained by this method met our expectations (Section 4).
3.1 Asymmetry in Accelerated Sampling
Due to the slow sampling speed of diffusion models, extensive research has been conducted on acceleration algorithms for these models, with DDIMs being the most prominent. In the original paper, it was stated that a subset of the full steps of the diffusion model was selected. This subset forms an increasing sequence and is considerably shorter than the original steps, leading to a significant acceleration in the sampling process. Specifically, in its implementation, not all steps are sampled. Instead, 50-step and 100-step samplings are more prevalent, which are 10-20 times faster than the original 1000-step sampling. For instance, in the 100-step sampling, the model samples at every 10th step, maintaining equal intervals between each sample.
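As a small illustration of the equal-interval subsequence described above (this uniform stride is the common DDIM choice and is assumed here, not taken from the paper's code), 100-step sampling over a 1000-step model can be constructed as:

```python
T, S = 1000, 100                   # trained steps, desired sampling steps
stride = T // S                    # = 10: sample every 10th step
taus = list(range(0, T, stride))   # [0, 10, 20, ..., 990], length 100
```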
Clearly, there’s an asymmetry between the behavior during training and sampling. During training, the model is trained across all diffusion steps. In contrast, during sampling, it samples only a subset of these steps using skip-step sampling. Consequently, information from intermediate steps is overlooked, inevitably leading to a decline in model performance. We term this the “asymmetric diffusion model.”
In the subsequent sections, we will present a technique to integrate skip-step information during the training phase. This approach ensures that the trained model is more attuned to the skip-step sampling process, culminating in what we call the “Skip-Step Diffusion Models.”
3.2 Training with Skip-Step Loss
We aim to introduce a novel skip-step loss function built upon the original one. The standard optimization objective for diffusion models was presented with DDPMs, and subsequent diffusion models predominantly utilize this foundational loss function,
$$L_{\mathrm{simple}}(\theta) = \mathbb{E}_{t,\ x_0,\ \epsilon}\!\left[\left\| \epsilon - \epsilon_\theta\!\left(\sqrt{\bar{\alpha}_t}\,x_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon,\ t\right)\right\|^2\right]; \tag{8}$$
we may choose the parameterization
$$\mu_\theta(x_t, t) = \frac{1}{\sqrt{\alpha_t}}\left(x_t - \frac{\beta_t}{\sqrt{1-\bar{\alpha}_t}}\,\epsilon_\theta(x_t, t)\right), \qquad x_t = \sqrt{\bar{\alpha}_t}\,x_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon,\ \ \epsilon \sim \mathcal{N}(0, I). \tag{9}$$
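In code, the foundational objective of Equation 8 reduces to a mean-squared error on the predicted noise. The sketch below is a generic DDPM-style implementation rather than the paper's code; it assumes a noise-prediction model `eps_model(x_t, t)` and the `alpha_bars` schedule from the background section.

```python
import torch
import torch.nn.functional as F

def loss_simple(eps_model, x0, alpha_bars, T=1000):
    """Standard objective: || eps - eps_theta(sqrt(abar_t) x0 + sqrt(1 - abar_t) eps, t) ||^2."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,), device=x0.device)        # uniform timestep
    eps = torch.randn_like(x0)                             # Gaussian noise
    ab = alpha_bars.to(x0.device)[t].view(-1, 1, 1, 1)
    x_t = ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps         # closed-form forward sample
    return F.mse_loss(eps_model(x_t, t), eps)
```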
Initially, we assume that sampling is conducted every 10 steps, which aligns with the commonly used DDIMs setting. This configuration reduces sampling from the original 1000 steps to just 100 steps. Here, the skip-step setting corresponds to the sampling stride, denoted $skip = 10$ (subsequent experiments explore model performance with various skip values). We now introduce this skip-step information into training; to this end, we consider, for each training step $t$, the step that skip-step sampling would visit next, namely $t - skip$.
The role of the original loss (Equation 8) is to enable the model to learn the noise information at every individual step $t$. During skip-step sampling, however, the step that follows $t$ is $t - skip$ rather than $t - 1$. Hence, we introduce a new skip-step loss: during training, we aim to make the model's prediction at step $t$ as close as possible to the corresponding target at the skipped-to step. By doing so, when sampling with skips, the model can produce outputs that are closely aligned with the positions it actually visits, thereby enhancing the quality of the output. The formulations are as follows:
(10)
To achieve our desired outcome, we minimize the squared difference between our targeted optimization goal and the original target, thereby effectively reducing the discrepancy between them,
(11)
After simplification following re-parameterization, we derive our final skip-step loss function,
(12)
Owing to the adjustment of certain weight values, it becomes crucial to incorporate a suitable weight (introduced in the following subsection) to guarantee effective training. In practice, the value of this loss function remains stable, effectively averting any collapse of training; this stability is evidenced by the results detailed in the experimental section (see Section 4).
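Equations 10-12 define the precise skip-step loss; the sketch below is only one plausible reading of the surrounding description and should be treated as an assumption rather than the paper's exact formulation. Under that assumption, the auxiliary term pulls the model's prediction at step $t$ toward a detached target formed at the skipped-to step $t - skip$; the direction of the skip, the stop-gradient on the target, and the name `loss_skip` are all illustrative choices.

```python
import torch
import torch.nn.functional as F

def loss_skip(eps_model, x0, alpha_bars, skip=10, T=1000):
    """One plausible skip-step auxiliary loss (assumed form; Eqs. 10-12 give the exact one)."""
    b = x0.shape[0]
    t = torch.randint(skip, T, (b,), device=x0.device)      # ensure t - skip >= 0
    eps = torch.randn_like(x0)
    ab = alpha_bars.to(x0.device)
    # Noisy inputs at the current step t and at the skipped-to step t - skip.
    x_t = ab[t].view(-1, 1, 1, 1).sqrt() * x0 + (1 - ab[t]).view(-1, 1, 1, 1).sqrt() * eps
    t_s = t - skip
    x_ts = ab[t_s].view(-1, 1, 1, 1).sqrt() * x0 + (1 - ab[t_s]).view(-1, 1, 1, 1).sqrt() * eps
    with torch.no_grad():                                    # skip-step target is not trained on
        target = eps_model(x_ts, t_s)
    return F.mse_loss(eps_model(x_t, t), target)
```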


3.3 Loss Scaling
Traditionally, the training of diffusion models is centered around a single training objective, the denoising loss of Equation 8. This conventional approach focuses on one target to optimize the model's performance. Our S2-DMs paradigm, however, introduces an additional training objective, the skip-step loss, which represents a significant shift from the traditional methodology. The incorporation of this skip-step objective is specifically designed to address and mitigate the challenges associated with skip-step diffusion processes.
To effectively integrate these two objectives, we propose a balanced weighting approach: the overall loss is the original objective plus the skip-step objective scaled by a weighting coefficient. This coefficient serves as a tuning parameter, enabling us to adjust the relative influence of the original training objective and the new skip-step objective. This balanced formulation is crucial in harmonizing the traditional and novel aspects of our model, ensuring that neither is disproportionately emphasized at the expense of the other.
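Writing $L_{\mathrm{simple}}$ for the objective of Equation 8, $L_{\mathrm{skip}}$ for the skip-step objective, and $\lambda$ for the weighting coefficient (these symbols are our own labels for the quantities described above), the combined objective takes the form

$$L_{\mathrm{total}} = L_{\mathrm{simple}} + \lambda\, L_{\mathrm{skip}},$$

where $\lambda \ge 0$ tunes the relative influence of the skip-step term and $\lambda = 0$ recovers the standard training objective.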
Furthermore, from a conceptual standpoint, the skip-step training objective in S2-DMs resembles a regularization term. This similarity is not merely coincidental but stems from a deliberate design choice. The primary motivation behind this design is to provide informational compensation for the skip-step diffusion model, thereby guiding and constraining the model's trajectory more effectively. This approach is somewhat analogous to the principle of regularization in machine learning, where additional information is provided to prevent overfitting and to enhance the model's generalization capabilities. In the context of S2-DMs, this 'regularization-like' term aids in refining the model's performance, particularly in handling the complexities introduced by the skip-step diffusion process.
3.4 Training and Sampling
Figure 3 highlights our method's core. By integrating skip-step information during training, the model gains a broader perspective, enhancing sampling performance. While the model may lean towards symmetry, symmetry is not a prerequisite for optimal performance; peak efficacy is seen when training approaches a symmetric form, which we term "coordinated". This aligns with the model utilizing both current and post-skip information for improved predictions. The model remains flexible, not restricted by the skip parameter, allowing sampling with diverse numbers of steps.
Incorporating the skip-step loss during training does not introduce a new sampling method but modifies the diffusion model's traditional training approach. In essence, any model following the original diffusion training recipe can benefit from our method. Our tests, using different sampling algorithms on identically trained models, consistently matched our predictions (see Section 4).

Algorithm 1 S2-DMs training process.
1: repeat
2: ;
3: ;
4: ;
5: ;
6: ;
7:
8: until convergence is achieved

Algorithm 2 S2-DMs sampling with DDIMs.
1: repeat
2: ;
3: for do
4: ;
5:
6: end for
7: until convergence is achieved
In Algorithms 1 and 2, we illustrate the training and sampling procedures of the S2-DMs. Compared to standard diffusion models, the training process only involves an additional computation of the skip-step loss, which is straightforward. In our code repository, one can see that only a few lines of code were modified to implement all changes, making the method easy to adopt and facilitating follow-up work by other researchers. The sampling procedure follows standard DDIMs sampling: since skip-step information was incorporated during training, no modifications are required at sampling time. This allows for the generation of higher-quality samples while remaining user-friendly.
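Putting the pieces together, a training iteration in the spirit of Algorithm 1 might look like the sketch below. It reuses the `loss_simple` and `loss_skip` helpers sketched earlier (both of which are our own illustrative renderings), and the weighting coefficient `lam`, learning rate, and data-loading interface are likewise assumptions rather than the paper's settings.

```python
import torch

def train(eps_model, dataloader, alpha_bars, lam=1.0, skip=10, lr=2e-4, device="cuda"):
    """Training loop in the spirit of Algorithm 1 (sketch; not the official implementation)."""
    opt = torch.optim.Adam(eps_model.parameters(), lr=lr)
    eps_model.train()
    for x0, _ in dataloader:                                  # repeat until convergence
        x0 = x0.to(device)
        l_simple = loss_simple(eps_model, x0, alpha_bars)     # Eq. 8
        l_skip = loss_skip(eps_model, x0, alpha_bars, skip)   # skip-step term (assumed form)
        loss = l_simple + lam * l_skip                        # weighted combination
        opt.zero_grad()
        loss.backward()
        opt.step()
```

After training, the same model can be handed to a DDIM-style sampler such as the one sketched in Section 2, with any stride, since no change to the sampling algorithm is required.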
Table 1: FID (lower is better) on CIFAR10 for different sampling algorithms and numbers of sampling steps S.

Model \ # sampling steps S | 10 | 20 | 50 | 100 | 200 | 1000
---|---|---|---|---|---|---
DDIMs | 12.98 | 6.92 | 4.94 | 4.56 | 4.43 | 4.39
S2-DMs (DDIMs) | 11.38 | 6.36 | 4.46 | 4.23 | 4.20 | 4.06
Increase Rate | 12.33% | 8.09% | 9.72% | 7.24% | 5.19% | 7.52%
PNDMs | 13.67 | 7.61 | 4.87 | 3.99 | 3.67 | 3.42
S2-DMs (PNDMs) | 12.01 | 6.54 | 4.36 | 3.77 | 3.55 | 3.26
Increase Rate | 12.14% | 14.06% | 10.47% | 5.51% | 3.27% | 4.68%
DEIS | 4.68 | 3.94 | 3.77 | - | - | -
S2-DMs (DEIS) | 4.24 | 3.78 | 3.36 | - | - | -
Increase Rate | 9.40% | 4.06% | 10.88% | - | - | -
Table 2: FID (lower is better) on CelebA for different sampling algorithms and numbers of sampling steps S.

Model \ # sampling steps S | 10 | 20 | 50 | 100 | 200 | 1000
---|---|---|---|---|---|---
DDIMs | 13.15 | 9.29 | 6.40 | 5.24 | 4.58 | 4.23
S2-DMs (DDIMs) | 11.97 | 8.12 | 5.29 | 4.18 | 3.65 | 3.13
Increase Rate | 8.97% | 12.59% | 17.34% | 20.23% | 20.31% | 26.00%
PNDMs | 12.59 | 8.72 | 6.00 | 4.89 | 4.30 | 3.43
S2-DMs (PNDMs) | 11.40 | 7.58 | 4.94 | 3.91 | 3.38 | 2.94
Increase Rate | 9.45% | 13.07% | 17.67% | 20.04% | 21.40% | 14.29%
DEIS | 6.93 | 2.77 | 2.39 | - | - | -
S2-DMs (DEIS) | 6.29 | 2.02 | 1.77 | - | - | -
Increase Rate | 9.24% | 27.08% | 25.94% | - | - | -
4 Experiments
In this section, we demonstrate that models trained with the S2-DMs objective outperform conventionally trained models under the DDIMs (Song et al., 2020a), PNDMs (Liu et al., 2022a), and DEIS (Zhang & Chen, 2022) samplers in image generation, with an identical number of steps and the same sampling algorithm. Moreover, the latent variables of the images generated by the S2-DMs retain a high level of image features, allowing for interpolation within the latent space.
In consideration of computational resources, our experiments utilized the CIFAR10 dataset at a resolution of 32×32 and the CelebA dataset at a resolution of 64×64. For the training setup, we adopted the same architecture (He et al., 2016; Ronneberger et al., 2015; Kingma & Ba, 2014) as provided in the official DDIMs repository. We also kept all parameters consistent, guaranteeing the reproducibility of our experiments. For hardware, we trained both datasets on two NVIDIA A100 GPUs, with 600K steps for CIFAR10 and 400K steps for CelebA. Model performance was evaluated using the FID (Heusel et al., 2017; Jolicoeur-Martineau et al., 2020). Specifically, our evaluation method followed the DDIMs repository: we sampled 50,000 images and computed the FID against real images. To further guarantee reproducibility, we fixed the random seed in our experiments, ensuring that all results are replicable. (It is noteworthy that in these experiments, the models are trained with the same parameters and methods, and the results are then obtained using different sampling algorithms. Therefore, some results may not match those reported in the original works, which is reasonable, as the parameters trained in each report differ. Our primary comparison is between diffusion models trained with our method and those trained with the original method, specifically comparing the effects of skip-step sampling under different sampling algorithms.)
In Tables 1 and 2, we evaluate the quality of samples generated by models trained on the CIFAR10 and CelebA datasets, measured using the FID as the evaluation metric. We default to a skip of 10 and compare our results with DDIMs. As expected, by incorporating skip-step information, the model is able to capture a broader scope of knowledge, leading to improved sample quality. We observed that the S2-DMs consistently produce higher-quality samples than the original model with the same sampling steps and the same algorithm. This demonstrates that the diffusion model enhanced by our training algorithm contributes to the improvement of sample quality. Thus, other models only need a few lines of training-code modifications to benefit from the performance boost this method offers, without any additional changes to the sampling algorithm.
4.1 Sample Quality and Ablation Experiment
In Figure 4, the impact of various skip-step intervals on the model is presented, using the same datasets (detailed further in Table 5). We employed skip-step intervals of 50, 10, and 2. As expected, integrating more extensive skip-step information enables the model to obtain a wider perspective along fewer generative paths, thereby producing samples of higher quality. We believe that the introduction of the skip-step loss transforms the diffusion model into a skip-step diffusion model aligned with the number of skipped steps. Models trained without the skip-step loss can be considered as having a skip of 1. Conversely, models trained with skip settings of 2, 10, and 50 perform well when the sampling skip is 2, 10, and 50, respectively.
In Figure 5, we specifically focus on the training time and memory usage implications of incorporating our proposed skip-step loss into the model. The empirical data revealed a modest increase in training time: for instance, when integrating the skip-step loss, the training time rose from 0.0025 seconds to 0.0031 seconds per iteration on CIFAR10, and from 0.0048 seconds to 0.0056 seconds on CELEBA. Notably, this increase in training time, while measurable, remains relatively minor. Moreover, the memory usage remained largely consistent, indicating that the skip-step loss does not exert additional pressure on memory resources. This suggests that the added computational cost is minimal and, crucially, does not compromise the efficiency of the training process. Given the negligible increase in training time and stable memory usage, we believe the trade-off is favorable, considering the significant performance improvements our method offers.
In Figures 6 and 7, we display samples from the CIFAR10 and CelebA datasets generated by models with identical numbers of sampling steps and the same sampling algorithm. These figures reveal that, under the same sampling conditions, the samples generated by S2-DMs are of higher quality. Additionally, as the number of sampling steps increases, so does the quality of the samples. For instance, in CIFAR10, the images of boats and cars in the second and fourth columns demonstrate that S2-DMs can produce highly accurate images in just 10 steps, whereas DDIMs yield blurrier images and require more steps for high-quality outcomes. Similarly, in CelebA, the third column presents images of a male subject, where S2-DMs generate a clear hat, in contrast to the blurrier hat produced by DDIMs. Moreover, the detail in images generated by PNDMs is less pronounced compared to those by S2-DMs. This difference is significant, highlighting the substantial improvement in sample quality achieved by S2-DMs with the same number of sampling steps.

4.2 Interpolation and Generation Consistency
The S2-DMs, drawing from the deterministic generation frameworks of DDIMs (Song et al., 2020a), PNDMs (Liu et al., 2022a), and DEIS (Zhang & Chen, 2022), also incorporate the semantic interpolation typical of implicit models like GANs (Mohamed & Lakshminarayanan, 2016). This dual approach allows S2-DMs to blend structured generation with creative interpolation capabilities.
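A common way to realize such semantic interpolation with deterministic samplers is spherical linear interpolation (slerp) between two latent noise tensors, as used by Song et al. (2020a). The sketch below assumes this choice and a ddim_sample-style decoder as sketched in Section 2; it is illustrative rather than the paper's code.

```python
import torch

def slerp(z0, z1, alpha):
    """Spherical linear interpolation between two latent noise tensors."""
    theta = torch.acos((z0 * z1).sum() / (z0.norm() * z1.norm()))
    return (torch.sin((1 - alpha) * theta) * z0 + torch.sin(alpha * theta) * z1) / torch.sin(theta)

# Hypothetical usage: decode interpolated latents with the deterministic sampler.
# z0, z1 = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
# frames = [ddim_sample(model, slerp(z0, z1, a), taus, alpha_bars)
#           for a in torch.linspace(0, 1, 8)]
```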
In Figure 8, the interpolation outcomes of S2-DMs under various skip-step settings are displayed. The results indicate that even basic interpolation in the latent space achieves semantically rich transitions between two samples. Particularly at a skip of 50, the models generate samples with remarkable quality and detailed features. This is evidenced in images 5 to 7, where the models adeptly reproduce light and shadow effects on facial features, demonstrating the nuanced capabilities of S2-DMs.
Conversely, at a skip of 2, the model's behavior is more akin to the original DDIMs, leading to slightly less detailed samples. This highlights the impact of skip-step intervals on the fidelity of generated images.
Moreover, the figure shows that S2-DMs maintain a consistent level of quality across different training settings, even when conditioned on the same encoding. This underscores the robustness and adaptability of S2-DMs, capable of producing stable and high-quality outputs across various skip-step configurations.





5 Related Work
Denoising Diffusion Probabilistic Models (DDPMs; (Ho et al., 2020)) and Noise Conditional Score Networks (NCSNs; (Song & Ermon, 2019)) are notable for their sample quality, comparable to GANs. While DDPMs optimize a variational lower bound, NCSNs target score matching over a Parzen density estimator. Both models employ a denoising autoencoder across noise levels and use Langevin dynamics for sampling. The shared approach requires multiple iterations for optimal sample quality. Recent advancements aim to decrease DDPMs’ inference steps through SDE solvers and programming algorithms. However, challenges like the disparity between log-likelihood reduction and FID (Heusel et al., 2017), (Szegedy et al., 2016) remain in some models.
DDIMs (Song et al., 2020a) emerges as an implicit generative model, wherein samples are uniquely defined by latent variables. This lends DDIMs properties akin to GANs and invertible flows, including the capability to produce semantically meaningful interpolations. Conceived from a purely variational standpoint, DDIMs sidesteps the constraints of Langevin dynamics, potentially explaining its superior sample quality compared to DDPMs in fewer iterations. The sampling paradigm of DDIMs also echoes the concepts found in neural networks with continuous depth. Additionally, other innovative methods have been introduced to further refine DDPMs sampling, such as reverse SDEs (Song et al., 2020b) with unique coefficients, “corrector” steps, and probability flow ODEs (Liu et al., 2022b). As the exploration of efficient sampling in diffusion models continues, our research stands on the shoulders of these pioneering works, aiming to push the boundaries of what’s achievable in generative models.
6 Conclusion and Future Work
We propose the Skip-Step Diffusion Models, a diffusion model that adapts better to accelerated sampling algorithms by simply adding an additional skip-step loss during the training process. We demonstrate how to incorporate the skip-step loss into the original loss function during training. Our results qualitatively and quantitatively show that under the same accelerated sampling algorithms, S2-DMs significantly improve the sample quality of image generation. Our method successfully explores adding skip-step information during training, allowing the model to reach a symmetrical state during sampling, thereby enhancing sample quality.
Our findings pave a new direction for future diffusion model research. By investigating the asymmetry between the training of diffusion models and accelerated sampling algorithms, we can ensure that the trained models are better suited to accelerated sampling algorithms. This ensures that even with algorithms using fewer sampling steps, the model can still generate high-quality outputs. Perhaps this offers an effective solution to the challenges diffusion models face in balancing sampling speed and sample quality. In the future, we will continue to explore how to better integrate skip-step information to ensure greater consistency between the trained model and accelerated sampling algorithms, thereby improving sample quality. Additionally, we plan to investigate how to incorporate skip-step information into ODEs, while also exploring possibilities in non-continuous spaces.
References
- Adler & Lunz (2018) Adler, J. and Lunz, S. Banach wasserstein gan. Advances in neural information processing systems, 31, 2018.
- Bengio et al. (2014) Bengio, Y., Laufer, E., Alain, G., and Yosinski, J. Deep generative stochastic networks trainable by backprop. In International Conference on Machine Learning, pp. 226–234. PMLR, 2014.
- Brown et al. (2020) Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
- Chen et al. (2020) Chen, N., Zhang, Y., Zen, H., Weiss, R. J., Norouzi, M., and Chan, W. Wavegrad: Estimating gradients for waveform generation. arXiv preprint arXiv:2009.00713, 2020.
- Fedus et al. (2018) Fedus, W., Goodfellow, I., and Dai, A. M. Maskgan: better text generation via filling in the_. arXiv preprint arXiv:1801.07736, 2018.
- Goodfellow et al. (2014) Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. Advances in neural information processing systems, 27, 2014.
- Gulrajani et al. (2017) Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., and Courville, A. C. Improved training of wasserstein gans. Advances in neural information processing systems, 30, 2017.
- He et al. (2016) He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016.
- Heusel et al. (2017) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017.
- Hinton (2012) Hinton, G. E. A practical guide to training restricted boltzmann machines. In Neural Networks: Tricks of the Trade: Second Edition, pp. 599–619. Springer, 2012.
- Hjelm et al. (2017) Hjelm, R. D., Jacob, A. P., Che, T., Trischler, A., Cho, K., and Bengio, Y. Boundary-seeking generative adversarial networks. arXiv preprint arXiv:1702.08431, 2017.
- Ho et al. (2020) Ho, J., Jain, A., and Abbeel, P. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33:6840–6851, 2020.
- Jolicoeur-Martineau et al. (2020) Jolicoeur-Martineau, A., Piché-Taillefer, R., Combes, R. T. d., and Mitliagkas, I. Adversarial score matching and improved sampling for image generation. arXiv preprint arXiv:2009.05475, 2020.
- Jolicoeur-Martineau et al. (2021) Jolicoeur-Martineau, A., Li, K., Piché-Taillefer, R., Kachman, T., and Mitliagkas, I. Gotta go fast when generating data with score-based models. arXiv preprint arXiv:2105.14080, 2021.
- Karras et al. (2019) Karras, T., Laine, S., and Aila, T. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 4401–4410, 2019.
- Karras et al. (2020) Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., and Aila, T. Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 8110–8119, 2020.
- Kingma & Ba (2014) Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
- Kingma & Welling (2013) Kingma, D. P. and Welling, M. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
- Krizhevsky et al. (2009) Krizhevsky, A., Hinton, G., et al. Learning multiple layers of features from tiny images. 2009.
- Liu et al. (2022a) Liu, L., Ren, Y., Lin, Z., and Zhao, Z. Pseudo numerical methods for diffusion models on manifolds. arXiv preprint arXiv:2202.09778, 2022a.
- Liu et al. (2022b) Liu, X., Gong, C., and Liu, Q. Flow straight and fast: Learning to generate and transfer data with rectified flow. arXiv preprint arXiv:2209.03003, 2022b.
- Mohamed & Lakshminarayanan (2016) Mohamed, S. and Lakshminarayanan, B. Learning in implicit generative models. arXiv preprint arXiv:1610.03483, 2016.
- Nichol & Dhariwal (2021) Nichol, A. Q. and Dhariwal, P. Improved denoising diffusion probabilistic models. In International Conference on Machine Learning, pp. 8162–8171. PMLR, 2021.
- Oord et al. (2016) Oord, A. v. d., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., Kalchbrenner, N., Senior, A., and Kavukcuoglu, K. Wavenet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016.
- Ronneberger et al. (2015) Ronneberger, O., Fischer, P., and Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, pp. 234–241. Springer, 2015.
- Salimans et al. (2017) Salimans, T., Karpathy, A., Chen, X., and Kingma, D. P. Pixelcnn++: Improving the pixelcnn with discretized logistic mixture likelihood and other modifications. arXiv preprint arXiv:1701.05517, 2017.
- Sohl-Dickstein et al. (2015) Sohl-Dickstein, J., Weiss, E., Maheswaranathan, N., and Ganguli, S. Deep unsupervised learning using nonequilibrium thermodynamics. In International conference on machine learning, pp. 2256–2265. PMLR, 2015.
- Song et al. (2020a) Song, J., Meng, C., and Ermon, S. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020a.
- Song & Ermon (2019) Song, Y. and Ermon, S. Generative modeling by estimating gradients of the data distribution. Advances in neural information processing systems, 32, 2019.
- Song et al. (2020b) Song, Y., Sohl-Dickstein, J., Kingma, D. P., Kumar, A., Ermon, S., and Poole, B. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456, 2020b.
- Szegedy et al. (2016) Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2818–2826, 2016.
- Van den Oord et al. (2016) Van den Oord, A., Kalchbrenner, N., Espeholt, L., Vinyals, O., Graves, A., et al. Conditional image generation with pixelcnn decoders. Advances in neural information processing systems, 29, 2016.
- Vincent (2011) Vincent, P. A connection between score matching and denoising autoencoders. Neural computation, 23(7):1661–1674, 2011.
- Watson et al. (2021) Watson, D., Chan, W., Ho, J., and Norouzi, M. Learning fast samplers for diffusion models by differentiating through sample quality. In International Conference on Learning Representations, 2021.
- Yu et al. (2017) Yu, L., Zhang, W., Wang, J., and Yu, Y. Seqgan: Sequence generative adversarial nets with policy gradient. In Proceedings of the AAAI conference on artificial intelligence, volume 31, 2017.
- Zhang & Chen (2022) Zhang, Q. and Chen, Y. Fast sampling of diffusion models with exponential integrator. arXiv preprint arXiv:2204.13902, 2022.
Appendix A Experimental Details
In this section, we include more details about the training and sampling of the S2-DMs. All the experiments are run on two NVIDIA A100 GPUs.
A.1 Training and Cost
Our experiments were conducted on CIFAR10 and CelebA. Since the original resolution of the CelebA images is not 64×64, we followed Song's approach by first center-cropping the CelebA images and then resizing them to 64×64. During model training, we set the batch size to 128 and employed the Adam optimizer. On the CIFAR10 and CelebA datasets, we trained for 600K and 400K iterations respectively, even though the typically reported numbers are 800K/600K; to investigate this effect, the main text also showcases experimental results from training for 800K/600K iterations. Finally, the images generated by the model were compared against a set of 50,000 real images for FID calculation.
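For reference, a preprocessing and optimizer setup consistent with the description above could look as follows. The 140×140 center-crop size is our assumption about "Song's approach" (the exact crop is defined in the referenced repository), and the learning rate is likewise illustrative.

```python
import torch
from torchvision import transforms

celeba_transform = transforms.Compose([
    transforms.CenterCrop(140),   # assumed crop size; the exact crop follows Song's repository
    transforms.Resize(64),        # resize to 64x64
    transforms.ToTensor(),
])

batch_size = 128                  # as stated in the text
# model = ...                     # same U-Net architecture as the official DDIMs repository
# optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)   # lr is illustrative
```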
Table 3: Training cost per iteration on two NVIDIA A100 GPUs.

Dataset / Model | CIFAR10 / DDIMs | CelebA / DDIMs | CIFAR10 / S2-DMs | CelebA / S2-DMs
---|---|---|---|---
Time per iteration (s) | 0.0025 | 0.0048 | 0.0031 | 0.0056
Memory per GPU (GB) | 4.83 | 15.55 | 4.83 | 15.56
Table 4: Total sampling time (s) of the S2-DMs for different datasets and numbers of sampling steps.

Dataset - sampling steps | CIFAR10 - 10 | CIFAR10 - 20 | CelebA - 10 | CelebA - 20
---|---|---|---|---
Total time (s) | 675 | 1305 | 1483 | 2904
To ensure the repeatability of our experiments, we uniformly adopted the random seed 1234 from Song’s repository by default. Additionally, on the CIFAR10 dataset, we refrained from utilizing multi-threading, guaranteeing that the reproducibility of the experiments would not be compromised by hardware randomness.
Table 5: Ablation over the skip value: FID of S2-DMs trained with skip values of 2, 10, and 50, for different numbers of sampling steps S.

Model \ # sampling steps S | 10 | 20 | 50 | 100 | 200 | 400 | 1000
---|---|---|---|---|---|---|---
CIFAR10 (32×32) | | | | | | |
S2-DMs (skip=50) | 8.01 | 6.44 | 6.86 | 7.31 | 7.65 | 7.92 | 8.19
S2-DMs (skip=10) | 15.63 | 9.88 | 6.75 | 5.61 | 4.87 | 4.30 | 4.21
S2-DMs (skip=2) | 17.92 | 11.00 | 7.34 | 5.85 | 5.06 | 4.37 | 4.26
CelebA (64×64) | | | | | | |
S2-DMs (skip=50) | 6.41 | 3.99 | 3.99 | 4.73 | 5.57 | 6.34 | 6.62
S2-DMs (skip=10) | 11.97 | 8.12 | 5.29 | 4.18 | 3.65 | 3.25 | 3.13
S2-DMs (skip=2) | 12.43 | 8.73 | 6.00 | 4.80 | 4.13 | 3.77 | 3.71
We quantitatively investigated the training overhead of the model. All results were measured on two NVIDIA A100 GPUs. In Table 3, we report the time consumption and memory usage per iteration on the CIFAR10 and CelebA datasets. As can be observed, the introduction of the skip-step loss increased the training overhead, but it did not significantly extend the overall training time. We believe that, compared with the substantial performance improvement, the added overhead is acceptable.
A.2 Sampling
In Table 4, we showcase the time required for the S2-DMs to sample with {10, 20} steps on different datasets. We believe that within this range, the trade-off between performance and time cost is optimal.
It’s worth noting that when we trained and sampled the original DDIMs model on CelebA using mixed-precision training, we encountered issues related to gradient explosion. However, this problem did not arise when employing the same mixed-precision training with the S2-DMs. We plan to investigate this issue further. For now, we believe it leans more towards engineering and hardware-related challenges.
In Table 5, we provide detailed numerical results of the ablation experiments with different values of skip. The best results are highlighted in bold. From the table, it is evident that with smaller numbers of sampling steps, a larger skip yields better performance, as the training is then closer to the symmetric setting.