Constant Acceleration Flow
Abstract
Rectified flow and reflow procedures have significantly advanced fast generation by progressively straightening ordinary differential equation (ODE) flows. They operate under the assumption that image and noise pairs, known as couplings, can be approximated by straight trajectories with constant velocity. However, we observe that modeling with constant velocity and using reflow procedures have limitations in accurately learning straight trajectories between pairs, resulting in suboptimal performance in few-step generation. To address these limitations, we introduce Constant Acceleration Flow (CAF), a novel framework based on a simple constant acceleration equation. CAF introduces acceleration as an additional learnable variable, allowing for more expressive and accurate estimation of the ODE flow. Moreover, we propose two techniques to further improve estimation accuracy: initial velocity conditioning for the acceleration model and a reflow process for the initial velocity. Our comprehensive studies on toy datasets, CIFAR-10, and ImageNet 64×64 demonstrate that CAF outperforms state-of-the-art baselines for one-step generation. We also show that CAF dramatically improves few-step coupling preservation and inversion over Rectified flow. Code is available at https://github.com/mlvlab/CAF.
1 Introduction
Diffusion models [1, 2] learn the probability flow between a target data distribution and a simple Gaussian distribution through an iterative process. Starting from Gaussian noise, they gradually denoise to approximate the target distribution via a series of learned local transformations. Due to their superior generative capabilities compared to other models such as GANs and VAEs, diffusion models have become the go-to choice for high-quality image generation. However, their multi-step generation process entails slow generation and imposes a significant computational burden. To address this issue, two main approaches have been proposed: distillation models [3, 4, 5, 6, 7, 8, 9] and methods that simplify the flow trajectories [10, 11, 12, 13, 14] to achieve fewer-step generation. An example of the latter is rectified flow [10, 13, 11], which focuses on straightening ordinary differential equation (ODE) trajectories. Through repeated applications of the rectification process, called reflow, the trajectories become progressively straighter by addressing the flow crossing problem. Straighter flows reduce discretization errors, enabling fewer steps in the numerical solution and, thus, faster generation.
Rectified flow [10, 13] defines a straight ODE flow over time $t \in [0, 1]$ with a drift force $v(x_t, t)$, where each sample transforms from $x_0 \sim \pi_0$ to $x_1 \sim \pi_1$ under a constant velocity $x_1 - x_0$. It approximates the underlying velocity with a neural network $v_\theta$. Then, it iteratively applies the reflow process to avoid flow crossing by rewiring the flow and building a deterministic data coupling. However, constant velocity modeling may limit the expressiveness needed for approximating complex couplings between $\pi_0$ and $\pi_1$. This results in sampling trajectories that fail to converge optimally to the target distribution. Moreover, the interpolation paths after the reflow may still intersect (a phenomenon known as flow crossing), which leads to curved rectified flows because the model estimates different targets for the same input. As illustrated in Fig. 1(a), instead of following the intended path from $x_0$ to $x_1$, a sampling trajectory of Rectified flow erroneously diverts toward a different data point due to the flow crossing. Such flow crossing makes the accurate learning of straight ODE trajectories more challenging.
[Figure 1: Sampling trajectories under flow crossing. (a) Rectified flow diverts from the ground-truth path; (b) CAF accurately predicts the path from $x_0$ to $x_1$.]
In this paper, we introduce the Constant Acceleration Flow (CAF), a novel ODE framework based on a constant acceleration equation, as outlined in (4). Our CAF generalizes Rectified flow by introducing acceleration as an additional learnable variable. This constant acceleration modeling offers the ability to control flow characteristics by manipulating the acceleration magnitude, and it enables a direct closed-form solution of the ODE, supporting precise and efficient sampling in just a few steps. Additionally, we propose two strategies to address the flow crossing problem: the first is initial velocity conditioning (IVC) for the acceleration model, and the second is to employ reflow to enhance the learning of the initial velocity. Fig. 1(b) shows that CAF, with the proposed strategies, can accurately predict the ground-truth path from $x_0$ to $x_1$ even when flow crossing occurs. Through extensive experiments, from toy datasets to real-world image generation on CIFAR-10 [15] and ImageNet 64×64, we demonstrate that our CAF outperforms Rectified flow and state-of-the-art baselines. Notably, CAF achieves superior Fréchet Inception Distance (FID) scores on CIFAR-10 and ImageNet 64×64 in conditional settings, recording FIDs of 1.39 and 1.69, respectively, thereby surpassing recent strong methods. Moreover, we show that CAF provides more accurate flow estimation than Rectified flow by assessing the ‘straightness’ and ‘coupling preservation’ of the learned ODE flow. CAF is also capable of few-step inversion, making it effective for real-world applications such as box inpainting.
To summarize, our contributions are as follows:
- We propose Constant Acceleration Flow (CAF), a novel ODE framework that integrates acceleration as a controllable variable, enhancing the precision of ODE flow estimation compared to the constant velocity framework.
- We propose two strategies to address the flow crossing problem: initial velocity conditioning for the acceleration model and a reflow procedure to improve initial velocity learning. These strategies ensure more accurate trajectory estimation even in the presence of flow crossings.
- Through extensive experiments on synthetic and real datasets, CAF demonstrates remarkable performance, achieving superior FID scores on CIFAR-10 and ImageNet 64×64 over strong baselines. We also demonstrate that CAF learns a more accurate flow than Rectified flow by assessing straightness, coupling preservation, and inversion.
2 Related work
Generative models.
Learning generative models involves finding a nonlinear transformation between two distributions, typically denoted as $\pi_0$ and $\pi_1$, where $\pi_0$ is a simple distribution like a Gaussian and $\pi_1$ is the complex data distribution. Various approaches have been developed to achieve this transformation. For example, variational autoencoders (VAEs) [16, 17] optimize the Evidence Lower Bound (ELBO) to learn a nonlinear mapping from the latent space distribution $\pi_0$ to the data distribution $\pi_1$. Normalizing flows [18, 19, 20] construct a series of invertible and differentiable mappings to transform $\pi_0$ into $\pi_1$. Similarly, GANs [21, 22, 23, 24, 25] learn a generator that transforms $\pi_0$ into $\pi_1$ through an adversarial process involving a discriminator. These models typically perform a one-step generation from $\pi_0$ to $\pi_1$. In contrast, diffusion models [2, 26, 27, 28, 29, 30] propose learning the probability flow between the two distributions through an iterative process. This iterative process ensures stability and precision, as the model incrementally learns to reverse a diffusion process that adds noise to data. Diffusion models have demonstrated superior performance across various domains, including images [31, 12, 32, 33], 3D [34, 35, 36, 37], and video [38, 39, 40].
Few-step diffusion models.
Addressing the slow generation speed of diffusion models has become a major focus in recent research: Distillation methods [3, 4, 5, 6, 7, 8, 9] seek to optimize the inference steps of pre-trained diffusion models by amortizing the integration of ODE flow. Consistency models [6, 8, 7] train a model to map any point on the pre-trained diffusion trajectory back to the data distribution, enabling fast generation. Rectified flow [10, 13, 11] is another direction, which focuses on straightening ODE trajectories under a constant velocity field. By straightening the flow and reducing path complexity, it allows for fast generation through efficient and accurate numerical solutions with fewer Euler steps. Recent methods such as AGM [41] also introduce acceleration modeling based on Stochastic Optimal Control (SOC) theory instead of relying solely on velocity. However, AGM predicts time-varying acceleration, which still requires multiple iterative steps to solve the differential equations. In contrast, our proposed CAF ODE assumes that the acceleration term is constant with respect to time. Therefore, there is no need to iteratively solve complex time-dependent differential equations. This simplification allows for a direct closed-form solution that supports efficient and accurate sampling in just a few steps.

3 Preliminary
Rectified flow [10, 13] is an ordinary differential equation (ODE)-based framework for learning a mapping between two distributions $\pi_0$ and $\pi_1$. Typically, in image generation, $\pi_0$ is a simple tractable distribution, e.g., the standard normal distribution, defined in the latent space, and $\pi_1$ is the image distribution. Given empirical observations $x_0 \sim \pi_0$ and $x_1 \sim \pi_1$ over time $t \in [0, 1]$, a flow is defined as
$$\mathrm{d}x_t = v(x_t, t)\,\mathrm{d}t, \qquad t \in [0, 1], \tag{1}$$
where $x_t$ is a time-differentiable interpolation between $x_0$ and $x_1$, and $v$ is a velocity field defined on the data-time domain. Rectified flow learns the velocity field with a neural network $v_\theta$ by minimizing the following mean-square objective:
$$\min_{\theta}\ \mathbb{E}_{(x_0, x_1) \sim \gamma,\ t \sim p(t)}\left[\left\| \frac{\mathrm{d}x_t}{\mathrm{d}t} - v_\theta(x_t, t) \right\|_2^2\right], \tag{2}$$
where $\gamma$ represents a coupling of $(\pi_0, \pi_1)$ and $p(t)$ is a time distribution defined on $[0, 1]$. The choice of interpolation $x_t$ leads to various algorithms, such as Rectified flow [10], ADM [30], EDM [29], and LDM [42]. Specifically, Rectified flow proposes a simple linear interpolation between $x_0$ and $x_1$ as $x_t = (1 - t)\,x_0 + t\,x_1$, which induces the velocity field in the direction of $(x_1 - x_0)$, i.e., $\frac{\mathrm{d}x_t}{\mathrm{d}t} = x_1 - x_0$. This means Rectified flow transports $x_0$ to $x_1$ along a straight trajectory with a constant velocity. After training $v_\theta$, we can generate a sample using off-the-shelf ODE solvers, such as the Euler method:
$$x_{t + \frac{1}{N}} = x_t + \frac{1}{N}\, v_\theta(x_t, t), \tag{3}$$
where $t \in \{0, \frac{1}{N}, \dots, \frac{N-1}{N}\}$ and $N$ is the total number of steps. To achieve faster generation with fewer steps without sacrificing accuracy, it is crucial to learn a straight ODE flow, since a straight flow minimizes the numerical errors incurred by the ODE solver.
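For concreteness, the Euler sampler in (3) can be written in a few lines. Below is a minimal PyTorch sketch under our own naming (`v_theta` is a trained velocity network taking a batch of states and per-sample times; the step count is illustrative):

```python
import torch

@torch.no_grad()
def euler_sample(v_theta, x0, N=10):
    """Simulate dx_t = v_theta(x_t, t) dt with N uniform Euler steps, as in (3)."""
    x = x0  # x0 ~ pi_0, e.g., standard Gaussian noise
    for n in range(N):
        t = torch.full((x.shape[0],), n / N, device=x.device)
        x = x + (1.0 / N) * v_theta(x, t)  # x_{t+1/N} = x_t + (1/N) v_theta(x_t, t)
    return x  # approximate sample from pi_1
```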
Reflow and flow crossing.
The trajectories of interpolants may intersect (a phenomenon known as flow crossing) due to stochastic coupling between $\pi_0$ and $\pi_1$, e.g., random pairing of $x_0$ and $x_1$. These intersections introduce approximation errors in the neural network, leading to curved sampling trajectories [10]. Our toy experiment, illustrated in Fig. 1(a), clearly demonstrates this issue: the simulated sampling trajectories become curved due to flow crossing, rendering one-step simulation inaccurate. To address this problem, Rectified flow [10] introduces a reflow procedure, which iteratively straightens the trajectories by constructing a more deterministic and direct pairing of $x_0$ and $x_1$ without altering the marginal distributions. Specifically, the reflow procedure generates a new coupling $(x_0, \hat{x}_1)$ using a pre-trained Rectified flow model $v_\theta^{(k)}$, where $k$ denotes the iteration of the reflow procedure and $\hat{x}_1$ is obtained by simulating the learned ODE from $x_0$. By iteratively refining the coupling and the velocity field, the reflow procedure reduces flow crossing, resulting in straighter trajectories and improved accuracy in fewer steps.
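Given such a sampler, one reflow round amounts to drawing fresh noise and pairing it with the simulation endpoint. A sketch under the same assumptions, reusing `euler_sample` from above (function names and the simulation step count are ours):

```python
import torch

@torch.no_grad()
def make_reflow_coupling(v_theta_k, num_pairs, shape, N=100, device="cuda"):
    """Build a deterministic coupling (x0, x1_hat) for the next reflow round."""
    x0 = torch.randn(num_pairs, *shape, device=device)  # fresh noise from pi_0
    x1_hat = euler_sample(v_theta_k, x0, N=N)           # simulate the pre-trained flow
    return x0, x1_hat                                   # marginals are preserved [10]
```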

4 Method
We aim to develop a generative model based on the ODE framework that enables faster generation without compromising quality. To achieve this, we propose a novel approach called Constant Acceleration Flow (CAF). Specifically, CAF formulates an ODE trajectory that transports samples from $\pi_0$ to $\pi_1$ with a constant acceleration, offering a more expressive and precise estimation of the ODE flow compared to constant velocity models. Additionally, we propose two novel techniques that address the problem of flow crossing: 1) initial velocity conditioning and 2) a reflow procedure for learning the initial velocity. The overall training pipeline is presented in Alg. 1.
4.1 Constant Acceleration Flow
We propose a novel ODE framework based on the constant acceleration equation, which drives the empirical observations $x_0$ and $x_1$ over time $t \in [0, 1]$ as:
$$\frac{\mathrm{d}x_t}{\mathrm{d}t} = v(x_0, 0) + a(x_t, t)\, t, \tag{4}$$
where $v(x_0, 0)$ is the initial velocity field and $a(x_t, t)$ is the acceleration field. We abbreviate the time variable for notational simplicity, i.e., $v(x_0) := v(x_0, 0)$ and $a(x_t) := a(x_t, t)$. By integrating both sides of (4) with respect to $t$ and assuming a constant acceleration field, i.e., $a(x_t) = a$ for all $t \in [0, 1]$, we derive the following equation:
$$x_t = x_0 + v(x_0)\, t + \frac{1}{2}\, a\, t^2. \tag{5}$$
Given the initial velocity field $v(x_0)$, the acceleration field can be derived as
$$a = 2\,\big(x_1 - x_0 - v(x_0)\big) \tag{6}$$
by setting $t = 1$ in (5) under the constant acceleration assumption. Then, we propose a time-differentiable interpolation as:
$$x_t = x_0 + v(x_0)\, t + \big(x_1 - x_0 - v(x_0)\big)\, t^2, \tag{7}$$
by substituting (6) into (5). Using this result, we can easily simulate an intermediate sample $x_t$ on our CAF ODE trajectory.
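For reference, (5)–(7) can be implemented directly. The following sketch (with our own function name and broadcasting convention) returns the interpolant and both regression targets used later in training:

```python
import torch

def caf_targets(x0, x1, h, t):
    """Return the interpolant x_t of (7) plus the targets of (8) and (6)."""
    v0 = h * (x1 - x0)                        # initial velocity v(x_0), Eq. (8)
    a = 2.0 * (x1 - x0 - v0)                  # constant acceleration target, Eq. (6)
    t = t.view(-1, *([1] * (x0.dim() - 1)))   # broadcast per-sample time over data dims
    x_t = x0 + v0 * t + 0.5 * a * t ** 2      # Eq. (5), equivalently Eq. (7)
    return x_t, v0, a
```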
Learning initial velocity field.
Selecting an appropriate initial velocity field is crucial, as different initial velocities lead to distinct flow dynamics. Here, we define the initial velocity field as a scaled displacement vector between $x_0$ and $x_1$:
$$v(x_0) = h \cdot (x_1 - x_0), \tag{8}$$
where $h \geq 0$ is a hyperparameter that adjusts the scale of the initial velocity. This configuration enables straight ODE trajectories between the distributions $\pi_0$ and $\pi_1$, similar to those in Rectified flow. However, varying $h$ changes the flow characteristics: substituting (8) into (6) gives $a = 2\,(1 - h)(x_1 - x_0)$, so 1) $h = 1$ simulates constant velocity flows, 2) $h < 1$ leads to a model with a positive acceleration, and 3) $h > 1$ results in a negative acceleration, as illustrated in Fig. 3. Empirically, we observe that the negative acceleration model is more effective for image sampling, possibly due to its ability to finely tune step sizes near the data distribution.
The initial velocity field is learned using a neural network $v_\theta$, which is optimized by minimizing a distance metric $d$ between the target and estimated velocities:
$$\min_{\theta}\ \mathbb{E}_{(x_0, x_1) \sim \gamma,\ t \sim p(t)}\Big[\, d\big(h\,(x_1 - x_0),\ v_\theta(x_t, t)\big) \Big], \tag{9}$$
where $p(t)$ is a time distribution defined on $[0, 1]$. Note that our velocity model learns the target initial velocity defined at $t = 0$. This differs from Rectified flow, which learns the target velocity field defined over $t \in [0, 1]$.
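A single optimization step for (9) might then look as follows, reusing `caf_targets` from above and assuming the squared $\ell_2$ distance for $d$ (the paper uses dataset-specific choices of $d$):

```python
import torch

def velocity_step(v_theta, optimizer, x0, x1, h=1.5):
    """One training step for the initial-velocity objective (9), with d = squared L2."""
    t = torch.rand(x0.shape[0], device=x0.device)   # t ~ p(t) = U[0, 1]
    x_t, v0, _ = caf_targets(x0, x1, h, t)
    loss = ((v_theta(x_t, t) - v0) ** 2).mean()     # target is v(x_0), independent of t
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```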
Learning acceleration field.
Similarly, the acceleration field is learned using a neural network $a_\phi$, which is optimized by minimizing the distance between the constant acceleration target in (6) and the model estimate:

$$\min_{\phi}\ \mathbb{E}_{(x_0, x_1) \sim \gamma,\ t \sim p(t)}\Big[\, d\big(2\,(x_1 - x_0 - v(x_0)),\ a_\phi(x_t, t)\big) \Big]. \tag{11}$$
4.2 Addressing flow crossing
Rectified flow addresses the issue of flow crossing with a reflow procedure. However, even after the procedure, trajectories may still intersect each other. Such intersections hinder learning straight ODE trajectories, as demonstrated in Fig. 1(a). Similarly, our acceleration model also encounters the flow crossing problem, which leads to inaccurate estimation because the model struggles to make correct predictions at these intersections. To further address the flow crossing, we propose two techniques.
Initial velocity conditioning (IVC).
We propose conditioning the acceleration model on the estimated initial velocity, i.e., $a_\phi(x_t, t, v_\theta(x_t, t))$. This approach provides the acceleration model with auxiliary information on the flow direction, enhancing its capability to distinguish correct estimations and mitigating ambiguity at the intersections of trajectories, as illustrated in Fig. 1. Our IVC circumvents the non-intersecting condition required in Rectified flow (see Theorem 3.6 in [10]), which is a key assumption for achieving a straight coupling $(x_0, x_1)$. By reducing the ambiguity arising from intersections, CAF can learn straight trajectories with less constrained couplings, which we quantitatively assess in Sec. 5.3.
To incorporate IVC into learning the acceleration model, we reformulate (11) as:
$$\min_{\phi}\ \mathbb{E}_{(x_0, x_1) \sim \gamma,\ t \sim p(t)}\Big[\, d\big(2\,(x_1 - x_0 - v(x_0)),\ a_\phi\big(x_t, t, \mathrm{sg}[v_\theta(x_t, t)]\big)\big) \Big], \tag{12}$$
where $\mathrm{sg}[\cdot]$ indicates the stop-gradient operation. Since our velocity model learns to predict the initial velocity from any $x_t$ (see (9)), we ensure that the model can handle both the forward and reverse CAF ODEs, which start from $x_0$ and $x_1$, respectively. Thus, our acceleration model can generalize across different flow directions, enabling inversion as demonstrated in Sec. B.2.
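The corresponding step for (12) differs only in the IVC input and the stop-gradient, realized here with `detach()`. Again a sketch reusing `caf_targets`, with an assumed argument order for `a_phi`:

```python
import torch

def acceleration_step(a_phi, v_theta, optimizer, x0, x1, h=1.5):
    """One training step for (12): condition on sg[v_theta] (IVC with stop-gradient)."""
    t = torch.rand(x0.shape[0], device=x0.device)
    x_t, _, a_target = caf_targets(x0, x1, h, t)
    v_cond = v_theta(x_t, t).detach()                    # sg[.]: no gradient to v_theta
    loss = ((a_phi(x_t, t, v_cond) - a_target) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```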
Reflow for initial velocity.
It is also important to improve the accuracy of the initial velocity model. Following [10], we address the inaccuracy caused by the stochastic pairing of $x_0$ and $x_1$ by employing a pre-trained generative model to construct a more deterministic coupling $(x_0, \hat{x}_1)$. We subsequently use this new coupling to train the initial velocity and acceleration models.
4.3 Sampling
After training the initial velocity and acceleration models, we generate samples using the CAF ODE introduced in (4). The discrete sampling process is given by:
$$x_{t + \frac{1}{N}} = x_t + \frac{1}{N}\Big( v_0 + \bar{t}\, a_\phi(x_t, t, v_0) \Big), \qquad v_0 = v_\theta(x_0, 0), \tag{13}$$
where $N$ is the total number of steps, $t \in \{0, \frac{1}{N}, \dots, \frac{N-1}{N}\}$, and $\bar{t} = t + \frac{1}{2N}$ (see Alg. 2). We adopt the midpoint time $\bar{t}$ since it empirically improves accuracy, especially in the small-$N$ regime. Notably, when $N = 1$ (one-step generation), $\bar{t}$ simplifies to $\frac{1}{2}$, leading to the closed-form solution in (5). See Alg. 3 for the inversion algorithm.
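Putting (13) together, a minimal sketch of the $N$-step sampler follows. The single initial-velocity evaluation and the midpoint time reflect our reading of Alg. 2, and the argument order of `a_phi` is an assumption:

```python
import torch

@torch.no_grad()
def caf_sample(v_theta, a_phi, x0, N=1):
    """N-step CAF sampler of (13); N = 1 recovers the closed-form solution (5)."""
    x = x0
    v0 = v_theta(x0, torch.zeros(x.shape[0], device=x.device))  # computed once at t = 0
    for n in range(N):
        t = torch.full((x.shape[0],), n / N, device=x.device)
        t_bar = (n + 0.5) / N                  # midpoint time; exact for constant a
        x = x + (1.0 / N) * (v0 + t_bar * a_phi(x, t, v0))
    return x
```

With `N=1`, the update reduces to $x_1 = x_0 + v_0 + \frac{1}{2} a$, i.e., the exact solution of the constant acceleration ODE.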
5 Experiment
We evaluate the proposed Constant Acceleration Flow (CAF) across various scenarios, including both synthetic and real-world datasets. In Sec. 5.1, our investigation begins with a simple two-dimensional synthetic dataset, where we compare the performance of Rectified flow and CAF to clearly demonstrate the effectiveness of our model. Next, we extend our experiments to real-world image datasets, specifically CIFAR-10 (32×32) and ImageNet (64×64), in Sec. 5.2. These experiments highlight CAF’s ability to generate high-quality images with a single sampling step. Furthermore, we conduct an in-depth analysis of CAF through evaluations of coupling preservation, straightness, inversion tasks, and an ablation study in Sec. 5.3.
5.1 Synthetic experiments
We demonstrate the advantages of the Constant Acceleration Flow (CAF) over the constant velocity flow model, Rectified flow [10], through synthetic experiments. For the neural networks, we use multilayer perceptrons (MLPs) with five hidden layers and 128 units per layer. Initially, we train 1-Rectified flow on 2D synthetic data to establish a deterministic coupling. We then train both CAF and 2-Rectified flow. For CAF, we incorporate the initial velocity into the acceleration model by concatenating it with the input, ensuring that the model capacities of CAF and 2-Rectified flow remain comparable. We set $d$ to the $\ell_2$ distance. Fig. 2 presents samples generated from CAF in one step and from 2-Rectified flow in two steps. Our CAF more accurately approximates the target distribution than 2-Rectified flow. In particular, CAF with $h > 1$ (negative acceleration) learns the most accurate distribution. In contrast, 2-Rectified flow frequently generates samples that significantly deviate from $\pi_1$, indicating its difficulty in accurately estimating straight ODE trajectories. This experiment shows that reflowing alone may not overcome the flow crossing problem, leading to poor estimations, whereas our proposed acceleration modeling and IVC effectively address this issue. Moreover, Fig. 3 shows sampling trajectories from CAF trained with different values of the hyperparameter $h$. It clearly demonstrates that $h$ controls the flow dynamics as we intended: $h > 1$ induces negative acceleration, $h = 1$ represents constant velocity, and $h < 1$ corresponds to positive acceleration flows. Additional synthetic examples are provided in Fig. 6.
5.2 Real-data experiments
To further validate the effectiveness of our approach, we train CAF on real-world image datasets, specifically CIFAR-10 at 32×32 resolution and ImageNet at 64×64 resolution. To create a deterministic coupling $(x_0, \hat{x}_1)$, we utilize the pre-trained EDM models [29], and we adopt the U-Net architecture of ADM [30] for the initial velocity and acceleration models. In the acceleration model, we double the input dimension of the first layer to concatenate the initial velocity to the input, which marginally increases the total number of parameters. We set $h = 1.5$ and use the LPIPS-Huber loss [43] as the distance metric $d$ for all real-data experiments.
Table 1: FID comparison on CIFAR-10.

| Model | NFE | FID (Uncond.) | FID (Cond.) |
|---|---|---|---|
| **GAN Models** | | | |
| BigGAN [22] | 1 | 8.51 | - |
| StyleGAN-Ada [23] | 1 | 2.92 | 2.42 |
| StyleGAN-XL [24] | 1 | - | 1.85 |
| **Diffusion/Consistency Models** | | | |
| Score SDE [1] | 2000 | 2.20 | - |
| DDPM [2] | 1000 | 3.17 | - |
| VDM [27] | 1000 | 7.41 | - |
| LSGM [28] | 138 | 2.10 | - |
| DDIM [26] | 10 | 13.36 | - |
| EDM [29] | 35 | 2.01 | 1.82 |
| | 5 | 37.75 | 35.54 |
| CT [6] | 2 | 5.83 | - |
| | 1 | 8.70 | - |
| **Diffusion/Consistency Models – Distillation** | | | |
| Diff-Instruct [9] | 1 | 4.53 | - |
| DMD [44] | 1 | 3.77 | - |
| DFNO [5] | 1 | 3.78 | - |
| TRACT [45] | 1 | 3.78 | - |
| KD [46] | 1 | 9.36 | - |
| CD [6] | 2 | 2.93 | - |
| | 1 | 3.55 | - |
| CTM [7] | 2 | 1.87 | 1.63 |
| | 1 | 1.98 | 1.73 |
| **Rectified Flow Models** | | | |
| 2-Rectified Flow [10] | 2 | 7.89 | 3.74 |
| | 1 | 11.81 | 6.88 |
| 2-Rectified Flow + Distill [10] | 1 | 4.84 | - |
| CAF (Ours) | 1 | 4.81 | 2.68 |
| CAF + GAN (Ours) | 1 | 1.48 | 1.39 |
Table 2: Performance comparison on ImageNet 64×64.

| Model | NFE | FID | IS | Rec. |
|---|---|---|---|---|
| **GAN Models** | | | | |
| BigGAN-deep [22] | 1 | 4.06 | - | 0.48 |
| StyleGAN-XL [24] | 1 | 2.09 | 82.35 | 0.52 |
| **Diffusion/Consistency Models** | | | | |
| DDIM [26] | 50 | 13.7 | - | 0.56 |
| | 10 | 18.3 | - | 0.49 |
| DDPM [2] | 250 | 11.0 | - | 0.58 |
| iDDPM [47] | 250 | 2.92 | - | 0.62 |
| ADM [30] | 250 | 2.07 | - | 0.63 |
| EDM [29] | 79 | 2.44 | 48.88 | 0.67 |
| | 5 | 55.3 | - | - |
| DPM-solver [48] | 20 | 3.42 | - | - |
| | 10 | 7.93 | - | - |
| DEIS [49] | 20 | 3.10 | - | - |
| | 10 | 6.65 | - | - |
| CT [6] | 2 | 11.1 | - | 0.56 |
| | 1 | 13.0 | - | 0.47 |
| **Diffusion/Consistency Models – Distillation** | | | | |
| Diff-Instruct [9] | 1 | 5.57 | - | - |
| DMD [44] | 1 | 2.62 | - | - |
| TRACT [45] | 1 | 7.43 | - | - |
| DFNO [5] | 1 | 7.83 | - | 0.61 |
| PD [3] | 1 | 15.39 | - | 0.62 |
| CD [6] | 2 | 4.70 | - | 0.64 |
| | 1 | 6.20 | 40.08 | 0.57 |
| CTM [7] | 2 | 1.73 | 64.29 | 0.57 |
| | 1 | 1.92 | 70.38 | 0.57 |
| **Rectified Flow Models** | | | | |
| CAF (Ours) | 1 | 6.52 | 37.45 | 0.62 |
| CAF + GAN (Ours) | 1 | 1.69 | 62.03 | 0.64 |
Baselines and evaluation. We compare against state-of-the-art diffusion models [2, 29, 28, 1, 7], GANs [22, 23, 24], and few-step generation approaches [6, 7]. We primarily assess the image generation quality of our method using the Fréchet Inception Distance (FID) [50] and Inception Score (IS) [51]. Additionally, we evaluate diversity using the recall metric, following [10, 6, 7].
Distillation.
Distilling a few-step student model from a pre-trained teacher model has recently become essential for high-quality few-step generation [7, 6, 10, 11]. InstaFlow [11] has observed that learning straighter trajectories and achieving good coupling significantly enhance distillation performance. Moreover, CTM [7] and DMD [44] incorporate an adversarial loss as an auxiliary loss to facilitate the training of the student model. We empirically found that incorporating the adversarial loss alone was sufficient to achieve superior performance for one-step sampling without introducing instability. For training details, please refer to Sec. A.
CIFAR-10.
We present the experimental results on CIFAR-10 in Tab. 1. Our base unconditional CAF model (4.81 FID, $N = 1$) significantly improves the FID compared to recent state-of-the-art diffusion models (without distillation) in few-step generation, including DDIM [26] (13.36 FID, $N = 10$), EDM (37.75 FID, $N = 5$), and 2-Rectified flow (7.89 FID, $N = 2$). We retrained 2-Rectified flow using the official code of [10], achieving slightly better performance than the officially reported 12.21 FID for one-step generation [10]. CAF's remarkable 3.08 FID improvement over 2-Rectified flow ($N = 2$) highlights the effectiveness of acceleration modeling for fast generation. Our approach is also effective in class-conditional generation, where the base CAF model (2.68 FID, $N = 1$) shows a significant FID improvement over EDM (35.54 FID, $N = 5$) and 2-Rectified flow (3.74 FID, $N = 2$). Additionally, after adversarial training, CAF achieves a superior FID of 1.48 for unconditional generation and 1.39 for conditional generation with $N = 1$. Lastly, we qualitatively compare 2-Rectified flow and our CAF in Fig. 4, where CAF generates more vivid samples with intricate details than 2-Rectified flow.
ImageNet.
We extend our evaluation to the ImageNet dataset at 64×64 resolution to demonstrate the scalability and effectiveness of our CAF model on more complex, higher-resolution images. Similar to the results on CIFAR-10, our base conditional CAF model significantly improves the FID compared to recent state-of-the-art diffusion models (without distillation) in the small-$N$ regime. Specifically, CAF (6.52 FID, $N = 1$) outperforms models such as DPM-solver [48] (7.93 FID, $N = 10$), CT [6] (11.1 FID, $N = 2$), and EDM [29] (55.3 FID, $N = 5$). This validates that the superior performance of CAF generalizes to complex and large-scale datasets. Additionally, after adversarial training, CAF outperforms or is competitive with state-of-the-art distillation baselines in one-step generation. Notably, CAF achieves the best FID of 1.69, surpassing strong baselines. We also present one-step qualitative results in Fig. 14.


Table 3: Coupling preservation on CIFAR-10.

| Metric | 2-Rectified Flow | CAF (ours) |
|---|---|---|
| LPIPS ↓ | 0.092 | 0.041 |
| PSNR ↑ | 29.79 | 33.16 |
Table 4: Flow straightness (NFSS, lower is better).

| Dataset | 2-Rectified Flow | CAF (ours) |
|---|---|---|
| 2D | 0.065 | 0.058 |
| CIFAR-10 | 0.043 | 0.034 |
Table 5: Ablation study on conditional CIFAR-10 ($N = 1$).

| Config | Acceleration | IVC | Reflow | FID |
|---|---|---|---|---|
| A | ✗ | ✗ | ✗ | 378 |
| B | ✗ | ✗ | ✔ | 6.88 |
| C | ✔ (h=1.5) | ✗ | ✔ | 3.82 |
| D | ✔ (h=1.5) | ✔ | ✔ | 2.68 |
| E | ✔ (h=1.0) | ✔ | ✔ | 3.02 |
| F | ✔ (h=0.5) | ✔ | ✔ | 2.73 |
5.3 Analysis
Coupling preservation.
We evaluate how accurately CAF and Rectified flow approximate the deterministic coupling obtained from pre-trained models via a reflow procedure. To analyze this, we first conduct synthetic experiments where the interpolation paths cross, as illustrated in Fig. 5. Due to the flow crossing, the sampling trajectory of Rectified flow fails to preserve the ground-truth coupling (the interpolation path from $x_0$ to $x_1$), leading to a curved sampling trajectory. In contrast, our CAF learns the straight interpolation paths by incorporating acceleration, demonstrating superior coupling preservation ability.
Moreover, we evaluate the coupling preservation ability on real data from CIFAR-10. We randomly sample 1K training pairs from the deterministic coupling and measure the similarity between $x_1$ and $\hat{x}_1$, where $\hat{x}_1$ is a sample generated from $x_0$. In other words, we measure the distance between a ground-truth image and a generated image corresponding to the same noise; if the coupling is well preserved, this distance should be small. We use PSNR and LPIPS [52] as distance measures. The results in Tab. 3 demonstrate that CAF better preserves the coupling: in terms of PSNR, CAF outperforms Rectified flow by 3.37 dB. This is consistent with the qualitative result in Fig. 5, where $\hat{x}_1$ from CAF resembles the ground truth $x_1$ more closely than that from Rectified flow.
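This evaluation protocol is straightforward to reproduce. Below is a sketch under assumed conventions: pixels in $[-1, 1]$ (hence a peak-to-peak signal of 2, so $\mathrm{MAX}^2 = 4$ in the PSNR formula) and the VGG variant of the LPIPS package, which may differ from the paper's exact configuration:

```python
import torch
import lpips  # https://github.com/richzhang/PerceptualSimilarity

@torch.no_grad()
def coupling_preservation(x0, x1, one_step_sampler):
    """PSNR / LPIPS between coupled targets x1 and one-step samples from x0."""
    x1_hat = one_step_sampler(x0)                       # e.g., caf_sample with N = 1
    mse = ((x1_hat - x1) ** 2).flatten(1).mean(dim=1)
    psnr = 10.0 * torch.log10(4.0 / mse)                # assumes pixels in [-1, 1]
    lpips_fn = lpips.LPIPS(net="vgg").to(x0.device)     # perceptual distance
    return psnr.mean().item(), lpips_fn(x1_hat, x1).mean().item()
```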
Flow straightness.
To evaluate the straightness of learned trajectories, we introduce the Normalized Flow Straightness Score (NFSS). Similar to previous works [10, 11], we measure flow straightness by the distance between the normalized displacement vector $(x_1 - x_0)/\|x_1 - x_0\|_2$ and the normalized velocity vector $\dot{x}_t / \|\dot{x}_t\|_2$ as below:
$$\mathrm{NFSS} = \mathbb{E}\left[ \int_0^1 \left\| \frac{x_1 - x_0}{\| x_1 - x_0 \|_2} - \frac{\dot{x}_t}{\| \dot{x}_t \|_2} \right\|_2^2 \mathrm{d}t \right]. \tag{14}$$
Here, a smaller NFSS indicates a straighter trajectory. We compare NFSS between CAF and Rectified flow on synthetic and real-world datasets, as presented in Tab. 4. For Rectified flow, the velocity is $v_\theta(x_t, t)$, while for CAF, it is $v_\theta(x_0) + t\, a_\phi(x_t, t)$ from (4). The results show that CAF outperforms Rectified flow in flow straightness.
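Under our reading of (14), the score can be estimated by simulating the trajectory and averaging over discretized times. A sketch follows; the discretization size and the `velocity_fn` signature are our assumptions (for CAF, `velocity_fn` would return $v_0 + t\,a_\phi(x_t, t)$):

```python
import torch

@torch.no_grad()
def nfss(x0, x1, velocity_fn, num_times=100):
    """Estimate (14): mean squared gap between unit displacement and unit velocity."""
    disp = (x1 - x0).flatten(1)
    disp = disp / disp.norm(dim=1, keepdim=True)     # normalized displacement
    score, x = 0.0, x0
    for n in range(num_times):
        t = torch.full((x0.shape[0],), n / num_times, device=x0.device)
        v = velocity_fn(x, t)                        # dx/dt along the trajectory
        x = x + (1.0 / num_times) * v                # advance the simulated trajectory
        v = v.flatten(1)
        v = v / v.norm(dim=1, keepdim=True)          # normalized local velocity
        score += ((disp - v) ** 2).sum(dim=1).mean().item() / num_times
    return score
```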


Inversion.
We further demonstrate CAF's capability in real-world applications by conducting zero-shot tasks such as reconstruction and box inpainting using inversion. We provide implementation details and algorithms in Sec. B.2. As shown in Tab. 6 and 7, our method achieves lower reconstruction errors (CAF: 46.68 PSNR vs. RF: 33.34 PSNR) and better zero-shot inpainting capabilities, even with fewer steps, compared to the baselines. These improvements are attributed to CAF's superior coupling preservation. Moreover, we present qualitative comparisons between CAF and the baselines in Fig. 12 and 13, which further validate the quantitative results.
Ablation study.
We conduct an ablation study to evaluate the effectiveness of the components in our framework under the one-step generation setting ($N = 1$). We examine the improvements achieved by 1) constant acceleration modeling, 2) initial velocity conditioning (IVC), and 3) the reflow procedure for the initial velocity. The configurations and results are outlined in Tab. 5. Specifically, A and B correspond to 1-Rectified flow and 2-Rectified flow, respectively. Configurations C to F represent our CAF frameworks, with C being our CAF without IVC. By comparing A, B, C, and D, we demonstrate that all three components substantially improve the performance. In addition, we analyze the final model across various acceleration scales controlled by $h$. The performance difference between D and F is relatively small, indicating that our framework is robust to the choice of this hyperparameter. Empirically, we observe that configuration D, i.e., CAF ($h = 1.5$) with negative acceleration, achieves the best FID of 2.68. Notably, our CAF without IVC (configuration C) still outperforms Rectified flow (configuration B) by 3.06 FID. This highlights the critical role of constant acceleration modeling in enhancing the quality of few-step generation. Also, we verify the significance of reflowing by comparing configurations A and B, which achieve 378 FID and 6.88 FID, respectively.
6 Conclusion
In this paper, we have introduced the Constant Acceleration Flow (CAF) framework, which enables precise ODE trajectory estimation by incorporating a controllable acceleration variable into the ODE framework. To address the flow crossing problem, we proposed two strategies: initial velocity conditioning and a reflow procedure for the initial velocity. Our experiments on toy datasets and real-world datasets demonstrate CAF's capabilities and scalability, achieving state-of-the-art FID scores. Furthermore, we conducted extensive ablation studies and analyses, including assessments of flow straightness, coupling preservation, and real-world applications, to validate and deepen our understanding of the effectiveness of our proposed components in learning accurate ODE trajectories. We believe that CAF offers a promising direction for efficient and accurate generative modeling, and we look forward to exploring its applications in more diverse settings such as 3D and video.
Acknowledgement
This work was supported by ICT Creative Consilience Program through the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (IITP-2024-RS-2020-II201819, 10%), the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2023R1A2C2005373, 45%), and the Virtual Engineering Platform Project (Grant No. P0022336, 45%), funded by the Ministry of Trade, Industry & Energy (MoTIE, South Korea).
References
- [1] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, ICLR, 2021.
- [2] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In Advances in Neural Information Processing Systems, NeurIPS, 2020.
- [3] Tim Salimans and Jonathan Ho. Progressive distillation for fast sampling of diffusion models. In International Conference on Learning Representations, ICLR, 2022.
- [4] Chenlin Meng, Robin Rombach, Ruiqi Gao, Diederik Kingma, Stefano Ermon, Jonathan Ho, and Tim Salimans. On distillation of guided diffusion models. In Conference on Computer Vision and Pattern Recognition, CVPR, 2023.
- [5] Hongkai Zheng, Weili Nie, Arash Vahdat, Kamyar Azizzadenesheli, and Anima Anandkumar. Fast sampling of diffusion models via operator learning. In International Conference on Machine Learning, ICML, 2023.
- [6] Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. Consistency models. In International Conference on Machine Learning, ICML, 2023.
- [7] Dongjun Kim, Chieh-Hsin Lai, Wei-Hsiang Liao, Naoki Murata, Yuhta Takida, Toshimitsu Uesaka, Yutong He, Yuki Mitsufuji, and Stefano Ermon. Consistency trajectory models: Learning probability flow ode trajectory of diffusion. In International Conference on Learning Representations, ICLR, 2024.
- [8] Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, and Hang Zhao. Latent consistency models: Synthesizing high-resolution images with few-step inference. arXiv preprint arXiv:2310.04378, 2023.
- [9] Weijian Luo, Tianyang Hu, Shifeng Zhang, Jiacheng Sun, Zhenguo Li, and Zhihua Zhang. Diff-instruct: A universal approach for transferring knowledge from pre-trained diffusion models. In Advances in Neural Information Processing Systems, NeurIPS, 2023.
- [10] Xingchao Liu, Chengyue Gong, and Qiang Liu. Flow straight and fast: Learning to generate and transfer data with rectified flow. In International Conference on Learning Representations, ICLR, 2023.
- [11] Xingchao Liu, Xiwen Zhang, Jianzhu Ma, Jian Peng, et al. Instaflow: One step is enough for high-quality diffusion-based text-to-image generation. In International Conference on Learning Representations, ICLR, 2024.
- [12] Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, et al. Scaling rectified flow transformers for high-resolution image synthesis. arXiv preprint arXiv:2403.03206, 2024.
- [13] Qiang Liu. Rectified flow: A marginal preserving approach to optimal transport. arXiv preprint arXiv:2209.14577, 2022.
- [14] Yaron Lipman, Ricky TQ Chen, Heli Ben-Hamu, Maximilian Nickel, and Matt Le. Flow matching for generative modeling. In International Conference on Learning Representations, ICLR, 2023.
- [15] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
- [16] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. In International Conference on Learning Representations, ICLR, 2014.
- [17] Aaron Van Den Oord, Oriol Vinyals, and Koray Kavukcuoglu. Neural discrete representation learning. In Advances in Neural Information Processing Systems, NeurIPS, 2017.
- [18] Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real nvp. In International Conference on Learning Representations, ICLR, 2017.
- [19] Durk P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. In Advances in Neural Information Processing Systems, NeurIPS, 2018.
- [20] Derek Onken, Samy Wu Fung, Xingjian Li, and Lars Ruthotto. Ot-flow: Fast and accurate continuous normalizing flows via optimal transport. In Association for the Advancement of Artificial Intelligence, AAAI, 2021.
- [21] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, NeurIPS, 2014.
- [22] Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale gan training for high fidelity natural image synthesis. In International Conference on Learning Representations, ICLR, 2018.
- [23] Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, and Timo Aila. Training generative adversarial networks with limited data. In Advances in Neural Information Processing Systems, NeurIPS, 2020.
- [24] Axel Sauer, Katja Schwarz, and Andreas Geiger. Stylegan-xl: Scaling stylegan to large diverse datasets. In SIGGRAPH, 2022.
- [25] Yujin Kim, Dogyun Park, Dohee Kim, and Suhyun Kim. Naturalinversion: Data-free image synthesis improving real-world consistency. In Association for the Advancement of Artificial Intelligence, AAAI, 2022.
- [26] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In International Conference on Learning Representations, ICLR, 2021.
- [27] Diederik Kingma, Tim Salimans, Ben Poole, and Jonathan Ho. Variational diffusion models. In Advances in Neural Information Processing Systems, NeurIPS, 2021.
- [28] Arash Vahdat, Karsten Kreis, and Jan Kautz. Score-based generative modeling in latent space. In Advances in Neural Information Processing Systems, NeurIPS, 2021.
- [29] Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine. Elucidating the design space of diffusion-based generative models. In Advances in Neural Information Processing Systems, NeurIPS, 2022.
- [30] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. In Advances in Neural Information Processing Systems, NeurIPS, 2021.
- [31] James Betker, Gabriel Goh, Li Jing, Tim Brooks, Jianfeng Wang, Linjie Li, Long Ouyang, Juntang Zhuang, Joyce Lee, Yufei Guo, et al. Improving image generation with better captions. Technical report, OpenAI, 2023. https://cdn.openai.com/papers/dall-e-3.pdf.
- [32] Sojin Lee, Dogyun Park, Inho Kong, and Hyunwoo J Kim. Diffusion prior-based amortized variational inference for noisy inverse problems. In European Conference on Computer Vision, ECCV, 2024.
- [33] Juyeon Ko, Inho Kong, Dogyun Park, and Hyunwoo J Kim. Stochastic conditional diffusion models for robust semantic image synthesis. In International Conference on Machine Learning, ICML, 2024.
- [34] Ruoshi Liu, Rundi Wu, Basile Van Hoorick, Pavel Tokmakov, Sergey Zakharov, and Carl Vondrick. Zero-1-to-3: Zero-shot one image to 3d object. In International Conference on Computer Vision, ICCV, 2023.
- [35] Jiaxiang Tang, Jiawei Ren, Hang Zhou, Ziwei Liu, and Gang Zeng. Dreamgaussian: Generative gaussian splatting for efficient 3d content creation. In International Conference on Learning Representations, ICLR, 2024.
- [36] Vikram Voleti, Chun-Han Yao, Mark Boss, Adam Letts, David Pankratz, Dmitry Tochilkin, Christian Laforte, Robin Rombach, and Varun Jampani. Sv3d: Novel multi-view synthesis and 3d generation from a single image using latent video diffusion. arXiv preprint arXiv:2403.12008, 2024.
- [37] Dogyun Park, Sihyeon Kim, Sojin Lee, and Hyunwoo J Kim. Ddmi: Domain-agnostic latent diffusion models for synthesizing high-quality implicit neural representations. In International Conference on Learning Representations, ICLR, 2024.
- [38] RunwayML Team. Runwayml - gen2. 2023.
- [39] Pika Art. Pika art – home. 2023.
- [40] Tim Brooks, Bill Peebles, Connor Holmes, Will DePue, Yufei Guo, Li Jing, David Schnurr, Joe Taylor, Troy Luhman, Eric Luhman, Clarence Ng, Ricky Wang, and Aditya Ramesh. Video generation models as world simulators. 2024.
- [41] Tianrong Chen, Jiatao Gu, Laurent Dinh, Evangelos A Theodorou, Joshua Susskind, and Shuangfei Zhai. Generative modeling with phase stochastic bridges. In International Conference on Learning Representations, ICLR, 2024.
- [42] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Conference on Computer Vision and Pattern Recognition, CVPR, 2022.
- [43] Sangyun Lee, Zinan Lin, and Giulia Fanti. Improving the training of rectified flows. arXiv preprint arXiv:2405.20320, 2024.
- [44] Tianwei Yin, Michaël Gharbi, Richard Zhang, Eli Shechtman, Fredo Durand, William T Freeman, and Taesung Park. One-step diffusion with distribution matching distillation. In Conference on Computer Vision and Pattern Recognition, CVPR, 2024.
- [45] David Berthelot, Arnaud Autef, Jierui Lin, Dian Ang Yap, Shuangfei Zhai, Siyuan Hu, Daniel Zheng, Walter Talbott, and Eric Gu. Tract: Denoising diffusion models with transitive closure time-distillation. arXiv preprint arXiv:2303.04248, 2023.
- [46] Eric Luhman and Troy Luhman. Knowledge distillation in iterative generative models for improved sampling speed. arXiv preprint arXiv:2101.02388, 2021.
- [47] Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. In International Conference on Machine Learning, ICML, 2021.
- [48] Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. Dpm-solver: A fast ode solver for diffusion probabilistic model sampling in around 10 steps. In Advances in Neural Information Processing Systems, NeurIPS, 2022.
- [49] Qinsheng Zhang and Yongxin Chen. Fast sampling of diffusion models with exponential integrator. arXiv preprint arXiv:2204.13902, 2022.
- [50] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Advances in Neural Information Processing Systems, NeurIPS, 2017.
- [51] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In Advances in Neural Information Processing Systems, NeurIPS, 2016.
- [52] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Conference on Computer Vision and Pattern Recognition, CVPR, 2018.
- [53] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations, ICLR, 2019.
- [54] Jiahui Yu, Xin Li, Jing Yu Koh, Han Zhang, Ruoming Pang, James Qin, Alexander Ku, Yuanzhong Xu, Jason Baldridge, and Yonghui Wu. Vector-quantized image modeling with improved vqgan. In International Conference on Learning Representations, ICLR, 2022.
- [55] Mingxing Tan and Quoc Le. Efficientnet: Rethinking model scaling for convolutional neural networks. In International Conference on Machine Learning, ICML, 2019.
- [56] Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. In International Conference on Machine Learning, ICML, 2021.
- [57] Ron Mokady, Amir Hertz, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Null-text inversion for editing real images using guided diffusion models. In Conference on Computer Vision and Pattern Recognition, CVPR, 2023.
- [58] Inbar Huberman-Spiegelglas, Vladimir Kulikov, and Tomer Michaeli. An edit friendly ddpm noise space: Inversion and manipulations. In Conference on Computer Vision and Pattern Recognition, 2024.
Appendix A Implementation details
We utilize the pre-trained EDM model [29] to build the deterministic coupling for training our models. To construct deterministic couplings for CIFAR-10 and ImageNet, we use deterministic sampling following the protocol in [29], generating 1M and 3M pairs, respectively. For ImageNet, we use a batch size of 2048 and train the initial velocity and acceleration models for 700K iterations each. For CIFAR-10, we use a batch size of 512 and train each model for 500K iterations. For all experiments, we use the AdamW [53] optimizer with a learning rate of 0.0001 and apply an Exponential Moving Average (EMA) with a 0.999 decay rate. For training the acceleration model, we initialize it with the weights of the initial velocity model for faster convergence.
For adversarial training, we employ an adversarial loss with real data, following [24]:
$$\mathcal{L}_{\mathrm{adv}} = \mathbb{E}_{x_1 \sim \pi_1}\big[\log D(x_1)\big] + \mathbb{E}_{x_0 \sim \pi_0}\big[\log\big(1 - D(\hat{x}_1)\big)\big], \tag{15}$$
where $D$ is a discriminator and $\hat{x}_1$ is a one-step sample generated from $x_0$. In the end, we use the following combined loss to update the acceleration model:
$$\mathcal{L} = \mathcal{L}_{\mathrm{CAF}} + \lambda_{\mathrm{adv}}\, \mathcal{L}_{\mathrm{adv}}, \tag{16}$$
where $\mathcal{L}_{\mathrm{CAF}}$ corresponds to (12) and $\lambda_{\mathrm{adv}}$ is a weight hyperparameter. Following [54, 42], we employ adaptive weighting as $\lambda_{\mathrm{adv}} = \|\nabla_{\theta_L} \mathcal{L}_{\mathrm{CAF}}\| / \|\nabla_{\theta_L} \mathcal{L}_{\mathrm{adv}}\|$, where $\theta_L$ denotes the last layer of the acceleration model. Without this adaptive weighting, we found training unstable and prone to mode collapse, a common problem with adversarial training. We follow the training configuration of StyleGAN-XL [24]. We bilinearly upscale images to 224×224 resolution and use EfficientNet [55] and DeiT-base [56] to extract features. During adversarial training, we only optimize the acceleration model and the discriminator, with learning rates of 2e-5 and 1e-3, respectively. We keep the parameters of the initial velocity model fixed for stable training. The total training takes about 21 days on 8 NVIDIA A100 GPUs for ImageNet and about 10 days on 8 NVIDIA RTX 3090 GPUs for CIFAR-10.
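A sketch of this adaptive weighting in the style of [54, 42], assuming `last_layer_weight` is the parameter tensor of the acceleration model's final layer (the clamping range is an assumption carried over from common implementations):

```python
import torch

def adaptive_weight(loss_caf, loss_adv, last_layer_weight, eps=1e-4):
    """lambda_adv = ||grad(L_CAF)|| / ||grad(L_adv)|| at the last layer."""
    g_caf = torch.autograd.grad(loss_caf, last_layer_weight, retain_graph=True)[0]
    g_adv = torch.autograd.grad(loss_adv, last_layer_weight, retain_graph=True)[0]
    lam = g_caf.norm() / (g_adv.norm() + eps)   # balance the two gradient scales
    return lam.clamp(0.0, 1e4).detach()         # treat the weight as a constant
```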
Appendix B Additional results
B.1 Additional qualitative results
2D toy dataset.
Additional synthetic results are shown in Fig. 6.
Real-world dataset.
In Fig. 8 and 9, we show additional generation results from our base CAF model on CIFAR-10. In Fig. 10, we compare the generation results of the distilled versions of 2-Rectified flow and CAF. Fig. 11 shows sampling results from our base CAF models with different values of the hyperparameter $h$. Lastly, Fig. 14 shows one-step generation results on ImageNet 64×64.
B.2 Real-world applications
Inversion techniques are essential for real-world applications such as image and video editing [57, 58]. However, existing methods typically require 25–100 steps for accurate inversion, which can be computationally intensive. In contrast, our method significantly reduces the inference time by enabling inversion in just a few steps (e.g., $N = 1$). We demonstrate this efficiency on two tasks: reconstruction and box inpainting.
To reconstruct $x_1$, we first invert it to obtain $\hat{x}_0$, as described in Alg. 3. We then use the generation process (Alg. 2) with $\hat{x}_0$ and the same initial velocity used in Alg. 3 to generate the reconstruction $\hat{x}_1$. For box inpainting, we inject conditional information, namely the non-masked image region, into the iterative inversion and generation procedures, as detailed in Alg. 4. As demonstrated in Tab. 6 and 7, our method achieves better reconstruction quality (CAF: 46.68 PSNR vs. RF: 33.34 PSNR) and zero-shot inpainting capability, even with fewer steps, compared to baseline methods. Qualitative results are presented in Fig. 12 and 13, which further illustrate the effectiveness of our approach. This demonstrates that our method can be efficiently used for real-world applications, offering both speed and accuracy advantages over existing techniques.
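A sketch of how such few-step inversion can proceed by integrating the CAF ODE backward. This is our reading of Alg. 3; in particular, estimating the initial velocity once from $x_1$ and reusing it in the generation pass (as suggested above) is an assumption:

```python
import torch

@torch.no_grad()
def caf_invert(v_theta, a_phi, x1, N=1):
    """Reverse the CAF ODE from data x1 back to noise (our reading of Alg. 3)."""
    x = x1
    v0 = v_theta(x1, torch.ones(x.shape[0], device=x.device))  # initial-velocity estimate
    for n in range(N, 0, -1):                 # integrate from t = 1 down to t = 0
        t = torch.full((x.shape[0],), n / N, device=x.device)
        t_bar = (n - 0.5) / N                 # midpoint time of the reverse step
        x = x - (1.0 / N) * (v0 + t_bar * a_phi(x, t, v0))
    return x, v0                              # reuse v0 in Alg. 2 to reconstruct x1
```

With `N=1`, this reduces to $\hat{x}_0 = x_1 - v_0 - \frac{1}{2} a$, the exact inverse of the closed-form solution (5) when the predicted fields are consistent.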
B.3 Comparison with previous acceleration modeling literatures
Here, we elaborate on the crucial differences between AGM [41] and CAF. The main distinction is that CAF assumes constant acceleration, whereas AGM predicts time-dependent acceleration. Since the CAF ODE assumes that the acceleration term is constant in time, there is no need to solve time-dependent differential equations iteratively. This allows for a closed-form solution that supports efficient and accurate sampling, given that the learned velocity and acceleration models are accurate. Specifically, the solution of the CAF ODE is given by:
$$x_1 = x_0 + \int_0^1 \big( v(x_0) + a\, t \big)\, \mathrm{d}t \tag{17}$$
$$= x_0 + v(x_0) + \frac{1}{2}\, a. \tag{18}$$
The integral simplifies thanks to the constant acceleration assumption, leading to one-step sampling. In contrast, AGM's acceleration is time-varying, meaning that the differential equation cannot be reduced to an analytic form; it requires multiple steps to approximate the true solution accurately. In Tab. 8, we systematically compare AGM with our CAF, where CAF consistently outperforms AGM. Moreover, we conducted additional experiments where AGM was trained with deterministic couplings, as in our reflow setting. Incorporating reflow into AGM did not improve its performance in the few-step regime, which further highlights the distinct advantage of CAF over AGM.
Table 6: Reconstruction via inversion on CIFAR-10.

| Model | NFE | PSNR | LPIPS |
|---|---|---|---|
| CM | - | N/A | N/A |
| CTM | - | N/A | N/A |
| EDM | 4 | 13.85 | 0.447 |
| 2-RF | 2 | 33.34 | 0.094 |
| 2-RF | 1 | 29.33 | 0.204 |
| CAF (Ours) | 1 | 46.68 | 0.007 |
| CAF (+GAN) (Ours) | 1 | 40.84 | 0.028 |
Table 7: Zero-shot box inpainting on CIFAR-10.

| Model | NFE | FID |
|---|---|---|
| CM | 18 | 13.16 |
| CTM | - | N/A |
| EDM | - | N/A |
| 2-RF | 12 | 16.41 |
| CAF (Ours) | 12 | 10.39 |
| CAF (+GAN) (Ours) | 12 | 10.91 |
Table 8: Comparison with AGM on CIFAR-10.

| Model | Acceleration | Closed-form solution | Reflow for velocity | FID on CIFAR-10 |
|---|---|---|---|---|
| AGM [41] | Time-varying | No | No | 11.88 |
| AGM (enhanced ver.) | Time-varying | No | Yes | 15.23 |
| CAF (Ours) | Constant | Yes | Yes | 4.81 |
Appendix C Marginal preserving property of Constant Acceleration Flow
We demonstrate that the flow generated by our Constant Acceleration Flow (CAF) ordinary differential equation (ODE) preserves the marginals of the data distribution, following the definitions and theorem established in [10].
Definition C.1.
For a path-wise continuously differentiable process $X = \{X_t : t \in [0, 1]\}$, we define its expected velocity and acceleration as follows:
$$v^X(x, t) = \mathbb{E}\big[\dot{X}_t \mid X_t = x\big], \qquad a^X(x, t) = \mathbb{E}\big[\ddot{X}_t \mid X_t = x\big]. \tag{19}$$
For $x$ outside the support of $X_t$, the conditional expectation is not defined, and we set $v^X(x, t)$ and $a^X(x, t)$ arbitrarily, for example, $v^X(x, t) = 0$ and $a^X(x, t) = 0$.
Definition C.2.
[10] We say that $X$ is rectifiable if $v^X$ is locally bounded and the solution to the integral equation of the form
$$Z_t = Z_0 + \int_0^t v^X(Z_s, s)\, \mathrm{d}s, \qquad Z_0 = X_0, \tag{20}$$
exists and is unique. In this case, $Z = \{Z_t\}$ is called the rectified flow induced by $X$.
Theorem 1.
[10] Assume $X$ is rectifiable and $Z$ is its rectified flow. Then $\mathrm{Law}(Z_t) = \mathrm{Law}(X_t)$ for all $t \in [0, 1]$.
Refer to [10] for the proof of Theorem 1.
We will now show that our CAF ODE satisfies Theorem 1 by proving that our proposed ODE (4) induces the rectified flow defined in Definition C.2. In (4), we define the CAF ODE as
$$\frac{\mathrm{d}x_t}{\mathrm{d}t} = v(x_0) + a(x_t, t)\, t. \tag{21}$$
By taking the conditional expectation of both sides given $X_t = x$, we obtain
$$\mathbb{E}\big[\dot{X}_t \mid X_t = x\big] = \mathbb{E}\big[v(X_0) + a(X_t, t)\, t \mid X_t = x\big] = v^X(x, t) \tag{22}$$
from Definition C.1. Then, by (22), the solution of the integral equation of the CAF ODE is identical to the solution in Definition C.2:
$$Z_t = Z_0 + \int_0^t \mathbb{E}\big[v(X_0) + a(X_s, s)\, s \mid X_s = Z_s\big]\, \mathrm{d}s \tag{23}$$
$$= Z_0 + \int_0^t v^X(Z_s, s)\, \mathrm{d}s. \tag{24}$$
This indicates that the flow induced by the CAF ODE is also a rectified flow. Therefore, the CAF ODE satisfies the marginal preserving property, i.e., $\mathrm{Law}(Z_t) = \mathrm{Law}(X_t)$, as stated in Theorem 1.
Appendix D Limitation and Broader impacts
D.1 Limitations
One limitation of our model is the increased number of function evaluations (NFE) required for $N$-step generation. While Rectified flow achieves an NFE of $N$ by only computing the velocity at each step, our method requires one additional evaluation, resulting in a total NFE of $N + 1$: we compute the initial velocity once at the beginning and the acceleration at each step. Although this extra evaluation slightly increases the computational cost, it is relatively minor and still enables efficient few-step generation. Moreover, this additional step could be removed by jointly predicting the velocity and acceleration terms with a single model, which we leave for future work. Another limitation is the additional effort required to generate supplementary data. We utilize generated data to create a deterministic coupling of noise and data samples for training CAF. While generating more data enhances our model's performance, it also increases GPU usage, leading to higher carbon emissions.
D.2 Broader Impacts
Recent advancements in generative models hold significant potential for societal benefits across a wide array of applications, such as image and video generation and editing, medical imaging analysis, molecular design, and audio synthesis. Our CAF framework contributes to enhancing the efficiency and performance of existing diffusion models, offering promising directions for positive impacts across multiple domains. This suggests that in practical applications, users can utilize generative models more rapidly and accurately, enabling a broad spectrum of activities. However, it is crucial to acknowledge potential risks that must be carefully managed. The increased accessibility of generative models also broadens the potential for misuse. As these technologies become more widespread, the possibility of their exploitation for fraudulent activities, privacy breaches, and criminal behavior increases. It is vital to ensure their ethical and responsible use to prevent negative impacts. Establishing regulated ethical standards for developing and deploying generative AI technologies is necessary to prevent such misuse. Additionally, imposing restricted access protocols or verification systems to trace and authenticate generated contents will help ensure their responsible use.