2022
DOI: 10.48550/arxiv.2205.12524
Preprint

Accelerating Diffusion Models via Early Stop of the Diffusion Process

Abstract: Denoising Diffusion Probabilistic Models (DDPMs) have achieved impressive performance on various generation tasks. By modeling the reverse process of gradually diffusing the data distribution into a Gaussian distribution, generating a sample in DDPMs can be regarded as iteratively denoising a randomly sampled Gaussian noise. However, in practice DDPMs often need hundreds or even thousands of denoising steps to obtain a high-quality sample from the Gaussian noise, leading to extremely low inference efficiency. …
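The iterative denoising described in the abstract can be sketched as a standard DDPM ancestral sampling loop. The sketch below is not the authors' code: `eps_theta` is a hypothetical placeholder for a trained noise-prediction network, and the schedule follows the common linear-beta convention. The `start_t` / `x_start` arguments illustrate the paper's "early stop" idea of beginning the reverse process from an intermediate step rather than from pure Gaussian noise at t = T.

```python
import numpy as np

def make_schedule(T, beta_start=1e-4, beta_end=0.02):
    """Linear beta schedule and derived quantities (a common DDPM convention)."""
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    return betas, alphas, alpha_bars

def ddpm_sample(eps_theta, shape, T=1000, start_t=None, x_start=None, seed=0):
    """Iteratively denoise from t = start_t (default T) down to t = 1.

    Early stop of the diffusion process corresponds to choosing start_t < T
    and supplying x_start, a partially diffused sample, instead of pure noise.
    """
    rng = np.random.default_rng(seed)
    betas, alphas, alpha_bars = make_schedule(T)
    t0 = T if start_t is None else start_t
    x = rng.standard_normal(shape) if x_start is None else x_start
    for t in range(t0, 0, -1):       # t = t0, ..., 1 (1-indexed timesteps)
        i = t - 1                    # index into the schedule arrays
        eps = eps_theta(x, t)        # predicted noise at this step
        coef = betas[i] / np.sqrt(1.0 - alpha_bars[i])
        mean = (x - coef * eps) / np.sqrt(alphas[i])
        z = rng.standard_normal(shape) if t > 1 else 0.0  # no noise at t = 1
        x = mean + np.sqrt(betas[i]) * z
    return x

# Toy usage with a dummy network that predicts zero noise; starting halfway
# through (start_t=25 of T=50) runs half as many denoising iterations.
sample = ddpm_sample(lambda x, t: np.zeros_like(x), shape=(4,),
                     T=50, start_t=25, x_start=np.zeros(4))
print(sample.shape)
```

Skipping the early timesteps this way trades the cost of those network evaluations for the need to supply a reasonable partially diffused starting point, which is the trade-off the paper's method addresses.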

Cited by 10 publications (11 citation statements)
References 13 publications
“…Vanilla DDPM [5]: 0.796 4.179 0.669 0.779; TDPM [12]: 0.801 3.608 0.676 0.776; ES-DDPM [13]: 0.803 3.523 0.678 0.777; PD-DDPM: 0.812 3.494 0.689 0.800…”
Section: Methods
confidence: 99%
“…Bayesian U-Net [20] and Probabilistic U-Net [21] are representative methods that can estimate the uncertainty of segmentation. We also compare PD-DDPM with other accelerated DDPMs, including TDPM [12] and ES-DDPM [13]. It should be emphasized that the ensemble size in the comparison methods is also set to 5.…”
Section: Comparison of Segmentation Performance
confidence: 99%
“…It usually requires hundreds or even thousands of network evaluations, which limits various downstream applications of DDPMs. Methods already exist (Watson et al., 2021; Lyu et al., 2022; Lam et al., 2022; Salimans & Ho, 2021) that depend on an extra training stage to derive fast sampling. These training-based methods usually incur substantial training costs for different data manifolds and tasks, which has inspired many works to explore training-free samplers based on numerical methods.…”
Section: Denoising Diffusion Probabilistic Models
confidence: 99%
“…Although we independently came up with this idea, others have explored a similar approach for image editing [33]. Relatedly, many recent papers have considered running reverse diffusion from some intermediate step rather than from pure Gaussian noise [32, 50, 38, 52], as we do in this paper.…”
Section: Related Work
confidence: 99%