2022
DOI: 10.1007/978-3-031-20050-2_17

Subspace Diffusion Generative Models

Cited by 44 publications (51 citation statements) | References 2 publications

“…Therefore, the method is currently intended for offline processing only. However, the fast pace of advancements in diffusion models provides a promising outlook on substantial speed-up of the reverse diffusion process [51]. Given the flexibility of the proposed method, improved sampling schemes are easily adopted, potentially enabling real-time implementation in the future.…”
Section: Discussion
confidence: 99%
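
The excerpt above ties a possible real-time implementation to faster reverse diffusion. As a minimal sketch of one common acceleration, the snippet below visits only every `stride`-th timestep with a deterministic DDIM-style update; `eps_model`, the beta schedule, and the stride are illustrative assumptions, not details of the cited method.

```python
import numpy as np

def ddim_strided_sampler(eps_model, x_T, betas, stride=10):
    """Illustrative accelerated reverse diffusion: visit only every
    `stride`-th timestep with a deterministic DDIM-style update.
    `eps_model(x, t)` is a hypothetical noise-prediction network."""
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    x = x_T
    T = len(betas)
    for t in range(T - 1, 0, -stride):
        t_prev = max(t - stride, 0)
        eps_hat = eps_model(x, t)
        # Estimate the clean sample, then jump directly to the previous kept step.
        x0_hat = (x - np.sqrt(1.0 - alpha_bars[t]) * eps_hat) / np.sqrt(alpha_bars[t])
        x = np.sqrt(alpha_bars[t_prev]) * x0_hat + np.sqrt(1.0 - alpha_bars[t_prev]) * eps_hat
    return x
```

Cutting the number of network evaluations per sample in this way is the kind of speed-up that would be needed for the real-time use the quoted discussion anticipates.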
“…Subspace diffusion models [Jing et al., 2022] also consider correlated diffusion, with a particular emphasis on focusing the diffusion on the most relevant factors of variation for statistical and computational efficiency. Additionally, latent-space diffusion models [Rombach et al., 2022] might be viewed as learning a transformed coordinate system in which the diffusion process can more efficiently model the target distribution.…”
Section: Supplementary Information
confidence: 99%
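
The excerpt above credits subspace diffusion with concentrating the noising process on the most relevant factors of variation. A minimal sketch of that idea, restricting a forward-diffusion step to a PCA subspace of the data, is below; the PCA basis, `alpha_bar_t`, and the function names are assumptions made for illustration and are not taken from Jing et al. [2022] or Rombach et al. [2022].

```python
import numpy as np

def fit_pca_subspace(data, k):
    """Top-k principal directions of the training data, standing in for the
    'most relevant factors of variation' (illustrative, not the authors' code)."""
    mean = data.mean(axis=0)
    _, _, vt = np.linalg.svd(data - mean, full_matrices=False)
    return mean, vt[:k]                       # mean (d,), basis (k, d)

def noise_in_subspace(x, mean, basis, alpha_bar_t, rng):
    """Apply one forward-diffusion step only to the subspace coordinates,
    so noise is injected along correlated directions rather than isotropically."""
    z = (x - mean) @ basis.T                  # project to k dimensions
    eps = rng.standard_normal(z.shape)
    z_t = np.sqrt(alpha_bar_t) * z + np.sqrt(1.0 - alpha_bar_t) * eps
    return mean + z_t @ basis                 # lift back to the ambient space
```

Because the components outside the basis are left untouched at that step, a score network operating in the subspace only has to model k coordinates, which is the statistical and computational saving the quote alludes to.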
“…In this work, we leverage the recently popularized class of denoising diffusion models, which have already shown promising results for protein structure generation, conformer generation, and docking. In particular, we train a score-based generative model on CG structures sampled from the CG equilibrium distribution.…”
Section: Introduction
confidence: 99%
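
The final excerpt mentions training a score-based generative model on coarse-grained (CG) structures. A hedged sketch of the usual noise-prediction objective such a model could minimize is below; `eps_model`, the schedule `alpha_bars`, and the loss form are generic assumptions, not the cited work's implementation.

```python
import numpy as np

def noise_prediction_loss(eps_model, x0, alpha_bars, rng):
    """One stochastic estimate of a standard denoising (noise-prediction)
    training objective for a score-based generative model.
    `eps_model(x_t, t)` is a hypothetical network; x0 would be a coarse-grained
    structure drawn from the CG equilibrium distribution."""
    t = rng.integers(len(alpha_bars))                      # random noise level
    eps = rng.standard_normal(x0.shape)                    # injected Gaussian noise
    x_t = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    eps_hat = eps_model(x_t, t)                            # model's noise estimate
    return np.mean((eps_hat - eps) ** 2)                   # regress onto true noise
```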