2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw53098.2021.00043
Generic Image Restoration with Flow Based Priors

Cited by 11 publications (13 citation statements)
References 14 publications
“…In this work we explored and clarified the tight relationships between joint MAP-x-z estimation, splitting and continuation schemes, and the more common MAP-z estimator in the context of inverse problems with a generative prior. On the other hand, MAP-x estimators (which are otherwise standard in Bayesian imaging) remained largely unexplored in the context of generative priors, due to the optimization challenges they impose, until the recent work of Helminger et al. [24] and Whang et al. [57] showed that a normalizing-flow-based generative model overcomes those challenges and renders the problem tractable. Similarly, Oberlin and Verm [35] use Glow (an invertible normalizing flow) to compare synthesis-based and analysis-based reconstructions.…”
Section: Discussion
confidence: 99%
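The statement above contrasts MAP-z estimation (optimizing over the flow's latent code) with MAP-x estimation (optimizing over the image directly), which a normalizing flow makes tractable because its density and gradient are available in closed form via the change of variables. The following is a minimal sketch of MAP-x with a toy 1-D "flow" standing in for a trained network; all names (`flow_inverse`, `map_x`, the affine map with `mu`, `sigma`) are illustrative assumptions, not the cited papers' implementations.

```python
import numpy as np

def flow_inverse(x, mu=0.0, sigma=1.0):
    """Toy invertible map x -> z with constant log|det dz/dx| = -log(sigma)
    per element; stands in for the inverse of a trained flow G."""
    z = (x - mu) / sigma
    log_det = -np.log(sigma) * x.size
    return z, log_det

def neg_log_posterior(x, y, A, noise_var, mu=0.0, sigma=1.0):
    """-log p(x | y) up to a constant: Gaussian data term plus the
    flow prior -log p(x) obtained by change of variables."""
    z, log_det = flow_inverse(x, mu, sigma)
    data = np.sum((A @ x - y) ** 2) / (2 * noise_var)
    prior = 0.5 * np.sum(z ** 2) - log_det
    return data + prior

def map_x(y, A, noise_var, steps=500, lr=0.01):
    """Gradient descent directly on the image x (MAP-x). Tractable here
    because the prior gradient is analytic (with sigma=1 it is simply z)."""
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        z, _ = flow_inverse(x)
        grad = A.T @ (A @ x - y) / noise_var + z  # data grad + prior grad
        x -= lr * grad
    return x
```

For an identity forward operator and unit noise, the objective reduces to 0.5‖x − y‖² + 0.5‖x‖², whose minimizer is x = y/2, which the iteration recovers.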
See 1 more Smart Citation
“…In this work we explored and clarified the tight relationships between joint map-x-z estimation, splitting and continuation schemes and the more common map-z estimator in the context of inverse problems with a generative prior. On the other hand map-x estimators (which are otherwise standard in bayesian imaging) remained largely unexplored in the context of generative priors, due to the optimization challenges they impose, until the recent work of Helminger et al [24], Whang et al [57] showed that a normalizing flow-based generative model allows to overcome those challenges and deems this problem tractable. Similarly Oberlin and Verm [35] use Glow (an invertible normalizing flow) to compare synthesis-based and analysis-based reconstructions.…”
Section: Discussionmentioning
confidence: 99%
“…• the inversion of G, and • the hard constraint x ∈ M. These operations are all memory- and/or computationally intensive, except when they are partially addressed by the use of a normalizing flow as in [24, 57].…”
confidence: 99%
“…Learned importance sampling has also been studied for rendering complex luminaires [ZBX * 21]. Besides these tasks, NFs have been utilized in computer graphics and vision for image [KD18, WZY22] and video [KBE * 19] generation, compression [HDGS20], super‐resolution [LDVGT20], domain translation [GCS * 20], and uncertainty quantification [WLM * 22, SFMP20]. We build upon Neural Importance Sampling [MMR * 19] and propose a lightweight model for sampling and PDF evaluation of environment maps, as we describe in Section 4.…”
Section: Related Work
confidence: 99%
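The sampling-and-PDF-evaluation use of normalizing flows mentioned above rests on one mechanism: push a sample from a simple base density through an invertible map and evaluate its density with the change-of-variables formula. A minimal 1-D sketch follows; the `tanh` transform is an illustrative assumption, not the cited papers' model.

```python
import numpy as np

def forward(z):
    """Illustrative invertible map z -> x = tanh(z), with its
    log-Jacobian log|dx/dz| = log(1 - tanh(z)^2)."""
    x = np.tanh(z)
    log_det = np.log(1.0 - x ** 2)
    return x, log_det

def log_pdf_x(z):
    """Density of the pushed-forward sample x = tanh(z) by change of
    variables: log p(x) = log p(z) - log|dx/dz|, standard normal base."""
    log_pz = -0.5 * z ** 2 - 0.5 * np.log(2 * np.pi)
    _, log_det = forward(z)
    return log_pz - log_det
```

Importance sampling then weights each drawn sample by the ratio of the target density to `exp(log_pdf_x(z))`.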
“…On the other hand, another notable class of approaches recently put forward is based on flow-based invertible neural networks. These generative networks have invertible architectures and a latent space of the same size as the image space, and thus have zero representation error [1,3,22,20,28,34,33,13]. Both these types of approaches attempt to limit the representation error during training or inversion.…”
Section: Related Work
confidence: 99%
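The zero-representation-error property claimed above follows directly from invertibility: a flow maps any image vector to a latent of the same dimension and back exactly, so no image lies off the model's range. A single affine coupling layer (the basic invertible block of flows such as Glow) demonstrates this; the specific `scale`/`shift` parameterization below is an illustrative assumption.

```python
import numpy as np

def coupling_forward(x, scale=0.5, shift=1.0):
    """Affine coupling: the first half of x passes through unchanged and
    parameterizes an invertible affine map of the second half."""
    x1, x2 = np.split(x, 2)
    z2 = x2 * np.exp(scale * np.tanh(x1)) + shift * np.tanh(x1)
    return np.concatenate([x1, z2])

def coupling_inverse(z, scale=0.5, shift=1.0):
    """Exact inverse: undo the affine map using the untouched first half."""
    z1, z2 = np.split(z, 2)
    x2 = (z2 - shift * np.tanh(z1)) * np.exp(-scale * np.tanh(z1))
    return np.concatenate([z1, x2])
```

Because latent and image dimensions match and the inverse is exact, `coupling_inverse(coupling_forward(x))` reproduces any `x` to machine precision, unlike a GAN or autoencoder prior whose low-dimensional range generally misses the target image.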