Handbook of Mathematical Models and Algorithms in Computer Vision and Imaging 2023
DOI: 10.1007/978-3-030-98661-2_68
Learned Regularizers for Inverse Problems

Cited by 29 publications (56 citation statements). References 21 publications.
“…6. (Left to Right) ground truth, reconstruction using Wavelet sparsity regularization, Adversarial Regularizer [24], postprocessing using UNet [18], UNet with FJA&FJ and UNet with SJA&SJ, postprocessing using UNet [18] regularized with SJA&SJ.…”
Section: Discussion
Confidence: 99%
“…Finally, there is also a new suite of techniques that leverages knowledge of the forward operator, as follows: the reconstruction of the desired data vector given the measurement vector is carried out via a (regularized) optimization problem using the underlying model; however, the regularizer within this optimization problem is itself learnt directly from a set of data examples. One such recent (unsupervised) approach relies on adversarially learnt, data-dependent regularizers [24]. Another suite of techniques instead uses data representations learnt directly from data within any underlying model-based optimization problem.…”
Section: Model-Aware Data-Driven Approaches
Confidence: 99%
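The passage above describes variational reconstruction with a learned regularizer: minimize a data-fidelity term plus a regularizer R that is itself learnt from examples. As a minimal sketch (not the chapter's actual method), the snippet below runs gradient descent on 0.5·||Ax − y||² + λ·R(x); a smooth hand-coded surrogate stands in for a trained (e.g., adversarially learnt) network, and all names (`reconstruct`, `grad_R`) are hypothetical.

```python
import numpy as np

def reconstruct(A, y, lam=0.1, step=0.01, iters=500, eps=1e-6):
    """Minimize 0.5*||A x - y||^2 + lam * R(x) by gradient descent.

    R is a placeholder for a learned regularizer; here we use the smooth
    surrogate R(x) = sum(sqrt(x_i^2 + eps)) purely for illustration.
    """
    def grad_R(x):
        # Gradient of the smooth surrogate (stand-in for a network gradient).
        return x / np.sqrt(x**2 + eps)

    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y) + lam * grad_R(x)
        x -= step * grad
    return x

# Toy example: recover a sparse vector from noiseless linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.0, -0.5]
y = A @ x_true
x_hat = reconstruct(A, y)
```

In the adversarial-regularizer setting of [24], `grad_R` would instead be the gradient of a trained critic network, but the reconstruction loop is the same.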
“…In this sense, the algorithm can be considered semi-supervised. This idea was followed, for example, in Lunz et al (2018) and Li et al (2020). Taking a Bayesian viewpoint, one can also learn prior distributions as deep NNs; this was done in Barbano et al (2020).…”
Section: Deep Neural Networks Meet Inverse Problems
Confidence: 99%