2019
DOI: 10.1017/s0962492919000059
Solving inverse problems using data-driven models

Abstract: Recent research in inverse problems seeks to develop a mathematically coherent foundation for combining data-driven models, and in particular those based on deep learning, with domain-specific knowledge contained in physical–analytical models. The focus is on solving ill-posed inverse problems that are at the core of many challenging applications in the natural sciences, medicine and life sciences, as well as in engineering and industrial applications. This survey paper aims to give an account of some of the m…

Cited by 499 publications (421 citation statements)
References 463 publications (769 reference statements)
“…However, when inverse problems are extremely ill-conditioned, the MBIR approach using handcrafted regularizers g(x) has limitations in recovering signals. Alternatively, there has been a growing trend in learning sparsifying regularizers (e.g., convolutional regularizers [16], [17], [32], [34], [35]) from training datasets and applying the learned regularizers to the following MBIR problem [33]:…”
Section: Background: MBIR Using Learned Regularizers (mentioning)
confidence: 99%
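A minimal sketch of the MBIR formulation this passage describes: minimize a data-fidelity term plus a weighted regularizer g(x). Here a simple Huber-type surrogate stands in for a learned sparsifying regularizer; the function names, step-size rule, and Huber choice are illustrative assumptions, not the cited papers' method.

```python
import numpy as np

def mbir(A, y, reg_grad, lam=0.1, n_iter=300):
    """Gradient-descent MBIR sketch:
        min_x 0.5 * ||A x - y||^2 + lam * g(x),
    where reg_grad(x) returns the (sub)gradient of the regularizer g.
    A learned convolutional regularizer would supply reg_grad from
    trained filters; here it is pluggable."""
    step = 0.5 / np.linalg.norm(A, 2) ** 2  # conservative step size
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x -= step * (A.T @ (A @ x - y) + lam * reg_grad(x))
    return x

# Illustrative smooth surrogate for a sparsity penalty (assumption):
def huber_grad(x, delta=0.05):
    return np.clip(x / delta, -1.0, 1.0)

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))   # toy forward operator
x_true = np.zeros(20)
x_true[:3] = 1.0                    # sparse ground truth
y = A @ x_true
x_hat = mbir(A, y, huber_grad)
```

The regularizer enters only through its (sub)gradient, which is what makes swapping a handcrafted penalty for a learned one straightforward in this scheme.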
“…The downside of applying solution (33) is that it would require additional memory to store the corresponding extrapolated points {ź_{l,k}^{(i+1)}}, and the memory grows with N, L and K. Considering the sharpness of the majorizer in (30), i.e., M_Z = I_N, and the memory issue, it is reasonable to consider the solution (33) with no extrapolation, i.e., {E… . While having these benefits, empirically (31) has convergence rates equivalent to (33) using {λ_Z = 1 + }; see Fig.…”
Section: B. Sparse Code Update: {Z_{l,k}}-Block Optimization (mentioning)
confidence: 99%
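The passage above contrasts a proximal update with and without extrapolation, and notes the extra memory extrapolation requires. A generic sketch of that trade-off, using FISTA-style momentum on an ℓ1-regularized least-squares sparse-coding subproblem — the problem instance and the k/(k+3) momentum schedule are illustrative assumptions, not the cited paper's exact majorized updates:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(A, y, lam=0.1, n_iter=150, extrapolate=True):
    """Proximal gradient for min_z 0.5*||A z - y||^2 + lam*||z||_1.
    With extrapolate=True, the point v is formed from both z and
    z_old, so the previous iterate must be kept in memory -- the
    storage cost the quoted passage refers to."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of data term
    z = np.zeros(A.shape[1])
    z_old = z.copy()
    for k in range(n_iter):
        beta = k / (k + 3.0) if extrapolate else 0.0
        v = z + beta * (z - z_old)         # extrapolated point
        z_old = z
        z = soft_threshold(v - A.T @ (A @ v - y) / L, lam / L)
    return z

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 50))          # overcomplete dictionary
z_true = np.zeros(50)
z_true[[4, 17, 33]] = [1.0, -2.0, 1.5]     # sparse code
y = A @ z_true
z_fast = sparse_code(A, y, extrapolate=True)
z_slow = sparse_code(A, y, extrapolate=False)
```

Setting `extrapolate=False` recovers plain ISTA, which needs no stored previous iterate, at the cost of a slower worst-case rate.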
“…The standard approach to overcome the ill-posedness is to incorporate a priori information, an approach that is known as regularization. There are plenty of regularization methods for computed tomography, ranging from established model-based methods [5, 16, 48] to more recent end-to-end data-driven approaches; for the latter, see the survey in [3]. Additionally, the sparsity of the solution under the shearlet representation [8] or the adjoint operator [1] can be used for regularization.…”
Section: Higher-order Wavefront Set Extraction and Inverse Problems (mentioning)
confidence: 99%
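As a minimal illustration of the regularization idea in this passage — stabilizing an ill-posed problem with a priori information — here is a generic Tikhonov example (an assumption for exposition; the cited works use model-based, shearlet-sparsity, and data-driven regularizers rather than this plain quadratic penalty):

```python
import numpy as np

def tikhonov(A, y, alpha):
    """Tikhonov regularization for ill-posed A x = y:
        x_alpha = argmin ||A x - y||^2 + alpha * ||x||^2
                = (A^T A + alpha I)^{-1} A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

# Ill-conditioned forward operator with rapidly decaying spectrum:
rng = np.random.default_rng(2)
n = 30
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** np.linspace(0, -8, n)           # condition number ~1e8
A = U @ np.diag(s) @ V.T

x_true = rng.standard_normal(n)
y = A @ x_true + 1e-4 * rng.standard_normal(n)   # noisy measurements

x_naive = np.linalg.solve(A, y)      # small singular values amplify the noise
x_reg = tikhonov(A, y, alpha=1e-6)   # damped, stable reconstruction
```

The penalty filters out the directions associated with tiny singular values, trading a small bias for a large reduction in noise amplification.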
“…The relationship between k-space data and the MR image can be described using the well-known MRI encoding matrix in an equality constraint [1,11]. With such a computationally scalable and tractable prior model, the maximum a posteriori estimate can serve as an effective estimator [8] for the high-dimensional image reconstruction problem tackled in this study. To summarize, the Bayesian inference for MRI reconstruction had two separate models: the k-space likelihood model, used to encourage data consistency, and the image prior model, used to exploit knowledge learned from an MRI database.…”
Section: Introduction (mentioning)
confidence: 99%
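A toy sketch of the MAP formulation this passage describes: a k-space data-consistency (likelihood) term plus an image prior. The 1-D setting, the random sampling mask, and the quadratic smoothness prior standing in for the learned database prior are all illustrative assumptions:

```python
import numpy as np

def map_recon(y, mask, lam=0.05, n_iter=300):
    """Toy 1-D MAP reconstruction from undersampled k-space:
        min_x 0.5*||M F x - y||^2 + 0.5*lam*||D x||^2,
    where F is the unitary DFT (the MRI encoding matrix here),
    M the sampling mask, and D a finite-difference operator
    acting as a stand-in smoothness prior."""
    n = len(mask)
    F = lambda x: np.fft.fft(x) / np.sqrt(n)             # unitary DFT
    Fh = lambda k: np.real(np.fft.ifft(k) * np.sqrt(n))  # adjoint, real output
    x = Fh(mask * y)                                     # zero-filled start
    step = 0.4   # safe: ||F^H M F|| <= 1 and lam * ||D^T D|| <= 4 * lam
    for _ in range(n_iter):
        data_grad = Fh(mask * (mask * F(x) - y))             # data consistency
        prior_grad = 2 * x - np.roll(x, 1) - np.roll(x, -1)  # D^T D x
        x -= step * (data_grad + lam * prior_grad)
    return x

n = 64
rng = np.random.default_rng(3)
x_true = np.sin(2 * np.pi * np.arange(n) / n)    # smooth test "image"
mask = (rng.random(n) < 0.7).astype(float)       # ~70% of k-space sampled
y = mask * (np.fft.fft(x_true) / np.sqrt(n))     # observed k-space data
x_hat = map_recon(y, mask)
```

The two gradient terms mirror the two models the passage separates: the likelihood term pulls the iterate toward the measured k-space samples, while the prior term fills in the unsampled frequencies from assumed image structure.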