2020
DOI: 10.1088/1361-6420/ab6d57

NETT: solving inverse problems with deep neural networks

Abstract: Recovering a function or high-dimensional parameter vector from indirect measurements is a central task in various scientific areas. Several methods for solving such inverse problems are well developed and well understood. Recently, novel algorithms using deep learning and neural networks for inverse problems have appeared. While still in their infancy, these techniques show astonishing performance for applications such as low-dose CT and various sparse data problems. However, there are few theoretical results for deep…
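
The paper's core idea (NETT, "network Tikhonov") is to replace the hand-crafted penalty in classical Tikhonov regularization with a learned one, minimizing a functional of the form ||F(x) - y||^2 + alpha * psi(Phi(x)), where Phi is a trained network. Below is a minimal, hedged sketch of that idea in PyTorch; the forward operator A, the placeholder regularizer network reg_net, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch

# Minimal NETT-style reconstruction sketch (illustrative assumptions
# throughout: the forward operator A, the regularizer network, and all
# hyperparameters are stand-ins, not the paper's implementation).
torch.manual_seed(0)
m, n = 50, 100                            # underdetermined: fewer measurements than unknowns
A = torch.randn(m, n)                     # stand-in forward operator
x_true = torch.zeros(n)
x_true[::10] = 1.0                        # sparse ground truth
y = A @ x_true + 0.01 * torch.randn(m)    # noisy indirect measurements

# Placeholder for a *trained* regularizer network Phi; untrained here.
reg_net = torch.nn.Sequential(
    torch.nn.Linear(n, 64), torch.nn.ReLU(), torch.nn.Linear(64, 64)
)
for p in reg_net.parameters():
    p.requires_grad_(False)               # regularizer stays fixed during reconstruction

alpha = 0.05                              # regularization weight (assumed)
x = torch.zeros(n, requires_grad=True)
opt = torch.optim.Adam([x], lr=1e-2)

for _ in range(500):                      # gradient descent on the NETT functional
    opt.zero_grad()
    data_fit = torch.sum((A @ x - y) ** 2)    # ||A x - y||^2
    penalty = torch.sum(reg_net(x) ** 2)      # psi(Phi(x)) = ||Phi(x)||^2
    (data_fit + alpha * penalty).backward()
    opt.step()

print(f"data residual: {torch.norm(A @ x.detach() - y):.4f}")
```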

Cited by 198 publications (193 citation statements) | References 52 publications

“…The inverse problem that we are attempting to solve in CCM is an ill-conditioned linear system of equations that can be represented approximately as y = b*x + c, where y is the recorded sensor image, b is the system transfer function, x is the unknown object, and c is the noise. ANNs have previously been shown to be good candidates for solving poorly conditioned inverse problems such as this [8]. Specifically, the universal approximation theorem guarantees that an ANN can closely approximate any continuous function similar to the one outlined above [9].…”
Section: Neural Network Architecture
mentioning confidence: 98%
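
To make the quoted setup concrete, here is a minimal sketch of training a small network to invert an ill-conditioned linear system y = Bx + noise. The operator B, the dimensions, the network, and the training schedule are all illustrative assumptions, not the cited paper's model.

```python
import torch

# Sketch: supervised inversion of an ill-conditioned linear system
# y = B x + c (noise). B, shapes, and the network are assumptions.
torch.manual_seed(0)
n = 64
U, _ = torch.linalg.qr(torch.randn(n, n))
V, _ = torch.linalg.qr(torch.randn(n, n))
s = torch.logspace(0, -4, n)              # rapidly decaying singular values
B = U @ torch.diag(s) @ V.T               # condition number ~ 1e4

def simulate(batch):
    """Draw (object, measurement) training pairs."""
    x = torch.randn(batch, n)
    y = x @ B.T + 1e-3 * torch.randn(batch, n)   # y = Bx + noise
    return x, y

net = torch.nn.Sequential(                # small MLP mapping y back to x
    torch.nn.Linear(n, 128), torch.nn.ReLU(), torch.nn.Linear(128, n)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(2000):                     # train on simulated pairs
    x, y = simulate(256)
    loss = torch.mean((net(y) - x) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()

x_test, y_test = simulate(1)
print("reconstruction error:", torch.norm(net(y_test) - x_test).item())
```
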
“…Deep learning, and in particular deep convolutional neural networks (CNNs), has been applied very successfully to a great variety of pattern recognition and image processing tasks. Recently, a great deal of research has gone into solving inverse problems by incorporating deep learning techniques, including efficient and accurate image reconstruction methods for tomographic problems [2,5,12,13,14,28,17].…”
Section: Learning the Weights in the UBP
mentioning confidence: 99%
“…Many other learned iterative methods are based on classical iterative methods, but execute far fewer steps in their reconstruction procedure: variational networks [25]–[27] can be seen as a learned variant of proximal gradient methods, where kernel-function pairs of the regularisation term are learned. Learning the regularisation term is also the idea behind [28] and [29]. In [30], a learned variant of gradient descent is applied, while learned proximal operators are investigated in [31].…”
Section: Introduction
mentioning confidence: 99%
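
As a hedged illustration of the learned proximal gradient idea described above: each iteration takes a classical gradient step on the data term and then applies a learned operator in place of the hand-crafted proximal map. The network prox_net below is an untrained placeholder (in the cited methods it would be trained end-to-end across the unrolled iterations), and the operator and step size are assumptions for the sketch.

```python
import torch

# Sketch of a learned proximal gradient iteration: a gradient step on the
# data term followed by a learned operator in place of the proximal map.
# The operator A, step size, and prox_net are assumptions for illustration.
torch.manual_seed(0)
m, n = 80, 100
A = torch.randn(m, n) / n ** 0.5          # stand-in forward operator
x_true = torch.randn(n)
y = A @ x_true + 0.01 * torch.randn(m)

# Untrained placeholder; in the cited methods this network is trained
# end-to-end across the unrolled iterations.
prox_net = torch.nn.Sequential(
    torch.nn.Linear(n, n), torch.nn.ReLU(), torch.nn.Linear(n, n)
)

L = torch.linalg.matrix_norm(A, ord=2) ** 2   # Lipschitz constant of the gradient
tau = 0.5 / L                                 # step size below 1/L

x = torch.zeros(n)
with torch.no_grad():
    for _ in range(10):                       # a few unrolled iterations
        grad = A.T @ (A @ x - y)              # gradient of (1/2)||Ax - y||^2
        x = prox_net(x - tau * grad)          # learned proximal step

print("data residual:", torch.norm(A @ x - y).item())
```
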
“…This makes them precarious for implementation in many applications, especially in clinical practice. A convergence proof for a method in which the regulariser was learned is given in [29]. Very recently, Banert et al. [36] presented a convergence proof for learned algorithms that make use of proximal operators.…”
Section: Introduction
mentioning confidence: 99%