2021
DOI: 10.1109/tsp.2021.3125601

Deep Learning Model-Aware Regularization With Applications to Inverse Problems

Cited by 7 publications (6 citation statements); references 47 publications.
“…As we highlighted in Sections II and IV, theoretical studies of data-driven deep learning methods for inverse problems still largely lack a good understanding of the precise role of training data, including the fundamental notion of generalization. Beyond the works on unfolding methods highlighted in Section IV, an example work on the generalization error in inverse problems is [3], with the generalization bounds depending on (i) a complexity measure of the signal space, and (ii) norms of the Jacobian matrices of both the network itself and the network composed with the forward model.…”
Section: Other Topics On Deep Learning Methods In Inverse Problems
Citation type: mentioning (confidence: 99%)
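
As a rough illustration of the two Jacobian quantities entering such a bound, the Python sketch below (a hypothetical toy setup, not the construction or estimator of [3]) uses PyTorch autograd to evaluate the spectral norms of the Jacobian of a small network G and of G composed with a linear forward model A at a single input:

# Sketch under assumed toy dimensions; G, A, and z are illustrative stand-ins,
# not objects defined in the cited paper.
import torch

k, n, m = 20, 64, 32                      # latent, signal, and measurement dimensions
G = torch.nn.Sequential(                  # toy reconstruction/generator network
    torch.nn.Linear(k, 128), torch.nn.ReLU(), torch.nn.Linear(128, n)
)
A = torch.randn(m, n) / m ** 0.5          # toy linear forward model

z = torch.randn(k)
J_G = torch.autograd.functional.jacobian(G, z)                      # Jacobian of G at z
J_AG = torch.autograd.functional.jacobian(lambda u: A @ G(u), z)    # Jacobian of A composed with G

print("||J_G||_2  =", torch.linalg.matrix_norm(J_G, ord=2).item())
print("||J_AG||_2 =", torch.linalg.matrix_norm(J_AG, ord=2).item())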
“…(i) G is a Lipschitz continuous function, with Lipschitz constant denoted by L; (ii) G is a neural network with ReLU activations, and the width and depth of the network are denoted by w and d. The Lipschitz assumption can easily be shown to be satisfied by neural networks with Lipschitz activation functions (e.g., ReLU, sigmoid, and more) and bounded weights, and the ReLU network assumption is also natural in view of the ubiquity of ReLU networks in practice. While the second class is essentially encompassed by the first, it is still of interest to study it separately, since doing so yields slightly stronger results, as well as further insights via a distinct analysis.…”
Section: A Background
Citation type: mentioning (confidence: 99%)
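
For concreteness, one standard (though generally loose) way to certify the Lipschitz assumption for a ReLU network with bounded weights is to multiply the spectral norms of its weight matrices, since ReLU itself is 1-Lipschitz. The short sketch below is illustrative only and is not taken from the cited paper:

# Crude upper bound L <= prod_i ||W_i||_2 on the Lipschitz constant of a ReLU
# network; valid because ReLU is 1-Lipschitz and composition multiplies bounds.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(20, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, 64),
)

L_bound = 1.0
for layer in net:
    if isinstance(layer, torch.nn.Linear):
        L_bound *= torch.linalg.matrix_norm(layer.weight, ord=2).item()

print("Lipschitz upper bound:", L_bound)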
“…When G satisfies the Lipschitz property (Theorem 1), the idea is to establish the desired behavior on a finite subset of S = {G(z) : ‖z‖₂ ≤ r}, and then transfer this to the full set. When working with a finite subset, one can study the norm-preserving properties of Gaussian matrices, as pioneered by Johnson and Lindenstrauss [66]. The rough intuition behind the scaling on m is that we need to cover S such that every signal in S is δ-close to some point, and by the Lipschitz property of G, this amounts to similarly covering {z ∈ ℝ^k : ‖z‖₂ ≤ r} with closeness δ/L.…”
Section: B Statistical Upper Bounds On the Reconstruction Error
Citation type: mentioning (confidence: 99%)
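
Up to constants, the covering step described above can be made explicit as follows (a schematic summary under the stated Lipschitz assumption, not the exact statement of the cited theorem): a (δ/L)-net of the ball {z ∈ ℝ^k : ‖z‖₂ ≤ r} contains at most (3rL/δ)^k points, its image under G is then a δ-net of S, and applying the Gaussian norm-preservation bound over this finite set yields a measurement count on the order of m ≳ (k/δ²) log(rL/δ).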
“…Image processing is the best-known application of behavioral approaches to inverse problems. To cite just a few examples, classical DNNs are compared in [14] with classical sparse reconstruction algorithms, while several CNNs are presented in [15] for medical applications of magnetic resonance imaging. Other possible approaches include recurrent neural networks (where node-connecting weights form a directed graph) and generative adversarial networks (two networks competing in a sort of game [13], each pursuing a different objective in the data processing, which regularizes the overall behavior).…”
Section: Introduction
Citation type: mentioning (confidence: 99%)