2018
DOI: 10.48550/arxiv.1806.03963

Neural Proximal Gradient Descent for Compressive Imaging

Abstract: Recovering high-resolution images from limited sensory data typically leads to a serious ill-posed inverse problem, demanding inversion algorithms that effectively capture the prior information. Learning a good inverse mapping from training data faces severe challenges, including: (i) scarcity of training data; (ii) need for plausible reconstructions that are physically feasible; (iii) need for fast reconstruction, especially in real-time applications. We develop a successful system solving all these challenge…
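The architecture the abstract describes is a proximal gradient iteration in which the proximal map is a trained network, unrolled for a fixed number of steps. Below is a minimal numpy sketch of that iteration, with soft-thresholding standing in for the learned proximal network; the toy problem, function names, and step-size choice are illustrative assumptions, not details from the paper.

```python
import numpy as np

def neural_pgd(y, A, prox, num_iters=200, step=None):
    """Unrolled proximal gradient descent:
        x_{k+1} = prox(x_k - step * A^T (A x_k - y)).
    The paper trains a residual network to act as `prox`;
    here any callable works."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L for the quadratic data term
    x = A.T @ y                                 # crude initialization from measurements
    for _ in range(num_iters):
        grad = A.T @ (A @ x - y)                # gradient of 0.5 * ||A x - y||^2
        x = prox(x - step * grad)               # (learned) projection toward the prior
    return x

# Stand-in prox: soft-thresholding, i.e. a sparsity prior.
prox = lambda v, t=0.01: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256)) / 8.0        # toy compressive operator
x_true = np.zeros(256)
x_true[rng.choice(256, 8, replace=False)] = 1.0
x_hat = neural_pgd(A @ x_true, A, prox)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```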

Cited by 10 publications (12 citation statements)
References 26 publications (46 reference statements)
“…(1)–(3), where the likelihood $p(u \mid x)$ is a multivariate Gaussian distribution, i.e. $p(u \mid x) \propto e^{-\frac{1}{2}\left\lVert \Sigma^{-1/2}(u - x) \right\rVert_2^2}$. Including the prior and taking the negative logarithm gives the expression of the denoiser:…”
Section: General Gaussian Denoiser
mentioning confidence: 99%
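Spelling the quoted step out: with that Gaussian likelihood and a prior $p(x)$, minimizing the negative log posterior yields the MAP denoiser. This is a reconstruction from the quoted assumptions, not copied from the citing paper.

```latex
\hat{x}(u)
  = \arg\max_{x}\; \log p(x \mid u)
  = \arg\min_{x}\; \tfrac{1}{2}\bigl\lVert \Sigma^{-1/2}(u - x) \bigr\rVert_2^2 \;-\; \log p(x)
```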
“…In order to increase the interpretability and genericity of deep neural networks, the field is evolving towards methods that combine them with traditional optimization algorithms. A popular approach, seen for example in [1], [2], [3], [4], [5], [6], [7], consists in defining so-called "unrolled" neural…”
Section: Introduction
mentioning confidence: 99%
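Written out, the generic template behind such "unrolled" networks, in notation of my own choosing rather than the citing paper's: $K$ solver iterations become $K$ layers with learnable parameters $\theta_k$, trained end to end on reconstruction pairs.

```latex
x^{(k+1)} = \Lambda_{\theta_k}\!\bigl(x^{(k)},\, \nabla f\bigl(x^{(k)}; y\bigr)\bigr),
\qquad k = 0, \dots, K-1,
\qquad
\min_{\theta}\; \sum_i \mathcal{L}\bigl(x^{(K)}(y_i; \theta),\, x_i^{\star}\bigr)
```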
“…Because we differentiate through a cone program by implicitly differentiating its solution map, our method can be paired with any algorithm for solving convex cone programs. In contrast, methods that differentiate through every step of an optimization procedure must be customized for each algorithm (e.g., [33,30,56]). Moreover, such methods only approximate the derivative, whereas we compute it analytically (when it exists).…”
Section: Related Work
mentioning confidence: 99%
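The distinction drawn here can be made concrete on a scalar toy problem: differentiate a solution map via the implicit function theorem, without backpropagating through the solver's iterations. Everything below (the fixed-point problem, the solver, the names) is an illustrative assumption, not the cone-program machinery of the cited work.

```python
import numpy as np

def fixed_point(f, theta, x0=0.5, iters=200):
    """Solve x = f(x, theta) by plain iteration (stands in for any solver)."""
    x = x0
    for _ in range(iters):
        x = f(x, theta)
    return x

# Toy solution map: x*(theta) solves x = cos(theta * x).
f = lambda x, theta: np.cos(theta * x)

theta = 1.0
x_star = fixed_point(f, theta)

# Implicit function theorem on the optimality condition
#   x* - f(x*, theta) = 0  =>  dx*/dtheta = (df/dtheta) / (1 - df/dx),
# evaluated at the solution -- no dependence on how x* was computed.
df_dx = -theta * np.sin(theta * x_star)
df_dtheta = -x_star * np.sin(theta * x_star)
grad_implicit = df_dtheta / (1.0 - df_dx)

# Check against a finite difference taken *through the solver*.
eps = 1e-6
grad_fd = (fixed_point(f, theta + eps) - fixed_point(f, theta - eps)) / (2 * eps)
print(grad_implicit, grad_fd)  # agree closely without differentiating the loop
```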
“…Currently, deep learning (DL) techniques have been widely adopted in CT and demonstrate promising reconstruction performance [2,8,24,28,32,33]. By further combining the iterative algorithms with DL, a series of iterative frameworks with accordingly designed neural-network-based modules have been proposed [1,3,5,7,13,19,29]. ADMMNet [25] introduces a neural-network-based module into the reconstruction problem and achieves remarkable performance.…”
Section: Introduction
mentioning confidence: 99%
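One common shape such frameworks take is ADMM with the proximal/denoising step replaced by a learned module, the pattern behind ADMM-Net-style and plug-and-play methods. A rough numpy sketch under that assumption; the stand-in denoiser and toy dimensions are mine, not taken from any of the cited papers.

```python
import numpy as np

def unrolled_admm(y, A, denoise, rho=1.0, num_iters=20):
    """ADMM on 0.5*||A x - y||^2 + phi(z) subject to x = z,
    with the z-update (the prox of phi) replaced by a learned denoiser."""
    n = A.shape[1]
    lhs = A.T @ A + rho * np.eye(n)                 # x-update system matrix
    Aty = A.T @ y
    x = z = u = np.zeros(n)
    for _ in range(num_iters):
        x = np.linalg.solve(lhs, Aty + rho * (z - u))  # data-consistency step
        z = denoise(x + u)                             # learned module goes here
        u = u + x - z                                  # dual update
    return x

# Stand-in "module": soft-thresholding; the cited frameworks train a CNN here.
denoise = lambda v, t=0.05: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100)) / 7.0
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = 1.0
x_hat = unrolled_admm(A @ x_true, A, denoise)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```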