2016 IEEE International Conference on Computational Photography (ICCP)
DOI: 10.1109/iccphot.2016.7492871

Learning joint demosaicing and denoising based on sequential energy minimization

Cited by 75 publications (76 citation statements)
References 36 publications

“…We formulate image reconstruction as a variational model and embed this model in a gradient descent scheme, which forms the specific VN structure. The VN was first introduced as a trainable reaction-diffusion model (6) with application to classic image processing tasks (6,29,30). All these tasks are similar in the sense that the data are corrupted by unstructured noise in the image domain.…”
Section: Discussion
confidence: 99%
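To make the cited construction concrete, here is a minimal sketch in assumed notation (filters $K_i$, potentials $\Phi_i$, forward operator $A$, data $f$; none of this is taken verbatim from the excerpt). The variational model pairs a learned filter-based regularizer with a quadratic data term,

$$E(u) = \sum_{i=1}^{N_k} \sum_{p} \Phi_i\big((K_i u)_p\big) + \frac{\lambda}{2}\,\|A u - f\|_2^2,$$

and one VN stage performs a gradient step on it,

$$u^{t+1} = u^t - \alpha^t \Big( \sum_{i=1}^{N_k} K_i^\top \Phi_i'\big(K_i u^t\big) + \lambda\, A^{*}\big(A u^t - f\big) \Big).$$

Letting $K_i$, $\Phi_i$ and $\lambda$ vary per stage yields the trainable reaction-diffusion form referenced above.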
“…All parameters of the approach are learned from data. This approach has been successfully applied to a number of image processing tasks including image denoising (6), JPEG deblocking (6), demosaicing (29) and image inpainting (30). For MRI reconstruction, we rewrite the trainable gradient descent scheme with time-varying parameters $K_i^t$, $\Phi_i^{t\prime}$, $\lambda^t$ as

$$u^{t+1} = u^t - \sum_{i=1}^{N_k} (K_i^t)^\top \, \Phi_i^{t\prime}\!\big(K_i^t u^t\big) - \lambda^t A^{*}\!\big(A u^t - f\big), \qquad 0 \le t \le T-1.$$

Additionally, we omit the step size $\alpha^t$ in Eq.…
Section: Theory
confidence: 99%
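A minimal runnable sketch of one such update stage, assuming NumPy/SciPy, real-valued images, and circular boundary handling; vn_step and its arguments (A, A_adj, kernels, phis, lam) are illustrative names, not the authors' code:

```python
import numpy as np
from scipy.signal import convolve2d

def vn_step(u, f, A, A_adj, kernels, phis, lam):
    """One unrolled gradient-descent stage:
    u <- u - sum_i K_i^T phi_i'(K_i u) - lam * A^*(A u - f).
    """
    reg = np.zeros_like(u)
    for K, phi in zip(kernels, phis):
        Ku = convolve2d(u, K, mode="same", boundary="wrap")
        # The adjoint K^T of a circular convolution is convolution
        # with the flipped kernel.
        reg += convolve2d(phi(Ku), K[::-1, ::-1], mode="same", boundary="wrap")
    return u - reg - lam * A_adj(A(u) - f)

# Example: plain denoising, where the forward operator A is the identity.
# For MRI, A and A_adj would instead be a masked Fourier transform and
# its adjoint.
rng = np.random.default_rng(0)
f = rng.standard_normal((32, 32))                 # noisy observation
kernels = [rng.standard_normal((3, 3)) * 0.1 for _ in range(4)]
phis = [np.tanh] * 4                              # stand-ins for learned phi_i'
u = vn_step(f, f, lambda x: x, lambda x: x, kernels, phis, lam=0.1)
```

In a trained network, the kernels, activations and data weight would differ per stage, matching the time-varying parameters in the equation above.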
“…Later works [27,23,43,44] allow the lower-level parameters to change between the fixed number of iterations, leading to structures that model differential equations and stray further from the underlying modelling. As pointed out in [53], these strategies are more aptly considered as a set of nested quadratic lower-level problems. Several techniques developed in the field of structured support vector machines (SSVMs) [92,28,1,95] are very relevant to the task of learning energy models, as SSVMs can be understood as bi-level problems with a lower-level energy that is linear in θ and an often discontinuous higher-level loss.…”
Section: Related Work
confidence: 99%
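For orientation, the bi-level structure referred to in this excerpt can be written generically as (a sketch in assumed notation, not the cited papers' exact formulation):

$$\min_{\theta}\; \mathcal{L}\big(u^{*}(\theta)\big) \quad \text{s.t.} \quad u^{*}(\theta) \in \operatorname*{arg\,min}_{u}\; E(u;\theta),$$

where the higher-level loss $\mathcal{L}$ scores the lower-level minimizer $u^{*}(\theta)$. In the SSVM reading above, $E(u;\theta)$ is linear in $\theta$ while $\mathcal{L}$ may be discontinuous.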
“…Some representative examples of such methods include trainable random field models such as the separable Markov random field (MRFSepa) [35], regression tree fields (RTF) [36], cascaded shrinkage fields (CSF) [8], trainable nonlinear reaction diffusion (TRD) models [9], and their extensions [37], [38], [39]. The state-of-the-art CSF and TRD methods can be derived from the FoE model [30] by unrolling the corresponding optimization iterations into feed-forward networks, where the parameters of each network are trained by minimizing the error between its output images and the ground truth for each specific task.…”
Section: Introduction
confidence: 99%
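A compact sketch of the unrolling described here, assuming NumPy/SciPy; trd_stage, unrolled_network and all parameter names are hypothetical, and backpropagation through the stages (needed to actually train the per-stage parameters) is omitted:

```python
import numpy as np
from scipy.signal import convolve2d

def trd_stage(u, f, kernels, phis, lam):
    """One TRD-style diffusion step:
    u <- u - sum_i K_i^T phi_i(K_i u) - lam * (u - f)."""
    reg = np.zeros_like(u)
    for K, phi in zip(kernels, phis):
        Ku = convolve2d(u, K, mode="same", boundary="wrap")
        reg += convolve2d(phi(Ku), K[::-1, ::-1], mode="same", boundary="wrap")
    return u - reg - lam * (u - f)

def unrolled_network(f, stages):
    """A fixed number of stages composed into one feed-forward network.
    `stages` is a list of per-stage parameters (kernels, phis, lam)."""
    u = f  # initialize with the degraded input
    for kernels, phis, lam in stages:
        u = trd_stage(u, f, kernels, phis, lam)
    return u

def training_loss(output, ground_truth):
    """Criterion from the excerpt: error between output and ground truth."""
    return float(np.mean((output - ground_truth) ** 2))
```

Training would wrap unrolled_network in an autograd framework and minimize training_loss over the per-stage kernels, activations and weights, which is the output-versus-ground-truth scheme the excerpt describes.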