2016
DOI: 10.1145/2980179.2982399

Deep joint demosaicking and denoising

Abstract: [Figure 1 panel labels: FlexISP 32.5 dB; Ours 38.4 dB; Adobe CR 31.7 dB; reference; noisy; ours 33.3 dB; ref. [Condat 2012] 32.4 dB]

Figure 1: We propose a data-driven approach for jointly solving denoising and demosaicking. By carefully designing a dataset made of rare but challenging image features, we train a neural network that outperforms both state-of-the-art and commercial solutions on demosaicking alone (group of images on the left; insets show error maps) and on joint denoising-demosaicking (on the right; insets show close-ups)…
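The abstract suggests a straightforward pipeline, and a minimal sketch may help make it concrete: a CNN that takes a noisy Bayer mosaic together with a noise-level estimate and predicts a full-resolution RGB image. This is an illustrative sketch only; the mosaic packing, layer widths, and depth below are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointDemosaickDenoiseNet(nn.Module):
    """Sketch: map a noisy Bayer mosaic plus a noise level to an RGB image."""

    def __init__(self, width=64, depth=5):
        super().__init__()
        layers = [nn.Conv2d(5, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 1):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
        # 12 channels per low-res pixel = one 2x2 RGB block after pixel shuffle.
        layers += [nn.Conv2d(width, 12, 3, padding=1), nn.PixelShuffle(2)]
        self.net = nn.Sequential(*layers)

    def forward(self, mosaic, sigma):
        # mosaic: (N, 1, H, W) Bayer image; sigma: (N,) noise-level estimate.
        # Pack each 2x2 Bayer quad into 4 channels at half resolution, then
        # append the noise level as a constant fifth channel.
        packed = F.pixel_unshuffle(mosaic, 2)           # (N, 4, H/2, W/2)
        noise = sigma.view(-1, 1, 1, 1).expand(-1, 1, *packed.shape[-2:])
        return self.net(torch.cat([packed, noise], 1))  # (N, 3, H, W)
```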

Cited by 437 publications (495 citation statements)
References 57 publications
“…Plug-and-play priors ([31], [32], [33], [43]): Like the plug-and-play work, our method uses formal optimization with a proximal operator framework. However, while plug-and-play methods adopt an existing generic Gaussian denoiser for the prior proximal operator, our method trains the prior proximal operator with a discriminative learning technique.…”
Section: E. Connection and Difference with Related Methods (citation type: mentioning)
confidence: 99%
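For readers unfamiliar with the framework this excerpt contrasts against, the following is a minimal sketch of plug-and-play ADMM, in which the prior's proximal step is replaced by a denoiser. The forward_prox and denoiser callables are hypothetical placeholders for problem-specific components; classic plug-and-play drops in a generic Gaussian denoiser here, whereas the excerpt's method would substitute a discriminatively trained operator.

```python
import numpy as np

def plug_and_play_admm(y, forward_prox, denoiser, rho=1.0, iters=30):
    """Sketch of plug-and-play ADMM.

    forward_prox(v, rho) solves the data-fidelity proximal step
        argmin_x f(x) + (rho / 2) * ||x - v||^2,
    and `denoiser` stands in for the prior's proximal operator.
    """
    x = y.copy()
    z = y.copy()
    u = np.zeros_like(y)
    for _ in range(iters):
        x = forward_prox(z - u, rho)   # data-term proximal step
        z = denoiser(x + u)            # prior step: denoiser as prox operator
        u = u + x - z                  # dual update
    return x
```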
“…The state-of-the-art CSF and TRD methods can be derived from the FoE model [30] by unrolling the corresponding optimization iterations into feed-forward networks, where the parameters of each network are trained by minimizing the error between its output images and the ground truth for each specific task. Another line of research applies neural networks to image restoration, such as multi-layer perceptrons [40], deep convolutional networks [41], [42], [43], and deep recurrent neural networks [44]. Discriminative approaches owe their computational efficiency at run time to a particular feed-forward structure whose trainable parameters are optimized for a particular task during training.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
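The unrolling idea the excerpt describes can be sketched as follows: a fixed number of diffusion-style iterations is rewritten as a feed-forward network whose per-stage filters and step sizes are trained end to end. The filter sizes, channel count, and tanh nonlinearity are simplifying assumptions (TRD, for example, learns its nonlinearities); this is a sketch of the general pattern, not any one cited method.

```python
import torch
import torch.nn as nn

class UnrolledStage(nn.Module):
    """One unrolled diffusion/gradient step with its own trainable filters:
    analysis filters, a pointwise nonlinearity, the adjoint, and a data term."""

    def __init__(self, channels=24):
        super().__init__()
        self.analysis = nn.Conv2d(1, channels, 5, padding=2, bias=False)
        self.synthesis = nn.Conv2d(channels, 1, 5, padding=2, bias=False)
        self.nonlin = nn.Tanh()  # placeholder for learned nonlinearities
        self.step = nn.Parameter(torch.tensor(0.1))

    def forward(self, x, y):
        prior_grad = self.synthesis(self.nonlin(self.analysis(x)))
        return x - prior_grad - self.step * (x - y)

class UnrolledNet(nn.Module):
    """A fixed number of iterations unrolled into a feed-forward network;
    each stage has independent parameters, trained end to end."""

    def __init__(self, stages=5):
        super().__init__()
        self.stages = nn.ModuleList(UnrolledStage() for _ in range(stages))

    def forward(self, y):
        x = y
        for stage in self.stages:
            x = stage(x, y)
        return x
```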
“…Wang [Wan14] used 4 × 4 patches to train a multilayer neural network while minimizing a suitable objective function. Gharbi et al. constructed a dataset of hard cases, which was used to train a CNN for joint demosaicing and denoising [GCPD16]. All these methods were specifically designed to reconstruct Bayer-filtered images.…”
Section: Related Work (citation type: mentioning)
confidence: 99%
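All of the methods in this excerpt reconstruct Bayer-filtered images, and training pairs for them are typically synthesized by remosaicking full RGB images. A minimal sketch, assuming an RGGB layout (one common convention):

```python
import numpy as np

def bayer_mosaic(rgb, pattern="RGGB"):
    """Simulate a Bayer color filter array from an RGB image of shape (H, W, 3),
    returning a single-channel mosaic for (mosaic, ground-truth) training pairs."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R at even rows, even cols
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G at even rows, odd cols
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G at odd rows, even cols
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B at odd rows, odd cols
    return mosaic
```

For the joint denoising-demosaicking task, synthetic noise would additionally be added to the mosaic before training.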
“…Improving the performance of classifiers and CNNs by augmenting training data sets is a widely known and well-established practice [34][35][36][37]. Common data augmentation methods shift, rotate, scale, flip, crop, transform, compress, or blur the training images to extend the training database.…”
Section: Related Work (citation type: mentioning)
confidence: 99%
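As an illustration of the operations listed in this excerpt, here is a minimal augmentation sketch applying random flips, 90-degree rotations, and crops. The 128-pixel crop size is an arbitrary choice, and the input is assumed to be at least 128 pixels on each side.

```python
import numpy as np

def augment(img, rng):
    """Basic geometric augmentations: random flip, 90-degree rotation, crop.
    Usage: augment(img, np.random.default_rng(0))"""
    if rng.random() < 0.5:
        img = np.flip(img, axis=1)           # horizontal flip
    img = np.rot90(img, k=rng.integers(4))   # random 0/90/180/270 rotation
    h, w = img.shape[:2]
    top = rng.integers(0, h - 128 + 1)       # random 128x128 crop
    left = rng.integers(0, w - 128 + 1)
    return img[top:top + 128, left:left + 128]
```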