2018
DOI: 10.1587/transinf.2017edl8173

End-to-End Exposure Fusion Using Convolutional Neural Network

Abstract: In this paper, we describe the direct learning of an end-to-end mapping between under-/over-exposed images and well-exposed images. The mapping is represented as a deep convolutional neural network (CNN) that takes multiple-exposure images as input and outputs a high-quality image. Our CNN has a lightweight structure, yet gives state-of-the-art fusion quality. Furthermore, we know that for a given pixel, the influence of the surrounding pixels gradually increases as the distance decreases. If the only pixe…
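The abstract's core idea — a convolutional mapping that takes multiple exposures as input and produces one fused image, with each output pixel drawing on its neighborhood — can be sketched in a few lines of NumPy. This is a minimal illustrative stand-in, not the paper's network: the fixed 0.5/0.5 blend weights and the 3x3 box kernel are assumptions replacing the learned convolutional filters.

```python
import numpy as np

def conv3x3(img, kernel):
    # Naive "same" convolution with zero padding: each output pixel
    # is a weighted sum over its 3x3 neighborhood.
    p = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + 3, j:j + 3] * kernel)
    return out

def fuse(under, over):
    # Blend the two exposures per pixel, then smooth with a 3x3 box
    # kernel so neighboring pixels influence the fused result —
    # a toy analogue of the CNN's learned spatial filtering.
    blended = 0.5 * under + 0.5 * over
    box = np.full((3, 3), 1.0 / 9.0)
    return np.clip(conv3x3(blended, box), 0.0, 1.0)

# Two toy 4x4 grayscale "exposures" with intensities in [0, 1].
under = np.full((4, 4), 0.2)   # under-exposed: dark
over = np.full((4, 4), 0.8)    # over-exposed: bright
fused = fuse(under, over)
```

Interior pixels of `fused` sit at the blended value 0.5, while border pixels dip lower because the zero padding enters their neighborhood — a reminder that even this toy filter mixes information across nearby pixels, which is the property the abstract emphasizes.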

Cited by 18 publications (15 citation statements)
References 10 publications
“…Since then, many MEF algorithms based on deep learning have been proposed. In 2018, Wang [75] proposed a supervised CNN-based framework for MEF. The main innovation of the approach was that it used the CNN model to generate multiple sub-images of the input images, so that more neighborhood information is used in the convolution operation.…”
Section: Supervised Methodsmentioning
confidence: 99%
“…Many of them attempt to do this using a single shot 21,22 or by simply having an extreme number of weights. [21][22][23][24] For real-time, robust driver recognition, both of these are unacceptable. In the former case, we expect that a single shot will have a high probability of either misfiring or inaccurately capturing the face, due to the potential for obstructive glare.…”
Section: Related Workmentioning
confidence: 99%
“…In these methods, two source images with different exposures are directly input into a fusion network, and the fused image is obtained from the output of the network. The fusion networks can be trained in a common supervised way using ground truth fusion images (Zhang et al. 2020b; Wang et al. 2018; Li and Zhang 2018) or in an unsupervised way by encouraging the fused image to retain different aspects of the important information in the source images (Xu et al. 2020a; Ram Prabhakar, Sai Srikar, and Venkatesh Babu 2017; Xu et al. 2020b; Zhang et al. 2020a; Ma et al. 2019b). However, both supervised and unsupervised MEF methods require a large amount of multi-exposure data for training.…”
Section: Introductionmentioning
confidence: 99%