2020
DOI: 10.48550/arxiv.2006.15833
Preprint

End-to-End Differentiable Learning to HDR Image Synthesis for Multi-exposure Images

Abstract: Recent deep learning-based methods have reconstructed a high dynamic range (HDR) image from a single low dynamic range (LDR) image by focusing on the exposure transfer task to reconstruct the multi-exposure stack. However, these methods often fail to fuse the multi-exposure stack into a perceptually pleasant HDR image as the local inversion artifacts are formed in the HDR imaging (HDRI) process. The artifacts arise from the impossibility of learning the whole HDRI process due to its non-differentiable structure…

Cited by 4 publications (5 citation statements)
References 22 publications

Citation statements:

“…The proposed method is compared with seven recent state-of-the-art convolutional neural network-based approaches: Deep reverse tone mapping operator (DrTMO) [9], Deep recursive high dynamic range imaging (HDRI) [11], Deep Single HDRI [18], Deep Chain HDRI [18], Deep Diff HDRI [22], Deep Mask HDRI [16], and Deep HDR-UNet [21]. The commonly used HDR-VDP-2.2 [34] index is adopted to measure the quality of HDR reconstruction.…”
Section: Comparisons on the Predicted HDR Images
confidence: 99%
“…and the histogram loss in [22] to ensure that the generated image has a similar global tone to the target image: where O denotes the ground-truth image, L denotes the intensity levels, and cnt_l indicates the number of pixels whose rounded-down intensity is l in the input image. The total loss of EAM can be formulated as:…”
Section: Lightweight Exposure Adjustment Model
confidence: 99%
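
The equations referenced in this excerpt did not survive extraction, but the described quantity cnt_l (the number of pixels whose rounded-down intensity is l) suggests a histogram-matching term between the generated and ground-truth images. The sketch below is a rough illustration only, not the formulation from [22]: the function name, the [0, 1] intensity range, the 256-level quantization, and the L1 comparison are all assumptions, and the hard counting step is not differentiable as written.

```python
import torch

def histogram_loss(pred, target, num_levels=256):
    # Assumed illustration of a global-tone (histogram) loss: compare the
    # per-level pixel counts cnt_l of the generated image and the ground
    # truth O with an L1 distance. Not the exact loss from [22]; the hard
    # floor/bincount counting is non-differentiable, so a real training
    # loss would need a soft histogram approximation.
    pred_levels = (pred.clamp(0, 1) * (num_levels - 1)).floor().long()
    target_levels = (target.clamp(0, 1) * (num_levels - 1)).floor().long()

    # cnt_l: number of pixels whose rounded-down intensity equals l
    pred_hist = torch.bincount(pred_levels.flatten(), minlength=num_levels).float()
    target_hist = torch.bincount(target_levels.flatten(), minlength=num_levels).float()

    n = float(pred.numel())  # normalise so the loss is resolution-independent
    return torch.abs(pred_hist - target_hist).sum() / n
```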
“…A better estimation is expected by separating these attributes. Reconstructing HDR in RGB color space may suffer from weaknesses such as severe local inversion artifacts in the reconstructed HDR images and insufficient contrast [2,15,16]. This is partly because the loss functions were simple mean-absolute error (MAE) or mean-squared error (MSE) between the predicted and target images.…”
Section: Introduction
confidence: 99%
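
As a point of reference, the "simple MAE or MSE" losses mentioned above are plain pixel-wise distances between the predicted and target images; a minimal PyTorch sketch (the function names and identical tensor shapes are assumed) is:

```python
import torch.nn.functional as F

def mae_loss(pred, target):
    # mean absolute error, averaged over all pixels and channels
    return F.l1_loss(pred, target)

def mse_loss(pred, target):
    # mean squared error, averaged over all pixels and channels
    return F.mse_loss(pred, target)
```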