2017 IEEE International Conference on Computer Vision (ICCV) 2017
DOI: 10.1109/iccv.2017.505

DeepFuse: A Deep Unsupervised Approach for Exposure Fusion with Extreme Exposure Image Pairs

Abstract: We present a novel deep learning architecture for fusing static multi-exposure images. Current multi-exposure fusion (MEF) approaches use hand-crafted features to fuse the input sequence. However, these weak hand-crafted representations are not robust to varying input conditions. Moreover, they perform poorly for extreme exposure image pairs. Thus, it is highly desirable to have a method that is robust to varying input conditions and capable of handling extreme exposures without artifacts. Deep representations have k…
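To make the contrast with "hand-crafted features" concrete, the sketch below implements a minimal classical exposure-fusion baseline of the kind the abstract refers to: per-pixel weights from a hand-designed well-exposedness cue (a Gaussian centered at mid-gray, as in Mertens-style fusion), followed by a normalized weighted average. This is an illustrative simplification, not the paper's method; function names and the synthetic image pair are assumptions for the example.

```python
import numpy as np

def well_exposedness(img, sigma=0.2):
    # Hand-crafted cue: Gaussian weight peaking at mid-gray (0.5),
    # so well-exposed pixels get high weight, clipped pixels low weight.
    return np.exp(-((img - 0.5) ** 2) / (2.0 * sigma ** 2))

def fuse_pair(under, over, eps=1e-12):
    # Per-pixel weights from the well-exposedness measure of each exposure
    w_u = well_exposedness(under)
    w_o = well_exposedness(over)
    # Normalized weighted average of the two exposures
    return (w_u * under + w_o * over) / (w_u + w_o + eps)

# Synthetic grayscale pair in [0, 1]: a dark and a bright rendition of one scene
rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 1.0, size=(8, 8))
under = np.clip(scene * 0.3, 0.0, 1.0)        # under-exposed rendition
over = np.clip(scene * 0.3 + 0.6, 0.0, 1.0)   # over-exposed rendition

fused = fuse_pair(under, over)
```

Because the weights are fixed formulas rather than learned, this kind of baseline degrades on extreme exposure pairs where neither input is well exposed anywhere, which is the failure mode the paper targets.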

Cited by 479 publications (323 citation statements)
References 26 publications (45 reference statements)
“…As we can see from this figure, the combined images generated by our method are more visually pleasant than the other approaches, with every part in focus, and with clear and sharp boundary edges. In contrast, the comparison methods, CSR [3], Deepfuse network [44] and Densefuse network [45], all lead to different levels of blurring artefacts across the boundary areas, as shown in the close-ups of the toy dog and the fence.…”
Section: Multi-focus Image Fusion
confidence: 98%
“…However, the transform domain is manually designed. The authors of [43] proposed a simple CNN to predict the decision map for multi-focus image fusion. Prabhakar et al. [44] proposed a CNN-based unsupervised image fusion method to fuse an under-exposed image with an over-exposed one. Li et al. [45] proposed a CNN with a dense-block structure to solve the infrared and visible image fusion problem.…”
Section: Multi-modal Image Fusion
confidence: 99%