2020
DOI: 10.1007/s00521-020-05387-4

GANFuse: a novel multi-exposure image fusion method based on generative adversarial networks

Abstract: In this paper, a novel multi-exposure image fusion method based on generative adversarial networks (termed GANFuse) is presented. Conventional multi-exposure image fusion methods improve their fusion performance by designing sophisticated activity-level measurements and fusion rules. However, these methods have limited success in complex fusion tasks. Inspired by the recent FusionGAN, which first utilizes generative adversarial networks (GAN) to fuse infrared and visible images and achieves promising performance…
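Only the abstract is reproduced in this report, so the sketch below is a generic illustration of the GAN-fusion idea it describes, not GANFuse's actual architecture: a generator fuses the under- and over-exposed inputs, and one discriminator per exposure pushes the fused result to preserve content from that source. All layer choices, names, and the toy data are assumptions.

```python
# Illustrative sketch only: the paper's exact networks and losses are not
# given in this report, so everything below is an assumption for clarity.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Fuses an under- and an over-exposed image (concatenated on channels)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 32, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, under, over):
        return self.net(torch.cat([under, over], dim=1))

class Discriminator(nn.Module):
    """Scores whether an image looks like one of the source exposures."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )
    def forward(self, x):
        return self.net(x)

G, D_under, D_over = Generator(), Discriminator(), Discriminator()
bce = nn.BCEWithLogitsLoss()
under = torch.rand(2, 3, 64, 64)  # toy batch of under-exposed images
over = torch.rand(2, 3, 64, 64)   # toy batch of over-exposed images

fused = G(under, over)
# One adversarial game per source: each discriminator pushes the fused
# result to be indistinguishable from "its" exposure.
g_adv = bce(D_under(fused), torch.ones(2, 1)) + \
        bce(D_over(fused), torch.ones(2, 1))
print(fused.shape, g_adv.item())
```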

Cited by 43 publications (12 citation statements)
References 36 publications

“…Xu [17] designed an end-to-end architecture for MEF based on GAN, named MEF-GAN, and used the dataset from Cai [79]. Following [17] and [90], a GAN-based MEF network, named GANFuse, was proposed in [91]. There were two main differences between GANFuse and the GAN-based MEF approaches above.…”
Section: Unsupervised Methods (mentioning)
confidence: 99%
“…The objective function maximizes the structural consistency between the fused image and each input image. [19] and [150] proposed GAN-based unsupervised frameworks, inspired by CycleGAN [101], to learn LDR image fusion. [151] and [152] explored the correspondence of source LDR images.…”
Section: HDR with Novel Learning Strategies, 6.1 HDR Imaging with Unsup… (mentioning)
confidence: 99%
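The first sentence of the statement above describes an objective that maximizes structural consistency between the fused image and each input. One common way to instantiate such an objective is an SSIM-style loss; the sketch below is one hedged reading of that idea, using a uniform averaging window rather than standard SSIM's Gaussian window, and is not the cited papers' exact objective.

```python
# Hedged sketch: a simplified SSIM-based structural-consistency loss.
# Maximizing mean SSIM against every source == minimizing (1 - SSIM).
import torch
import torch.nn.functional as F

def ssim(x, y, window=11, c1=0.01**2, c2=0.03**2):
    pad = window // 2
    mu_x = F.avg_pool2d(x, window, 1, pad)
    mu_y = F.avg_pool2d(y, window, 1, pad)
    var_x = F.avg_pool2d(x * x, window, 1, pad) - mu_x**2
    var_y = F.avg_pool2d(y * y, window, 1, pad) - mu_y**2
    cov = F.avg_pool2d(x * y, window, 1, pad) - mu_x * mu_y
    s = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
        ((mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2))
    return s.mean()

def structural_consistency_loss(fused, inputs):
    # Average the (1 - SSIM) penalty over all source exposures.
    return sum(1 - ssim(fused, src) for src in inputs) / len(inputs)

fused = torch.rand(1, 1, 64, 64)
sources = [torch.rand(1, 1, 64, 64) for _ in range(2)]
print(structural_consistency_loss(fused, sources).item())
```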
“…DL-based HDR imaging methods often achieve state-of-the-art (SoTA) performances on various benchmark datasets. Deep neural network (DNN) models have been developed based on diverse architectures, ranging from convolutional neural networks (CNNs) [9], [10], [16] to generative adversarial networks (GANs) [17], [18], [19]. In general, SoTA DNN-based methods differ in five major aspects: network design that considers the number and domain of input LDR images [9], [10], [14], the purpose of HDR imaging in multitask learning [20], [21], the sensors used to obtain deep HDR imaging [22], [23], [24], novel learning strategies [17], [25], [26], and practical applications [27], [28], [29].…”
Section: Introduction (mentioning)
confidence: 99%
“…To relax the dataset constraint, we propose a GAN-based fusion method, named UPHDR-GAN, that optimizes the network using an unpaired dataset. First, compared with well-known single-image enhancement methods [14,15,16] and some recent GAN-based image fusion methods [17,18,19] that are trained on paired datasets, the proposed method trains on an unpaired dataset and learns the mapping from the LDR domain to the HDR domain. Second, unlike some methods designed for unpaired datasets that mainly concentrate on processing a single input [20], our UPHDR-GAN is a multi-input method that takes moving objects into consideration.…”
Section: Introduction (mentioning)
confidence: 99%
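UPHDR-GAN, as quoted above, trains on unpaired data: the discriminator judges generated outputs against real HDR images that do not correspond to the LDR inputs. A minimal sketch of that unpaired adversarial setup follows, with placeholder networks and toy data rather than anything from the paper.

```python
# Hedged sketch of the unpaired idea described above: the generator maps
# multi-exposure LDR inputs to an HDR-like output, and the discriminator
# compares it against real HDR images that are NOT paired with those inputs.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 3, 3, padding=1))  # LDR pair -> HDR-like
D = nn.Sequential(nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1))
bce = nn.BCEWithLogitsLoss()

ldr_pair = torch.rand(2, 6, 64, 64)  # stacked under/over exposures
real_hdr = torch.rand(2, 3, 64, 64)  # unrelated HDR samples (unpaired)

fake_hdr = G(ldr_pair)
# Discriminator: real HDR -> 1, generated -> 0; generator tries to fool it.
d_loss = bce(D(real_hdr), torch.ones(2, 1)) + \
         bce(D(fake_hdr.detach()), torch.zeros(2, 1))
g_loss = bce(D(fake_hdr), torch.ones(2, 1))
print(d_loss.item(), g_loss.item())
```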
“…Ma et al. used GAN to fuse infrared and visible information, obtaining a fused image with the major infrared intensities together with additional visible gradients [55,56]. Recently, several GAN-based methods have been proposed to handle multi-exposure images [17,18,19]. Xu et al. [17] and Yang et al. [18] fused two inputs, the under-exposed image and the over-exposed image, to generate an informative output. Niu et al. [19] proposed a reference-based residual merging block for aligning large object motions in the feature domain, and a deep HDR supervision scheme for eliminating artifacts in the reconstructed HDR images.…”
mentioning
confidence: 99%
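The FusionGAN behavior quoted above (keep the infrared intensities, add the visible gradients) is commonly expressed as a content loss with an intensity term plus a gradient term. The sketch below illustrates that split; the Laplacian operator used as the gradient extractor and the weight xi are assumptions for illustration, not the cited papers' exact formulation.

```python
# Sketch of the intensity-plus-gradient idea: keep the fused image close to
# the infrared image in raw intensity while matching the visible image's
# gradients. Operator and weight are illustrative assumptions.
import torch
import torch.nn.functional as F

def gradient(img):
    # 3x3 Laplacian as a simple gradient/detail extractor.
    k = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])
    return F.conv2d(img, k.view(1, 1, 3, 3), padding=1)

def content_loss(fused, infrared, visible, xi=5.0):
    intensity = torch.mean((fused - infrared) ** 2)  # infrared intensities
    detail = torch.mean((gradient(fused) - gradient(visible)) ** 2)  # visible gradients
    return intensity + xi * detail

fused = torch.rand(1, 1, 64, 64)
ir, vis = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
print(content_loss(fused, ir, vis).item())
```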