2022
DOI: 10.1109/tcsvt.2022.3144455
Attention-Guided Global-Local Adversarial Learning for Detail-Preserving Multi-Exposure Image Fusion

Cited by 69 publications (22 citation statements) · References 65 publications
“…It is the first time that GAN is introduced into MEF. Liu et al. [19] presented an attention-guided global-local adversarial learning network to reduce color distortions and enhance details in the fused image. In addition, they formulated a novel edge loss function and a spatial feature transform layer to refine the fusion process.…”
Section: B. Deep Learning Methods
confidence: 99%
“…In the field of MEF, the Structural Similarity Index Measure for Multi-exposure Image Fusion (MEF-SSIM) [43] and the Peak Signal-to-Noise Ratio (PSNR) are introduced for quantitative analysis; both are widely used in prior work, e.g., [19], [47], [48], and [68]. We also introduce Mutual Information (MI) [69] and the Correlation Coefficient (CC) [70] as supplementary metrics.…”
Section: B. Evaluation Metrics
confidence: 99%
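Of the metrics named above, PSNR has the simplest closed form: 10·log10(MAX² / MSE) between a reference and a fused image. A minimal sketch (the function name and 8-bit dynamic range are illustrative assumptions, not the cited papers' implementation):

```python
import numpy as np

def psnr(reference: np.ndarray, fused: np.ndarray, max_val: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio: 10 * log10(max_val^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - fused.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no noise, PSNR is unbounded
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Higher values indicate the fused image is closer to the reference; MEF-SSIM, MI, and CC capture structural and statistical agreement that a pixel-wise MSE misses, which is why the statement pairs them.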
“…Ren et al. [54] combined an encoder-decoder with a spatially variant recurrent neural network (RNN) to generate visually pleasing images. Building on the multi-exposure fusion framework, Wu et al. [55] proposed a novel DMEF model, and Liu et al. [56] presented an attention-guided global-local adversarial learning network. Based on Retinex theory, RetinexNet [57] employed the Enhance-Net to process the image's illumination map generated by the Decom-Net.…”
Section: Introduction
confidence: 99%