Coarse-to-fine multi-scale attention-guided network for multi-exposure image fusion (2023)
DOI: 10.1007/s00371-023-02880-4

Cited by 3 publications (1 citation statement)
References 50 publications
“…IFCNN [11] employed a dual-branch architecture with shared weights, merged the convolutional features by element-wise mean, and trained the model using a perceptual loss together with a fundamental loss between the source image sequence and the simulated "ground truth". CFMSAN [26] employs a multi-scale attention-guided network to extract features at various scales and generate attention weight maps of multiple sizes; these weight maps guide the generation of the fusion result.…”
Section: Existing MEF methods
confidence: 99%
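The two fusion mechanisms summarized in the citation statement above can be illustrated with a short sketch. This is not the published IFCNN or CFMSAN code; the channel counts, scales, and layer choices below are assumptions made only to show (a) element-wise mean fusion of features from a shared-weight encoder and (b) multi-scale attention weight maps guiding the fusion of the source exposures.

```python
# Hedged sketch (assumed hyperparameters, not the authors' implementations).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedWeightMeanFusion(nn.Module):
    """IFCNN-style idea: one shared encoder applied to every exposure,
    features merged by element-wise mean, then decoded to the fused image."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, exposures: list[torch.Tensor]) -> torch.Tensor:
        # The same weights (self.encoder) process each source image.
        feats = torch.stack([self.encoder(x) for x in exposures], dim=0)
        fused = feats.mean(dim=0)  # element-wise mean across exposures
        return self.decoder(fused)


class MultiScaleAttentionFusion(nn.Module):
    """CFMSAN-style idea: attention weight maps predicted at several scales,
    upsampled to full resolution, and used to weight the source images."""

    def __init__(self, n_inputs: int = 2, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.att_heads = nn.ModuleList(
            nn.Conv2d(3 * n_inputs, n_inputs, 3, padding=1) for _ in scales
        )

    def forward(self, exposures: list[torch.Tensor]) -> torch.Tensor:
        x = torch.cat(exposures, dim=1)  # stack sources along channels
        h, w = x.shape[-2:]
        weights = 0
        for scale, head in zip(self.scales, self.att_heads):
            xs = F.avg_pool2d(x, scale) if scale > 1 else x
            w_map = head(xs)  # one weight map per source image at this scale
            weights = weights + F.interpolate(
                w_map, size=(h, w), mode="bilinear", align_corners=False
            )
        weights = torch.softmax(weights, dim=1)  # normalise across sources
        sources = torch.stack(exposures, dim=1)  # B x N x 3 x H x W
        return (weights.unsqueeze(2) * sources).sum(dim=1)


if __name__ == "__main__":
    imgs = [torch.rand(1, 3, 64, 64) for _ in range(2)]  # two simulated exposures
    print(SharedWeightMeanFusion()(imgs).shape)           # 1 x 3 x 64 x 64
    print(MultiScaleAttentionFusion(n_inputs=2)(imgs).shape)
```

In both sketches the fused output has the same spatial size as the inputs; the attention variant differs from the mean-fusion variant only in that per-source weights are learned at multiple scales rather than fixed to a uniform average.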