2021
DOI: 10.48550/arxiv.2105.10697
Preprint

ADNet: Attention-guided Deformable Convolutional Network for High Dynamic Range Imaging

Abstract: In this paper, we present an attention-guided deformable convolutional network for hand-held multi-frame high dynamic range (HDR) imaging, namely ADNet. This problem comprises two intractable challenges of how to handle saturation and noise properly and how to tackle misalignments caused by object motion or camera jittering. To address the former, we adopt a spatial attention module to adaptively select the most appropriate regions of various exposure low dynamic range (LDR) images for fusion. For the latter o…
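Below is a minimal, hedged sketch of what such a spatial attention module for multi-exposure fusion could look like, based only on the abstract's description of adaptively weighting regions of differently exposed LDR features before fusion. The module name, channel count, and layer layout are illustrative assumptions in PyTorch, not the authors' released implementation.

```python
# Minimal sketch (assumption, not the authors' code): a spatial attention block
# that gates non-reference LDR features using the reference-exposure features,
# so saturated or noisy regions contribute less to the fused HDR result.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),  # per-pixel, per-channel weights in [0, 1]
        )

    def forward(self, feat_non_ref: torch.Tensor, feat_ref: torch.Tensor) -> torch.Tensor:
        # Condition the attention map on both exposures, then gate the non-reference features.
        weights = self.attn(torch.cat([feat_non_ref, feat_ref], dim=1))
        return feat_non_ref * weights

if __name__ == "__main__":
    attn = SpatialAttention(channels=64)
    f_short = torch.randn(1, 64, 128, 128)  # features of a non-reference LDR frame
    f_ref = torch.randn(1, 64, 128, 128)    # features of the reference exposure
    print(attn(f_short, f_ref).shape)       # torch.Size([1, 64, 128, 128])
```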

Cited by 2 publications (1 citation statement)
References: 38 publications (37 reference statements)
“…The methods in the first group [19,23,24,26,29,30,31,32,33,34,35,36,37,38] suffer from the problem of not accurately processing over- and under-exposed regions, but provide more compact systems. Methods in the second group use a set of bracketed over- and under-exposed images as input to directly learn to generate HDR output [39,40,41,42,43,44,45,46], as similarly utilized in photographic HDR generation. Methods belonging to these two groups differ mainly according to their network components, such as non-local blocks [42] and attention mechanisms [40].…”
Section: Related Work
confidence: 99%