2022
DOI: 10.1007/978-3-031-19797-0_24
Event-Based Fusion for Motion Deblurring with Cross-modal Attention

Cited by 41 publications (20 citation statements). References 48 publications.
“…Jiang et al. [14] used convolutional models and mined motion and edge information to assist deblurring. Sun et al. [35] proposed a multi-head attention mechanism for fusing information from both modalities, and designed an event representation specifically for the event-based image deblurring task. Kim et al. [15] further extended the task to images with unknown exposure time by activating the events most related to the blurry image.…”
Section: Related Work
confidence: 99%
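The cross-modal fusion attributed to Sun et al. [35] can be illustrated with a minimal sketch: image features issue the queries while event features supply keys and values in a multi-head attention layer. The PyTorch module below is an illustrative assumption, not the paper's actual implementation; all class and parameter names are invented for this example.

```python
# Minimal sketch of cross-modal multi-head attention fusion: image features
# attend to event features. Names are illustrative, not from the paper's code.
import torch
import torch.nn as nn

class CrossModalAttentionFusion(nn.Module):
    def __init__(self, dim: int = 64, num_heads: int = 4):
        super().__init__()
        # Queries come from the image branch; keys/values from the event branch.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, img_feat: torch.Tensor, evt_feat: torch.Tensor) -> torch.Tensor:
        # img_feat, evt_feat: (B, C, H, W) feature maps from the two modalities.
        b, c, h, w = img_feat.shape
        q = img_feat.flatten(2).transpose(1, 2)   # (B, H*W, C) image queries
        kv = evt_feat.flatten(2).transpose(1, 2)  # (B, H*W, C) event keys/values
        fused, _ = self.attn(q, kv, kv)           # image queries attend to events
        fused = self.norm(fused + q)              # residual connection
        return fused.transpose(1, 2).reshape(b, c, h, w)

# Usage with dummy features:
img = torch.randn(2, 64, 32, 32)
evt = torch.randn(2, 64, 32, 32)
print(CrossModalAttentionFusion()(img, evt).shape)  # torch.Size([2, 64, 32, 32])
```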
“…3 shows the detailed architecture of the proposed EGACA. We simplify the multi-head channel attention of EFNet [35] to channel attention from SENet [10]. Two Channel Squeeze (CS) blocks extract channel weights from the current events, and two weights multiply event features and image features for self-attention and event-guided attention to image features, respectively.…”
Section: Event-guided Adaptive Channel Attention
confidence: 99%
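The event-guided adaptive channel attention this statement describes can be sketched with SENet-style squeeze-and-excitation [10]: two Channel Squeeze blocks compute channel weights from the event features, one rescaling the events themselves (self-attention) and one rescaling the image features (event-guided attention). The sketch below is an assumption under that reading; class and parameter names are not the authors' code.

```python
# Sketch of event-guided channel attention: two SENet-style squeeze blocks
# derive channel weights from event features; one weight rescales the events,
# the other rescales the image features. Names are illustrative assumptions.
import torch
import torch.nn as nn

class ChannelSqueeze(nn.Module):
    """SENet-style squeeze: global average pooling + bottleneck MLP + sigmoid."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(self.pool(x))  # (B, C, 1, 1) channel weights

class EventGuidedChannelAttention(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.cs_self = ChannelSqueeze(channels)   # weights applied to events
        self.cs_guide = ChannelSqueeze(channels)  # weights applied to images

    def forward(self, img_feat: torch.Tensor, evt_feat: torch.Tensor):
        evt_out = evt_feat * self.cs_self(evt_feat)   # event self-attention
        img_out = img_feat * self.cs_guide(evt_feat)  # event-guided attention
        return img_out, evt_out
```

Both attention paths are driven by the events alone, which matches the quoted description: the event stream decides which channels of each modality to emphasize.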