2024
DOI: 10.1117/1.jei.33.2.023003

Image super-resolution using dilated neighborhood attention transformer

Li Chen, Jinnian Zuo, Kai Du, et al.

Abstract: Transformer-based methods have achieved impressive performance in image super-resolution (SR). To reduce the computational cost and redundancy of global attention, most transformer-based methods adopt a localized attention mechanism, which diminishes the desirable characteristics of self-attention (SA), such as the effective modeling of long-range dependencies and the ability to capture a global receptive field. To alleviate this problem, we propose a dilated neighborhood attention transformer for image SR (Di…
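The core idea named in the abstract, dilated neighborhood attention, restricts each query to a fixed-size neighborhood of keys sampled with a dilation factor, so the receptive field grows with the dilation while the per-query attention cost stays constant. Below is a minimal single-head sketch in PyTorch, assuming the standard unfold-based formulation with an odd kernel size; the function name and the omission of the usual Q/K/V projections are illustrative simplifications, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def dilated_neighborhood_attention(x, kernel_size=7, dilation=2):
    # x: (B, C, H, W). Each pixel is a query attending to a
    # kernel_size x kernel_size neighborhood of keys/values sampled
    # with the given dilation; larger dilation widens the receptive
    # field without increasing the number of attended positions.
    B, C, H, W = x.shape
    pad = dilation * (kernel_size // 2)  # preserves spatial size for odd kernels
    # Gather every pixel's dilated neighborhood: (B, C * k*k, H*W).
    # Zero padding means border queries attend to some zero keys (sketch only).
    neigh = F.unfold(x, kernel_size, dilation=dilation, padding=pad)
    k2 = kernel_size * kernel_size
    neigh = neigh.view(B, C, k2, H * W)           # keys/values per query
    q = x.view(B, C, 1, H * W)                    # queries
    attn = (q * neigh).sum(dim=1) / C ** 0.5      # (B, k*k, H*W) dot-product scores
    attn = attn.softmax(dim=1)                    # normalize over the neighborhood
    out = (neigh * attn.unsqueeze(1)).sum(dim=2)  # weighted sum of values
    return out.view(B, C, H, W)

x = torch.randn(1, 64, 32, 32)
y = dilated_neighborhood_attention(x)             # same shape as x
```

A full multi-head version would add per-head linear projections for queries, keys, and values, plus relative positional bias; the sketch keeps only the neighborhood-gathering and attention steps that distinguish the mechanism from global SA.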

Cited by: 0 publications
References: 62 publications
