2022
DOI: 10.48550/arxiv.2212.04362
Preprint
CiaoSR: Continuous Implicit Attention-in-Attention Network for Arbitrary-Scale Image Super-Resolution

Cited by 1 publication (1 citation statement); References 0 publications.
“…However, in real-world scenarios, scaling up LR videos to user-desired scales has more practical value. Recent work on arbitrary-scale single-image super-resolution (Hu et al. 2019; Chen, Liu, and Wang 2021; Lee and Jin 2022; Cao et al. 2022) has explored replacing the pixel-shuffle-based upsampling module to support arbitrary-scale super-resolution. These methods fall into two categories: implicit neural function-based approaches (Chen, Liu, and Wang 2021; Lee and Jin 2022; Cao et al. 2022) and filter-based approaches (Hu et al. 2019).…”
Section: Introduction
confidence: 99%
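The implicit neural function-based category mentioned in the statement treats the output image as a continuous function: a shared decoder is queried at arbitrary real-valued pixel coordinates, conditioned on a nearby latent feature, so the same network can render any output resolution. A minimal numpy sketch of that idea follows; the function names, the nearest-neighbor latent lookup, and the one-layer toy decoder are illustrative stand-ins, not the architecture of CiaoSR or the cited papers.

```python
import numpy as np

def implicit_upsample(features, out_h, out_w, decoder):
    """Render an (out_h, out_w) image from an (H, W, C) latent grid by
    querying a coordinate-conditioned decoder at each output pixel center."""
    H, W, C = features.shape
    # continuous coordinates of output pixel centers in [0, 1)
    ys = (np.arange(out_h) + 0.5) / out_h
    xs = (np.arange(out_w) + 0.5) / out_w
    out = np.empty((out_h, out_w, 3))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            # nearest latent code and the query's offset from its cell center
            iy = min(int(y * H), H - 1)
            ix = min(int(x * W), W - 1)
            rel = np.array([y - (iy + 0.5) / H, x - (ix + 0.5) / W])
            # decoder maps (latent code, relative coordinate) -> RGB
            out[i, j] = decoder(np.concatenate([features[iy, ix], rel]))
    return out

# toy linear decoder standing in for the learned MLP (illustrative only)
rng = np.random.default_rng(0)
Wm = rng.normal(size=(3, 16 + 2))
decoder = lambda v: Wm @ v

feats = rng.normal(size=(4, 4, 16))  # pretend encoder output for a 4x4 LR image
img_a = implicit_upsample(feats, 7, 7, decoder)    # one scale...
img_b = implicit_upsample(feats, 13, 9, decoder)   # ...any other scale, same weights
```

Because the scale enters only through the query coordinates, nothing in the network is tied to a fixed upsampling factor, which is what lets these methods replace the pixel-shuffle module whose output size is baked in at training time.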