2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.01132
Second-Order Attention Network for Single Image Super-Resolution

Cited by 1,436 publications (938 citation statements)
References 25 publications
“…Attention or non-local modeling is one way to globally capture feature responses across the whole image. Many related works [31,7,26,27,15,5] have been successfully proposed for computer vision. Attention operations offer several advantages: 1) they directly compute the correlation between patterns across the image regardless of their distance; 2) they can reduce the number of kernels and the depth of the network while achieving comparable or even better performance; and 3) they are easy to embed into any network structure.…”
Section: Introduction
confidence: 99%
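The advantages listed in the quoted statement describe generic non-local (self-attention) operations. As a minimal illustrative sketch, not taken from any of the cited works, the following PyTorch module computes pairwise correlations between all spatial positions and uses them to reweight features; the class name and hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn

class NonLocalBlock(nn.Module):
    """Minimal non-local (self-attention) block: every spatial position
    attends to every other position, regardless of their distance."""
    def __init__(self, channels, reduction=2):
        super().__init__()
        inner = channels // reduction
        self.query = nn.Conv2d(channels, inner, kernel_size=1)
        self.key = nn.Conv2d(channels, inner, kernel_size=1)
        self.value = nn.Conv2d(channels, inner, kernel_size=1)
        self.out = nn.Conv2d(inner, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c')
        k = self.key(x).flatten(2)                      # (b, c', hw)
        v = self.value(x).flatten(2).transpose(1, 2)    # (b, hw, c')
        attn = torch.softmax(torch.bmm(q, k), dim=-1)   # (b, hw, hw): pairwise correlations
        y = torch.bmm(attn, v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                          # residual connection
```

Because the block only uses 1x1 convolutions and a residual connection, it can be dropped into an existing network without changing the feature map size, which is the embeddability the quote refers to.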
“…In order to verify the validity of the model, we compare its performance on five standard benchmark datasets: Set5 [1], Set14 [27], B100 [17], Urban100 [8], and Manga109 [18]. In terms of PSNR, SSIM, and visual quality, we compare our models with state-of-the-art methods including Bicubic, SRCNN [5], VDSR [10], LapSRN [12], MemNet [22], EDSR [16], RDN [32], RCAN [31], and SAN [4]. We also adopt the self-ensemble strategy [16] to further improve our ADCSR and denote the self-ensembled ADCSR as ADCSR+.…”
Section: Results With Bicubic Degradation
confidence: 99%
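The self-ensemble strategy referenced above (from EDSR [16]) averages predictions over the eight flip/rotation variants of the input. A minimal sketch, assuming a PyTorch super-resolution model operating on NCHW tensors; the function name and signature are illustrative, not from the cited code.

```python
import torch

def self_ensemble(model, lr):
    """Geometric self-ensemble: run the model on the eight flip/rotation
    variants of the low-resolution input, undo each transform on the
    output, and average. `model` and `lr` (an NCHW tensor) are placeholders."""
    outputs = []
    for flip in (False, True):
        for rot in range(4):  # rotations by 0, 90, 180, 270 degrees
            t = torch.flip(lr, dims=[-1]) if flip else lr
            t = torch.rot90(t, rot, dims=(-2, -1))
            sr = model(t)
            # invert the transforms on the super-resolved output
            sr = torch.rot90(sr, -rot, dims=(-2, -1))
            sr = torch.flip(sr, dims=[-1]) if flip else sr
            outputs.append(sr)
    return torch.stack(outputs).mean(dim=0)
```

Averaging the eight geometrically consistent predictions typically yields a small PSNR gain at the cost of eight forward passes, which is why the ensembled model is reported separately as ADCSR+.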
“…As can be seen from the table, the PSNR and SSIM of the algorithm at ×2, ×3, and ×4 exceed the current state of the art. Figure 6 shows the qualitative comparison of our models with Bicubic, SRCNN [5], VDSR [10], LapSRN [12], MSLapSRN [13], EDSR [16], RCAN [31], and SAN [4]. The images for SRCNN, EDSR, and RCAN are derived from the authors' open-source models and code.…”
Section: Results With Bicubic Degradation
confidence: 99%
“…Guo et al [32] proposed a DCT-DSR network to address the super-resolution problem in an image transform domain. To further enhance image quality, Dai et al [33] and Li et al [35] proposed a second-order attention mechanism and a feedback mechanism to perform super-resolution, respectively.…”
Section: Image Super-Resolution
confidence: 99%
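For orientation, a heavily simplified sketch of the idea behind second-order (covariance-based) channel attention follows: channel weights are derived from the feature covariance matrix rather than from plain average pooling. This is not the exact SAN module, which additionally applies covariance normalization (a matrix square root computed via Newton-Schulz iteration); all names and parameters here are illustrative.

```python
import torch
import torch.nn as nn

class SecondOrderChannelAttention(nn.Module):
    """Simplified second-order channel attention: a per-channel statistic is
    taken from the channel covariance matrix and mapped to channel weights.
    (Covariance normalization used by the actual SAN module is omitted.)"""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        feat = x.flatten(2)                                  # (b, c, hw)
        feat = feat - feat.mean(dim=2, keepdim=True)         # center per channel
        cov = torch.bmm(feat, feat.transpose(1, 2)) / (h * w - 1)  # (b, c, c)
        stat = cov.mean(dim=2).view(b, c, 1, 1)              # second-order channel descriptor
        return x * self.fc(stat)                             # rescale channels
```

Compared with first-order channel attention (e.g., squeeze-and-excitation style average pooling), the covariance statistic captures inter-channel correlations, which is the distinction the quoted statement attributes to the second-order mechanism of Dai et al [33].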