2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)
DOI: 10.1109/iccvw.2019.00436

Image Super-Resolution via Attention Based Back Projection Networks

Abstract: Deep learning based image Super-Resolution (SR) has developed rapidly thanks to its ability to digest large amounts of data. Generally, deeper and wider networks can extract richer feature maps and generate SR images of remarkable quality. However, the more complex the network, the more time it consumes, which limits practical applications. A simplified network is therefore important for efficient image SR. In this paper, we propose an Attention based Back Projection Network (ABPN) for image super-resolution.…
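
The abstract's core idea combines back projection (refining an upsampled estimate from its own reconstruction error) with attention. As a rough illustration only, here is a minimal PyTorch sketch of a DBPN-style up-projection block followed by a simple spatial attention gate; the layer sizes, kernel choices, and attention design are assumptions for illustration, not the ABPN authors' exact architecture.

```python
# Hedged sketch of attention-gated back projection; NOT the exact ABPN design.
import torch
import torch.nn as nn

class UpProjection(nn.Module):
    """DBPN-style up-projection: upsample, re-downsample, and feed the
    back-projection error through a second upsampling path."""
    def __init__(self, channels: int, scale: int = 2):
        super().__init__()
        k, s, p = 2 * scale, scale, scale // 2  # kernel/stride/padding for x2
        self.up1 = nn.ConvTranspose2d(channels, channels, k, s, p)
        self.down = nn.Conv2d(channels, channels, k, s, p)
        self.up2 = nn.ConvTranspose2d(channels, channels, k, s, p)
        self.act = nn.PReLU()

    def forward(self, lr_feat: torch.Tensor) -> torch.Tensor:
        hr = self.act(self.up1(lr_feat))          # project LR features up
        lr_back = self.act(self.down(hr))         # project back down
        residual = lr_back - lr_feat              # back-projection error
        return hr + self.act(self.up2(residual))  # correct the HR estimate

class SpatialAttention(nn.Module):
    """Simple per-pixel attention gate (an illustrative assumption,
    not the paper's exact block): 1x1 conv + sigmoid weighting."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gate(x)

if __name__ == "__main__":
    feat = torch.randn(1, 32, 24, 24)  # dummy LR feature maps
    block = nn.Sequential(UpProjection(32, scale=2), SpatialAttention(32))
    print(block(feat).shape)           # -> torch.Size([1, 32, 48, 48])
```
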

Cited by 60 publications (52 citation statements)
References 29 publications

“…In the ABPN method [15], the size of the LR image patches is 32 × 32 and the size of the SR image patches is 64 × 64. In the IKC method [16], the kernel size is 21 × 21.…”
Section: Results (mentioning; confidence: 99%)
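
To make the quoted patch sizes concrete, here is a minimal, hypothetical example of paired patch cropping for x2 SR training (a 32 × 32 LR patch aligned with a 64 × 64 HR patch); the function name and image sizes beyond those quoted are illustrative.

```python
# Paired random crop for x2 SR training; names and dummy sizes are assumptions.
import random
import numpy as np

def random_paired_crop(lr: np.ndarray, hr: np.ndarray,
                       lr_size: int = 32, scale: int = 2):
    """Crop an lr_size patch from the LR image and the aligned
    (lr_size * scale) patch from the HR image."""
    h, w = lr.shape[:2]
    x = random.randint(0, w - lr_size)
    y = random.randint(0, h - lr_size)
    lr_patch = lr[y:y + lr_size, x:x + lr_size]
    hr_patch = hr[y * scale:(y + lr_size) * scale,
                  x * scale:(x + lr_size) * scale]
    return lr_patch, hr_patch

lr_img = np.zeros((120, 160, 3), dtype=np.uint8)  # dummy LR image
hr_img = np.zeros((240, 320, 3), dtype=np.uint8)  # corresponding x2 HR image
lp, hp = random_paired_crop(lr_img, hr_img)
assert lp.shape[:2] == (32, 32) and hp.shape[:2] == (64, 64)
```
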
“…Methods in [15, 16] utilise deep neural networks. In [15], the authors proposed an attention‐based back projection network (ABPN) for SR.…”
Section: Introduction (mentioning; confidence: 99%)
“…Inspired by [24] and [25], we also adopted the spatial attention block attached at each DPB to learn the correlations between hierarchical features as shown in Fig. 3.…”
Section: Feature Fusion Module (FFM) (mentioning; confidence: 99%)
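
The statement above describes a spatial attention block attached at each DPB to learn correlations between hierarchical features. As a loose sketch only (not the block from [24] or [25]), the following shows one common way to do this: concatenate the outputs of several blocks and derive a per-pixel weighting from the concatenation. The "DPB" outputs are simulated, and all names and sizes are assumptions.

```python
# Hedged sketch: spatial attention over concatenated hierarchical features.
import torch
import torch.nn as nn

class HierarchicalSpatialAttention(nn.Module):
    """Fuse feature maps from several blocks and weight the fused result
    with a per-pixel attention map computed from the same concatenation."""
    def __init__(self, channels: int, num_blocks: int):
        super().__init__()
        self.fuse = nn.Conv2d(channels * num_blocks, channels, 1)
        self.attn = nn.Sequential(
            nn.Conv2d(channels * num_blocks, channels, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, block_outputs):
        stacked = torch.cat(block_outputs, dim=1)  # hierarchical features
        return self.fuse(stacked) * self.attn(stacked)

# three simulated block outputs, each (1, 32, 24, 24)
feats = [torch.randn(1, 32, 24, 24) for _ in range(3)]
print(HierarchicalSpatialAttention(32, 3)(feats).shape)  # (1, 32, 24, 24)
```
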
“…Specifically, in FENet, we adopt a convolutional layer to extract the initial features from the concatenation of LR input images and the transformed inputs. The self-attention block proposed in [25] is located at the end of FENet to recalibrate the features. Note that the self-attention takes only the current input for computation, as shown in Fig.…”
Section: Network Architecture (mentioning; confidence: 99%)
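
The quoted statement places the self-attention block proposed in [25] at the end of FENet to recalibrate features. For intuition only, below is a generic non-local-style self-attention sketch over 2D feature maps; the actual block in [25] likely differs in detail, and the channel-reduction factor and names here are assumptions.

```python
# Hedged sketch of a non-local-style self-attention recalibration block.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        inner = channels // reduction
        self.query = nn.Conv2d(channels, inner, 1)
        self.key = nn.Conv2d(channels, inner, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (b, hw, c')
        k = self.key(x).flatten(2)                    # (b, c', hw)
        attn = F.softmax(q @ k, dim=-1)               # (b, hw, hw) affinities
        v = self.value(x).flatten(2)                  # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return x + self.gamma * out                   # recalibrated features

feats = torch.randn(1, 64, 16, 16)
print(SelfAttention2d(64)(feats).shape)  # -> torch.Size([1, 64, 16, 16])
```
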