2018 IEEE Visual Communications and Image Processing (VCIP)
DOI: 10.1109/vcip.2018.8698663

Channel Attention and Multi-level Features Fusion for Single Image Super-Resolution

Abstract: Convolutional neural networks (CNNs) have demonstrated superior performance in super-resolution (SR). However, most CNN-based SR methods neglect the different importance among feature channels or fail to take full advantage of the hierarchical features. To address these issues, this paper presents a novel recursive unit. Firstly, at the beginning of each unit, we adopt a compact channel attention mechanism to adaptively recalibrate the channel importance of input features. Then, the multi-level features, rathe…
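The full architecture is not given in this excerpt; the sketch below (in PyTorch) illustrates the two ideas the abstract names: a compact channel attention block at the entry of a recursive unit, and fusion of the multi-level features the unit produces. Layer widths, the recursion depth, and the 1x1 fusion convolution are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch of the abstract's recursive unit: recalibrate input channels,
# then reuse one convolution recursively and fuse the per-level features.
import torch
import torch.nn as nn

class CompactChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)              # squeeze: global spatial average
        self.fc = nn.Sequential(                         # excitation: per-channel weights
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))                 # recalibrate channel importance

class RecursiveUnit(nn.Module):
    def __init__(self, channels, depth=3):
        super().__init__()
        self.attention = CompactChannelAttention(channels)
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)  # weights shared across steps
        self.act = nn.ReLU(inplace=True)
        self.fuse = nn.Conv2d(channels * depth, channels, 1)     # 1x1 multi-level fusion
        self.depth = depth

    def forward(self, x):
        h = self.attention(x)                            # recalibrate at unit entry
        levels = []
        for _ in range(self.depth):                      # same conv applied recursively
            h = self.act(self.conv(h))
            levels.append(h)
        return self.fuse(torch.cat(levels, dim=1))       # fuse multi-level features
```

Because the same 3x3 convolution is reused at every recursion step, the depth can be increased without adding parameters, which is the usual motivation for recursive SR units.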

Cited by 29 publications (14 citation statements); citing publications span 2019–2023.
References 18 publications.

“…SCA-CNN [29] incorporates both spatial and channel-wise attention in a CNN to facilitate the task of image captioning. [30] proposes a compact channel attention mechanism combined with multi-level feature fusion, which benefits image super-resolution.…”
Section: Self-attention (mentioning)
confidence: 99%
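As a rough illustration of pairing spatial with channel-wise attention, the sketch below shows a common spatial attention block. It is not SCA-CNN's formulation; the channel-pooling-plus-7x7-convolution design is an assumption borrowed from widely used attention modules.

```python
# Illustrative spatial attention: pool across channels, predict a per-pixel
# gate, and rescale the feature map. Pairs naturally with channel attention.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)                # channel-wise average map
        mx, _ = x.max(dim=1, keepdim=True)               # channel-wise max map
        gate = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * gate                                  # per-pixel rescaling
```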
“…Attention mechanisms can be used in super-resolution tasks to focus on the high-frequency details of the image, suppress image noise, and improve the quality of the reconstructed image. Lu et al. (2018) used channel attention to recalibrate input features, then aggregated multi-level features and shared recursive unit parameters to improve reconstruction quality. Liu et al. (2018) used an attention method to distinguish texture regions from smooth regions so as to restore high-frequency details effectively.…”
Section: Attention Mechanism (mentioning)
confidence: 99%
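The Liu et al. (2018) idea of treating texture and smooth regions differently can be sketched as a learned soft mask gating two branches; the mask head and branch designs below are illustrative assumptions, not that paper's architecture.

```python
# Hedged sketch of region-selective attention: a soft mask (~1 on texture,
# ~0 on smooth regions) blends a detail branch with a smoothing branch.
import torch
import torch.nn as nn

class RegionSelectiveAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.mask_head = nn.Sequential(                  # predicts texture probability
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 1),
            nn.Sigmoid(),
        )
        self.texture_branch = nn.Conv2d(channels, channels, 3, padding=1)
        self.smooth_branch = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        m = self.mask_head(x)                            # soft texture mask in [0, 1]
        return m * self.texture_branch(x) + (1 - m) * self.smooth_branch(x)
```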
“…Channel attention [12] provides an effective technique for adaptively recalibrating channel-wise features by explicitly modeling interdependencies between channels. Previous studies have proven the effectiveness of channel attention blocks [25,5,43,16] in the task of super-resolution. In the proposed OAM, we adopt the channel attention mechanism described in [16] to adaptively combine orientation-aware features and generate more distinctive features as:…”
Section: Orientation-aware Feature Extraction and Channel Attention Module (mentioning)
confidence: 99%
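The equation after "as:" is truncated in the excerpt. For orientation, a standard squeeze-and-excitation channel attention (the mechanism [12] refers to) recalibrates features as follows; whether the variant in [16] matches this exact form is an assumption here.

```latex
% SE-style channel attention: GAP is global average pooling over spatial
% positions, \delta is ReLU, \sigma is sigmoid, and W_1, W_2 are the
% bottleneck weights; s_c rescales channel c of the input x.
s = \sigma\big(W_2\,\delta(W_1\,\mathrm{GAP}(x))\big), \qquad
\hat{x}_c = s_c \cdot x_c
```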