2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC)
DOI: 10.1109/embc44109.2020.9176428

Single Fundus Image Super-Resolution Via Cascaded Channel-Wise Attention Network

Cited by 4 publications (3 citation statements). References 11 publications.
“…The attention mechanism has proven to be an effective component for CNNs, boosting representation performance and improving prediction results. In general, attention mechanisms can be separated into three kinds: the channel-wise attention mechanism [60], the spatial attention mechanism [61], and the non-local attention mechanism [62]. The channel-wise attention mechanism embeds the image features into a vector and assigns different weights to different feature channels.…”
Section: Attention Mechanism
confidence: 99%
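
To make the channel-wise attention described above concrete, here is a minimal squeeze-and-excitation-style sketch in PyTorch. The module name, the reduction ratio of 16, and the bottleneck layout are illustrative assumptions, not the exact architecture of this paper or of reference [60].

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Minimal SE-style channel attention sketch (illustrative, not the cited architecture)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Squeeze: global average pooling embeds each feature map into one scalar,
        # producing the per-channel descriptor vector the quoted statement mentions.
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Excitation: a small bottleneck MLP predicts one weight per channel.
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.pool(x).view(b, c)       # (B, C): embedded feature vector
        w = self.fc(w).view(b, c, 1, 1)   # (B, C, 1, 1): per-channel weights in (0, 1)
        return x * w                      # reweight each feature channel
```

Because the module preserves the input shape, it can drop into any CNN block; for example, ChannelAttention(64)(torch.randn(2, 64, 32, 32)) returns a (2, 64, 32, 32) tensor with each channel rescaled.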
“…The non-local attention mechanism calculates global relationships across the image feature and carries out the attention step via matrix multiplication. The attention mechanism has been widely used in different computer vision and image processing tasks, such as image super-resolution [60], image dehazing [63], object detection [61], and image segmentation [62].…”
Section: Attention Mechanism
confidence: 99%
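
The matrix-multiplication step this statement attributes to non-local attention can be sketched as follows; this is a hedged PyTorch illustration in the spirit of non-local blocks, where the 1x1-convolution embeddings, the halved intermediate width, and the residual connection are assumptions rather than the cited design.

```python
import torch
import torch.nn as nn

class NonLocalAttention(nn.Module):
    """Minimal non-local attention sketch (illustrative): pairwise affinities via matmul."""
    def __init__(self, channels: int):
        super().__init__()
        inter = channels // 2
        self.theta = nn.Conv2d(channels, inter, kernel_size=1)  # query embedding
        self.phi = nn.Conv2d(channels, inter, kernel_size=1)    # key embedding
        self.g = nn.Conv2d(channels, inter, kernel_size=1)      # value embedding
        self.out = nn.Conv2d(inter, channels, kernel_size=1)    # restore channel count

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        n = h * w
        q = self.theta(x).view(b, -1, n).permute(0, 2, 1)  # (B, N, C')
        k = self.phi(x).view(b, -1, n)                     # (B, C', N)
        v = self.g(x).view(b, -1, n).permute(0, 2, 1)      # (B, N, C')
        # Global relationship: every spatial position attends to every other one.
        attn = torch.softmax(torch.bmm(q, k), dim=-1)      # (B, N, N) affinity matrix
        y = torch.bmm(attn, v)                             # (B, N, C') aggregated features
        y = y.permute(0, 2, 1).view(b, -1, h, w)
        return x + self.out(y)                             # residual connection
```

The (N, N) affinity matrix is what makes this mechanism global: every position aggregates features from all others, at O(N^2) memory cost, which is why non-local blocks are typically applied to downsampled feature maps.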
“…In recent years, the attention mechanism has been widely applied to various tasks [8][9][10]; it attends only to selective parts of the whole visual space, when and where needed. However, in cross-domain research combining computer vision and natural language processing, relying on visual features alone is still not sufficient to generate high-quality captions; textual information is also crucial for improving model performance.…”
Section: Introduction
confidence: 99%