2021
DOI: 10.1109/tii.2020.2993842

Lightweight Attention Convolutional Neural Network for Retinal Vessel Image Segmentation

Cited by 168 publications (32 citation statements: 0 supporting, 32 mentioning, 0 contrasting) | References 35 publications
“…The attention mechanism was proposed in 2014 and has achieved great success in machine translation [25]. In recent years, attention mechanisms have been widely used in many types of deep learning tasks, such as NLP [27] and image recognition [28], and have become one of the most influential approaches in the deep learning field. In a representative study in 2017, the Google Brain team abandoned the classic RNN/CNN structure and proposed a transformer model composed only of the attention mechanism [29].…”
Section: Attention-based Model (mentioning)
confidence: 99%
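
For context, the transformer cited in [29] is built around scaled dot-product attention, softmax(Q K^T / sqrt(d_k)) V. A minimal sketch in NumPy follows; the function name, shapes, and toy data are illustrative assumptions, not the cited paper's code:

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: (seq_len, d_k) arrays; the shapes here are illustrative.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)    # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                              # attention-weighted values

# Usage with toy data
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)         # shape (4, 8)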
“…Deep learning methods are supervised, and image labeling is a difficult task [82]. Because the number of annotated real medical images in training sets is very limited, deep learning models cannot learn every type of image from them.…”
Section: Annotations Sharing (mentioning)
confidence: 99%
“…Neural networks are composed of neural units, which contain learnable weights and biases. Each neural unit computes its output from its inputs, namely forward propagation, according to the standard formulas of the neural network [29], as shown in Equation (3), where w and b are the parameters to be trained, x is the input, and s is the output.…”
Section: Convolutional Neural Network (mentioning)
confidence: 99%
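
Equation (3) itself is not reproduced in this excerpt. A minimal sketch of the neural-unit forward pass the statement describes, assuming the common affine-plus-sigmoid form s = sigmoid(w · x + b); the sigmoid activation is an assumption, not stated above:

import numpy as np

def neural_unit_forward(x, w, b):
    # Forward propagation of a single neural unit.
    # Assumed form (Equation (3) is not shown in the excerpt):
    #     s = sigmoid(w . x + b)
    # with w and b the trainable parameters, x the input, s the output.
    z = np.dot(w, x) + b              # affine combination of the inputs
    return 1.0 / (1.0 + np.exp(-z))   # sigmoid activation (assumed)

# Usage: a unit with three inputs
x = np.array([0.5, -1.2, 2.0])
w = np.array([0.1, 0.4, -0.3])
b = 0.05
s = neural_unit_forward(x, w, b)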