2022
DOI: 10.1109/lgrs.2021.3063381

Multistage Attention ResU-Net for Semantic Segmentation of Fine-Resolution Remote Sensing Images

Abstract: The attention mechanism can refine the extracted feature maps and boost the classification performance of the deep network, and it has become an essential technique in computer vision and natural language processing. However, the memory and computational costs of the dot-product attention mechanism increase quadratically with the spatio-temporal size of the input. Such growth considerably hinders the usage of attention mechanisms in application scenarios with large-scale inputs. In this Letter, we propose a Lin…
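The quadratic growth the abstract describes comes from materializing an N x N attention map over the flattened spatial positions. A minimal NumPy sketch of standard dot-product attention makes the bottleneck visible; the shapes and names below are illustrative assumptions, not the Letter's notation:

```python
import numpy as np

def dot_product_attention(Q, K, V):
    """Standard dot-product attention.

    Q, K, V: (N, d) arrays, where N = H * W for a feature map
    flattened into N spatial positions. The N x N score matrix
    is materialized explicitly, so memory and compute grow as
    O(N^2) -- quadratic in the spatial size of the input.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                     # (N, N): the quadratic bottleneck
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V                                # (N, d)
```

For a modest 256 x 256 feature map, N = 65,536 and the score matrix alone holds roughly 4.3e9 entries, which is the growth the abstract says hinders attention on large-scale inputs.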


Cited by 91 publications (48 citation statements)
References 14 publications
“…To further confirm the effectiveness of the proposed MANet, we compare our method with state-of-the-art approaches presented in the literature. Specifically, the comparative methods include not only scaling attention mechanisms, i.e., the SE module [2] and CBAM [4], but also simplified dot-product attention mechanisms, i.e., EAM [3], FAM [11], and LAM [15]. Besides, several comparative networks are also considered, including DANet [5], which utilizes the conventional dot-product attention mechanism, and receptive-field-enlarging networks, i.e., PSPNet [12], DeepLabV3+ [14], and EaNet [17].…”
Section: Quantitative Comparison With Diverse Methods
confidence: 99%
“…As both the space and time consumption of the standard dot-product attention mechanism increase quadratically with the input size, several studies have been devoted to simplifying the attention mechanism, including the efficient attention mechanism (EAM) [3], the fast attention mechanism (FAM) [11], and the linear attention mechanism (LAM) [15]. As shown in Table VI, the proposed KAM achieves the best accuracy among the simplified dot-product attention mechanisms, owing to the appropriate simplification scheme it adopts.…”
Section: Comparison With Simplified Dot-Product Attention
confidence: 99%
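What these simplified mechanisms share can be illustrated with the linear-attention trick: replacing the softmax with a positive kernel feature map lets the matrix product be re-associated so the N x N map is never formed. The sketch below is a hedged illustration using the common elu(x) + 1 feature map; that choice is an assumption of this sketch, not necessarily the exact scheme used by EAM, FAM, LAM, or KAM:

```python
import numpy as np

def linear_attention(Q, K, V, eps=1e-6):
    """Linearized attention via kernel feature maps.

    Writing attention as phi(Q) @ (phi(K).T @ V) instead of
    (phi(Q) @ phi(K).T) @ V exploits associativity: the small
    (d, d) product phi(K).T @ V is computed first, so the cost
    is O(N * d^2) rather than O(N^2 * d), and no N x N map is
    ever stored.
    """
    def phi(x):                      # elu(x) + 1: a common positive feature map (assumed)
        return np.where(x > 0, x + 1.0, np.exp(x))

    Qp, Kp = phi(Q), phi(K)          # (N, d) each
    kv = Kp.T @ V                    # (d, d): small, independent of N
    z = Qp @ Kp.sum(axis=0)          # (N,): per-query normalizer
    return (Qp @ kv) / (z[:, None] + eps)
```

For the same 65,536-position feature map as above, the intermediate here is only d x d, which is why the memory footprint of these simplified mechanisms stays linear in the input size.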
“…Recently, there has been an effort to reduce the memory footprint of the attention mechanism by introducing the concept of linear attention [20]. This idea was soon extended to 2D for computer vision problems [21]. In addition, the attention used in the recently introduced Visual Attention Transformer [22] also helps reduce the memory footprint for computer vision tasks.…”
Section: Related Work: On Attention
confidence: 99%
“…This has led to the study of convolutional neural networks and the development of semantic segmentation for aerial imagery [17]. There are scientific publications comparing these methods with more recent semantic segmentation architectures on satellite imagery [5,18].…”
Section: Introduction
confidence: 99%