2022
DOI: 10.1109/tgrs.2021.3093977

Multiattention Network for Semantic Segmentation of Fine-Resolution Remote Sensing Images

Abstract: Semantic segmentation of remote sensing images plays an important role in a wide range of applications, including land resource management, biosphere monitoring, and urban planning. Although deep convolutional neural networks have significantly improved the accuracy of semantic segmentation of remote sensing images, several limitations exist in standard models. First, for encoder-decoder architectures such as U-Net, the utilization of multi-scale features causes the underuse of information, where low-lev…
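As a hedged illustration of the kind of attention-based re-weighting the abstract alludes to, the following minimal PyTorch sketch applies a squeeze-and-excitation style channel attention gate to a decoder feature map before fusion. The class name ChannelAttention and the reduction parameter are illustrative assumptions, not identifiers from the paper.

# Minimal sketch, assuming a PyTorch setting; not the authors' exact module.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style gate: per-channel weights in (0, 1)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # global context per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                  # emphasise informative channels

# Usage: re-weight a low-level feature map before fusing it with upsampled
# high-level features in a U-Net style decoder.
low = torch.randn(2, 64, 128, 128)
fused = ChannelAttention(64)(low)
print(fused.shape)  # torch.Size([2, 64, 128, 128])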

Cited by 143 publications (61 citation statements)
References 72 publications
“…The multi-branch spatial-channel attention network proposed by Han et al. [22] and the multi-attention network proposed by Li et al. [21] simultaneously consider the spatial and channel relationships of feature maps. These works [21,22] prove that spatial and channel dependence can improve the performance of the semantic labeling of VHR images.…”
Section: Multi-scale Feature Extraction Methods
confidence: 99%
“…Refs. [21,22] prove that capturing spatial dependence can significantly improve the performance of the semantic labeling of VHR images. Thus, it is of great significance to introduce the spatial relationship between pixels to complement multi-scale features.…”
confidence: 97%
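To make the idea of capturing spatial dependence between pixels concrete, here is a minimal, hedged sketch of a generic position (spatial) self-attention block in PyTorch. It follows the common non-local formulation and is an assumption for illustration, not the specific module of [21] or [22].

# Generic spatial self-attention sketch; names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialSelfAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))     # learned residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (b, hw, c')
        k = self.key(x).flatten(2)                    # (b, c', hw)
        v = self.value(x).flatten(2)                  # (b, c, hw)
        attn = F.softmax(q @ k, dim=-1)               # (b, hw, hw) pixel affinities
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return x + self.gamma * out                   # residual connection

# Usage on a (batch, channels, height, width) feature map:
x = torch.randn(2, 64, 64, 64)
print(SpatialSelfAttention(64)(x).shape)  # torch.Size([2, 64, 64, 64])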
“…In this network, a channel attention gate assigns weights according to the importance of each channel, and a spatial attention gate assigns weights according to the importance of each pixel position across the entire channel. Li et al. [28] proposed a Multi-Attention Network (MANet) for semantic segmentation of fine-resolution remote sensing images, which uses multiple efficient attention modules, including kernel attention and channel attention. Both attention modules are used to decode the feature maps from the backbone layers and generate a precise prediction map.…”
Section: Introduction
confidence: 99%
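The kernel attention attributed to MANet [28] in the statement above is, in spirit, a linear-complexity alternative to standard softmax self-attention. The following hedged sketch shows one common way to realise that idea (applying softmax feature maps to queries and keys separately, as in efficient-attention variants); it illustrates the concept under that assumption rather than reproducing the authors' implementation.

# Linear ("kernel") attention sketch: cost scales linearly with pixel count.
import torch
import torch.nn.functional as F

def kernel_attention(q, k, v):
    """q, k: (b, n, d); v: (b, n, dv). Returns (b, n, dv)."""
    q = F.softmax(q, dim=-1)          # feature map over the channel dimension
    k = F.softmax(k, dim=1)           # feature map over the n pixel positions
    context = k.transpose(1, 2) @ v   # (b, d, dv): avoids the (n, n) affinity matrix
    return q @ context                # (b, n, dv)

# Usage on a flattened 64x64 feature map (n = 4096 pixels, d = dv = 64):
q = k = v = torch.randn(2, 4096, 64)
print(kernel_attention(q, k, v).shape)  # torch.Size([2, 4096, 64])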
“…The encoder may play a more important role in segmentation and limit the upper bound of the network's performance, because the decoder processes the feature maps produced by the encoder. Moreover, common attention modules, such as the channel and spatial attention gates in csAG-HRNet [27] and the vision transformer attention in MANet [28], greatly increase the computational complexity and the difficulty of training. According to the above analysis, and compared with the deep learning methods above, the main contributions and works of this paper are as follows:…”
Section: Introduction
confidence: 99%
“…Fine spatial resolution (FSR) remotely sensed images are characterized by rich spatial information and detailed objects with semantic content. Semantic segmentation of FSR remotely sensed imagery, which essentially undertakes a dense pixel-level classification task, has been a hot topic in the remote sensing community and has been applied in various geo-related applications, including land cover classification [1], infrastructure planning [2] and territorial management [3], as well as change detection [4] and other urban applications [5–7].…”
Section: Introduction
confidence: 99%