With the widespread application of semantic segmentation to high-resolution remote sensing images, improving segmentation accuracy has become a research goal in the remote sensing field. An innovative Fully Convolutional Network (FCN) based on regional attention is proposed to improve the performance of semantic segmentation for remote sensing images. The proposed network follows the encoder-decoder architecture of semantic segmentation and incorporates three strategies to improve segmentation accuracy. First, an enhanced GCN module is applied to capture the semantic features of remote sensing images. Second, MGFM is proposed to capture different contexts by sampling at different densities. Third, RAM is introduced to assign large weights to high-value information in different regions of the feature map. Our method is assessed on two datasets: the ISPRS Potsdam dataset and the CCF dataset. The results indicate that our model with these strategies outperforms baseline models (DCED50) concerning F1, mean
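The abstract names but does not specify its MGFM (multi-density context sampling) and RAM (region-wise weighting) modules. As a rough illustration of those two ideas only, here is a minimal NumPy sketch; the function names, grid size, and softmax-over-region-means weighting are assumptions for illustration, not the paper's actual design:

```python
import numpy as np

def dilated_sample(fmap, rate):
    """Sample a 2D feature map at one density (stride = dilation rate)."""
    return fmap[::rate, ::rate]

def multi_density_context(fmap, rates=(1, 2, 4)):
    """Hypothetical MGFM-style fusion: sample the map at several
    densities, restore each to full size (nearest-neighbor), average."""
    h, w = fmap.shape
    outs = []
    for r in rates:
        s = dilated_sample(fmap, r)
        up = np.repeat(np.repeat(s, r, axis=0), r, axis=1)[:h, :w]
        outs.append(up)
    return np.mean(outs, axis=0)

def regional_attention(fmap, grid=2):
    """Hypothetical RAM-style weighting: split the map into grid x grid
    regions, softmax over region means, scale each region by its weight
    so high-activation regions dominate."""
    h, w = fmap.shape
    rh, rw = h // grid, w // grid
    means = np.array([[fmap[i*rh:(i+1)*rh, j*rw:(j+1)*rw].mean()
                       for j in range(grid)] for i in range(grid)])
    weights = np.exp(means) / np.exp(means).sum()
    out = fmap.copy()
    for i in range(grid):
        for j in range(grid):
            out[i*rh:(i+1)*rh, j*rw:(j+1)*rw] *= weights[i, j]
    return out
```

In a real network both steps would be learned convolutions (e.g. atrous convolutions at several rates) rather than fixed strided sampling; this sketch only shows the data flow the abstract describes.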
Deep convolutional networks are of great significance for the automatic semantic annotation of remotely sensed images. Object localization and semantic labeling are equally important in semantic segmentation tasks. However, the convolution and pooling operations of a convolutional network reduce image resolution while extracting semantic information, which places acquiring semantics and capturing positions in tension. We design a duplex restricted network with guided upsampling. A detachable enhancement structure separates opposing features at the same level, so the network can adaptively trade off the classification and localization tasks. To optimize the detailed information obtained by encoding, a concentration-aware guided upsampling module is further introduced to replace the traditional upsampling operation for resolution restoration. We also add a content capture normalization module to enhance the features extracted in the encoding stage. Our approach uses fewer parameters and significantly outperforms previous results on two very high resolution (VHR) datasets: 84.81% (vs. 82.42%) on the Potsdam dataset and 86.76% (vs. 82.74%) on the Jiage dataset.
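The abstract's concentration-aware guided upsampling module is not specified; a common way to make upsampling "guided" is to modulate the interpolation weights by a high-resolution guide signal, as in joint bilateral upsampling. The following is a minimal NumPy sketch of that general idea, assuming single-channel maps; it is a stand-in for, not a reconstruction of, the paper's module:

```python
import numpy as np

def guided_upsample(low, guide, sigma=0.5):
    """Sketch of guided upsampling: each high-res output pixel is a
    weighted average of its 4 nearest low-res neighbors, with bilinear
    spatial weights modulated by a range weight comparing the neighbor
    to the high-res guide value (joint-bilateral-style)."""
    H, W = guide.shape
    h, w = low.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            # fractional position in the low-res grid
            fy, fx = y * (h - 1) / (H - 1), x * (w - 1) / (W - 1)
            y0, x0 = int(fy), int(fx)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            acc = wsum = 0.0
            for yy, xx in ((y0, x0), (y0, x1), (y1, x0), (y1, x1)):
                # bilinear spatial weight times guide-driven range weight
                ws = (1 - abs(fy - yy)) * (1 - abs(fx - xx))
                wr = np.exp(-((guide[y, x] - low[yy, xx]) ** 2)
                            / (2 * sigma ** 2))
                acc += ws * wr * low[yy, xx]
                wsum += ws * wr
            out[y, x] = acc / wsum if wsum > 0 else low[y0, x0]
    return out
```

Compared with plain bilinear interpolation, the range weight keeps upsampled values close to neighbors that agree with the guide, which tends to preserve object boundaries during resolution restoration.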