2022
DOI: 10.1109/lgrs.2022.3173473

Multilayer Feature Fusion Network With Spatial Attention and Gated Mechanism for Remote Sensing Scene Classification


Citations: Cited by 18 publications (9 citation statements)
References: 24 publications
“…Second, researchers proposed different single-model methods by employing functional modules as auxiliary fusion tools. For example, Meng et al. (2022) proposed a single-CNN method by using two self-designed modules for multilayer feature fusion.…”
Section: Related Work
mentioning confidence: 99%
“…Second, researchers proposed different single-model methods by employing functional modules as auxiliary fusion tools. For example, Meng et al. (2022) proposed a single-CNN method by using two self-designed modules for multilayer feature fusion. Tian et al. (2021) and Wan et al. (2021) proposed two different single-CNN methods by fusing multi-scale features, where the former employs a DenseNet and the latter a ResNeXt.…”
Section: Related Work
mentioning confidence: 99%
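
The statements above describe single-CNN methods that fuse features drawn from several layers of one backbone. The sketch below is a minimal illustration of that general idea, assuming a ResNet-50 backbone and a simple project-resize-concatenate fusion head; the class name, the 256-channel width, and the fusion strategy are assumptions for illustration, not the self-designed modules of Meng et al. (2022) or the multi-scale schemes of Tian et al. (2021) and Wan et al. (2021).

```python
# Minimal sketch of multilayer (multi-stage) feature fusion in a single CNN.
# All module names and the concatenation-based fusion are illustrative
# assumptions, not the specific designs from the cited papers.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50


class MultilayerFusionClassifier(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        backbone = resnet50(weights=None)  # torchvision >= 0.13; load weights in practice
        # Keep the stem and the four residual stages so their outputs can be tapped.
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1,
                                  backbone.relu, backbone.maxpool)
        self.stages = nn.ModuleList([backbone.layer1, backbone.layer2,
                                     backbone.layer3, backbone.layer4])
        stage_channels = [256, 512, 1024, 2048]  # ResNet-50 stage widths
        # Project every stage to a common width before fusion.
        self.proj = nn.ModuleList([nn.Conv2d(c, 256, kernel_size=1)
                                   for c in stage_channels])
        self.fuse = nn.Conv2d(256 * len(stage_channels), 256, kernel_size=1)
        self.classifier = nn.Linear(256, num_classes)

    def forward(self, x):
        x = self.stem(x)
        feats = []
        for stage, proj in zip(self.stages, self.proj):
            x = stage(x)
            feats.append(proj(x))
        # Resize all stage features to the coarsest resolution and concatenate.
        target = feats[-1].shape[-2:]
        feats = [F.adaptive_avg_pool2d(f, target) for f in feats]
        fused = self.fuse(torch.cat(feats, dim=1))
        pooled = F.adaptive_avg_pool2d(fused, 1).flatten(1)
        return self.classifier(pooled)


if __name__ == "__main__":
    model = MultilayerFusionClassifier(num_classes=45)  # e.g. a 45-class scene dataset
    logits = model(torch.randn(2, 3, 224, 224))
    print(logits.shape)  # torch.Size([2, 45])
```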
“…Among these deep learning-based methods, CNNs are the most commonly utilized [2], [18]-[21], [44], as the convolutional filters are effective at extracting multi-level features from the image. In the past two years, CNN-based methods (e.g., DSENet [45], MS2AP [46], MSDFF [47], CADNet [48], LSENet [5], GBNet [49], MBLANet [50], MG-CAP [51], Contourlet CNN [52], STHP [53], SAGM [54], DARTS [55], LML [56], GCSANet [57]) have remained a popular choice for aerial scene classification. On the other hand, recurrent neural network (RNN) based [25], auto-encoder based [58], [59], and generative adversarial network (GAN) based [60], [61] approaches have also been reported to be effective for aerial scene classification.…”
Section: A. Aerial Scene Classification
mentioning confidence: 99%
“…We compare the performance of our AGOS with three handcrafted features (PLSA, BOW, LDA) [17], [87], three typical CNN models (AlexNet, VGG, GoogLeNet) [17], [87], twenty-two of the latest CNN-based state-of-the-art approaches (MIDCNet [2], RANet [29], APNet [88], SPPNet [20], DCNN [28], TEXNet [89], MSCP [18], VGG+FV [21], DSENet [45], MS2AP [46], MSDFF [47], CADNet [48], LSENet [5], GBNet [49], MBLANet [50], MG-CAP [51], Contourlet CNN [52], STHP [53], SAGM [54], DARTS [55], LML [56], GCSANet [57]), one RNN-based approach (ARCNet [25]), two auto-encoder-based approaches (SGUFL [59], PARTLETS [58]), and two GAN-based approaches (MARTA [60], AGAN [61]). Performance with ResNet-50, ResNet-101, and DenseNet-121 backbones is reported for fair evaluation, as some of the latest methods [47], [48] use much deeper networks as the backbone.…”
Section: Comparison With State-of-the-art Approaches
mentioning confidence: 99%
“…Although CNNs can automatically extract abundant features from RS images, they are not good at mining strongly discriminative information for complicated RS scenes with high intra-class variability, large inter-class similarity, and objects with different scales, leading to limited performance. In human perception, “attention” plays a very important role in deciding “what” and “where” to focus [43], thereby enhancing the representation of target objects. Motivated by this, we introduce the idea of “attention” into the architecture and propose a novel MAANet with a series of specially designed attention models for dealing with the above challenges.…”
Section: Introduction
mentioning confidence: 99%
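
The cited paper's title names two of the mechanisms discussed in these statements: spatial attention (deciding "where" to focus) and a gated mechanism (deciding how much of each feature stream to keep). The sketch below shows minimal, generic versions of both ideas; the module designs are assumptions for illustration only and are not reconstructions of the modules proposed in the cited work.

```python
# Minimal sketches of a spatial attention mask and a gated fusion of two
# feature streams. Both designs are generic illustrations, not the paper's modules.
import torch
import torch.nn as nn


class SpatialAttention(nn.Module):
    """Re-weights each spatial location with a sigmoid mask ("where" to focus)."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Channel-wise average and max summarize each spatial location.
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        mask = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * mask


class GatedFusion(nn.Module):
    """Learns a per-channel gate in (0, 1) to blend two feature maps of equal shape."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, a, b):
        g = self.gate(torch.cat([a, b], dim=1))
        return g * a + (1 - g) * b


if __name__ == "__main__":
    a = torch.randn(2, 256, 14, 14)
    b = torch.randn(2, 256, 14, 14)
    attended = SpatialAttention()(a)
    fused = GatedFusion(256)(attended, b)
    print(fused.shape)  # torch.Size([2, 256, 14, 14])
```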