2022
DOI: 10.1109/tgrs.2022.3179288
A Shallow-to-Deep Feature Fusion Network for VHR Remote Sensing Image Classification

Cited by 24 publications (9 citation statements)
References 44 publications
“…Following the feature map normalization, LFAGCU incorporates a mechanism for introducing nonlinearity. VHR remote sensing images often exhibit intricate terrain boundaries, textures, and distinctive features, necessitating the network's capability to capture diverse nonlinear characteristics inherent in the data [34]. In Fig.…”
Section: Methods
confidence: 99%
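The excerpt above describes a normalization-then-nonlinearity step inside the citing paper's LFAGCU unit. As a rough illustration of that generic pattern (not the exact LFAGCU design), the sketch below applies batch normalization to a convolutional feature map and then a nonlinear activation; the channel count, the 3x3 convolution, and the choice of GELU are assumptions made for this example.

```python
import torch
import torch.nn as nn

class NormThenNonlinearity(nn.Module):
    """Illustrative block: convolution -> feature map normalization -> nonlinearity."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.norm = nn.BatchNorm2d(channels)  # feature map normalization
        self.act = nn.GELU()                  # nonlinearity introduced afterwards (assumed choice)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.norm(self.conv(x)))

# Dummy VHR patch batch in (N, C, H, W) layout
features = torch.randn(2, 64, 128, 128)
out = NormThenNonlinearity(64)(features)
print(out.shape)  # torch.Size([2, 64, 128, 128])
```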
“…Fusion of shallow artificial features and deep features has shown its effectiveness in multi-source image classification [48,49]. In order to offer a more comprehensive representation for Martian landforms from the single-band and gray-scale images, the extracted abstract convolutional features from scene-level view and the multi-texture features from local landform view are fused.…”
Section: Multi-view Features Fusion and Classification
confidence: 99%
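The excerpt describes fusing deep scene-level convolutional features with hand-crafted local texture features before classification. As a minimal sketch of how such feature-level fusion is commonly done, the code below concatenates the two feature vectors and feeds them to a small classifier; the feature dimensions, the plain concatenation, and the classifier head are illustrative assumptions, not the cited paper's exact pipeline.

```python
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    """Illustrative fusion of deep (scene-level) and texture (local) features."""

    def __init__(self, deep_dim: int = 512, texture_dim: int = 32, num_classes: int = 10):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(deep_dim + texture_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, deep_feat: torch.Tensor, texture_feat: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([deep_feat, texture_feat], dim=1)  # feature-level concatenation
        return self.head(fused)

deep_feat = torch.randn(4, 512)    # e.g., pooled CNN features of a scene-level patch
texture_feat = torch.randn(4, 32)  # e.g., hand-crafted texture statistics (GLCM/Gabor), assumed
logits = FusionClassifier()(deep_feat, texture_feat)
print(logits.shape)  # torch.Size([4, 10])
```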
“…The proposed FGMCN and SPP methods are compared with some algorithms for performance validation, including ResNet-34 [20], SSRN [39], MSPSSRN [40], AMDF [3], CANet [41], and SDF²N [42] to ascertain the efficacy of the proposed approach, which are all CNN-based. Among them, ResNet-34 is a standard residual network, CANet is a residual network with an attention mechanism, SSRN is for hyperspectral images, while MSPSSRN, AMDF and SDF²N are specifically designed for multispectral images.…”
Section: Parameter Setting
confidence: 99%