2022
DOI: 10.1109/lgrs.2020.3016769
A Deep Neural Network Combined With Context Features for Remote Sensing Scene Classification

Cited by 9 publications (7 citation statements)
References 32 publications
“…Method | 80% Training Ratio (OA) | 50% Training Ratio (OA)
GoogLeNet [40] | 94.31 ± 0.89 | 92.70 ± 0.60
VGG-16 [40] | 95.21 ± 1.20 | 94.14 ± 0.69
CRAN [42] | 95.75 ± 0.80 | 94.21 ± 0.75
MobileNet V2 [43] | 99.01 ± 0.21 | 97.88 ± 0.31
SE-MDPMNet [44] | 98.95 ± 0.12 | 98.36 ± 0.14
Two-Stream Fusion [45] | 98.02 ± 1.03 | 96.97 ± 0.75
ViT [4] | 99.29 ± 0.34 | 98.75 ± 0.21
CFDNN [46] | 98.62 ± 0.27 | 97.65 ± 0.18
Inception-v3-CapsNet [18] | 99.05 ± 0.24 | 97.59 ± 0.16
GSSF [47] | 99.24 ± 0.47 | 97.86 ± 0.56
PCNet [48] | 99.25 ± 0.37 | 98.71 ± 0.22
GAN [26] | 98.58 ± 0.33 | 97.54 ± 0.
As shown in Figure 7, a confusion matrix over all 21 classes was also created to further examine the performance of FCIHMRT at the 80% training ratio.…”
Section: Methods, 80% Training Ratio (OA), 50% Training Ratio (OA)
Citation type: mentioning (confidence: 99%)
“…Shi et al. [32] proposed a dense fusion of multi-level features that uses 3 × 3 depthwise separable convolution and 1 × 1 standard convolution to extract the information of the current layer and fuse it with the features extracted from the previous layer. Deng et al. [33] proposed a deep neural network incorporating contextual features, using a pre-trained VGG-16 as the feature extractor to obtain feature maps. Each feature map is then fed into two parallel modules, global average pooling (GAP) and long short-term memory (LSTM), which extract global and contextual features, respectively; the global and contextual features are finally concatenated.…”
Section: Classification of Remote Sensing Scene Images
Citation type: mentioning (confidence: 99%)
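The quoted description of Deng et al. [33] (the paper indexed here) maps onto a small two-branch model: a pre-trained VGG-16 backbone whose final feature map feeds GAP for a global feature and an LSTM over the spatial positions for a contextual feature, with the two concatenated for classification. Below is a minimal PyTorch sketch of that layout; the class name, LSTM size, sequence ordering, and classifier head are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of the two-branch design described above: pre-trained
# VGG-16 backbone -> parallel GAP and LSTM branches -> concatenation.
# Module names and sizes are assumptions, not the paper's configuration.
import torch
import torch.nn as nn
from torchvision import models

class ContextFeatureNet(nn.Module):
    def __init__(self, num_classes=21, lstm_hidden=256):
        super().__init__()
        # Pre-trained VGG-16 convolutional layers as the feature extractor.
        self.backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features
        # Branch 1: global average pooling -> one 512-d global feature.
        self.gap = nn.AdaptiveAvgPool2d(1)
        # Branch 2: LSTM over the spatial positions of the feature map,
        # treating each position as one step of a 512-d sequence.
        self.lstm = nn.LSTM(input_size=512, hidden_size=lstm_hidden,
                            batch_first=True)
        # Classifier on the concatenated global + contextual features.
        self.fc = nn.Linear(512 + lstm_hidden, num_classes)

    def forward(self, x):
        f = self.backbone(x)                node = None  # placeholder removed
        return self._head(f)
```

Correction to the sketch's forward pass (kept separate for readability):

```python
    def forward(self, x):
        f = self.backbone(x)                # (B, 512, H, W)
        g = self.gap(f).flatten(1)          # (B, 512) global feature
        seq = f.flatten(2).transpose(1, 2)  # (B, H*W, 512) position sequence
        _, (h, _) = self.lstm(seq)          # h: (1, B, lstm_hidden)
        c = h.squeeze(0)                    # (B, lstm_hidden) contextual feature
        return self.fc(torch.cat([g, c], dim=1))

# Example: a 224x224 RGB batch yields logits for 21 scene classes (UC Merced).
logits = ContextFeatureNet()(torch.randn(2, 3, 224, 224))
```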
“…To restrict the values of μ and λ to a certain range, the limiting condition that μ + λ = 2 is added, as shown in formula (11). A study of the gradients of DICE and BCE finds that the DICE loss is larger than the BCE loss.…”
Section: Experiments on Weight of Loss Function
Citation type: mentioning (confidence: 99%)
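The quoted constraint describes a two-term loss of the form L = μ·L_DICE + λ·L_BCE with μ + λ = 2, so a single free weight suffices. Below is a minimal Python sketch under that reading; the function name, smoothing term, and reduction are illustrative assumptions, not taken from the cited paper.

```python
# Sketch of a weighted DICE + BCE loss with the limiting condition
# mu + lambda = 2; parameterizing by mu alone enforces the constraint.
import torch
import torch.nn.functional as F

def dice_bce_loss(logits, targets, mu=1.0, smooth=1.0):
    """Combined loss: mu * DICE + (2 - mu) * BCE, so the weights sum to 2."""
    probs = torch.sigmoid(logits)
    intersection = (probs * targets).sum()
    dice = 1 - (2 * intersection + smooth) / (probs.sum() + targets.sum() + smooth)
    bce = F.binary_cross_entropy_with_logits(logits, targets)
    lam = 2.0 - mu  # limiting condition: mu + lambda = 2
    return mu * dice + lam * bce

# Example: binary segmentation logits against a ground-truth mask.
loss = dice_bce_loss(torch.randn(2, 1, 64, 64),
                     torch.randint(0, 2, (2, 1, 64, 64)).float())
```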
“…In recent years, deep learning has become a prominent topic in artificial intelligence research. Deep learning methods, typified by convolutional neural networks, have achieved notable results in image classification [9][10][11], semantic segmentation [12,13], feature extraction [14], smart grids [15,16], and other fields. Methods based on convolutional neural networks can complete the modeling process by learning features automatically, avoiding the incomplete modeling caused by human intervention at an early stage.…”
Section: Introduction
Citation type: mentioning (confidence: 99%)