2022
DOI: 10.1109/tgrs.2021.3106915
SAR Target Classification Using the Multikernel-Size Feature Fusion-Based Convolutional Neural Network

Cited by 59 publications (32 citation statements)
References 49 publications
“…Table V and Fig. 9 also reveal that, in terms of fusion feature-based methods, compared to [47] and [55], [60] and the proposed framework have higher accuracy and AUC values, which can be attributed to the latter's utilization of the complementary advantages of handcrafted features and deep features [60], while the former only fuse handcrafted features or deep features. The proposed feature fusion framework is slightly better than [60], which is due to the more effective use of the phase information of the SAR data through the monogenic signal and CVNLNet.…”
Section: Recognition Based On the Feature Fusion Framework
mentioning, confidence: 90%
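The complementary fusion the statement credits to [60] and to the proposed framework is, at its core, feature-level concatenation of handcrafted descriptors with CNN-derived deep features. The page gives no code, so the following is only a rough, hypothetical PyTorch sketch of that generic pattern; all module names and dimensions (FusionClassifier, handcrafted_dim, deep_dim) are illustrative and not taken from the cited papers.

```python
# Hypothetical sketch of feature-level fusion (not the authors' code): a
# handcrafted feature vector is concatenated with deep CNN features before
# a shared classifier. Layer sizes are illustrative only.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, handcrafted_dim: int, deep_dim: int, num_classes: int):
        super().__init__()
        # Toy CNN branch standing in for any deep feature extractor.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, deep_dim), nn.ReLU(),
        )
        # Classifier operating on the concatenated (fused) feature vector.
        self.head = nn.Linear(handcrafted_dim + deep_dim, num_classes)

    def forward(self, image: torch.Tensor, handcrafted: torch.Tensor) -> torch.Tensor:
        deep = self.cnn(image)                         # deep features from the SAR chip
        fused = torch.cat([deep, handcrafted], dim=1)  # feature-level fusion by concatenation
        return self.head(fused)

# Usage with random stand-in data: a batch of 4 single-channel 64x64 chips
# plus a 32-dimensional handcrafted descriptor per chip.
model = FusionClassifier(handcrafted_dim=32, deep_dim=64, num_classes=10)
logits = model(torch.randn(4, 1, 64, 64), torch.randn(4, 32))
print(logits.shape)  # torch.Size([4, 10])
```

Concatenation followed by a linear head is the simplest possible fusion choice; weighted or attention-based fusion schemes are common alternatives.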
“…They include handcrafted feature-based methods, such as the moment method [21], the attributed scattering center (ASC) model [22], and Joint Sparse Representation (JSR) of monogenic components [30], as well as end-to-end neural networks, such as A-ConvNet [31], CV-CNN [39], CV-FCNN [43], and RVNLNet with the same architecture as CVNLNet. Furthermore, a fusion framework based on multiple handcrafted features [47], MKSFF-CNN based on the fusion of multi-scale deep features [55], and FEC based on the fusion of handcrafted features and deep features [60] are also used for comparison. Table V lists the classification accuracy of the different methods.…”
Section: Recognition Based On the Feature Fusion Framework
mentioning, confidence: 99%
“…In addition, we also compared the receiver operating characteristic (ROC) curves of different models. The ROC curve is an important metric that can be used to evaluate the detection effect under the same false positive rate [59,60]. The higher the true positive rate (TPR) at a given false positive rate, the better the detection effect.…”
mentioning, confidence: 99%
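As a concrete illustration of the metric described in the statement above (not code from the citing paper), the sketch below computes an ROC curve by sweeping a decision threshold over detector scores and reports the area under the curve (AUC). The `labels` and `scores` arrays are hypothetical stand-in inputs.

```python
# Minimal ROC/AUC sketch for a binary detection task, assuming labels of
# 1 (target) and 0 (clutter) with real-valued detector scores.
import numpy as np

def roc_curve(labels: np.ndarray, scores: np.ndarray):
    """Sweep the decision threshold over all scores; return (FPR, TPR) arrays."""
    order = np.argsort(-scores)                 # rank detections by descending score
    labels = labels[order].astype(float)
    tp = np.cumsum(labels)                      # true positives kept at each threshold
    fp = np.cumsum(1.0 - labels)                # false positives kept at each threshold
    tpr = tp / max(labels.sum(), 1.0)           # true positive rate
    fpr = fp / max((1.0 - labels).sum(), 1.0)   # false positive rate
    return np.concatenate(([0.0], fpr)), np.concatenate(([0.0], tpr))

def auc(fpr: np.ndarray, tpr: np.ndarray) -> float:
    """Area under the ROC curve via the trapezoidal rule."""
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0))

# Hypothetical detector output: noisy scores that loosely track the labels.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)
scores = labels + rng.normal(scale=0.8, size=1000)

fpr, tpr = roc_curve(labels, scores)
print(f"AUC = {auc(fpr, tpr):.3f}")  # higher TPR at a given FPR => better detector
```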