2021
DOI: 10.1109/taes.2020.3031435
Explainability of Deep SAR ATR Through Feature Analysis

Abstract: Understanding the decision-making process of deep learning networks is a key challenge that has rarely been investigated for Synthetic Aperture Radar (SAR) images. In this paper, a set of new analytical tools is proposed and applied to a Convolutional Neural Network (CNN) handling Automatic Target Recognition (ATR) on two SAR datasets containing military targets. Firstly, an analysis of the respective influence of target, shadow and background areas on classification performance is carried out. The shadow app…
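As one way to picture the influence analysis outlined in the abstract, the following is a minimal, hypothetical sketch: keep only one region type (target, shadow, or background) per image and measure how accuracy changes. The region masks, the `predict` function, and all names are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def region_influence(images, masks, labels, predict):
    """Measure classification accuracy when only one region type is kept.

    `masks` maps a region name ('target', 'shadow', 'background') to a
    boolean array broadcastable to `images`; everything outside the
    region is zeroed before evaluation. `predict` is a hypothetical
    batch classifier returning predicted labels.
    """
    scores = {}
    for name, mask in masks.items():
        kept = images * mask  # zero out everything outside the region
        scores[name] = float(np.mean(predict(kept) == labels))
    return scores
```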

Cited by 29 publications (17 citation statements)
References 42 publications
“…To know a target or scene for analysis, detection, or classification, it is desirable to have its SAR image acquired from different positions [26], [27]. Different relative viewing angles (resulting from changes of flight direction or target movement across revisits) result in a kind of target rotation in the SAR image.…”
Section: A. Basic SAR Principles
confidence: 99%
“…In this part, we use the visualization method to analyze the key parts that affect the target classification. In Belloni et al. [53], researchers use a black square to occlude target images, and the percentage of correctly classified images is used as the intensity of the pixels located at the center of the black square in the classification map. Unlike that approach, we occlude target parts instead of pixel squares.…”
Section: E. Analysis on Key Parts with Part Occlusion
confidence: 99%
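The sliding-square occlusion map described in this statement lends itself to a compact implementation. Below is a minimal sketch, assuming grayscale image batches of shape (n, h, w) with values in [0, 1] and a hypothetical batch classifier `predict` that returns predicted labels; `patch` and `stride` are illustrative parameters, not values from either paper.

```python
import numpy as np

def occlusion_map(images, labels, predict, patch=8, stride=4):
    """Sliding-square occlusion sensitivity map (after Belloni et al.).

    For each patch position, occlude that square in every image and
    record the fraction still classified correctly; that fraction
    becomes the map value at the square's center. Positions never
    visited keep the no-occlusion baseline value of 1.0.
    """
    n, h, w = images.shape
    heat = np.ones((h, w), dtype=np.float32)
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = images.copy()
            occluded[:, y:y + patch, x:x + patch] = 0.0  # black square
            acc = np.mean(predict(occluded) == labels)
            # accuracy under occlusion -> intensity at the square's center
            heat[y + patch // 2, x + patch // 2] = acc
    return heat
```

Low values in the returned map mark regions whose occlusion hurts classification most, i.e. the parts the network relies on.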
“…The learned correlation between these weights and each class's image features is critical to ensure that the CNN correctly identifies new images being analyzed [2]. Even the slightest perturbation of pixel intensities in test image features can derail accurate image classification [3]. In order to accurately label any new image to a learned dataset class, the set of weights must represent a sufficiently diverse set of feature representations belonging to each image class [2].…”
Section: Introduction
confidence: 99%
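To make the fragility claim concrete, here is a small illustrative check of how often tiny random pixel perturbations flip a classifier's decision. It is a sketch only: `predict` is a hypothetical batch classifier, `eps` bounds the per-pixel perturbation, and images are assumed normalized to [0, 1]; the cited work concerns crafted (adversarial) perturbations, which random noise does not reproduce.

```python
import numpy as np

def perturbation_flip_rate(images, labels, predict, eps=0.01, trials=10):
    """Estimate the fraction of decisions changed by small random noise.

    Repeatedly adds uniform noise in [-eps, eps] to each pixel and
    compares the classifier's output against its clean-image output.
    """
    base = predict(images)  # clean-image predictions as reference
    flips = 0.0
    for _ in range(trials):
        noise = np.random.uniform(-eps, eps, size=images.shape)
        noisy = np.clip(images + noise, 0.0, 1.0)
        flips += np.mean(predict(noisy) != base)
    return flips / trials
```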
“…This limitation ultimately leads to the need to train CNNs with a large number of examples covering all potential representations of the same image class. For imaging techniques involving electromagnetic (EM) probing, this often means parameterizing all applied incidence angles, phases, and instances of capture intensities and frequencies under different environments and backgrounds [3]. Similarly, for any other imaging platform, this leads to parameterizing every factor contributing to how each class image is developed or represented post-processing [4].…”
Section: Introduction
confidence: 99%