Obtaining measured Synthetic Aperture Radar (SAR) data for training Automatic Target Recognition (ATR) models is often prohibitively expensive, time-consuming, and logistically complex. In response, researchers have developed methods for creating synthetic SAR data for targets using electromagnetic prediction software, which is then used to enrich an existing measured training dataset. However, this approach relies on the availability of some amount of measured data. In this work, we focus on the case of having 100% synthetic training data, while testing on only measured data. We use the SAMPLE dataset publicly released by AFRL, and find significant challenges to learning generalizable representations from the synthetic data due to distributional differences between the two modalities and extremely limited training sample quantities. Using deep learning-based ATR models, we propose data augmentation, model construction, loss function, and ensembling techniques to enhance the representation learned from the synthetic data, ultimately achieving over 95% accuracy on the SAMPLE dataset. We then analyze the behavior of our ATR models using saliency and feature-space investigations and find that they learn a more cohesive representation of the measured and synthetic data. Finally, we evaluate the out-of-library detection performance of our synthetic-only models and find that they are nearly 10% more effective than baseline methods at identifying measured test samples that do not belong to the training class set. Overall, our techniques and their compositions significantly enhance the feasibility of training ATR models exclusively on synthetic data.
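The abstract above mentions ensembling as one of the techniques for improving synthetic-only training, without specifying the scheme. A common and minimal form of ensembling is averaging the per-model softmax distributions before taking the argmax; the sketch below illustrates that generic idea only, and the function names and shapes are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(logits_per_model):
    # logits_per_model: list of (batch, num_classes) arrays, one per model.
    # Average the per-model softmax distributions, then take the argmax.
    probs = np.stack([softmax(l) for l in logits_per_model])
    return probs.mean(axis=0).argmax(axis=-1)
```

Averaging probabilities (rather than raw logits or hard votes) lets a confident model outweigh an uncertain one while still producing a valid distribution for downstream thresholding.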
Training deep learning-based Synthetic Aperture Radar Automatic Target Recognition (SAR-ATR) systems for use in an "open-world" operating environment has thus far proven difficult. Most SAR-ATR systems are designed to achieve maximum accuracy for a limited set of classes, yet ignore the implications of encountering novel target classes during deployment. Even worse, the standard deep learning training objectives fundamentally inherit a closed-world assumption, and provide no guidance for how to handle out-of-distribution (OOD) data. In this work, we develop a novel training procedure called Adversarial Outlier Exposure (AdvOE) to co-design the ATR system for accuracy and OOD detection. Our method introduces a large, diverse and unlabeled auxiliary training dataset containing samples from the OOD set. The AdvOE objective encourages a Deep Neural Network to learn robust features of the in-distribution training data, while also promoting maximum entropy predictions for adversarially perturbed versions of the OOD data. We experiment with the recent SAMPLE dataset, and find our method nearly doubles OOD detection performance over the baseline in key settings, and excels when using only synthetic training data. As compared to several other advanced ATR training techniques, AdvOE also affords significant improvements in both classification and detection statistics. Finally, we conduct extensive experiments that measure the effect of OOD set granularity on detection rates; discuss the implications of using different detection algorithms; and develop a novel analysis technique to validate our findings and interpret the OOD detection problem from a new perspective.
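The AdvOE objective described above combines standard classification on in-distribution data with a maximum-entropy penalty on adversarially perturbed OOD data. Since maximizing prediction entropy is equivalent to minimizing cross-entropy against the uniform distribution, the combined loss can be sketched as below. This is a rough illustration under stated assumptions: the adversarial perturbation step that produces `ood_logits` is not shown, and the weighting term `lam` is a hypothetical hyperparameter, not a value from the paper:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def advoe_loss(id_logits, id_labels, ood_logits, lam=0.5):
    # Standard cross-entropy on labeled in-distribution samples.
    p_id = softmax(id_logits)
    ce = -np.log(p_id[np.arange(len(id_labels)), id_labels]).mean()

    # Cross-entropy of OOD predictions against the uniform distribution,
    # which pushes the network toward maximum-entropy (uninformative)
    # outputs on the auxiliary OOD set. In AdvOE, ood_logits would be
    # computed on adversarially perturbed OOD inputs (perturbation step
    # omitted here).
    p_ood = softmax(ood_logits)
    oe = -np.log(p_ood).mean(axis=-1).mean()

    return ce + lam * oe
```

Note that the OOD term is minimized exactly when the network predicts the uniform distribution on OOD inputs, so confident predictions on outliers are penalized.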