One key advantage of the model-based approach for Automatic Target Recognition (ATR) is the wide range of targets and acquisition scenarios that can be accommodated without algorithm re-training. This flexibility accrues from the use of predictive models which can be adjusted to hypothesized scenarios on-line. Approaches which rely on measured signature exemplars as the source of reference data for signature matching are constrained to those scenarios represented in the reference database. The Moving and Stationary Target Recognition (MSTAR) program will advance the state of the art in model-based ATR by developing, evaluating, and testing algorithm performance against a set of Extended Operating Conditions (EOCs) designed to reflect real-world battlefield scenarios. In addition to full 360 deg target aspect coverage over a range of depression angles, the EOCs include variations in squint angle, target articulation and configuration, obscuration due to occlusion and/or layover, and intra-class target variability [1]. These conditions can have a profound impact on the nature of the target signature, necessitating the development of explicit prediction and reasoning algorithms to provide robust target recognition.

This paper provides a tutorial description of the impact of the MSTAR EOCs on SAR target signatures. A brief background discussion of the SAR imaging process is presented first. This is followed by a description of the impact of each EOC category on the target signature, along with synthetic imagery examples to illustrate this impact.

A key advantage of the MSTAR Model-Based Vision (MBV) paradigm is its flexibility to accommodate widely varying scenarios without algorithm re-training. The MSTAR program will demonstrate this advantage by designing and testing the MBV algorithm against measured data over a broad range of challenging real-world battlefield scenarios.
These Extended Operating Conditions (EOCs) represent conditions for which ATR algorithms to date have not been developed and tested. They are key drivers in the fundamental design of the MSTAR MBV algorithm [1].

228 / SPIE Vol. 2757, 0-8194-2138-3/96/$6.00
The Moving and Stationary Target Recognition (MSTAR) model-based automatic target recognition (ATR) system utilizes a paradigm which matches features extracted from an unknown SAR target signature against predictions of those features generated from models of the sensing process and candidate target geometries. The candidate target geometry yielding the best match between predicted and extracted features defines the identity of the unknown target. MSTAR will extend the current model-based ATR state of the art in a number of significant directions. These include: use of Bayesian techniques for evidence accrual, reasoning over target subparts, coarse-to-fine hypothesis search strategies, and explicit reasoning over target articulation, configuration, occlusion, and layover. These advances also imply significant technical challenges, particularly for the MSTAR feature Prediction Module (MPM). In addition to accurate electromagnetics, the MPM must provide traceback between input target geometry and output features, on-line target geometry manipulation, target subpart feature prediction, explicit models for local scene effects, and generation of sensitivity and uncertainty measures for the predicted features. This paper describes the MPM design which is being developed to satisfy these requirements. The overall module structure is presented, along with the specific design elements focused on MSTAR requirements. Particular attention is paid to design elements that enable on-line prediction of features within the time constraints mandated by model-driven ATR. Finally, the current status, development schedule, and further extensions in the module design are described.
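The predict-extract-match paradigm described above can be illustrated with a minimal sketch. All names, feature values, and the Gaussian scoring rule below are illustrative assumptions, not the MSTAR algorithm itself: each candidate geometry contributes a set of predicted scattering-center features, and a simple log-likelihood accrues evidence from the extracted features of the unknown signature.

```python
import math

# Hypothetical predicted features (range, cross-range, amplitude) for two
# candidate target geometries. Values are illustrative, not MSTAR data.
PREDICTED = {
    "target_A": [(1.0, 0.5, 10.0), (2.5, -0.3, 8.0), (4.0, 1.2, 6.0)],
    "target_B": [(0.5, 0.0, 12.0), (3.0, 0.8, 5.0)],
}

def match_score(extracted, predicted, sigma=0.5):
    """Toy Gaussian evidence accrual: each extracted feature contributes the
    log-likelihood of its nearest predicted feature in (range, cross-range)."""
    score = 0.0
    for (r, x, _a) in extracted:
        d2 = min((r - pr) ** 2 + (x - px) ** 2 for (pr, px, _pa) in predicted)
        score += -d2 / (2.0 * sigma ** 2)
    return score

def identify(extracted):
    """Return the candidate geometry whose predicted features best match."""
    return max(PREDICTED, key=lambda name: match_score(extracted, PREDICTED[name]))

# Features extracted from an "unknown" signature close to target_A's prediction.
extracted = [(1.05, 0.45, 9.0), (2.45, -0.35, 7.5)]
best = identify(extracted)
```

In a real system the per-feature uncertainty (`sigma` here) would come from the predictor's sensitivity and uncertainty measures rather than a fixed constant.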
We present a technique to extract the three-dimensional (3-D) bistatic scattering center model of a target at microwave frequencies from its CAD model. The method is based on the shooting and bouncing ray (SBR) technique and is an extension of our previous work on extracting the monostatic 3-D scattering center model of complex targets. Using SBR, we first generate the bistatic 3-D radar image of the target based on a one-look inverse synthetic aperture radar (ISAR) algorithm. Next, we use the image processing algorithm CLEAN to extract the 3-D position and strength of the scattering centers from the bistatic radar image. We test the algorithm by extracting bistatic 3-D scattering centers from several test targets and reconstructing bistatic signatures (RCS, range profile, ISAR imagery) using the bistatic scattering centers.
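The CLEAN step mentioned above can be sketched in two dimensions (the paper's version operates on the 3-D bistatic image). This is a minimal illustration, not the authors' implementation: iteratively locate the brightest residual pixel, record its position and amplitude as a scattering center, and subtract a correspondingly shifted and scaled copy of the point-spread function (PSF).

```python
import numpy as np

def clean_extract(image, psf, max_iters=10, threshold=0.1):
    """Minimal 2-D CLEAN sketch: peeling scattering centers off a radar image.

    Returns a list of (row, col, amplitude) tuples. Stops when the residual
    peak falls below `threshold` times the initial peak."""
    residual = image.astype(float).copy()
    pr, pc = psf.shape
    cr, cc = pr // 2, pc // 2          # PSF center pixel
    peak0 = np.abs(residual).max()
    centers = []
    for _ in range(max_iters):
        r, c = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
        amp = residual[r, c]
        if abs(amp) < threshold * peak0:
            break
        centers.append((r, c, amp))
        # Subtract amp * PSF centered at (r, c), clipping at image borders.
        r0, r1 = max(r - cr, 0), min(r - cr + pr, residual.shape[0])
        c0, c1 = max(c - cc, 0), min(c - cc + pc, residual.shape[1])
        p0, p1 = r0 - (r - cr), r1 - (r - cr)
        q0, q1 = c0 - (c - cc), c1 - (c - cc)
        residual[r0:r1, c0:c1] -= amp * psf[p0:p1, q0:q1]
    return centers

# Demo: plant two ideal point responses in an otherwise empty image.
psf = np.array([[0.1, 0.2, 0.1],
                [0.2, 1.0, 0.2],
                [0.1, 0.2, 0.1]])
image = np.zeros((8, 8))
image[2:5, 3:6] += 5.0 * psf   # scatterer at (3, 4), amplitude 5
image[5:8, 0:3] += 2.0 * psf   # scatterer at (6, 1), amplitude 2
centers = clean_extract(image, psf)
```

In the 3-D case the same loop runs over a volumetric image, and the extracted positions and strengths are then used to reconstruct bistatic signatures (RCS, range profiles, ISAR imagery) for validation.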