Detection and neutralization of surface-laid and buried landmines has been a slow and dangerous endeavor for military forces and humanitarian organizations throughout the world. In an effort to make the process faster and safer, scientists have begun to exploit the ever-evolving passive electro-optical realm, from both a broadband and a multi- or hyperspectral perspective. Accompanying this exploitation is the development of mine detection algorithms that take advantage of spectral features exhibited by mine targets, which are only available in a multi- or hyperspectral data set. Difficulty in algorithm development arises from a lack of the robust data needed to appropriately test the validity of an algorithm's results. This paper discusses the development of synthetic data using the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model. A synthetic landmine scene has been modeled after data collected at a US Army arid testing site by the University of Hawaii's Airborne Hyperspectral Imager (AHI). The synthetic data has been created and validated to represent the surrogate minefield thermally, spatially, spectrally, and temporally over the 7.9 to 11.5 micron region using 70 bands of data. Validation of the scene has been accomplished by direct comparison to the AHI truth data using qualitative band-to-band visual analysis, Rank Order Correlation comparison, Principal Components dimensionality analysis, and an evaluation of the RX algorithm's performance. This paper discusses landmine detection phenomenology, describes the steps taken to build the scene and the modeling methods utilized to overcome input parameter limitations, and compares the synthetic scene to truth data.
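The RX anomaly detector named in the validation above scores each pixel by its Mahalanobis distance from the scene-wide background statistics, so anomalous targets (such as mines against soil) stand out without any target signature library. As a minimal illustrative sketch in Python with NumPy (not the authors' implementation; the global-background variant is assumed, and the pseudo-inverse is used to guard against a singular covariance):

```python
import numpy as np

def rx_detector(cube):
    """Global RX anomaly detector.

    cube: hyperspectral image of shape (rows, cols, bands).
    Returns a (rows, cols) map of Mahalanobis distances from the
    global background mean and covariance; large values flag anomalies.
    """
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands).astype(float)

    # Background statistics estimated from the whole scene.
    mu = pixels.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(pixels, rowvar=False))

    # Mahalanobis distance for every pixel: d^T C^-1 d.
    diff = pixels - mu
    scores = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
    return scores.reshape(rows, cols)
```

In practice the detector is often run with local (sliding-window) background statistics rather than the global estimate sketched here; the global form is the simplest baseline for validating whether synthetic and real data produce comparable anomaly maps.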
Traditionally, synthetic imagery has been constructed to simulate images captured with low-resolution, nadir-viewing sensors. Advances in sensor design have driven a need to simulate scenes not only at higher resolutions but also from oblique view angles. The primary efforts of this research include real image capture, scene construction and modeling, and validation of the synthetic imagery in the reflective portion of the spectrum. High-resolution imagery of an area named MicroScene at the Rochester Institute of Technology was collected from an oblique view angle using the Chester F. Carlson Center for Imaging Science's MISI and WASP sensors. Three Humvees, the primary targets, were placed in the scene under three different levels of concealment. Following the collection, a synthetic replica of the scene was constructed and then rendered with the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model, configured to recreate the scene both spatially and spectrally based on actual sensor characteristics. Finally, a validation of the synthetic imagery against the real images of MicroScene was accomplished using a combination of qualitative analysis, Gaussian maximum likelihood classification, and the RX algorithm. The model was updated after each validation using a cyclical development approach. The purpose of this research is to provide a level of confidence in the synthetic imagery produced by DIRSIG so that it can be used to train and develop algorithms for real-world concealed target detection.
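The Gaussian maximum likelihood classification used in the validation above assigns each pixel to the class whose multivariate normal model gives it the highest likelihood; comparing the resulting class maps for real and synthetic imagery is one way to test spectral fidelity. A minimal sketch in Python with NumPy, assuming per-class means and covariances have already been estimated from training samples (function and variable names are illustrative, not from the original work):

```python
import numpy as np

def gml_classify(pixels, class_means, class_covs):
    """Gaussian maximum likelihood classifier.

    pixels:      (n, bands) array of spectra to label.
    class_means: list of (bands,) mean vectors, one per class.
    class_covs:  list of (bands, bands) covariance matrices.
    Returns an (n,) array of class indices.
    """
    scores = []
    for mu, cov in zip(class_means, class_covs):
        cov_inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        d = pixels - mu
        # Log-likelihood up to a constant: -0.5*(ln|C| + d^T C^-1 d)
        maha = np.einsum('ij,jk,ik->i', d, cov_inv, d)
        scores.append(-0.5 * (logdet + maha))
    return np.argmax(np.stack(scores, axis=1), axis=1)
```

Equal class priors are assumed here; with unequal priors a `+ ln p(class)` term would be added to each score before taking the argmax.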
Global illumination rendering algorithms are capable of producing images that are visually realistic; however, this typically comes at a large computational expense. The overarching goal of this research was to compare different rendering solutions in order to understand why some yield better results when applied to rendering synthetic objects into real photographs. Because rendered images are ultimately viewed by human observers, it was logical to use psychophysics to investigate these differences. A psychophysical experiment was conducted in which the composite images were judged for accuracy against the original photograph. In addition, iCAM, an image color appearance model, was used to calculate image differences for the same set of images. In general, it was determined that any full global illumination solution is better than a direct-illumination-only solution. It was also discovered that the quality of the full rendering, with all of its artifacts, is not necessarily an indicator of the judged accuracy of the final composite image. Finally, initial results show promise in using iCAM to predict a relationship similar to the psychophysical results, which could eventually be used in-the-rendering-loop to achieve photo-realism.