Supervised deep learning algorithms are redefining the state of the art in object detection and classification. However, training these algorithms requires extensive datasets that are typically expensive and time-consuming to collect. In the field of defence and security, this can become impractical when data is of a sensitive nature, such as infrared imagery of military vessels. Consequently, algorithm development and training are often conducted in synthetic environments, but this brings into question the generalisability of the solution to real-world data. In this paper we investigate training deep learning algorithms for infrared automatic target recognition without using real-world infrared data. A large synthetic dataset of infrared images of maritime vessels in the long-wave infrared waveband was generated using target-missile engagement simulation software and ten high-fidelity computer-aided design models. Multiple approaches to training a YOLOv3 architecture were explored and subsequently evaluated using a video sequence of real-world infrared data. Experiments demonstrated that supplementing the training data with a small sample of semi-labelled pseudo-IR imagery produced a marked improvement in performance. Despite the absence of real infrared training data, high average precision and recall scores of 99% and 93% respectively were achieved on our real-world test data. To further the development and benchmarking of automatic target recognition algorithms, this paper also contributes our dataset of photo-realistic synthetic infrared images.
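The key finding above is that mixing a small sample of pseudo-IR imagery into a large synthetic training set improves real-world performance. The abstract does not specify the mixing ratio or sampling strategy, so the fraction, function names, and file naming below are illustrative assumptions, not the authors' method; a minimal sketch of such a data-mixing step might look like:

```python
import random


def build_training_set(synthetic, pseudo_ir, pseudo_fraction=0.1, seed=0):
    """Combine a large synthetic dataset with a small sample of
    pseudo-IR images.

    NOTE: pseudo_fraction=0.1 and the sampling strategy are assumptions
    for illustration; the paper's actual ratio is not stated in the abstract.
    """
    rng = random.Random(seed)
    # Sample at most `pseudo_fraction` of the synthetic set's size,
    # capped by how many pseudo-IR images are actually available.
    k = min(len(pseudo_ir), max(1, int(len(synthetic) * pseudo_fraction)))
    combined = list(synthetic) + rng.sample(pseudo_ir, k)
    rng.shuffle(combined)  # avoid ordering bias during training
    return combined


# Hypothetical file lists standing in for the real datasets.
synthetic = [f"synth_{i:04d}.png" for i in range(1000)]
pseudo_ir = [f"pseudo_{i:03d}.png" for i in range(50)]
train = build_training_set(synthetic, pseudo_ir)
print(len(train))  # 1000 synthetic + all 50 pseudo-IR → 1050
```

The shuffled list would then be fed to the YOLOv3 training pipeline in place of the purely synthetic set.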
Open-source technologies and solutions have paved the way for making science accessible the world over. Motivated to contribute to open-source methods, our current research presents a complete workflow for building a microscope using 3D printing and easily accessible optical components to collect images of biological samples. These images are then classified using machine learning algorithms, both to illustrate the effectiveness of the method and to show the difficulty of classifying images that are visually similar. The second outcome of this research is an openly accessible dataset of the collected images, OPEN-BIOset, made available to the machine learning community for future research. The research adopts the OpenFlexure Delta Stage microscope (https://openflexure.org/), which allows motorised control and maximum stability of the samples during imaging. A Raspberry Pi camera is used to image the samples in a transmission-based illumination setup. The imaging data collected is catalogued and organised for classification using TensorFlow. Using visual interpretation, we created subsets from amongst the samples and experimented to find the best classification results. We found that by removing visually similar samples, the categorical accuracy achieved was 99.9% and 99.59% on the training and testing sets respectively. Our research shows evidence of the efficacy of open-source tools and methods. Future approaches will use higher-resolution images for classification, and other modalities of microscopy will be realised based on the OpenFlexure microscope.
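The abstract reports results as categorical accuracy, the standard TensorFlow/Keras metric that counts a prediction correct when the argmax of the predicted class probabilities matches the true label. As a hedged illustration (the data and values below are made up, not from OPEN-BIOset), the metric can be re-implemented in a few lines:

```python
def categorical_accuracy(y_true, y_pred):
    """Fraction of samples whose argmax predicted class matches the
    integer label, mirroring Keras's CategoricalAccuracy semantics."""
    correct = sum(
        1
        for label, probs in zip(y_true, y_pred)
        # index of the highest predicted probability
        if max(range(len(probs)), key=probs.__getitem__) == label
    )
    return correct / len(y_true)


# Toy example: four samples over three classes; the last is misclassified.
y_true = [0, 1, 2, 1]
y_pred = [
    [0.90, 0.05, 0.05],
    [0.10, 0.80, 0.10],
    [0.20, 0.20, 0.60],
    [0.60, 0.30, 0.10],  # argmax is class 0, label is 1
]
print(categorical_accuracy(y_true, y_pred))  # 0.75
```

A gap between training accuracy (99.9%) and testing accuracy (99.59%) computed this way gives a quick read on generalisation.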