Abstract: Modern militaries rely on remote image sensors for real-time intelligence. A typical remote system consists of an unmanned aerial vehicle (UAV) with an attached camera. A video stream is sent from the UAV, through a bandwidth-constrained satellite connection, to an intelligence processing unit. In this research, an upgrade to this remote-video-stream method of collection is proposed. A set of synthetic images of a scene captured by a UAV in a virtual environment is sent to a pipeline of computer vision a…
“…This paper is a direct continuation of work performed by Roeber et al. 3 who demonstrated the use of SfM as a method of data compression for aerial imagery. Roeber's results were extremely promising, showing a compression of nearly 60% compared to the original set of imagery.…”
Section: Related Work (mentioning)
confidence: 63%
“…Building from recent research on this same topic using Structure from Motion (SfM), 3 which demonstrated 49%-60% compression from the original imagery, we propose a new SfM variant that is both faster and more spatially accurate than traditional methods, and most importantly, capable of calculating imagery replacements in real-time at 4 Hz on our test laptop.…”
Section: Application Of Real-time Structure From Motion (mentioning)
confidence: 99%
“…29,30 These two tools are freely available and relatively straightforward to use. These toolsets were chosen over the ones used by Roeber et al. 3 because they can be seamlessly integrated through a Python script and can produce a full reconstruction without any manual input. However, the final model appears more prone to error: the algorithm does not properly trim the model to the object of interest because of interference from the virtual world's skybox, a set of similar textures rendered as a static box around the camera at the configured view distance.…”
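The skybox-trimming problem described in this quote is amenable to a simple radial cutoff on the reconstructed point cloud: skybox textures reconstruct as spurious points clustered near the configured view distance, far from the object of interest. The sketch below is illustrative only; the threshold, the point-cloud layout, and the synthetic data are assumptions, not details from the paper.

```python
# Hedged sketch: removing skybox artifacts from an SfM point cloud with a
# radial cutoff around a robust (median) center. Pure-stdlib illustration.
import math
import random

def trim_skybox(points, max_radius=100.0):
    """Keep only points within max_radius of the cloud's coordinate-wise median.

    The median is robust to the minority of far-away skybox points, so it
    lands near the object of interest even before trimming.
    """
    def median(vals):
        s = sorted(vals)
        n = len(s)
        return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

    center = tuple(median([p[i] for p in points]) for i in range(3))
    return [p for p in points if math.dist(p, center) <= max_radius]

# Synthetic example: 1000 object points near the origin plus 50 "skybox"
# points reconstructed far away at roughly the view distance.
random.seed(0)
object_pts = [tuple(random.gauss(0.0, 5.0) for _ in range(3)) for _ in range(1000)]
skybox_pts = [tuple(random.gauss(500.0, 5.0) for _ in range(3)) for _ in range(50)]

trimmed = trim_skybox(object_pts + skybox_pts)
print(len(trimmed))  # only the 1000 object points survive the cutoff
```

A real pipeline would pick `max_radius` from the virtual world's configured view distance rather than a fixed constant.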
We propose a new algorithm variant for Structure from Motion (SfM) that enables real-time image processing of scenes imaged by aerial drones. Our SfM variant runs in real time at 4 Hz, an 80× computation-time speed-up over traditional SfM, and achieves a 90% size reduction of the original video imagery, with the added benefit of presenting the original two-dimensional (2D) video data as a three-dimensional (3D) virtual model. This opens many potential applications for real-time image processing and could make autonomous vision-based navigation possible by completely replacing a traditional live video feed. The generated 3D reconstruction is a spatially accurate representation of a live environment, precise enough to produce global positioning system (GPS) coordinates for any given point on an imaged structure, even in a GPS-denied environment.
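The headline figures in this abstract imply some simple arithmetic worth making explicit: a 4 Hz update rate leaves a 250 ms budget per reconstruction step, so an 80× speed-up implies traditional SfM needed roughly 20 s per step, and a 90% size reduction cuts a video segment to a tenth of its size. A minimal sketch (the 100 MB segment size is hypothetical, chosen only for illustration):

```python
# Back-of-the-envelope check of the quoted figures (illustrative only).
realtime_hz = 4.0
frame_budget_s = 1.0 / realtime_hz       # per-update time budget: 0.25 s
traditional_s = frame_budget_s * 80      # implied traditional SfM time: ~20 s

reduction = 0.90                         # 90% size reduction
original_mb = 100.0                      # hypothetical video segment size
reduced_mb = original_mb * (1 - reduction)  # ~10 MB sent instead of 100 MB

print(frame_budget_s, traditional_s, round(reduced_mb, 1))
```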
“…The virtual environment was built using the AftrBurner engine, a cross-platform visualization engine. 32 The engine has been used successfully for testing stereo computer vision techniques in simulation for automated aerial refueling applications 33 and for testing structure from motion. 34–36 …”
Monocular visual navigation methods have seen significant advances in the last decade, recently producing several real-time solutions for autonomously navigating small unmanned aircraft systems without relying on the Global Positioning System (GPS). This is critical for military operations that may involve environments where GPS signals are degraded or denied. However, testing and comparing visual navigation algorithms remains a challenge since visual data is expensive to gather. Conducting flight tests in a virtual environment is an attractive solution prior to committing to outdoor testing. This work presents a virtual testbed for conducting simulated flight tests over real-world terrain and analyzing the real-time performance of visual navigation algorithms at 31 Hz. This tool was created to ultimately find a visual odometry algorithm appropriate for further GPS-denied navigation research on fixed-wing aircraft, even though all of the algorithms were designed for other modalities. This testbed was used to evaluate three current state-of-the-art, open-source monocular visual odometry algorithms on a fixed-wing platform: Direct Sparse Odometry, Semi-Direct Visual Odometry, and ORB-SLAM2 (with loop closures disabled).
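Monocular visual odometry results like these are commonly compared against ground truth with absolute trajectory error (ATE). The sketch below computes ATE RMSE after a translation-only alignment; it is a hedged illustration of the metric, not the evaluation code used in the testbed, and a full evaluation would also align rotation and scale (monocular VO recovers scale only up to a factor).

```python
# Hedged sketch: ATE RMSE between a ground-truth and an estimated trajectory.
# Assumes the two trajectories are already time-associated, pose for pose.
import math

def ate_rmse(gt, est):
    """RMSE of positional error after removing the mean offset.

    Subtracting the centroid difference performs a translation-only
    alignment, so a constant position bias does not count as error.
    """
    n = len(gt)
    # Per-axis mean offset between ground truth and estimate.
    off = [sum(g[i] - e[i] for g, e in zip(gt, est)) / n for i in range(3)]
    sq_errs = [
        math.dist(g, tuple(e[i] + off[i] for i in range(3))) ** 2
        for g, e in zip(gt, est)
    ]
    return math.sqrt(sum(sq_errs) / n)

# Toy example: a straight-line trajectory with a constant 0.1 m bias,
# which the alignment step removes entirely.
gt = [(float(t), 0.0, 0.0) for t in range(10)]
est = [(t + 0.1, 0.0, 0.0) for t, _, _ in gt]
print(round(ate_rmse(gt, est), 6))  # 0.0: pure bias vanishes after alignment
```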