Positioning is a key task in most field robotics applications but can be very challenging in GPS-denied or high-slip environments. A common tactic in such cases is to position visually, and we present a visual odometry implementation that relies, unusually, on optical mouse sensors to report vehicle velocity. Using multiple kilometers of data from a lunar rover prototype, we demonstrate that, in conjunction with a moderate-grade inertial measurement unit, such a sensor can provide an integrated pose stream that is at times more accurate than that achievable by wheel odometry and visibly more desirable for perception purposes than that provided by a high-end GPS-INS system. We also discuss the sensor's limitations and several drift-mitigation strategies that were attempted.
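The abstract does not give an integration formula, but the kind of dead reckoning it describes (body-frame velocity from the mouse sensor combined with IMU heading) can be illustrated with a minimal sketch. All names and values below are hypothetical, not the authors' implementation.

```python
import numpy as np

def integrate_pose(pose, v_body, yaw, dt):
    """Dead-reckon a planar pose from body-frame velocity and IMU yaw.

    pose   : (x, y) position in the world frame [m]
    v_body : (vx, vy) velocity reported by the optical mouse sensor,
             expressed in the vehicle body frame [m/s]
    yaw    : heading from the IMU [rad]
    dt     : time step [s]
    """
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])          # body-to-world rotation
    return pose + R @ np.asarray(v_body) * dt

# Accumulate a pose stream from a short sequence of (velocity, yaw, dt) samples.
pose = np.zeros(2)
for v_body, yaw, dt in [((0.5, 0.0), 0.00, 0.1),
                        ((0.5, 0.0), 0.05, 0.1)]:
    pose = integrate_pose(pose, v_body, yaw, dt)
print(pose)
```

Because the mouse sensor reports only velocity, any bias or scale error accumulates in the integrated pose, which is why the abstract emphasizes drift mitigation.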
Wide-area coverage is well suited to small unmanned aircraft, but common coverage algorithms are particularly inefficient in many frequently encountered environments such as urban areas. Classical open-loop spiral or lawnmower patterns typically cannot represent large sub-regions that are unlikely to contain an object of interest, so those sub-regions are wastefully covered. More general algorithms can maintain arbitrary likelihood distributions but fundamentally solve a complex continuous-space trajectory optimization problem, requiring a trade-off between look-ahead and computational complexity while providing at best asymptotic coverage guarantees. This paper considers generating global coverage patterns in the space of the coverage area, with specific focus on road networks, and mapping these to UAV actions. A new method is proposed for producing and following such patterns with dynamics-constrained vehicles, compared with existing coverage strategies, and shown to be advantageous in many realistic environments through simulations of real-world aircraft. Because no single strategy dominates across all environments, the results suggest environment density as a metric for algorithm selection.
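The abstract does not specify the pattern-generation algorithm, but one common way to produce an edge-covering route over a road-network graph is a route-inspection (Chinese postman) style tour, sketched below with networkx. The graph, node coordinates, and the waypoint conversion are illustrative assumptions; a dynamics-constrained follower would still have to smooth the resulting waypoint sequence.

```python
import networkx as nx

# Hypothetical road network: nodes are intersections with (x, y) coordinates,
# edges are road segments that the coverage pattern must visit.
G = nx.Graph()
coords = {0: (0, 0), 1: (100, 0), 2: (100, 80), 3: (0, 80)}
G.add_edges_from([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)])

# Duplicate the minimum set of edges so an Eulerian circuit exists, then walk
# that circuit: every road segment is traversed at least once.
H = nx.eulerize(G)
route = [u for u, v in nx.eulerian_circuit(H, source=0)] + [0]

# Map the node sequence to waypoints for the aircraft; in practice a
# dynamics-aware path (e.g., Dubins segments) would connect them.
waypoints = [coords[n] for n in route]
print(waypoints)
```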
We address the problem of robot localization using ground penetrating radar (GPR) sensors. Current approaches for localization with GPR sensors require a priori maps of the system's environment as well as access to approximate global positioning (GPS) during operation. In this paper, we propose a novel, real-time GPR-based localization system for unknown and GPS-denied environments. We model the localization problem as inference over a factor graph. Our approach combines 1D single-channel GPR measurements to form 2D image submaps. To use these GPR images in the graph, we need sensor models that can map noisy, high-dimensional image measurements into the state space. These are challenging to obtain a priori, since image generation has a complex dependency on subsurface composition and radar physics, which itself varies across sensors and with subsurface electromagnetic properties. Our key idea is to instead learn relative sensor models directly from GPR data that map non-sequential GPR image pairs to relative robot motion. These models are incorporated as factors within the factor graph, with relative motion predictions correcting for accumulated drift in the position estimates. We demonstrate our approach on datasets collected across multiple locations using a custom-designed experimental rig. We show reliable, real-time localization using only GPR and odometry measurements for varying trajectories in three distinct GPS-denied environments.
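As a rough illustration of this factor-graph structure (not necessarily the authors' implementation), a pose graph can be built with GTSAM: sequential odometry factors between consecutive poses, plus a relative-motion factor between non-sequential poses standing in for the output of the learned GPR sensor model. The keys, noise sigmas, and the hard-coded relative pose used in place of a learned prediction are assumptions for the sketch.

```python
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()

odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))
gpr_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([1e-3, 1e-3, 1e-3]))

# Anchor the first pose at the origin.
graph.add(gtsam.PriorFactorPose2(0, gtsam.Pose2(0, 0, 0), prior_noise))
initial.insert(0, gtsam.Pose2(0, 0, 0))

# Sequential odometry factors; the initial guesses include accumulated drift.
for i in range(3):
    graph.add(gtsam.BetweenFactorPose2(i, i + 1, gtsam.Pose2(1.0, 0.0, 0.0), odom_noise))
    initial.insert(i + 1, gtsam.Pose2(1.05 * (i + 1), 0.02 * (i + 1), 0.0))

# Relative-motion factor between non-sequential poses, as a learned GPR model
# might predict it from an image pair; the value here is a stand-in.
graph.add(gtsam.BetweenFactorPose2(0, 3, gtsam.Pose2(3.0, 0.0, 0.0), gpr_noise))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose2(3))
```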