Autonomous navigation of unmanned vehicles in forests is a challenging task. In such environments, tree canopies can degrade or even block signals from Global Navigation Satellite Systems (GNSS). Also, because of the large number of obstacles, building a detailed prior map of the environment is impractical. In this paper, we solve the complete navigation problem of an aerial robot in a sparse forest, where there is enough space for flight and GNSS signals can be detected sporadically. For localization, we propose a state estimator that merges information from GNSS, an Attitude and Heading Reference System (AHRS), and odometry based on Light Detection and Ranging (LiDAR) sensors. In our LiDAR-based odometry solution, tree trunks are used as features in a scan matching algorithm to estimate the relative motion of the vehicle. Our method employs a robust adaptive fusion algorithm based on the unscented Kalman filter. For motion control, we adopt a strategy that integrates a vector field, which imposes the main direction of motion for the robot, with an optimal probabilistic planner responsible for obstacle avoidance. Experiments with a quadrotor equipped with a planar LiDAR in an actual forest environment are used to illustrate the effectiveness of our approach.
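As a rough illustration of the vector-field idea (a generic sketch, not the authors' implementation; the circular target curve and the gains `k_conv`, `k_circ` are invented for the example), a planar field that converges to a closed path can be built from a convergence term that descends on a level-set function plus a circulation term tangent to the curve:

```python
import numpy as np

def circle_vector_field(p, R=5.0, k_conv=1.0, k_circ=1.0):
    """Planar vector field converging to a circle of radius R.

    alpha(p) = ||p||^2 - R^2 vanishes on the target curve; the field
    sums a term that pulls toward alpha = 0 (convergence) with a
    90-degree rotation of grad(alpha) (circulation along the curve).
    Gains are illustrative, not tuned values from the paper.
    """
    alpha = p[0] ** 2 + p[1] ** 2 - R ** 2
    grad = np.array([2.0 * p[0], 2.0 * p[1]])        # gradient of alpha
    conv = -k_conv * alpha * grad                    # pull toward the curve
    circ = k_circ * np.array([-grad[1], grad[0]])    # tangential circulation
    v = conv + circ
    return v / (np.linalg.norm(v) + 1e-9)            # unit direction command

# On the curve (alpha = 0) the commanded direction is purely tangential:
v = circle_vector_field(np.array([5.0, 0.0]))
```

The unit-norm output is a direction command; in a full system an obstacle-avoidance planner (as in the paper) would locally override or deform this reference.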
This paper presents experimental results on the localization of a mobile robot equipped with high-frequency relative sensors and a low-frequency absolute sensor. Two relative sensors are used: wheel-based odometry and visual odometry. The absolute sensor is a vision-based landmark detector that computes the pose of the robot relative to a pre-mapped visual beacon. This would be a simple sensor fusion problem, solvable with standard recursive estimators, were it not for two additional characteristics of the beacon detector: (1) since we assume a monocular vision system and a planar visual mark, the localization problem admits up to four possible solutions; and (2) the robot encounters a visual mark very infrequently (0.01 Hz or less). To handle these characteristics, we propose a particle filter with a very precise prediction step (obtained by combining the two available odometry sensors) and a correction step that accounts for the multi-modal nature of the data. Besides the sensor fusion algorithm, the paper also describes the development of the visual sensors used in the localization process.
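A minimal sketch of the multi-modal correction step (assumed form, not the paper's code; the Gaussian mixture likelihood, `sigma`, and the particle layout are illustrative): when the beacon detector returns several candidate poses, each particle can be weighted by a mixture likelihood over the modes rather than a single Gaussian, so no mode is discarded prematurely:

```python
import numpy as np

rng = np.random.default_rng(0)

def multimodal_update(particles, weights, candidate_poses, sigma=0.2):
    """Particle filter correction with a multi-modal measurement.

    particles: (N, 3) array of (x, y, theta) hypotheses.
    candidate_poses: (M, 3) array of up-to-four detector solutions.
    Each particle's likelihood is a Gaussian mixture over the
    candidate positions (an illustrative noise model).
    """
    # distance of every particle to every candidate pose (positions only)
    d = np.linalg.norm(
        particles[:, None, :2] - candidate_poses[None, :, :2], axis=2
    )
    like = np.exp(-0.5 * (d / sigma) ** 2).sum(axis=1)  # mixture over modes
    w = weights * like
    return w / w.sum()                                  # renormalize

particles = rng.normal(0.0, 1.0, size=(500, 3))
weights = np.full(500, 1.0 / 500)
candidates = np.array([[0.5, 0.5, 0.0], [-0.5, -0.5, 0.0]])  # two of up to four modes
weights = multimodal_update(particles, weights, candidates)
```

Over repeated updates, particles clustered around the true mode accumulate weight while the spurious solutions are pruned by resampling.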
In this paper, we present a methodology that enables an autonomous quadrotor to fly through a sequence of gates using only on-board sensors. Our work is a solution to the AlphaPilot Challenge, proposed by Lockheed Martin. In the challenge, the quadrotor must be able to compete with human-piloted drones in a race. First, we propose a strategy to generate a smooth trajectory that passes through the gates. Then, we develop a localization system that merges image data from an on-board camera with IMU data. Finally, we present an artificial vector field based strategy used to control the quadrotor. Our results are validated with simulations in the official simulator of the competition.
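One common way to realize a smooth trajectory through a sequence of gate centers (sketched here with invented waypoints and a uniform time parameterization; the paper's actual trajectory generator may differ) is a cubic spline interpolating the gates, which yields a C2-continuous position reference:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# hypothetical gate centers (x, y, z), in visiting order
gates = np.array(
    [[0.0, 0.0, 1.0],
     [5.0, 2.0, 2.0],
     [10.0, -1.0, 1.5],
     [15.0, 0.0, 2.0]]
)
t = np.arange(len(gates))               # one time unit per gate (illustrative)
spline = CubicSpline(t, gates, axis=0)  # C2-continuous position trajectory

# densely sampled reference path for the controller to track
samples = spline(np.linspace(0.0, len(gates) - 1, 50))
```

The spline's first and second derivatives (`spline(t, 1)`, `spline(t, 2)`) give velocity and acceleration references that a vector-field or trajectory-tracking controller can consume.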