In this paper we study the Extended Kalman Filter approach to simultaneous localization and mapping (EKF-SLAM), describing its known properties and limitations, and concentrating on the filter consistency issue. We show that linearization of the inherent nonlinearities of both the vehicle motion and sensor models frequently drives the EKF-SLAM solution out of consistency, especially in situations where uncertainty surpasses a certain threshold. We propose a mapping algorithm, Robocentric Map Joining, which improves the consistency of EKF-SLAM by limiting the level of uncertainty in the continuous evolution of the stochastic map: (1) by building a sequence of independent local maps, and (2) by using a robot-centered representation of each local map. Simulations and a large-scale indoor/outdoor experiment validate the proposed approach.
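The linearization this abstract refers to occurs in the EKF prediction and update steps, where the nonlinear motion model is replaced by its Jacobian evaluated at the current estimate. Below is a minimal, illustrative sketch of one prediction step for a planar robot pose (it is not the Robocentric Map Joining algorithm itself; the model and all names are assumptions for exposition):

```python
import numpy as np

def ekf_predict(x, P, u, Q, dt=1.0):
    """One EKF prediction step for a planar robot pose (x, y, theta).

    The motion model is nonlinear in theta; the EKF linearizes it with the
    Jacobian F, which is where consistency problems can originate once the
    heading uncertainty in P grows large.
    """
    px, py, th = x
    v, w = u  # linear and angular velocity commands
    # Nonlinear motion model
    x_pred = np.array([px + v * dt * np.cos(th),
                       py + v * dt * np.sin(th),
                       th + w * dt])
    # Jacobian of the motion model w.r.t. the state (the linearization point)
    F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    # First-order covariance propagation
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

x = np.zeros(3)
P = np.diag([0.1, 0.1, 0.05])
x1, P1 = ekf_predict(x, P, u=(1.0, 0.1), Q=0.01 * np.eye(3))
```

The approximation error of `F` relative to the true nonlinear model grows with the heading variance, which is why the abstract's strategy of keeping local-map uncertainty small helps consistency.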
We address the problem of online path planning for optimal sensing with a mobile robot. The objective of the robot is to learn as much as possible about its pose and the environment given time constraints. We model the finite-horizon planning problem as a POMDP with a utility function that depends on the belief state, and we replan as the robot moves through the environment. The POMDP is high-dimensional, continuous, non-differentiable, nonlinear, non-Gaussian and must be solved in real time. Most existing techniques for stochastic planning and reinforcement learning are therefore inapplicable. To solve this extremely complex problem, we propose a Bayesian optimization method that dynamically trades off exploration (minimizing uncertainty in unknown parts of the policy space) and exploitation (capitalizing on the current best solution). We demonstrate our approach with a visually guided mobile robot. The solution proposed here is also applicable to other closely related problems.
This paper proposes a simulation-based active policy learning algorithm for finite-horizon, partially observed sequential decision processes. The algorithm is tested in the domain of robot navigation and exploration under uncertainty, where the expected cost is a function of the belief state (filtering distribution). This filtering distribution is in turn nonlinear and subject to discontinuities, which arise because of constraints in the robot motion and control models. As a result, the expected cost is non-differentiable and very expensive to simulate. The new algorithm overcomes the first difficulty and reduces the number of simulations as follows. First, it assumes that we have carried out previous evaluations of the expected cost for different corresponding policy parameters. Second, it fits a Gaussian process (GP) regression model to these values, so as to approximate the expected cost as a function of the policy parameters. Third, it uses the GP predictive mean and variance to construct a statistical measure that determines which policy parameters should be used in the next simulation. The process is iterated using the new parameters and the newly gathered expected-cost observation. Since the objective is to find the policy parameters that minimize the expected cost, this active learning approach effectively trades off exploration (where the GP variance is large) against exploitation (where the GP mean is low). In our experiments, a robot uses the proposed method to plan an optimal path for accomplishing a set of tasks while maximizing the information about its pose and map estimates. These estimates are obtained with a standard filter for SLAM. Upon gathering new observations, the robot updates the state estimates and is able to replan a new path in the spirit of open-loop feedback control.
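The three steps above (fit a GP to past cost evaluations, then use the predictive mean and variance to pick the next simulation) can be sketched as a simple loop. This is a minimal illustration with a toy one-dimensional expected-cost function and a lower-confidence-bound criterion; the kernel, its hyperparameters, and the stand-in cost are assumptions for exposition, not the paper's actual models:

```python
import numpy as np

def rbf_kernel(A, B, length=0.3):
    """Squared-exponential kernel between rows of policy-parameter vectors."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length**2)

def gp_posterior(X, y, Xs, noise=1e-4):
    """GP predictive mean and variance at test points Xs, given data (X, y)."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf_kernel(Xs, Xs)) - (v**2).sum(0)
    return mu, np.maximum(var, 0.0)

def expected_cost(theta):
    """Stand-in for the simulated expected cost of policy parameter theta."""
    return float((theta - 0.3) ** 2)

# Active policy learning loop: simulate next at the smallest lower
# confidence bound mu - kappa * sigma (exploration where variance is
# large, exploitation where the predicted mean is low).
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (3, 1))
y = np.array([expected_cost(t) for t in X[:, 0]])
grid = np.linspace(0, 1, 201)[:, None]
for _ in range(10):
    mu, var = gp_posterior(X, y, grid)
    nxt = grid[np.argmin(mu - 2.0 * np.sqrt(var))]
    X = np.vstack([X, nxt])
    y = np.append(y, expected_cost(nxt[0]))
best = X[np.argmin(y), 0]
```

With the toy quadratic cost, the loop concentrates its later evaluations near the minimizer at 0.3 while still probing high-variance regions early on.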
This paper presents an experimentally validated alternative to the classical extended Kalman filter approach to the probabilistic state-space Simultaneous Localization and Mapping (SLAM) problem. Several authors have recently reported the divergence of this classical approach due to the linearization of the inherently nonlinear SLAM problem. Hence, the approach described in this work avoids the analytical, Taylor-series-based linearization of both the model and measurement equations by using the unscented filter. An innovation-based consistency check validates the feasibility and applicability of the unscented SLAM approach in a real large-scale outdoor exploration mission.
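The unscented filter replaces Taylor-series linearization with deterministic sigma points that are propagated through the nonlinear function directly. A minimal sketch of the underlying unscented transform, using a polar-to-Cartesian measurement model as a stand-in nonlinearity (the scaling parameters are illustrative defaults, not the paper's tuning):

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1.0, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear function f
    using 2n+1 sigma points instead of a Jacobian-based linearization."""
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)   # matrix square root
    sigma = np.vstack([mean, mean + S.T, mean - S.T])
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)
    # Push each sigma point through the nonlinearity, then recover moments
    Y = np.array([f(s) for s in sigma])
    y_mean = wm @ Y
    d = Y - y_mean
    y_cov = (wc[:, None] * d).T @ d
    return y_mean, y_cov

# Polar-to-Cartesian conversion: a classic nonlinear measurement model
f = lambda p: np.array([p[0] * np.cos(p[1]), p[0] * np.sin(p[1])])
m, C = unscented_transform(np.array([1.0, 0.0]), np.diag([0.01, 0.04]), f)
```

Note that the transformed mean pulls slightly inside the unit range (below 1.0), a curvature effect that first-order EKF linearization misses entirely.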
We address the robot grasp optimization problem for unknown objects, considering uncertainty in the input space. Grasping unknown objects can be achieved with a trial-and-error exploration strategy. Bayesian optimization is a sample-efficient optimization algorithm that is especially suitable for such setups, as it actively reduces the number of trials needed to learn about the function to optimize. In fact, this active object exploration is the same strategy that infants use to learn optimal grasps. One problem that arises while learning grasping policies is that some configurations of grasp parameters may be very sensitive to errors in the relative pose between the object and the robot end-effector. We call these configurations unsafe because small errors during grasp execution may turn good grasps into bad grasps. Therefore, to reduce the risk of grasp failure, grasps should be planned in safe areas. We propose a new algorithm, Unscented Bayesian Optimization, that is able to perform sample-efficient optimization while taking input noise into account to find safe optima. The contribution of Unscented Bayesian Optimization is twofold: it provides a new decision process that drives exploration toward safe regions, and a new selection procedure that chooses the optimum in terms of its safety without extra analysis or computational cost. Both contributions are rooted in the strong theory behind the unscented transformation, a popular nonlinear approximation method. We show its advantages with respect to classical Bayesian optimization in both synthetic problems and realistic robot grasp simulations. The results highlight that our method achieves optimal and robust grasping policies after few trials, while the selected grasps remain in safe regions.
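The safety idea can be illustrated with a one-dimensional unscented expectation: sigma points around a candidate approximate E[f(x + noise)], so a broad (safe) optimum outscores a taller but narrow (unsafe) one. The toy objective and noise level below are assumptions for exposition, not the paper's actual method:

```python
import numpy as np

def objective(x):
    """Toy grasp-quality surrogate: a tall narrow (unsafe) peak at 0.2
    and a slightly lower but broad (safe) peak at 0.7."""
    return (1.0 * np.exp(-((x - 0.2) / 0.02) ** 2)
            + 0.8 * np.exp(-((x - 0.7) / 0.15) ** 2))

def unscented_score(f, x, std=0.05, alpha=1.0, kappa=1.0):
    """Unscented estimate of E[f(x + noise)] for 1-D input noise of
    standard deviation std, using 2n+1 = 3 sigma points."""
    n = 1
    lam = alpha**2 * (n + kappa) - n
    spread = std * np.sqrt(n + lam)
    pts = np.array([x, x + spread, x - spread])
    w = np.array([lam / (n + lam),
                  1.0 / (2 * (n + lam)),
                  1.0 / (2 * (n + lam))])
    return float(w @ f(pts))

grid = np.linspace(0.0, 1.0, 501)
raw_best = grid[np.argmax(objective(grid))]                       # ignores noise
safe_best = grid[np.argmax([unscented_score(objective, x) for x in grid])]
```

Maximizing the raw objective selects the narrow peak near 0.2, where a 0.05 execution error destroys grasp quality; the unscented score instead selects the broad peak near 0.7, whose neighborhood stays good under the same perturbation.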