Abstract: Classical sensor-based control laws regulate a set of sensor-based features to a desired reference value. The feature set is generally constant. In this article, we study sensor-based control laws whose feature set varies during the servo. In that case, we first show that the classical control laws, which use an iterative least-squares minimization, are discontinuous and cannot be applied to real robots. We then show that these discontinuities are due to the pseudo-inverse operator, which is not continuous at a matrix rank change. To solve this problem, we propose a new inversion operator. This operator is equal to the classical pseudo-inverse operator in the continuous cases, and ensures continuity everywhere. This operator is then used to build a new control law. This general control scheme is applied to visual servoing, to ensure continuity when some visual features leave the camera field of view. The experiments demonstrate the interest and validity of our approach.

Key-words: sensor-based control, velocity control, continuity, linear algebra, pseudo-inverse, visual servoing, visibility constraint

Résumé (translated from French): Classical sensor-based control laws are built to regulate a set of sensor features to a desired value; the set of reference features is then generally constant. In this article, we consider control laws referenced to a variable set of features. In that case, we first show that the classical control laws, which use an iterative least-squares minimization, produce large discontinuities in the control law, leading to acceleration peaks that are unacceptable on a real robot. We then show that these discontinuities are due to the non-continuity of the pseudo-inverse operator when the rank of the matrix to invert changes.
To resolve these discontinuities, we propose a new matrix-inversion operator, which is identical to the classical operator in the continuous cases and which guarantees continuity at every point. This operator then allows the construction of an original control law, which is applied in the context of visual servoing to ensure the continuity of the control law when an arbitrary number of points leave the field of view.
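The discontinuity of the pseudo-inverse at a rank change, which motivates the new operator above, is easy to observe numerically. The following minimal sketch (not from the paper; the matrices are illustrative) shows that the pseudo-inverse of diag(1, e) has a norm that blows up as e approaches 0, yet is perfectly finite at e = 0 exactly, where the rank drops from 2 to 1:

```python
import numpy as np

# Norm of pinv(diag(1, e)) grows like 1/e as e -> 0 ...
for e in (1e-1, 1e-3, 1e-6):
    A = np.diag([1.0, e])
    print(e, np.linalg.norm(np.linalg.pinv(A)))

# ... yet at e = 0 the rank drops and pinv(diag(1, 0)) = diag(1, 0),
# whose norm is 1: the operator jumps at the rank change.
A0 = np.diag([1.0, 0.0])
print(0.0, np.linalg.norm(np.linalg.pinv(A0)))
```

In a control law of the form v = -lambda * pinv(L) * e, this jump translates directly into a velocity discontinuity when a feature appears or disappears, which is exactly the problem the proposed continuous inversion operator addresses.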
To cite this version: A. Remazeilles, François Chaumette. Image-based robot navigation from an image memory. Robotics and Autonomous Systems, Elsevier, 2007, 55 (4).

Abstract: This paper addresses the problem of vision-based navigation and proposes an original control law to perform the navigation. The overall approach is based on an appearance-based representation of the environment, where the scene is defined directly in the sensor space by a database of images acquired during a learning phase. Within this context, a path to perform is described by a set of images, or image path, extracted from the database. This image path is designed so that it provides enough information to control the robotic system. The central point of this paper is the closed-loop control law that drives the robot to its desired position using this image path. This control requires neither a global 3D reconstruction nor a temporal planning step. Furthermore, the robot is not constrained to converge directly upon each image of the path but chooses its trajectory automatically. We propose a qualitative visual servoing, which enlarges the convergence space towards an interval of confident positions. We propose and use specific visual features which ensure that the robot navigates within the visibility path. Experimental simulations show the effectiveness of this method for controlling the motion of a camera in three-dimensional environments (a free-flying camera, or a camera moving on a plane). Furthermore, experiments realized with a robotic arm observing a planar scene are also presented.
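The idea behind qualitative servoing, that a feature need only reach an interval rather than an exact reference value, can be sketched with a simple per-feature error that vanishes inside the interval. This is an illustrative sketch, not the paper's exact formulation; the function name and bounds are hypothetical:

```python
import numpy as np

def interval_error(s, s_min, s_max):
    """Hypothetical per-feature error for an interval objective:
    zero inside [s_min, s_max], distance to the nearest bound
    outside (a sketch of the idea, not the paper's formulation)."""
    return np.where(s < s_min, s - s_min,
                    np.where(s > s_max, s - s_max, 0.0))

s = np.array([0.2, 0.5, 0.9])
e = interval_error(s, 0.3, 0.8)
# features already inside the interval contribute no error,
# so the control law leaves them free and the robot can choose
# its own trajectory instead of converging on each image exactly
```

Because in-interval features contribute a zero error, the minimization no longer pins the robot to a single pose, which is what enlarges the convergence space in the abstract above.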
Abstract: This paper presents a vision framework which enables feature-oriented appearance-based navigation in large outdoor environments containing other moving objects. The framework is based on a hybrid topological-geometrical environment representation, constructed from a learning sequence acquired during robot motion under human control. At the higher topological layer, the representation contains a graph of key-images such that incident nodes share many natural landmarks. The lower geometrical layer makes it possible to predict the projections of the mapped landmarks onto the current image, so that their tracking can be started (or resumed) on the fly. The desired navigation functionality is achieved without requiring global geometrical consistency of the underlying environment representation. The framework has been experimentally validated in demanding and cluttered outdoor environments, under different imaging conditions. The experiments have been performed on many long sequences acquired from moving cars, as well as in large-scale real-time navigation experiments relying exclusively on a single perspective vision sensor. The obtained results confirm the viability of the proposed hybrid approach and indicate interesting directions for future work.
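The topological layer described above, a graph of key-images whose incident nodes share many landmarks, can be sketched as a simple data structure. This is a minimal illustration under assumed conventions (class and attribute names are hypothetical, and the shared-landmark threshold is arbitrary), not the framework's actual implementation:

```python
from collections import defaultdict

class KeyImageGraph:
    """Sketch of a topological map: each key-image node stores the ids
    of the landmarks it observes; an edge links two nodes when they
    share at least `min_shared` landmarks."""

    def __init__(self, min_shared=3):
        self.landmarks = {}            # node id -> set of landmark ids
        self.edges = defaultdict(set)  # node id -> adjacent node ids
        self.min_shared = min_shared

    def add_key_image(self, node, landmark_ids):
        ids = set(landmark_ids)
        for other, other_ids in self.landmarks.items():
            if len(ids & other_ids) >= self.min_shared:
                self.edges[node].add(other)
                self.edges[other].add(node)
        self.landmarks[node] = ids

g = KeyImageGraph()
g.add_key_image(0, [1, 2, 3, 4])
g.add_key_image(1, [2, 3, 4, 5])   # shares 3 landmarks with node 0 -> edge
g.add_key_image(2, [9, 10, 11])    # shares none -> isolated node
```

Keeping only such local co-visibility relations is what lets the framework avoid global geometrical consistency: each edge only asserts that two key-images see enough common landmarks to hand tracking over between them.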