Most solutions to the SLAM problem in robotics have used range-and-bearing sensors, as the perception data they provide are easy to incorporate, allowing immediate landmark initialization. This is not the case with bearing-only information, because the distance to the perceived landmarks is not directly provided: a full estimate of a landmark's position is only possible from a set of measurements taken from different viewpoints. The vast majority of contributions to this problem use a parallel task to obtain this estimate, and hence landmark initialization is delayed. We give new insight into the problem and present a method that avoids this delay by initializing the whole ray that defines the direction of the landmark. We use a minimal and computationally efficient representation of this ray and a new strategy for the subsequent updates. Simulations have been carried out to validate the proposed algorithms.
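The ray idea summarized above can be sketched minimally: a single bearing observation fixes the ray's origin (the camera position) and its unit direction, and every hypothesised depth along that ray is a candidate landmark position. The 2-D simplification and the function names below are ours for illustration, not the paper's actual representation:

```python
import numpy as np

def bearing_to_ray(cam_pos, bearing_angle):
    """Represent an observed landmark as a ray: the camera position plus
    a unit direction vector derived from the measured bearing.
    (Illustrative 2-D sketch; a real system works in the map frame.)"""
    direction = np.array([np.cos(bearing_angle), np.sin(bearing_angle)])
    return cam_pos, direction

def point_on_ray(origin, direction, depth):
    """Any hypothesised depth along the ray yields a candidate landmark."""
    return origin + depth * direction

origin, d = bearing_to_ray(np.array([1.0, 2.0]), np.pi / 4)
p = point_on_ray(origin, d, np.sqrt(2.0))
print(p)  # [2. 3.]
```

Initializing the whole ray (rather than one guessed point) is what removes the delay: the filter can start tracking immediately, and later observations from other viewpoints collapse the depth uncertainty.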
This paper presents an edge-based segmentation technique that allows very large range images to be processed quickly. The proposed technique consists of two stages. First, a binary edge map is generated; then, a contour detection strategy extracts the different boundaries. The first stage generates the binary edge map using a scan-line approximation technique. In contrast with previous techniques, only two orthogonal scan-line directions are considered. The planar curves defined by the elements contained in each scan line are approximated by oriented quadratic curves, and the representative points of each curve are used to define the binary edge map. The second stage is a new approach to the classical contour-extraction problem. Unlike previous approaches, which use the enclosed surface information, the suggested technique obtains boundaries using only the information contained in the binary edge map: the edge points are linked by applying a graph strategy. Experimental results with large panoramic range images are presented.
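The scan-line approximation stage can be caricatured in a few lines: fit a low-order polynomial to a scan line of range values and split where the fit fails, taking the split positions as edge candidates. This is a toy sketch using a plain (unoriented) quadratic and a threshold of our own choosing, not the paper's oriented-curve algorithm:

```python
import numpy as np

def quad_residuals(depths):
    """Least-squares quadratic fit to one scan line; returns |residuals|."""
    x = np.arange(len(depths))
    coeffs = np.polyfit(x, depths, 2)
    return np.abs(depths - np.polyval(coeffs, x))

def edge_points(depths, tol=0.5):
    """Recursively split the scan line where the quadratic model fails;
    the split positions are the edge candidates."""
    if len(depths) < 6:
        return []
    r = quad_residuals(np.asarray(depths, dtype=float))
    k = int(np.argmax(r))
    if r[k] <= tol:
        return []
    return (edge_points(depths[:k], tol)
            + [k]
            + [k + 1 + i for i in edge_points(depths[k + 1:], tol)])

# A scan line with a depth discontinuity between indices 7 and 8:
scan = [1.0] * 8 + [5.0] * 8
edges = edge_points(scan)
print(edges)  # a single edge candidate at the jump
```

Running this independently along the two orthogonal scan-line directions and merging the marked points would yield a binary edge map in the spirit of the first stage.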
This article presents a new open-source C++ implementation to solve the SLAM problem, focused on genericity, versatility and high execution speed. It is based on an original object-oriented architecture that allows the combination of numerous sensor and landmark types and the integration of various approaches proposed in the literature. The system's capabilities are illustrated by an inertial/vision SLAM approach, for which several improvements over existing methods have been introduced, and which copes with highly dynamic motions. Results with a hand-held camera are presented.
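The genericity described above, with arbitrary combinations of sensor and landmark types behind common interfaces, can be illustrated with a small object-oriented sketch. The class names and the pinhole projection below are our illustration only, not the actual API of the described C++ implementation:

```python
from abc import ABC, abstractmethod

class Sensor(ABC):
    """Abstract sensor; concrete subclasses (camera, IMU, ...) can be
    mixed freely in one map."""
    @abstractmethod
    def observe(self, landmark): ...

class Landmark(ABC):
    """Abstract landmark (point, line, ...)."""
    @abstractmethod
    def position_in(self, frame): ...

class PointLandmark(Landmark):
    def __init__(self, position):
        self.position = position
    def position_in(self, frame):
        return self.position  # placeholder for a real frame transformation

class PinholeCamera(Sensor):
    def __init__(self, focal):
        self.focal = focal
    def observe(self, landmark):
        # ideal pinhole projection; a real system adds noise models
        # and data association on top of this interface
        x, y, z = landmark.position_in(self)
        return (self.focal * x / z, self.focal * y / z)

cam = PinholeCamera(focal=500.0)
lm = PointLandmark((1.0, 2.0, 10.0))
print(cam.observe(lm))  # (50.0, 100.0)
```

The point of such a design is that the filter only sees the abstract interfaces, so adding a new sensor or landmark type does not touch the estimation core.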
This paper presents 6-DOF monocular EKF-SLAM with undelayed initialization, using linear landmarks with extensible endpoints based on the Plücker parametrization. A careful analysis of the properties of Plücker coordinates, defined in the projective space P^5, permits their direct use for undelayed initialization. Immediately after detection of a segment in the image, a Plücker line is incorporated into the map. A single Gaussian pdf includes within its 2-sigma region all possible lines given the observed segment, from arbitrarily close up to infinite range, and in any orientation. The lines converge to stable 3-D configurations as the moving camera gathers observations from new viewpoints. The line's endpoints, maintained outside the map, are constantly retro-projected from the image onto the line's local reference frame; an extending-only policy is defined to update them. We validate the method via Monte Carlo simulations and with real imagery data.
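For readers unfamiliar with the parametrization: a Plücker line is a homogeneous 6-vector (d, m) in P^5, built from a direction d and a moment m, and a valid line must satisfy the Grassmann-Plücker constraint d·m = 0. A minimal sketch (helper names are ours, not the paper's code):

```python
import numpy as np

def plucker_from_points(a, b):
    """Plücker coordinates (d, m) of the 3-D line through points a and b:
    direction d = b - a, moment m = a x b.  Any valid line satisfies the
    Grassmann-Plücker constraint d . m = 0."""
    d = b - a
    m = np.cross(a, b)
    return d, m

a = np.array([1.0, 0.0, 0.0])
b = np.array([1.0, 1.0, 0.0])
d, m = plucker_from_points(a, b)
print(np.dot(d, m))                                  # 0.0 (constraint holds)
print(np.linalg.norm(m) / np.linalg.norm(d))         # 1.0, distance from origin
```

Here the line passes through (1, 0, 0) parallel to the y-axis, so its distance to the origin is indeed |m|/|d| = 1. Because the coordinates are homogeneous, scaling (d, m) leaves the line unchanged, which is part of what makes the representation amenable to the unbounded-depth initialization described in the abstract.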
This paper explores the possibilities of using monocular simultaneous localization and mapping (SLAM) algorithms in systems with more than one camera. The idea is to combine in a single system the advantages of both monocular vision (bearings-only, infinite range observations but no 3-D instantaneous information) and stereovision (3-D information up to a limited range). Such a system should be able to instantaneously map nearby objects while still considering the bearing information provided by the observation of remote ones. We do this by considering each camera as an independent sensor rather than the entire set as a monolithic supersensor. The visual data are treated by monocular methods and fused by the SLAM filter. Several advantages naturally arise as interesting possibilities, such as the desynchronization of the firing of the sensors, the use of several unequal cameras, self-calibration, and cooperative SLAM with several independently moving cameras. We validate the approach with two different applications: a stereovision SLAM system with automatic self-calibration of the rig's main extrinsic parameters and a cooperative SLAM system with two independent free-moving cameras in an outdoor setting.
Improving architectural 3D reconstruction by plane and edge constraining
This paper presents new techniques for improving the structural quality of automatically acquired architectural 3D models. Common architectural features like parallelism and orthogonality of walls and edges are exploited. The location of these features is extracted from the model by using a probabilistic technique (RANSAC). The relationships among the planes and edges are inferred automatically using a knowledge-based architectural model. A numerical algorithm is used to optimise the orientations of the features. Small irregularities in the model are removed by projecting the triangulation vertices onto the features. Planes and edges in the resulting model are aligned to each other. The techniques produce models with improved appearance. We show results for synthetic and real data with consideration of noise.
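The probabilistic feature-extraction step mentioned above (RANSAC) can be sketched as a minimal plane fit: repeatedly sample three points, hypothesise the plane through them, and keep the hypothesis with the most inliers. The parameter values and names below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def fit_plane_ransac(pts, n_iters=200, tol=0.02, seed=0):
    """Minimal RANSAC plane fit: the plane is returned as a unit normal n
    and a point p0 on it; inliers lie within `tol` of the plane."""
    rng = np.random.default_rng(seed)
    best_inliers, best_plane = None, None
    for _ in range(n_iters):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        n /= norm
        dist = np.abs((pts - p0) @ n)        # point-to-plane distances
        inliers = dist < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, p0)
    return best_plane, best_inliers

# A noisy wall (z ~ 0) plus a few gross outliers:
rng = np.random.default_rng(1)
wall = np.column_stack([rng.uniform(0, 5, 300), rng.uniform(0, 3, 300),
                        rng.normal(0, 0.005, 300)])
outliers = rng.uniform(0, 5, (20, 3))
(n, p0), inliers = fit_plane_ransac(np.vstack([wall, outliers]))
print(abs(n[2]))  # close to 1.0: the recovered normal aligns with the z-axis
```

On top of such raw plane hypotheses, the described method then enforces the architectural priors (parallelism, orthogonality) before projecting the mesh vertices onto the regularised features.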