2015
DOI: 10.3390/s151026368
Position Estimation and Local Mapping Using Omnidirectional Images and Global Appearance Descriptors

Abstract: This work presents methods to create local maps and to estimate the position of a mobile robot using the global appearance of omnidirectional images. The robot carries an omnidirectional vision system, and every omnidirectional image it acquires is described with a single global appearance descriptor based on the Radon transform. Two different possibilities are considered. In the first one, we assume the existence of a map previously built…


Cited by 20 publications (22 citation statements)
References 21 publications
“…Several works have demonstrated the validity of these techniques in robot mapping and localization when the movement of the robot is restricted to the ground plane. For example, in [23], different 2D localization and mapping tasks were carried out using global appearance descriptors, which were compared with some descriptors based on landmark extraction in terms of effectiveness and computational cost. Ranganathan et al. [28] presented a probabilistic topological mapping method that uses information from panoramic scenes captured by a ring of cameras mounted on the robot, described using the Fourier signature.…”
Section: State-of-the-Art on Altitude Estimation and Global Appearance
confidence: 99%
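The Fourier signature mentioned in the excerpt above can be illustrated with a short sketch. The snippet below is a minimal, hypothetical implementation (the image size and the number of retained coefficients are assumptions, not taken from the cited work): it keeps the per-row DFT magnitudes of a panoramic image, which do not change under the circular column shift that a pure rotation of the robot produces.

```python
import numpy as np

def fourier_signature(panorama: np.ndarray, k: int = 8) -> np.ndarray:
    """Per-row DFT magnitudes of a panoramic image.

    A rotation of the robot shifts the panorama circularly along its
    columns; a circular shift only changes DFT phases, so the magnitude
    spectrum is rotation-invariant.
    """
    spectrum = np.fft.fft(panorama, axis=1)   # row-wise DFT
    return np.abs(spectrum[:, :k])            # keep k low-frequency magnitudes

# A circular column shift (robot rotation) leaves the signature unchanged.
rng = np.random.default_rng(0)
pano = rng.random((4, 64))                    # toy 4-row panorama
sig_a = fourier_signature(pano)
sig_b = fourier_signature(np.roll(pano, 17, axis=1))
assert np.allclose(sig_a, sig_b)
```

This invariance is what makes the signature attractive for localization: the robot's heading does not alter the descriptor, only its position does.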
“…We also consider that the map of the environment was built from a set of images captured while the robot moved in a plane, using global-appearance techniques. Previous works have shown that it is possible to estimate the pose (position and orientation) of the robot in this plane using this kind of technique [23]. In this work, Berenguer et al. used a set of omnidirectional images captured from different poses in the ground plane (reference images) and obtained one holistic descriptor per image, using a combination of the Radon transform and gist.…”
Section: Introduction
confidence: 99%
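As a rough illustration of this kind of holistic descriptor, the sketch below builds a simple discrete Radon-style descriptor (binning pixel intensities by their projected coordinate for a handful of angles) and localizes a query view by nearest-neighbour comparison against reference descriptors. It is a toy approximation under stated assumptions — image sizes, angle count, and route are invented, and it omits the gist component the cited work combines with the Radon transform.

```python
import numpy as np

def radon_descriptor(img: np.ndarray, n_angles: int = 8) -> np.ndarray:
    """Coarse discrete Radon transform: for each angle, accumulate pixel
    intensities whose projections fall into the same detector bin."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.indices(img.shape)
    half = int(np.ceil(np.hypot(h, w) / 2.0))
    nbins = 2 * half + 1
    parts = []
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        # signed distance of each pixel along the projection axis
        t = (xs - cx) * np.cos(theta) + (ys - cy) * np.sin(theta)
        bins = np.round(t).astype(int) + half
        parts.append(np.bincount(bins.ravel(), weights=img.ravel(),
                                 minlength=nbins))
    return np.concatenate(parts)

# Hypothetical map: descriptors of reference images captured along a route.
rng = np.random.default_rng(0)
refs = [rng.random((32, 32)) for _ in range(10)]
ref_desc = np.stack([radon_descriptor(r) for r in refs])

# Localization: nearest reference descriptor to the current (noisy) view.
query = refs[6] + 0.01 * rng.standard_normal((32, 32))
d = np.linalg.norm(ref_desc - radon_descriptor(query), axis=1)
best = int(np.argmin(d))        # index of the estimated pose
```

Because each projection collapses a whole image slice into one number, the descriptor is compact and cheap to compare, which is the appeal of global-appearance localization over landmark extraction.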
“…They make use of the properties that this transform presents when it is applied to panoramic images, and they also propose a Monte Carlo algorithm for robust localization. In a similar way, Berenguer et al. [136] propose two methods to estimate the position of the robot from the omnidirectional image captured. On the one hand, the first method represents the environment through a sequence of omnidirectional images, and the Radon transform is used to describe the global appearance of the scenes.…”
Section: Mapless Navigation Systems
confidence: 99%
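The Monte Carlo localization idea referenced above can be sketched in a few lines. The toy particle filter below (route length, noise levels, and likelihood width are all assumptions for illustration, not values from the cited works) weights candidate poses by how well the reference descriptor at each pose matches the observed one, then resamples.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical map: one global-appearance descriptor per reference pose
# along a 1-D route of 50 poses.
ref = rng.random((50, 16))

def likelihood(obs: np.ndarray, pos: int) -> float:
    """Observation model: similarity between the observed descriptor
    and the reference descriptor stored at this pose."""
    d = np.linalg.norm(obs - ref[pos])
    return float(np.exp(-d**2 / 0.1))

particles = rng.integers(0, len(ref), size=500)   # uniform initial belief
true_pos = 30
for _ in range(5):
    # Motion update: the robot advances one pose; particles follow noisily.
    true_pos = min(true_pos + 1, len(ref) - 1)
    particles = np.clip(particles + 1 + rng.integers(-1, 2, size=particles.size),
                        0, len(ref) - 1)
    # Measurement update: weight by descriptor similarity, then resample.
    obs = ref[true_pos] + 0.01 * rng.standard_normal(16)
    w = np.array([likelihood(obs, p) for p in particles])
    w /= w.sum()
    particles = rng.choice(particles, size=particles.size, p=w)

estimate = int(np.bincount(particles).argmax())   # mode of the belief
```

After a few updates the particle cloud collapses onto poses whose stored appearance matches the observations, which is the robustness the Monte Carlo approach buys over a single nearest-neighbour match.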
“…Third, hybrid maps try to gather the advantages of the two previous approaches. They arrange the information into several layers with different levels of detail: topological models in the top layers permit a rough localization, and metric models in the bottom layers refine it [21][22][23].…”
Section: Introduction
confidence: 99%
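A two-layer hybrid map of the kind described can be sketched as follows. In this toy example (the area names, descriptor sizes, and the averaged top-layer model are all assumptions), a coarse topological layer first selects the most likely area, and a metric bottom layer then refines the estimate within it.

```python
import numpy as np

rng = np.random.default_rng(2)

# Bottom layer: fine-grained reference descriptors per area (hypothetical
# data, offset per area so areas are distinguishable by appearance).
areas = {
    "lab":      0.0 + rng.random((20, 8)),
    "corridor": 5.0 + rng.random((20, 8)),
    "office":  10.0 + rng.random((20, 8)),
}
# Top layer: one coarse model per area (here, simply the mean descriptor).
top = {name: d.mean(axis=0) for name, d in areas.items()}

def localize(obs: np.ndarray) -> tuple[str, int]:
    # 1) rough localization in the topological layer
    area = min(top, key=lambda n: np.linalg.norm(obs - top[n]))
    # 2) refinement against the metric layer of the chosen area only
    idx = int(np.argmin(np.linalg.norm(areas[area] - obs, axis=1)))
    return area, idx

area, idx = localize(areas["corridor"][7])
```

Restricting the fine comparison to one area is what makes the hybrid arrangement cheaper than matching against every reference image in the map.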