2010 20th International Conference on Pattern Recognition 2010
DOI: 10.1109/icpr.2010.94
Visual SLAM with an Omnidirectional Camera

Abstract: In this work we integrate the Spherical Camera Model for catadioptric systems into a Visual SLAM application. The Spherical Camera Model is a projection model that unifies central catadioptric and conventional cameras. To integrate this model into Extended Kalman Filter-based SLAM, we need to linearize both the direct and the inverse projection. We have performed initial experiments with omnidirectional and conventional real sequences, including challenging trajectories. The results confirm that …
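The unified spherical model referenced in the abstract projects a 3D point onto the unit sphere and then applies a perspective projection from a point offset along the optical axis by a mirror parameter ξ, with ξ = 0 recovering the conventional pinhole camera. A minimal sketch of this direct projection, with generic intrinsics (`fx`, `fy`, `cx`, `cy`) chosen for illustration and not taken from the paper:

```python
import numpy as np

def spherical_project(X, xi, fx, fy, cx, cy):
    """Direct projection of the unified (spherical) camera model.

    The 3D point is first normalized onto the unit sphere, then
    perspectively projected from a point at distance xi above the
    sphere centre. xi = 0 reduces to the pinhole model; xi > 0
    models central catadioptric cameras.
    """
    X = np.asarray(X, dtype=float)
    x, y, z = X / np.linalg.norm(X)   # point on the unit sphere
    u = fx * x / (z + xi) + cx        # perspective step with offset xi
    v = fy * y / (z + xi) + cy
    return np.array([u, v])
```

An EKF-based SLAM system would linearize this mapping (and its inverse) by taking its Jacobian with respect to the point coordinates, which is what the abstract refers to.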

Cited by 52 publications (26 citation statements). References 11 publications.
“…While there exists a large body of literature on omnidirectional camera calibration [15], [16], [17], [18], localization [19], [20] and sparse structure-from-motion / SLAM [21], [22], [23], surprisingly little research has been carried out towards dense 3D reconstruction with catadioptric cameras.…”
Section: Related Work (mentioning)
confidence: 99%
“…In [17] a SLAM with an omnidirectional sensor has been proposed; in contrast with our approach they do not estimate the depth. In [18] the previous approach is extended by considering a patch formulation for data association which is invariant to rotation and scale.…”
Section: Related Work (mentioning)
confidence: 99%
“…Most VO algorithms for omnidirectional cameras [7], [8], [9], [10] rely on robust feature descriptors (e.g., SIFT [11]) to establish feature correspondence. To cope with the significant distortion of large FoV images, special descriptors were developed that model the distortion effects to improve feature matching [12], [13], [14], [15].…”
Section: A. Related Work (mentioning)
confidence: 99%