Object Recognition Supported by User Interaction for Service Robots
DOI: 10.1109/icpr.2002.1048010
Differential epipolar constraint in mobile robot egomotion estimation

Cited by 6 publications (4 citation statements)
References 9 publications

“…[4]), whereas the monocular approach accomplishes this using sequential images and evaluating the optical flow. The monocular approach additionally requires knowledge of the ego-motion of the camera, which can be obtained either from an inertial measurement unit (IMU) or from the optical flow itself [1].…”
Section: Tracking of Image Features (mentioning)
confidence: 99%
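
The excerpt above hinges on the fact that monocular depth recovery needs the camera's ego-motion in addition to the optical flow. A minimal sketch of that dependency in Python, assuming a calibrated camera, rotation-compensated flow, and a translational velocity supplied by an IMU or a prior egomotion estimate; the function name, sign convention, and units are illustrative and not taken from the cited works:

    import numpy as np

    def depth_from_flow(x, y, u, v, vel, f=1.0):
        """Illustrative sketch: recover depth Z at image point (x, y) from
        rotation-compensated optical flow (u, v), given the camera's known
        translational velocity vel = (vx, vy, vz), e.g. from an IMU.

        Assumes the translational motion-field model in one common sign
        convention:  u = (x*vz - f*vx)/Z,  v = (y*vz - f*vy)/Z,
        and solves for Z per point in a least-squares sense."""
        vx, vy, vz = vel
        a = np.array([x * vz - f * vx, y * vz - f * vy])  # model numerators
        b = np.array([u, v])                              # observed flow
        denom = a @ b
        # Z minimizing ||b - a/Z||^2 is (a.a)/(a.b); degenerate when the
        # flow vanishes or is (nearly) orthogonal to the model direction.
        return (a @ a) / denom if abs(denom) > 1e-12 else np.inf

Without the ego-motion input, the translational flow only constrains depth up to an overall scale, which is exactly the dependence the citing papers point out.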
“…The stereoscopic approach accomplishes this using a pair of stereo images by estimating the disparity and applying triangulation, whereas the monocular approach accomplishes this using sequential images and evaluating the optical flow. The monocular approach additionally requires knowledge of the ego-motion of the camera, which can be obtained either from an inertial measurement unit (IMU) [7] or based on the optical flow [2,14].…”
Section: Motion Analysis (mentioning)
confidence: 99%
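
The stereoscopic alternative described in this excerpt reduces, for a rectified image pair, to the standard disparity-based triangulation Z = f·B/d. A minimal sketch under that assumption (parameter names and units are illustrative):

    import numpy as np

    def stereo_depth(disparity_px, focal_px, baseline_m):
        """Illustrative sketch of rectified-stereo triangulation:
        depth Z = f * B / d, with disparity d in pixels, focal length f
        in pixels, and baseline B in metres."""
        d = np.asarray(disparity_px, dtype=float)
        with np.errstate(divide="ignore"):
            return np.where(d > 0, focal_px * baseline_m / d, np.inf)

Unlike the monocular route, this requires no ego-motion estimate, since the two views are taken simultaneously from a known baseline.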
“…However, when minimizing these epipolar errors it is necessary to find a way of avoiding local minima that depend on the initialization value. In comparison, although methods using differential epipolar constraints [11,12] are able to robustly estimate the camera motion from image sequences, they cannot simultaneously recover camera motion and three-dimensional structure from omnidirectional image sequences.…”
Section: Introduction (mentioning)
confidence: 97%
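
For reference, one common formulation of the differential (continuous) epipolar constraint that methods of this kind enforce, for a calibrated image point \mathbf{x} in homogeneous coordinates with measured flow \dot{\mathbf{x}} and camera linear and angular velocities (\mathbf{v}, \boldsymbol{\omega}); the cited works [11,12] may use a different parameterization:

    \dot{\mathbf{x}}^{\top}\,\widehat{\mathbf{v}}\,\mathbf{x} \;+\; \mathbf{x}^{\top}\,\widehat{\boldsymbol{\omega}}\,\widehat{\mathbf{v}}\,\mathbf{x} \;=\; 0,

where \widehat{\mathbf{a}} denotes the skew-symmetric matrix satisfying \widehat{\mathbf{a}}\mathbf{b} = \mathbf{a} \times \mathbf{b}. Summing the squared residuals of this constraint over the observed flow field yields one form of the epipolar error whose minimization is discussed in the excerpt.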