2015 IEEE Intelligent Vehicles Symposium (IV)
DOI: 10.1109/ivs.2015.7225730
Robust scale estimation for monocular visual odometry using structure from motion and vanishing points

Cited by 42 publications (36 citation statements)
References 13 publications
“…In [42], the global scale of the 3D reconstruction is recovered using a set of pre-defined object classes. Other scene knowledge is also used to recover the absolute scale, such as the average pedestrian height [38] and vanishing lines [33].…”
Section: B. Monocular-based Methods (mentioning)
Confidence: 99%
“…For robust estimation, the RANSAC (Random Sample Consensus) technique is used to reject outliers. In [33], two different methods are used to compute the normal vector of the ground plane: 1) a 3-point RANSAC combined with least-squares optimization; 2) a vanishing point estimated from the scene structure. The two normal vectors are then fused and tracked by a Kalman filter for scale estimation in the next frame.…”
Section: B. 3D Plane Fitting Based Scale Estimation (mentioning)
Confidence: 99%
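The 3-point RANSAC with least-squares refinement mentioned above can be sketched as follows. This is a minimal illustration, not the implementation from [33]; the function names, iteration count, and inlier threshold are assumptions.

```python
import numpy as np

def fit_plane_3pts(pts):
    """Unit normal of the plane through three 3D points (None if degenerate)."""
    n = np.cross(pts[1] - pts[0], pts[2] - pts[0])
    norm = np.linalg.norm(n)
    return n / norm if norm > 1e-12 else None

def ransac_ground_normal(points, iters=200, thresh=0.05, rng=None):
    """3-point RANSAC: sample minimal triples, score by point-to-plane
    distance, then refine the best consensus set by least squares
    (the normal is the singular vector of the centered inliers with
    the smallest singular value)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    best_inliers = None
    for _ in range(iters):
        idx = rng.choice(len(points), 3, replace=False)
        n = fit_plane_3pts(points[idx])
        if n is None:
            continue
        d = -n @ points[idx[0]]                 # plane: n·x + d = 0
        inliers = np.abs(points @ n + d) < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Least-squares refinement on the consensus set.
    P = points[best_inliers]
    _, _, vt = np.linalg.svd(P - P.mean(axis=0))
    n = vt[-1]
    return n / np.linalg.norm(n), best_inliers
```

In the scheme described in [33], this geometric normal would then be fused with the vanishing-point-based normal and tracked by a Kalman filter; that fusion step is omitted here.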
“…For example, a fixed camera height above the ground plane can be used to estimate scale and thereby avoid scale drift [10], [11], [12], [13]. Alternatively, the size of known objects in the environment can be used as a depth cue [14], [15].…”
Section: A. Scale Drift in Monocular SLAM (mentioning)
Confidence: 99%
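The fixed-camera-height cue works because a monocular reconstruction is correct only up to a global scale: if the up-to-scale distance from the camera to the fitted ground plane is h_est and the true mounting height is h_true, the scale factor is h_true / h_est. A minimal sketch under that assumption (the function name is ours, not from the cited works):

```python
import numpy as np

def scale_from_camera_height(plane_normal, plane_d, true_height):
    """Monocular scale factor from a known camera mounting height.
    The ground plane is n·x + d = 0 in the camera frame (camera at the
    origin), so the up-to-scale camera height is |d| / ||n||; the scale
    is the ratio of the known metric height to that estimate."""
    est_height = abs(plane_d) / np.linalg.norm(plane_normal)
    return true_height / est_height
```

For example, with a fitted plane z = 0.5 (normal [0, 0, 1], d = -0.5) and a true height of 1.7 m, the reconstruction must be scaled by 3.4 to be metric.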
“…The input is a temporal image sequence; the outputs are the camera poses and a sparse reconstruction of the environment. Scale information can be obtained from a second, calibrated camera with a known baseline [6], from structure-inherent information such as the distance to the ground plane [19], from an IMU [20], or, in our case, from LIDAR depth measurements. The scale can optionally be used for estimating the frame-to-frame motion as well.…”
Section: Introduction and Related Work (mentioning)
Confidence: 99%