2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC) 2017
DOI: 10.1109/itsc.2017.8317679

Momo: Monocular motion estimation on manifolds

Abstract: Knowledge about the location of a vehicle is indispensable for autonomous driving. In order to apply global localisation methods, a pose prior must be known, which can be obtained from visual odometry. The quality and robustness of that prior determine the success of localisation. Momo is a monocular frame-to-frame motion estimation methodology providing a high-quality visual odometry for that purpose. By taking into account the motion model of the vehicle, reliability and accuracy of the pose prior are signifi…

Cited by 8 publications (4 citation statements)
References 17 publications
“…In [14], a visual odometry method was developed to calculate the movement of a monocular camera with six degrees of freedom, assuming that the ground plane is known. In [15], a robust new visual odometry framework was presented by considering the motion model of the vehicle. In [16], an end-to-end deep learning framework was used to train a regressor for visual odometry.…”
Section: Related Work
confidence: 99%
“…We use semantics and cheirality for a preliminary outlier rejection as mentioned in Section V-C. Though most of the outliers can be detected by these methods, some still remain, such as moving shadows of vehicles, non-classified moving objects, etc. As shown in our previous work [25], loss functions improve estimation accuracy and robustness drastically. Therefore, we employ the Cauchy function as the loss functions ρ_φ(x), ρ_ξ(x) in order to reduce the influence of large residuals in both the depth and the reprojection error cost functions.…”
Section: E. Robustification and Problem Formulation
confidence: 99%
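The quoted passage above relies on the Cauchy function as a robust loss. The following is not the paper's implementation, only a minimal NumPy sketch of how a Cauchy loss and its corresponding IRLS weight down-weight large residuals; the scale parameter `c` and the sample residuals are illustrative assumptions.

```python
import numpy as np

def cauchy_loss(r, c=1.0):
    """Cauchy (Lorentzian) loss: grows only logarithmically, so large
    residuals from remaining outliers (e.g. moving shadows) have a
    bounded influence on the cost. c is a hypothetical scale parameter."""
    return 0.5 * c**2 * np.log1p((r / c) ** 2)

def cauchy_weight(r, c=1.0):
    """Weight for iteratively reweighted least squares, w(r) = rho'(r)/r.
    Inliers keep weight near 1; outliers are strongly down-weighted."""
    return 1.0 / (1.0 + (r / c) ** 2)

# illustrative residuals: two inliers and one gross outlier
residuals = np.array([0.1, 0.5, 5.0])
weights = cauchy_weight(residuals)
```

In an optimizer such as Ceres this corresponds to attaching a Cauchy loss to the depth and reprojection residual blocks rather than computing weights by hand.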
“…b) Based on Epipolar Geometry: Having obtained a set of 2D correspondences of the scene, the ego motion can be estimated up to scale. Therefore, the motion estimate lies on a manifold with five degrees of freedom [12], which are the three angles of rotation and the two degrees of freedom of the translation direction. Classically, an error metric that is based on the epipolar geometry is used for optimization (see [13], [14]).…”
Section: B. Odometry Estimation
confidence: 99%
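The five-degree-of-freedom claim above follows from the structure of the essential matrix. This is not code from the paper, just a small NumPy sketch (with a hypothetical yaw-plus-forward motion) showing that scaling the translation scales the essential matrix, so only the translation direction is observable from the epipolar constraint.

```python
import numpy as np

def skew(t):
    """Cross-product (skew-symmetric) matrix [t]_x."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# hypothetical ego motion: small yaw rotation plus mostly-forward translation
yaw = 0.05
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([0.1, 0.0, 1.0])

# essential matrix E = [t]_x R; replacing t by s*t scales E by s,
# so the epipolar constraint x2^T E x1 = 0 is unchanged and only the
# direction of t (2 DoF) plus the rotation (3 DoF) are observable.
E1 = skew(t) @ R
E2 = skew(3.0 * t) @ R
```

This is why monocular frame-to-frame estimates are usually parametrized by three rotation angles and a unit translation vector.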
“…c) Transformation Estimation: To reliably estimate a transformation between two frames, we use the weighted least squares method presented by Marx [11] which was introduced in Section II-B. This method yields an orthonormal rotation matrix R_{2←1} and a translation t_{2←1} so that we can estimate the motion flow f_{motion,2←1}(x_1) = R_{2←1} x_1 + t_{2←1} (12) as defined in Section III-B. Based on this scene flow, a rigid-body transformation is estimated in the training process that we transform into a motion flow field (see Fig.…”
Section: A. Implementation
confidence: 99%
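Equation (12) in the excerpt above is a plain rigid-body map. As a minimal NumPy sketch (not the cited implementation; the rotation angle, translation, and sample points are assumptions), it can be applied to a batch of 3D points while checking that R is orthonormal:

```python
import numpy as np

def motion_flow(R, t, points):
    """Rigid-body motion flow f(x) = R x + t, in the spirit of Eq. (12),
    applied to 3D points stored one per row."""
    return points @ R.T + t

# hypothetical estimate: rotation about the z-axis plus a translation
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([0.5, 0.0, 0.2])

# an orthonormal rotation satisfies R R^T = I and det(R) = +1
assert np.allclose(R @ R.T, np.eye(3))
assert np.isclose(np.linalg.det(R), 1.0)

pts = np.array([[1.0, 0.0, 0.0],
                [0.0, 2.0, 1.0]])
flow = motion_flow(R, t, pts)
```

Transforming each point this way and subtracting the original positions yields a per-point flow field of the kind the excerpt describes.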