2016 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2016.7487206
Fast, robust, continuous monocular egomotion computation

Abstract: We propose robust methods for estimating camera egomotion in noisy, real-world monocular image sequences in the general case of unknown observer rotation and translation with two views and a small baseline. This is a difficult problem because of the nonconvex cost function of the perspective camera motion equation and because of non-Gaussian noise arising from noisy optical flow estimates and scene non-rigidity. To address this problem, we introduce the expected residual likelihood method (ERL), which…
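The abstract is cut off above, so the sketch below illustrates only the general setting it describes: robustly fitting camera motion to optical flow contaminated by non-Gaussian noise and outliers, by iteratively down-weighting flow vectors with large residuals. It is not the paper's ERL method. It assumes a pure-rotation simplification (the rotational part of the motion field is depth-independent and linear in the angular velocity) and uses a generic Cauchy-weighted IRLS; all names and parameter values are illustrative.

```python
import numpy as np

def rotational_flow_basis(x, y):
    """Rotational part of the instantaneous motion field at normalized image
    point (x, y) (Longuet-Higgins & Prazdny): depth-independent and linear
    in the angular velocity omega = (wx, wy, wz)."""
    return np.array([
        [x * y,       -(1.0 + x * x),  y],
        [1.0 + y * y,  -x * y,        -x],
    ])

def estimate_rotation_irls(pts, flows, c=0.05, n_iters=20):
    """Fit omega to noisy flow with Cauchy-weighted IRLS, so outlier flow
    vectors (e.g. on independently moving objects) get small weights."""
    A = np.vstack([rotational_flow_basis(x, y) for x, y in pts])  # (2N, 3)
    b = np.asarray(flows).reshape(-1)                             # (2N,)
    w = np.ones(len(pts))                                         # per-vector weights
    omega = np.zeros(3)
    for _ in range(n_iters):
        sw = np.sqrt(np.repeat(w, 2))        # weight u and v rows equally
        omega, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
        res = np.linalg.norm((A @ omega - b).reshape(-1, 2), axis=1)
        w = 1.0 / (1.0 + (res / c) ** 2)     # Cauchy weights: outliers -> ~0
    return omega, w

# Synthetic check: pure-rotation flow plus noise and 20% gross outliers.
rng = np.random.default_rng(0)
pts = rng.uniform(-0.5, 0.5, size=(200, 2))
omega_true = np.array([0.01, -0.02, 0.005])
flows = np.stack([rotational_flow_basis(x, y) @ omega_true for x, y in pts])
flows += 0.0005 * rng.standard_normal(flows.shape)
flows[:40] += 0.05 * rng.standard_normal((40, 2))
omega_hat, w = estimate_rotation_irls(pts, flows)
print(omega_true, omega_hat)  # omega_hat should be close to omega_true
```

In the full problem the paper addresses, translation couples with unknown per-pixel depth, which is what makes the cost nonconvex; the reweighting idea carries over, but the inner solve is no longer a single linear least-squares problem.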

Cited by 23 publications (28 citation statements), published between 2017 and 2022. References 37 publications.
“…Joint estimation of object and ego motion from monocular RGB frames can be ambiguous [4]. However, the estimation of ego- and object-motion components from their composite optic flow could be improved by using the geometric constraints of the motion field to regularize a deep neural network-based predictor [19], [29].…”
Section: Introduction (mentioning)
Confidence: 99%
“…Compared with the existing approaches, our method does not assume a static scene [15], [16], [31] and does not require a dynamic segmentation mask [2], [6], [26] or depth [11], [27], [28] for ego-motion prediction from monocular RGB frames. This is achieved by using continuous ego-motion constraints to train a neural network-based predictor, which allows the network to remove variations due to depth and moving objects in the input frames [19], [29]. Fig.…”
Section: Introduction (mentioning)
Confidence: 99%
“…The images of the surrounding environment captured by the vehicle vision system provide abundant surrounding traffic information, such as static inliers (correct feature matches on static scenes) and dynamic inliers (correct feature matches on moving vehicles). Recent works have extensively leveraged various visual feature attributes (e.g., lane detection [9], feature matching [10], semantic segmentation [11]) to compute vehicle dynamics and simultaneously guide the vehicle on the road. It is notable that Scaramuzza [12] has applied the nonholonomic constraint model to estimate the ego motion of ground vehicles.…”
Section: Introduction (mentioning)
Confidence: 99%
“…Vision sensors have been commonly used for vehicle and traffic dynamics analysis such as lane detection [3], feature matching [4], and semantic segmentation [5]. It is notable that Scaramuzza et al. [6] have applied the nonholonomic constraint model to estimate ground vehicle motions.…”
Section: Introduction (mentioning)
Confidence: 99%
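Both of the last two statements cite Scaramuzza's nonholonomic constraint model. As a rough illustration of why that constraint is useful: for planar circular (Ackermann-like) motion with yaw change theta, the chord of the arc points at angle theta/2, so the translation direction is fixed by the rotation and the relative motion has a single degree of freedom. The sketch below is a minimal construction under assumed frame and sign conventions (not taken from the cited papers); it builds that one-parameter essential matrix and checks the epipolar constraint on synthetic points.

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def planar_motion(theta):
    """One-parameter relative motion for planar circular (Ackermann) motion.

    Nonholonomic constraint: the chord of a circular arc with yaw change
    theta points at angle theta / 2, so the translation direction is
    determined by theta (up to scale). Assumed camera frame: x right,
    y down, z forward; motion in the x-z plane."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])                    # yaw about the y axis
    phi = theta / 2.0
    t = np.array([np.sin(phi), 0.0, np.cos(phi)])   # unit translation in x-z plane
    return R, t, skew(t) @ R                        # E = [t]_x R

# Verify x2^T E x1 = 0 on synthetic points under this motion model.
rng = np.random.default_rng(1)
R, t, E = planar_motion(theta=0.1)
P1 = rng.uniform([-2, -1, 4], [2, 1, 10], size=(50, 3))  # points in camera 1
P2 = P1 @ R.T + t                                        # same points in camera 2
x1 = P1 / P1[:, 2:3]                                     # normalized image coords
x2 = P2 / P2[:, 2:3]
print(np.max(np.abs(np.einsum('ni,ij,nj->n', x2, E, x1))))  # ~1e-16
```

Because the essential matrix depends on a single parameter here, model hypotheses can be generated from one point correspondence, which is what makes this constraint attractive for fast outlier rejection on ground vehicles.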