2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros.2017.8202305
Robust edge-based visual odometry using machine-learned edges

Abstract: In this work, we present a real-time robust edge-based visual odometry framework for RGBD sensors (REVO). Even though our method is independent of the edge detection algorithm, we show that the use of state-of-the-art machine-learned edges gives significant improvements in terms of robustness and accuracy compared to standard edge detection methods. In contrast to approaches that heavily rely on the photo-consistency assumption, edges are less influenced by lighting changes and the sparse edge representation off…

Cited by 32 publications (26 citation statements); references 23 publications.
“…The information about the environment is introduced in our system as camera frames, one colour image and one depth image. Next, each frame is processed by the Robust Edge-Based Visual Odometry (REVO) algorithm [15], which extracts edges, detects keyframes and estimates the transformation between keyframes. When a new keyframe is generated, we extract a coloured point cloud by combining the information from the colour and the depth images, detect [16] and describe the existing planes, and match them using our plane matcher.…”
Section: General View of the System
confidence: 99%
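The keyframe step quoted above combines a colour image and a depth image into a coloured point cloud. A minimal numpy sketch of that back-projection, assuming a standard pinhole camera model (the function name and intrinsics are illustrative, not taken from the cited system):

```python
import numpy as np

def backproject_rgbd(color, depth, fx, fy, cx, cy):
    """Back-project an RGB-D frame into a coloured point cloud.

    color: (H, W, 3) uint8 image; depth: (H, W) depths in metres
    (0 = invalid). (fx, fy, cx, cy) are pinhole intrinsics.
    Returns (N, 3) points and (N, 3) colours for the valid pixels.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    z = depth[valid]
    # Invert the pinhole projection u = fx * x / z + cx (and likewise for v).
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    points = np.stack([x, y, z], axis=1)
    colours = color[valid].astype(np.float64) / 255.0
    return points, colours
```

Plane detection and matching, as described in the quote, would then operate on the `points` array of each keyframe.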
“…In contrast to [7,14], LSD-SLAM [4] is a semi-dense method that only relies on high-gradient regions to minimize the photoconsistency error. However, the quality of photoconsistency-based pose estimation suffers under motion blur and as demonstrated in [8,13] is limited to small inter-frame motions. Instead of high-gradient regions, many methods use the more robust edges.…”
Section: Related Work
confidence: 99%
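The photoconsistency error that the citing authors contrast with edge-based alignment is, per pixel, the intensity difference between a reference pixel and its warped location in the current frame. A minimal numpy sketch (names and nearest-neighbour lookup are illustrative; real direct methods interpolate bilinearly and minimise the sum of squared residuals over a semi-dense pixel set):

```python
import numpy as np

def photometric_residuals(intensity_ref, intensity_cur, ref_px, cur_px):
    """Photoconsistency residuals r_i = I_ref(p_i) - I_cur(w(p_i)).

    intensity_*: (H, W) float grayscale images. ref_px / cur_px are
    (N, 2) integer (row, col) pixel pairs related by the warp induced
    by the current pose estimate.
    """
    r = (intensity_ref[ref_px[:, 0], ref_px[:, 1]]
         - intensity_cur[cur_px[:, 0], cur_px[:, 1]])
    return r
```

Because these residuals compare raw intensities, they degrade under motion blur and lighting changes, which is the limitation the quoted passage points out.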
“…Wang et al [17] jointly minimize edge distance and a photometric error at high-gradient pixels. In [13], Schenk and Fraundorfer study the influence of different edge detectors and demonstrate how to efficiently remove outliers to increase accuracy and robustness. Similar to feature-based methods, the distribution of edges can influence pose estimation.…”
Section: Related Work
confidence: 99%
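The edge distance minimised in [17] measures how far reprojected edge pixels fall from the nearest edge in the reference frame. A brute-force numpy sketch of that residual (illustrative only; real systems typically precompute a distance transform once per keyframe rather than searching all edge pixels):

```python
import numpy as np

def edge_alignment_cost(ref_edges, projected_px):
    """Mean distance from reprojected edge pixels to the nearest
    reference edge.

    ref_edges: (H, W) boolean edge map of the reference frame.
    projected_px: (N, 2) integer (row, col) coordinates of edge
    points from the current frame, reprojected under the current
    pose estimate.
    """
    edge_coords = np.argwhere(ref_edges)  # (M, 2) edge pixel locations
    # Pairwise distances between projected pixels and edge pixels,
    # keeping the nearest edge for each projected pixel.
    diff = projected_px[:, None, :] - edge_coords[None, :, :]
    dists = np.sqrt((diff ** 2).sum(axis=2)).min(axis=1)
    return dists.mean()
```

Minimising this cost over the pose parameters, rather than raw intensity differences, is what makes edge-based alignment less sensitive to lighting changes.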