Proceedings of the British Machine Vision Conference 2017
DOI: 10.5244/c.31.149

Combining Edge Images and Depth Maps for Robust Visual Odometry

Abstract: In this work, we propose a robust visual odometry system for RGB-D sensors. The core of our method is a combination of edge images and depth maps for joint camera pose estimation. Edges are more stable under varying lighting conditions than raw intensity values, and depth maps further add stability in poorly textured environments. This leads to higher accuracy and robustness in scenes where feature- or photoconsistency-based approaches often fail. We demonstrate the robustness of our method under challenging con…
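The abstract describes a joint residual built from edge images and depth maps. The following is a minimal sketch of that general idea, not the authors' implementation: it assumes Canny edges, a distance transform on the current edge map for the edge term, a simple depth difference for the depth term, and an illustrative weight w_depth; all names (edge_depth_residual, K, T) are assumptions.

```python
# Hedged sketch: combine an edge-distance term and a depth term into one residual
# vector for a candidate camera pose T (4x4, reference -> current frame).
import cv2
import numpy as np

def edge_depth_residual(gray_ref, depth_ref, gray_cur, depth_cur, K, T, w_depth=0.1):
    # Edge pixels of the reference frame and their measured depths
    edges_ref = cv2.Canny(gray_ref, 50, 150)
    v, u = np.nonzero(edges_ref)
    z = depth_ref[v, u]
    valid = z > 0
    u, v, z = u[valid], v[valid], z[valid]

    # Back-project reference edge pixels to 3D and transform them into the current frame
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    pts = np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z, np.ones_like(z)])
    pts_cur = (T @ pts)[:3]

    # Project into the current image and keep points that land inside it
    u2 = fx * pts_cur[0] / pts_cur[2] + cx
    v2 = fy * pts_cur[1] / pts_cur[2] + cy
    h, w = gray_cur.shape
    inside = (u2 >= 0) & (u2 < w) & (v2 >= 0) & (v2 < h) & (pts_cur[2] > 0)
    u2, v2 = u2[inside].astype(int), v2[inside].astype(int)

    # Edge term: distance-transform value at the reprojected location
    # (distance to the nearest edge pixel of the current frame)
    edges_cur = cv2.Canny(gray_cur, 50, 150)
    dist = cv2.distanceTransform((edges_cur == 0).astype(np.uint8), cv2.DIST_L2, 3)
    r_edge = dist[v2, u2]

    # Depth term: difference between the transformed depth and the observed depth
    z_obs = depth_cur[v2, u2]
    r_depth = np.where(z_obs > 0, pts_cur[2][inside] - z_obs, 0.0)

    return np.concatenate([r_edge, w_depth * r_depth])
```

Such a residual could be evaluated inside an iterative least-squares loop over pose increments; the weighting between the two terms and the edge detector settings are assumptions, not values from the paper.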

Cited by 9 publications (8 citation statements) · References 12 publications
“…[21] and our visual odometry method Pl-EVO. In the table, column REVO corresponds to the algorithm available online, which only registers two consecutive frames, whereas column REVO E+D+Opt corresponds to the algorithm described in [13], where the N previous frames are optimized to improve the camera motion estimate. Table I suggests that Pl-EVO obtains better results than state-of-the-art odometry and SLAM systems in indoor environments lacking texture, without the need for relocalization or loop-closing strategies and performing only a registration between two keyframes.…”
Section: Results
Mentioning confidence: 99%
“…† The results come from [21]. ‡ The results come from [13]. * Indicates a possible loop-closing dataset.…”
Section: Results
Mentioning confidence: 99%
“…In [64], direct edge alignment is proposed, which minimizes the sum of squared distances between the reprojected edge points and the nearest edge points using the distance transform of the edge map. Other works propose to jointly minimize this edge distance together with other errors, e.g., a photometric error [114] or an ICP-based point-to-plane distance [93]. Later works such as [124] and [56] also take the image gradient direction into account for direct edge alignment.…”
Section: Edge-based Methods
Mentioning confidence: 99%
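The excerpt above describes the direct edge-alignment objective: the sum of squared distance-transform values at the reprojected edge points. Below is a hedged sketch of that objective under illustrative assumptions; the axis-angle-plus-translation pose parameterization and the use of scipy.optimize.least_squares are choices made here for brevity, not the cited papers' exact setup.

```python
# Hedged sketch of direct edge alignment: minimize distance-transform lookups
# at the reprojected reference edge points over a 6-dof camera pose.
import cv2
import numpy as np
from scipy.optimize import least_squares

def pose_from_params(p):
    """6-vector (axis-angle rotation, translation) -> 4x4 transform."""
    R, _ = cv2.Rodrigues(p[:3])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p[3:]
    return T

def edge_alignment_residual(p, pts_ref, dist_map, K):
    """Distance to the nearest edge at each reprojected reference edge point."""
    T = pose_from_params(p)
    pts = T[:3, :3] @ pts_ref.T + T[:3, 3:4]      # 3xN points in the current frame
    uv = (K @ pts) / pts[2]                        # perspective projection
    u = np.clip(uv[0], 0, dist_map.shape[1] - 1).astype(int)
    v = np.clip(uv[1], 0, dist_map.shape[0] - 1).astype(int)
    return dist_map[v, u]

def align_edges(pts_ref, edges_cur, K, p0=np.zeros(6)):
    """Minimize the summed squared edge distance; pts_ref is Nx3 (back-projected edges)."""
    dist_map = cv2.distanceTransform((edges_cur == 0).astype(np.uint8), cv2.DIST_L2, 3)
    res = least_squares(edge_alignment_residual, p0, args=(pts_ref, dist_map, K))
    return pose_from_params(res.x)
```

In practice, such methods typically use an analytic Jacobian built from the gradient of the distance map rather than a generic solver, and add robust weighting; those refinements are omitted here.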
“…To achieve vision-based localization against the model, the authors propose an edge-alignment scheme that aligns the current image against a virtual image extracted from the model, which serves as a reference or keyframe. A visual odometry framework (Schenk & Fraundorfer, 2017) using a similar algorithm for RGB-D sensors provides a more thorough evaluation of the approach and produces estimates whose drift accumulation is on par with state-of-the-art visual odometry (VO) methods.…”
Section: State of the Art
Mentioning confidence: 99%