2018
DOI: 10.1016/j.robot.2018.06.009

Robust RGB-D visual odometry based on edges and points

Cited by 20 publications (17 citation statements) · References 24 publications
“…The RMSE values of the translational and rotational drift are shown in Table 2. From the analyses and discussion in [27], it is clear that the EP-based VO achieves state-of-the-art results. However, our proposed approach integrated with ORB-SLAM2 outperforms the other two approaches for most dynamic sequences, as can be seen in Table 2.…”
Section: B. Comparison With State-of-the-art VO System
Confidence: 94%
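The drift metric quoted above is an RMSE over per-frame translational errors between the estimated and ground-truth trajectories. A minimal sketch of that computation, assuming trajectories are already aligned and given as Nx3 position arrays (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def translational_rmse(est_positions, gt_positions):
    """RMSE of per-frame translational error between an estimated
    trajectory and ground truth (both Nx3 arrays of positions)."""
    errors = np.linalg.norm(est_positions - gt_positions, axis=1)
    return float(np.sqrt(np.mean(errors ** 2)))

# Toy example: estimated trajectory offset 0.1 m from ground truth.
gt = np.zeros((4, 3))
est = np.array([[0.1, 0.0, 0.0]] * 4)
print(translational_rmse(est, gt))  # -> 0.1
```

Benchmark tools typically compute this over relative pose errors at a fixed frame interval rather than absolute positions; the sketch shows only the RMSE aggregation step.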
“…An iterative closest point (ICP) algorithm is implemented with weights composed of a static-point weight, an intensity weight, and a geometric weight. Yao et al. [27] proposed a real-time visual odometry combining sparse edge alignment with reprojection-error minimization to obtain robust state estimation. In addition, edges with large reprojection error in dynamic areas are discarded.…”
Section: Related Work
Confidence: 99%
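The quoted passage describes rejecting edge points whose reprojection error is large, on the assumption that they lie in dynamic regions. A minimal sketch of that filtering step under a pinhole camera model (the function name, threshold, and intrinsics are illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

def filter_edges_by_reprojection(points_3d, observed_px, K, threshold_px=3.0):
    """Project 3D edge points with intrinsics K and keep only those
    whose reprojection error against the observed 2D edge locations
    is below threshold_px. A sketch of dynamic-edge rejection."""
    proj = (K @ points_3d.T).T          # pinhole projection to homogeneous pixels
    proj = proj[:, :2] / proj[:, 2:3]   # normalize by depth
    err = np.linalg.norm(proj - observed_px, axis=1)
    keep = err <= threshold_px
    return points_3d[keep], observed_px[keep]

# Toy example: the second point's observation is 10 px off, so it is dropped.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0]])
obs = np.array([[320.0, 240.0], [380.0, 240.0]])
kept_pts, kept_obs = filter_edges_by_reprojection(pts, obs, K)
print(len(kept_pts))  # -> 1
```

In a full pipeline this threshold test would run inside the optimization loop, with the surviving edges feeding the pose estimate; the sketch isolates only the rejection criterion.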
“…One of the most widely used methods is Simultaneous Localization and Mapping (SLAM) [2], where the environment is not known in advance: a vision-based system mounted on a robot observes the area and builds a map of it, on top of which trajectory planning can eventually be performed. Many enhancements of SLAM methods have been proposed; some of them are compared by Santos [3], and the most common sensors used are lidars [4][5][6][7] and depth cameras [4,[8][9][10]. However, when the environment is known (as is frequent in indoor areas), advanced SLAM systems may become redundant, and if multiple robots are deployed it becomes costly for every single robot to carry all the sensors.…”
Section: Introduction
Confidence: 99%