2016
DOI: 10.1515/amcs-2016-0005
Efficient RGB-D data processing for feature-based self-localization of mobile robots

Abstract: The problem of position and orientation estimation for an active vision sensor that moves with respect to the full six degrees of freedom is considered. The proposed approach is based on point features extracted from RGB-D data. This work focuses on efficient point feature extraction algorithms and on methods for the management of a set of features in a single RGB-D data frame. While the fast, RGB-D-based visual odometry system described in this paper builds upon our previous results as to the general architecture…

Cited by 9 publications (7 citation statements)
References 44 publications
“…2. The depth calculation (4) assumes that the IR signal comes from a specific 3D point in the scene during the IT. If there is any motion in this period, the resulting depth will be corrupted.…”
Section: J. Motion Blur
mentioning confidence: 99%
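The cited equation (4) is not reproduced in this excerpt, but the corruption mechanism is easy to demonstrate with a generic continuous-wave time-of-flight model: depth is recovered from the phase shift of a modulated IR signal, and the sensor averages that phase over the integration time. The modulation frequency and depths below are illustrative assumptions, not values from the cited paper:

```python
import math

C = 299_792_458.0   # speed of light, m/s
F_MOD = 30e6        # assumed modulation frequency, Hz

def tof_depth(phase_shift_rad: float) -> float:
    """Continuous-wave ToF depth recovered from the measured phase shift."""
    return C * phase_shift_rad / (4.0 * math.pi * F_MOD)

def true_phase(depth_m: float) -> float:
    """Phase shift that a static point at the given depth would produce."""
    return 4.0 * math.pi * F_MOD * depth_m / C

# Static scene: the phase sampled over the integration time is constant,
# so the depth is recovered exactly.
static_phase = true_phase(2.0)
print(round(tof_depth(static_phase), 3))  # -> 2.0

# Moving scene: the sensor averages phases from different depths during
# the integration time; the recovered depth matches neither endpoint.
samples = [true_phase(2.0 + 0.5 * t / 9) for t in range(10)]  # 2.0 m -> 2.5 m
blurred_phase = sum(samples) / len(samples)
print(round(tof_depth(blurred_phase), 3))  # -> 2.25, a corrupted in-between value
```

Because phase is linear in depth in this model, the averaged measurement lands at the mean of the traversed depths; with a nonlinear or wrapped phase response the corruption is generally worse.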
“…This technology has been applied in many fields for research and engineering solutions. Some practical applications of this sensing modality include robot navigation [3], [4], collision and obstacle detection for robot-assisted surgery [5], 3D reconstruction [6], measurement of structural deformation [7], [8], simultaneous localization and mapping (SLAM) [9], [10], human-computer interaction [11], 3D television (3DTV) [12], plant phenotyping [13], [14], debris monitoring [15], etc.…”
Section: Introduction
mentioning confidence: 99%
“…From the RGB images, it only extracts the keypoints with the use of the ORB detector [26], which was chosen due to the good trade-off between the performance in visual navigation and the computational efficiency [28]. The keypoints are extracted with respect to the scene depth data availability at the given point, avoiding artifacts in depth images [18]. In comparison with our earlier localization systems [3,18], an entirely new SLAM architecture has been introduced in [4].…”
Section: PUT SLAM
mentioning confidence: 99%
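The depth-availability filtering mentioned above can be sketched as follows. The cited system obtains its keypoints from an ORB detector; here the keypoints and the depth image are synthetic stand-ins (the function name and the 0-means-invalid convention are our assumptions, typical of Kinect-style data) so that only the filtering step is shown:

```python
def filter_keypoints_by_depth(keypoints, depth, invalid=0):
    """Keep only keypoints lying on pixels with a valid depth reading.

    keypoints: iterable of (x, y) pixel coordinates (e.g. from an ORB detector).
    depth: row-major depth image (list of rows); `invalid` marks missing
           measurements (0 is the usual no-return value in Kinect-style data).
    """
    rows, cols = len(depth), len(depth[0])
    kept = []
    for x, y in keypoints:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= yi < rows and 0 <= xi < cols and depth[yi][xi] != invalid:
            kept.append((x, y))
    return kept

# Synthetic 4x4 depth map (millimetres) with a measurement hole at pixel (1, 2).
depth = [[1500] * 4 for _ in range(4)]
depth[2][1] = 0
kps = [(1.0, 2.0), (3.0, 3.0), (0.0, 0.0)]
print(filter_keypoints_by_depth(kps, depth))  # -> [(3.0, 3.0), (0.0, 0.0)]
```

Rejecting keypoints without a depth reading is what prevents depth-image artifacts (holes, edge bleeding) from producing spurious 3D features downstream.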
“…The typical approach to graph-based SLAM, which dates back to 2D SLAM systems employing laser scanners [13], is based on the optimization of a graph of sensor poses and explicit detection of loop closures.…”
Section: PUT SLAM
mentioning confidence: 99%
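The pose-graph optimization described above can be illustrated with a minimal 1-D toy problem: each edge (i, j, z) measures the offset x_j − x_i, odometry edges chain consecutive poses, and a single loop-closure edge constrains the ends of the chain. Anchoring pose 0 and minimizing the squared residuals gives a small linear least-squares system. All numbers and names below are illustrative, not from the cited systems:

```python
def solve_pose_graph_1d(edges, n_poses):
    """Least-squares 1-D pose graph: pose 0 is fixed at the origin,
    each edge (i, j, z) measures x_j - x_i = z (odometry or loop closure)."""
    n = n_poses - 1                      # unknowns x1 .. x_{n_poses-1}
    H = [[0.0] * n for _ in range(n)]    # normal matrix A^T A
    g = [0.0] * n                        # right-hand side A^T b
    for i, j, z in edges:
        # Jacobian row: +1 on x_j, -1 on x_i (index -1 = anchored pose 0).
        for a, sa in ((j - 1, 1.0), (i - 1, -1.0)):
            if a < 0:
                continue
            g[a] += sa * z
            for b, sb in ((j - 1, 1.0), (i - 1, -1.0)):
                if b >= 0:
                    H[a][b] += sa * sb
    # Plain Gaussian elimination; H is small, symmetric, positive definite.
    for c in range(n):
        for r in range(c + 1, n):
            f = H[r][c] / H[c][c]
            for k in range(c, n):
                H[r][k] -= f * H[c][k]
            g[r] -= f * g[c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = g[r] - sum(H[r][k] * x[k] for k in range(r + 1, n))
        x[r] = s / H[r][r]
    return [0.0] + x

edges = [(0, 1, 1.1), (1, 2, 1.0), (2, 3, 1.1),  # odometry, ~0.1 m of drift
         (0, 3, 3.0)]                             # loop-closure constraint
print([round(v, 2) for v in solve_pose_graph_1d(edges, 4)])
# -> [0.0, 1.05, 2.0, 3.05]: the loop closure spreads the drift over the chain
```

Real systems such as the one discussed solve the same kind of sparse nonlinear least-squares problem over 6-DoF poses, typically with iterative Gauss-Newton or Levenberg-Marquardt steps rather than a single linear solve.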