2018 IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids)
DOI: 10.1109/humanoids.2018.8625019
Dense RGB-D SLAM for Humanoid Robots in the Dynamic Humans Environment

Cited by 12 publications (6 citation statements)
References 17 publications
“…The absolute trajectory error is mainly evaluated by calculating indicators such as the root mean square error (RMSE), mean error (mean), median error (median), and standard deviation (STD). RMSE can reflect the difference between the real value and the observed value, and STD can reflect the dispersion between the trajectory estimated by the camera and the real trajectory, demonstrating the robustness and stability of the system, which can adequately reflect SLAM performance (de Croon et al, 2021; Zhang et al, 2018).…”
Section: Experiments and Results Analysis (mentioning, confidence: 99%)
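The ATE statistics named in the statement above (RMSE, mean, median, STD of the per-pose translational error) are straightforward to compute once the estimated and ground-truth trajectories have been time-associated and aligned (e.g., with a Horn/Umeyama alignment, as the TUM RGB-D benchmark tooling does). The sketch below covers only the error statistics and assumes that association and alignment have already been done.

import numpy as np

def ate_statistics(est_xyz, gt_xyz):
    """Absolute trajectory error statistics from associated, aligned positions.

    est_xyz, gt_xyz: (N, 3) arrays of estimated and ground-truth camera positions.
    """
    # Per-pose translational error (Euclidean distance between matched positions).
    errors = np.linalg.norm(est_xyz - gt_xyz, axis=1)
    return {
        "rmse": float(np.sqrt(np.mean(errors ** 2))),
        "mean": float(np.mean(errors)),
        "median": float(np.median(errors)),
        "std": float(np.std(errors)),
    }

# Example with two short, already-aligned trajectories (illustrative values only):
est = np.array([[0.0, 0.0, 0.0], [1.0, 0.1, 0.0], [2.0, 0.0, 0.1]])
gt = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
print(ate_statistics(est, gt))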
“…While the most popular approach remains sensor fusion [53,54], other purely visual approaches have also been proposed, such as [55], which introduced a dense RGB-D SLAM solution that utilized optical flow residuals to achieve accurate and efficient dynamic/static segmentation for camera tracking and background reconstruction. Zhang et al. [56] took a more direct approach which employed deep learning based human detection, and used graph-based segmentation to separate moving humans from the static environment. They further presented a SLAM benchmark dedicated to dynamic environment SLAM solutions [57].…”
Section: Localization, Mapping and SLAM (mentioning, confidence: 99%)
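To make the optical-flow-residual idea mentioned above concrete, the sketch below compares the measured dense flow between two frames against the flow predicted from the estimated rigid camera motion, and labels pixels with a large residual as dynamic. This is a simplified stand-in for the cited method, not its actual pipeline; the Farneback flow, the pixel threshold, and the externally supplied predicted flow are all assumptions.

import numpy as np
import cv2

def dynamic_mask_from_flow_residual(prev_gray, curr_gray, predicted_flow, thresh_px=3.0):
    """Mark pixels whose measured flow deviates from rigid-camera-motion flow.

    predicted_flow: (H, W, 2) flow induced by camera ego-motion and depth,
                    computed elsewhere from the current pose estimate (assumed input).
    Returns a boolean (H, W) mask that is True where the scene point likely moved itself.
    """
    # Dense optical flow actually observed between the two frames.
    measured_flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    # Residual between observed flow and the flow a static scene would produce.
    residual = np.linalg.norm(measured_flow - predicted_flow, axis=2)
    return residual > thresh_px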
“…Their method is able to construct a semi-dense map and handle slightly dynamic environments. Zhang et al. [28] propose a method to handle image blur from robot motion, dynamic scene elements and tracking failures, based on a surfel fusion system [20]. Dynamic scene parts are assumed to come from human motion.…”
Section: B. Visual Perception for Humanoids (mentioning, confidence: 99%)
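Since the statement above notes that dynamic scene parts are assumed to come from human motion, a minimal way to act on that assumption is to drop depth measurements inside detected person regions before they reach map fusion. The sketch below illustrates only that masking step; the box format and the upstream person detector are hypothetical inputs, and the cited system's graph-based segmentation and surfel fusion are not reproduced here.

import numpy as np

def mask_humans_from_depth(depth_m, person_boxes):
    """Invalidate depth inside detected person boxes so those pixels are not fused.

    depth_m:      (H, W) depth image in meters, where 0 means invalid.
    person_boxes: iterable of (x_min, y_min, x_max, y_max) pixel boxes from any
                  person detector (hypothetical input format).
    """
    filtered = depth_m.copy()
    for x_min, y_min, x_max, y_max in person_boxes:
        # Treat pixels covered by a detected human as missing data.
        filtered[y_min:y_max, x_min:x_max] = 0.0
    return filtered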