2019
DOI: 10.1109/access.2019.2960798
Refining the Fusion of Pepper Robot and Estimated Depth Maps Method for Improved 3D Perception

Abstract: As is well known, some versions of the Pepper robot provide poor depth perception due to the lenses mounted in front of the tridimensional sensor. In this paper, we present a method to improve that faulty 3D perception. Our proposal is based on a combination of the actual depth readings of Pepper and a deep learning-based monocular depth estimation. As shown, combining the two provides a better 3D representation of the scene. In previous works we made an initial approximation of this fusion t…
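The fusion the abstract describes can be illustrated with a minimal per-pixel sketch: invalid or out-of-range sensor readings are filled in from the monocular prediction after a least-squares scale alignment. The validity thresholds and the scale-alignment step below are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def fuse_depth(sensor_depth, mono_depth, min_valid=0.4, max_valid=3.0):
    """Fuse a noisy metric depth map with a relative monocular estimate.

    sensor_depth: metric depth in metres; zeros or out-of-range values mark
                  invalid pixels (thresholds here are illustrative).
    mono_depth:   monocular prediction at the same resolution, relative scale.
    """
    valid = (sensor_depth > min_valid) & (sensor_depth < max_valid)
    # Scale alignment: s minimising ||s * mono - sensor||^2 over valid pixels.
    s = (mono_depth[valid] * sensor_depth[valid]).sum() / (mono_depth[valid] ** 2).sum()
    fused = sensor_depth.copy()
    # Keep trusted sensor readings; fill the rest from the rescaled prediction.
    fused[~valid] = s * mono_depth[~valid]
    return fused
```

A usage sketch: with a sensor map containing holes (zeros) and a monocular map at half scale, the valid pixels pass through unchanged while the holes receive the rescaled monocular values.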

Cited by 6 publications (7 citation statements)
References 30 publications
“…It is also interesting to mention the work of Bauer et al. on improving the vision of the Pepper robot [14]. In their work, they seek to improve 3D perception through a fusion method combining the raw depth image data with the monocular depth prediction from the RGB image.…”
Section: A Related Work
confidence: 99%
“…The first one computes visual information from 2D monocular cameras and 3D depth sensors to build a 3D map. In [1], experiments conducted on Pepper's depth camera (the Asus Xtion) led to a Root Mean Square Error (RMSE) of 20.36 mm at a 1 m distance and 79.15 mm at a 3 m distance. This error is found to stem from the robot's lenses [1].…”
Section: Navigation Module
confidence: 99%
“…In [1], experiments conducted on Pepper's depth camera (the Asus Xtion) led to a Root Mean Square Error (RMSE) of 20.36 mm at a 1 m distance and 79.15 mm at a 3 m distance. This error is found to stem from the robot's lenses [1]. Other issues such as depth shadow and monocular depth estimation are also covered in this paper.…”
Section: Navigation Module
confidence: 99%
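The RMSE figures quoted in the citations above (e.g. 20.36 mm at 1 m) measure the root mean square difference between the sensor's depth readings and ground truth. A minimal computation, with made-up sample values for illustration, looks like:

```python
import numpy as np

def depth_rmse(measured, ground_truth):
    """Root Mean Square Error between two depth maps, in the input units."""
    diff = measured.astype(float) - ground_truth.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

# Illustrative example: three readings around a 1.0 m ground-truth plane.
measured = np.array([1.02, 0.98, 1.01])
ground_truth = np.ones(3)
rmse = depth_rmse(measured, ground_truth)
```

Here the result is in metres; multiplying by 1000 gives the millimetre figures reported in the citing papers.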
“…with humans, social robots require robust visual perception [2], which remains challenging, especially in real-world scenarios and public spaces with diverse user and environmental contexts [5,6]. This study aims to improve the human-perception capabilities of social robots, demonstrated on the Pepper humanoid robot as a representative platform commonly adopted in research and application [24,28,30].…”
confidence: 99%