2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids)
DOI: 10.1109/humanoids.2017.8246973
Interactive data collection for deep learning object detectors on humanoid robots

Abstract: Deep Learning (DL) methods are notoriously data hungry. Their adoption in robotics is challenging due to the cost of data acquisition and labeling. In this paper we focus on the problem of object detection, i.e., the simultaneous localization and recognition of objects in the scene, for which various DL architectures have been proposed in the literature. We propose an automatic annotation procedure, which leverages human-robot interaction and depth-based segmentation, for the acquisition …
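The abstract mentions depth-based segmentation as the basis of the automatic annotation procedure. A minimal sketch of the idea is shown below: pixels whose depth falls in a near-range band (e.g., an object held in front of the robot's camera) are segmented, and their extent directly yields a bounding-box label for the detector. The function name, thresholds, and synthetic depth image are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def annotate_from_depth(depth, near=0.3, far=0.6):
    """Return a bounding box (x_min, y_min, x_max, y_max) covering pixels
    whose depth lies in [near, far] metres -- a crude stand-in for the
    depth-based segmentation used to auto-label handheld objects.
    Returns None if no pixel falls in the band."""
    mask = (depth >= near) & (depth <= far)
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)  # row/column indices of segmented pixels
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Synthetic depth image: background at 1.5 m, a "handheld object" patch at 0.4 m.
depth = np.full((480, 640), 1.5, dtype=np.float32)
depth[100:200, 300:400] = 0.4
print(annotate_from_depth(depth))  # → (300, 100, 399, 199)
```

In the real system the segmented region would be paired with a class label spoken or otherwise provided by the human during the interaction, turning each frame into a labeled training example at no manual annotation cost.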

Cited by 21 publications (18 citation statements)
References 40 publications
“…The dataset acquisition pipeline shown in Pasquale et al. (2016) and used in Maiettini et al. (2017) was developed for the iCub robot using images acquired from a stereo vision system; then, using the interfaces resulting from the work presented, the pipeline was easily integrated on the R1 robot, which mounts an RGBD sensor.…”
Section: Discussion
confidence: 99%
“…One possibility to address this issue is to design the learning model to handle multiple components of the occluded objects, e.g., Peng et al. (2020). Another option is to exploit existing Human-Robot Interaction pipelines for automatic image annotation of handheld objects for the object detection task (Maiettini et al., 2017). Finally, refining a pretrained model on the target hand-object scenario by exploiting unlabeled images from the robot cameras and weakly-supervised learning (Hernández-González et al., 2016; Zhou, 2018) can achieve state-of-the-art accuracy with only a fraction of the required annotated data (Maiettini et al., 2019).…”
Section: Data and Sensing
confidence: 99%
“…Animals often act to gather information about the world through an active perception approach [8]. This approach, applied to robotics, was employed to improve mobile robot localization [9] and for better object tracking in data acquisition [10]. Active vision is also connected to multiview classification or pose estimation.…”
Section: Active Perception
confidence: 99%