2011 IEEE International Conference on Robotics and Automation
DOI: 10.1109/icra.2011.5979793

Transparent object detection and reconstruction on a mobile platform

Abstract: In this paper we propose a novel approach to detect and reconstruct transparent objects. This approach makes use of the fact that many transparent objects, especially those made of ordinary glass, absorb light at certain wavelengths [1]. Given controlled illumination, this absorption is measurable in the intensity response by comparison to the background. We show the use of a standard infrared emitter and the intensity sensor of a time-of-flight (ToF) camera to reconstruct the structure given we have…
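
The absorption cue described in the abstract lends itself to a simple image-space test: with the IR emitter on, the ToF intensity measured over a glass object drops relative to a background reference frame. The sketch below is only an illustration of that background-comparison idea, not the paper's implementation; the function name, drop threshold, and connected-component filtering are assumptions introduced here.

```python
import numpy as np
from scipy import ndimage

def detect_transparent_regions(intensity, background, drop_ratio=0.15, min_area=50):
    """Flag pixels whose IR intensity drops by more than `drop_ratio`
    relative to a background reference captured without the object.
    (Hypothetical sketch; parameters are not taken from the paper.)"""
    background = np.maximum(background.astype(np.float64), 1e-6)
    relative_drop = (background - intensity.astype(np.float64)) / background

    # Candidate transparent pixels: significant absorption w.r.t. the background.
    mask = relative_drop > drop_ratio

    # Keep only reasonably sized connected blobs to suppress sensor noise.
    labels, num = ndimage.label(mask)
    if num == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=np.arange(1, num + 1))
    keep_labels = np.flatnonzero(sizes >= min_area) + 1
    return np.isin(labels, keep_labels)

# Usage with two ToF intensity frames of the same scene:
# mask = detect_transparent_regions(frame_with_object, frame_background)
```

This ratio test only captures the comparison-to-background step; the paper additionally relies on the controlled IR emitter and the ToF camera's own intensity channel for reconstruction.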

Cited by 65 publications (29 citation statements); references 14 publications.

“…A robot is able to grasp 80% of known transparent objects with the proposed algorithm, and this result is robust across non-specular backgrounds behind the objects. Our approach and other existing algorithms for pose estimation (Klank et al. [12], Phillips et al. [22]) cannot handle overlapping transparent objects, so this is the main direction for future work.…”
Section: Discussion

“…Our main focus is accurate pose estimation of transparent objects, which is used to enable robotic grasping. Previous works by Klank et al. [12] and Phillips et al. [22] are the most relevant here, because other papers do not deal with pose estimation of transparent objects. These works require two views of a test scene; in contrast, we use a single Kinect image at the test stage to recognize transparent objects and estimate their pose.…”
Section: Proposed Approach

“…Others have eschewed analytic models and instead proposed methods based on learning from an exemplar set [7,19,18,10,16,13]. Additional key distinctions between approaches include whether the sensor used is passive (e.g., a CCD/CMOS camera) or active (e.g., a time-of-flight sensor [12]) and whether a single view or multiple views are considered.…”
Section: Related Work

“…[3] learns object models from data, and [8] detects transparent objects employing a second time-of-flight camera. For increasing the robustness of visual category recognition across different sources, such as the web, high-quality DSLRs, or webcams [12], the metric learning formulation proved to be successful.…”
Section: Related Work