Robotics: Science and Systems XII
DOI: 10.15607/rss.2016.xii.021

Seeing Glassware: from Edge Detection to Pose Estimation and Shape Recovery

Abstract: Perception of transparent objects has been an open challenge in robotics despite advances in sensors and data-driven learning approaches. In this paper, we introduce a new approach that combines recent advances in learnt object detectors with perceptual grouping in 2D and the projective geometry of apparent contours in 3D. We train a state-of-the-art structured edge detector on an annotated set of foreground glassware. We assume that we deal with surfaces of revolution (SOR) and apply perceptual symmetry …
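
The first stage of the pipeline described above is a learnt structured edge detector trained on annotated glassware. As a rough illustration of that stage only, the sketch below runs OpenCV's generic structured-forest edge detector (ximgproc) on an input photo to produce the kind of per-pixel edge probability map the later perceptual-grouping and SOR-fitting stages would consume. The image path and pretrained model file are placeholders, and this generic model is only a stand-in for the glassware-specific detector trained in the paper.

```python
import cv2
import numpy as np

# Stand-in for the paper's glassware-trained structured edge detector:
# OpenCV's structured-forest model (ximgproc). "model.yml.gz" is the
# pretrained model shipped with OpenCV extra data (placeholder path).
detector = cv2.ximgproc.createStructuredEdgeDetection("model.yml.gz")

img = cv2.imread("glass.jpg")  # placeholder input image
rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0

edges = detector.detectEdges(rgb)             # per-pixel edge probabilities in [0, 1]
orientation = detector.computeOrientation(edges)
thin = detector.edgesNms(edges, orientation)  # non-maximum suppression -> thin contours

cv2.imwrite("edges.png", (thin * 255).astype(np.uint8))
```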

Cited by 40 publications (21 citation statements)
References 29 publications
“…Current robot vision systems have become capable of recognizing mirrors and transparent objects using the aforementioned methods. Robot manipulators can grasp them and mobile robots can avoid them [24]-[27]. Recent studies have used deep learning to extract latent characteristics of the appearance of transparent objects from large amounts of training data [28]-[32].…”
Section: Related Work, A. Vision-based Approach
confidence: 99%
“…It is worth noting that other sensors can also be used for transparent object pose estimation. For example, transparent object pose can be estimated with a monocular color camera [43,44], but the translation estimate along the z-axis tends to be inaccurate due to the lack of 3D depth information. Stereo cameras [45,46], light-field cameras [47], single-pixel cameras [48], and microscope-camera systems [49] can also be used for object pose estimation, but these works are very different from this paper and are not discussed further.…”
Section: Related Work
confidence: 99%
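
The z-axis weakness noted in the excerpt above follows directly from the pinhole projection model; the lines below are a generic textbook derivation, not material from the cited works.

```latex
% Pinhole projection of a 3D point (X, Y, Z) with focal length f:
\[
u = \frac{fX}{Z}, \qquad v = \frac{fY}{Z},
\qquad \text{and for any } s > 0:\quad \frac{f(sX)}{sZ} = \frac{fX}{Z}.
\]
% Scaling the whole scene by s leaves the image unchanged, so a single color
% view constrains only the bearing X:Y:Z, not absolute depth. The translation
% along the optical axis is recovered only through a prior such as known
% object size or, in this paper's setting, the SOR shape model.
```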
“…In view-based methods, the object's pose is determined by comparing precomputed 2D views of the object against the query image. To reduce running time, similar viewpoints are merged [15,17,24,28,36]. Some view-based methods fix the camera-to-object distance to a known constant (e.g., the object sits on a table or a conveyor belt).…”
Section: Related Work
confidence: 99%
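
For readers unfamiliar with the view-based scheme the excerpt sketches, here is a minimal Python sketch (names and the similarity metric are illustrative assumptions, not taken from the cited papers): a query image is scored against a set of views pre-rendered offline, and the best-matching viewpoint is returned. Merging similar viewpoints simply shortens the list of stored views, and fixing the camera-to-object distance, as some of the cited methods do, removes the need to search over scale before this comparison.

```python
import numpy as np
import cv2

def best_view(query_gray, views):
    """views: list of (pose_label, rendered_gray_image) precomputed offline.
    Returns the pose label whose rendered view best matches the query,
    using normalized cross-correlation as the similarity score."""
    best_label, best_score = None, -np.inf
    for label, view in views:
        # Resize the rendered view to the query size so matchTemplate
        # yields a single dense comparison score.
        view = cv2.resize(view, (query_gray.shape[1], query_gray.shape[0]))
        score = cv2.matchTemplate(query_gray, view, cv2.TM_CCOEFF_NORMED)[0, 0]
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score
```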