2017
DOI: 10.1155/2017/9796127
Fusing Depth and Silhouette for Scanning Transparent Object with RGB-D Sensor

Abstract: 3D reconstruction based on structured light or laser scanning has been widely used in industrial measurement, robot navigation, and virtual reality. However, most modern range sensors fail to scan transparent objects and certain other materials, whose surfaces do not return accurate depth because of the absorption and refraction of light. In this paper, we fuse depth and silhouette information from an RGB-D sensor (Kinect v1) to recover the lost surface of transparent objects. Our system i…
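The abstract describes recovering transparent surfaces by combining silhouette cues with depth. A standard silhouette-based step is space carving: voxels whose projection falls outside the object mask in a view cannot belong to the object. The sketch below is a minimal, generic version of that step under a pinhole camera model; the function name, parameters, and setup are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def carve_silhouette(voxels_world, K, R, t, silhouette):
    """Keep only voxels whose projection lands inside the silhouette mask.

    voxels_world: (N, 3) voxel centers in world coordinates.
    K: (3, 3) camera intrinsics; R, t: world-to-camera rotation/translation.
    silhouette: (H, W) boolean object mask for this view.
    Returns a boolean keep-mask of length N.
    """
    cam = voxels_world @ R.T + t                    # world -> camera frame
    in_front = cam[:, 2] > 0                        # ignore points behind the camera
    pix = cam @ K.T                                 # homogeneous pixel coordinates
    u = np.round(pix[:, 0] / pix[:, 2]).astype(int)
    v = np.round(pix[:, 1] / pix[:, 2]).astype(int)
    h, w = silhouette.shape
    inside = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    keep = np.zeros(len(voxels_world), dtype=bool)
    keep[inside] = silhouette[v[inside], u[inside]] # carve voxels outside the mask
    return keep
```

Repeating this over several calibrated views intersects the per-view cones into a visual hull, which silhouette-plus-depth methods then refine with whatever depth the sensor did return.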

Cited by 20 publications (9 citation statements) · References 24 publications
“…Second, while our system is able to handle partially transparent kits, it has trouble handling fully transparent ones like the deodorant blister pack (we spray-paint it to support stereo matching for our 3D camera). Exploring the use of external vision algorithms like [33], [34], [35], [36] to estimate the geometry of the transparent kits before using the visual data would be a promising direction for future research.…”
Section: Discussion and Future Work
confidence: 99%
“…Works in this area generally fall into four main schemes: constraint-driven, monocular depth estimation, depth completion from a sparse point cloud, and depth completion given noisy RGB-D. Our work falls into the last scheme. Constraint-driven approaches assume a specific setup method with fixed viewpoint(s) and capture procedure [12,13,14,15,16], sensor type [17,18,11], or known object models [19,20,21]. Our proposed depth completion method makes no such assumptions.…”
Section: Related Work
confidence: 99%
“…3D geometry can be reconstructed from multiple views or 3D scanning. Ji et al [37] conducted a volumetric reconstruction of transparent objects by fusing depth and silhouette from multiple images with known poses. Li et al [38] presented a physically based network to recover the 3D shape of transparent objects from multiple color images.…”
Section: Related Work
confidence: 99%
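The volumetric depth-and-silhouette fusion mentioned in the last statement typically accumulates depth observations into a voxel grid. A common mechanism for this is a running-average update of a truncated signed distance function (TSDF) per voxel. The sketch below shows one such generic update; it is not the cited authors' code, and the function name and truncation parameter are assumptions.

```python
import numpy as np

def tsdf_update(tsdf, weight, voxel_z, depth_at_pixel, trunc=0.05):
    """One running-average TSDF update for voxels seen by a depth camera.

    tsdf, weight: (N,) current per-voxel signed distance and observation count.
    voxel_z: (N,) camera-space depth of each voxel center.
    depth_at_pixel: (N,) measured depth at each voxel's projected pixel
                    (0 marks a missing measurement).
    Returns the updated (tsdf, weight) arrays.
    """
    sdf = depth_at_pixel - voxel_z
    # Skip missing depth and voxels far behind the observed surface (occluded).
    valid = (depth_at_pixel > 0) & (sdf > -trunc)
    d = np.clip(sdf / trunc, -1.0, 1.0)             # truncated, normalized distance
    new_weight = weight + valid                      # valid casts to 0/1
    tsdf = np.where(valid,
                    (tsdf * weight + d) / np.maximum(new_weight, 1),
                    tsdf)
    return tsdf, new_weight
```

Voxels the depth sensor misses on transparent surfaces keep a zero weight, which is exactly where silhouette carving can fill in geometry the depth channel lost.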