2019
DOI: 10.1007/978-3-030-20873-8_41
Depth Reconstruction of Translucent Objects from a Single Time-of-Flight Camera Using Deep Residual Networks

Abstract: We propose a novel approach to recovering the depth of translucent objects from a single time-of-flight (ToF) depth camera using deep residual networks. When recording translucent objects with a ToF depth camera, their depth values are severely contaminated by complex light interactions with the surrounding environment. While existing methods suggested new capture systems or developed depth distortion models, their solutions were less practical because of strict assumptions or heavy computational complex…

Cited by 7 publications (5 citation statements)
References: 32 publications
“…Works in this area generally fall in four main schemes: constraints driven, monocular depth estimation, depth completion from sparse point cloud, and depth completion given noisy RGB-D. Our work falls in the last scheme. Constraint driven approaches assume a specific setup method with fixed viewpoint(s) and capturing procedure [12,13,14,15,16], sensor type [17,18,11], or known object models [19,20,21]. Our proposed depth completion method does not apply any assumptions.…”
Section: Related Work
confidence: 99%
“…Further, several works on improving on various aspects and applications using machine learning followed, e.g. online calibration using RGB information [26], frame rate optimization [5], power efficiency [6], robotic arm setups [27] or translucent materials [28], to name a few. The aforementioned approaches all use standard 2D CNNs and thus consider the denoising problem as an image task.…”
Section: Related Work
confidence: 99%
“…However, existing methods treat the task of ToF denoising as a 2D image problem and do not take into account the explicit 3D information in their computations. In these works, the depth information is usually used as an input to standard Convolutional Neural Networks (CNN) for images [1,2,20,27,28], while the underlying 3D structure is not analyzed. In this work instead, we propose a new neural network architecture that projects the problem into the 3D domain and makes use of point convolutional neural networks [13] to analyze the noisy reconstruction and adjust the point positions along the view direction, see Fig.…”
Section: Introduction
confidence: 99%
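The excerpt above describes lifting the ToF denoising problem from the 2D image domain into 3D by treating each depth pixel as a point along its view ray. A minimal sketch of the standard pinhole back-projection that produces such a point cloud (not the cited authors' code; the intrinsics here are illustrative placeholders) could look like:

```python
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into a 3D point cloud.

    Each pixel (u, v) with depth z maps to a point along its view ray
    using the pinhole model: x = (u - cx) / fx * z, y = (v - cy) / fy * z.
    Returns an (H*W, 3) array of points in the camera frame.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Toy 2x2 depth map with placeholder intrinsics, for illustration only.
depth = np.array([[1.0, 2.0],
                  [1.5, 1.0]])
pts = backproject_depth(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

A point-convolutional network as described in the excerpt would then operate on `pts` directly and adjust each point's position along its view ray, rather than regressing a corrected 2D depth image.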
“…In [32,33], transparent objects are reconstructed with known background patterns. In [34][35][36], the depth of transparent objects is reconstructed using time of flight camera, since glass absorbs light of certain wavelengths. 3D geometry can be reconstructed from multiple views or 3D scanning.…”
Section: Related Work
confidence: 99%