2020 · DOI: 10.3390/s20236790
6DoF Pose Estimation of Transparent Object from a Single RGB-D Image

Abstract: 6DoF object pose estimation is a foundation for many important applications, such as robotic grasping and automated driving. However, estimating the 6DoF pose of transparent objects, which are common in daily life, is very challenging, because the optical characteristics of transparent materials produce significant depth errors that lead to false estimates. To solve this problem, a two-stage approach is proposed to estimate the 6DoF pose of a transparent object from a single RGB-D image. In the fir…
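The two-stage structure described in the abstract (first correct the depth, then estimate the pose) can be illustrated with a minimal sketch. Everything below, from the network shapes to the assumption of a known transparent-pixel mask, is a placeholder for illustration and not the paper's actual architecture:

```python
# Minimal sketch of the two-stage idea in the abstract: stage 1 replaces
# unreliable depth at transparent pixels with predicted values, stage 2
# regresses a 6DoF pose from the corrected RGB-D input.
import torch
import torch.nn as nn

class DepthCompletionNet(nn.Module):
    """Stage 1: predict depth for pixels flagged as transparent."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(            # RGB (3) + raw depth (1) + mask (1)
            nn.Conv2d(5, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),  # per-pixel depth estimate
        )

    def forward(self, rgb, depth, mask):
        pred = self.net(torch.cat([rgb, depth, mask], dim=1))
        # Use predictions at transparent pixels, measured depth elsewhere.
        return torch.where(mask.bool(), pred, depth)

class PoseNet(nn.Module):
    """Stage 2: regress translation + quaternion from corrected RGB-D."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 7)  # (tx, ty, tz, qw, qx, qy, qz)

    def forward(self, rgb, depth):
        out = self.head(self.backbone(torch.cat([rgb, depth], dim=1)))
        t, q = out[:, :3], out[:, 3:]
        return t, q / q.norm(dim=1, keepdim=True)  # unit quaternion

rgb   = torch.rand(1, 3, 64, 64)
depth = torch.rand(1, 1, 64, 64)
mask  = (torch.rand(1, 1, 64, 64) > 0.8).float()  # transparent-pixel mask
depth_fixed = DepthCompletionNet()(rgb, depth, mask)
t, q = PoseNet()(rgb, depth_fixed)
print(t.shape, q.shape)  # torch.Size([1, 3]) torch.Size([1, 4])
```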

Cited by 17 publications (12 citation statements). References: 54 publications.
“…Rather than reconstructing a depth map, [4], [15] generate the grasp proposal with only RGB images or noisy depth maps as input. In [4], transfer learning was used to transfer the grasping model trained on depth maps to transparent object grasping with RGB images.…”
Section: A. Transparent Object Grasping (mentioning)
confidence: 99%
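As a rough illustration of the transfer-learning idea the citing authors attribute to [4], reusing a grasp model trained on depth maps for RGB input, here is a minimal PyTorch sketch. The model, shapes, and initialization scheme are assumptions for illustration, not details from [4]:

```python
# Generic transfer-learning sketch: reuse a grasp model trained on depth
# maps, swap its 1-channel input layer for a 3-channel RGB one, fine-tune.
import torch
import torch.nn as nn

grasp_net = nn.Sequential(                 # stand-in for a depth-trained model
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 4),                      # e.g. grasp (x, y, angle, width)
)
# ... pretrained depth-map weights would be loaded here ...

# Replace the first layer so the network accepts RGB instead of depth.
old = grasp_net[0]
grasp_net[0] = nn.Conv2d(3, 16, 3, padding=1)
with torch.no_grad():
    # Initialize each RGB channel from the single depth channel.
    grasp_net[0].weight.copy_(old.weight.repeat(1, 3, 1, 1) / 3.0)
    grasp_net[0].bias.copy_(old.bias)

# Fine-tune only the new input layer at first (one possible design choice).
for p in grasp_net.parameters():
    p.requires_grad = False
for p in grasp_net[0].parameters():
    p.requires_grad = True

print(grasp_net(torch.rand(1, 3, 32, 32)).shape)  # torch.Size([1, 4])
```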
“…In [4], transfer learning was used to transfer the grasping model trained on depth maps to transparent object grasping with RGB images. In [15], a two-stage approach was proposed to estimate 6-DoF pose of transparent objects from a single RGB-D image, which can be used to assist transparent object grasping. Nonetheless, there has been no work on transparent object grasping with both visual and tactile information.…”
Section: A. Transparent Object Grasping (mentioning)
confidence: 99%
“…Phillips et al [8] trained a random forest to detect the contours of transparent objects for the purpose of pose estimation and shape recovery. Xu et al [9] proposed a two-stage method for estimating the 6-degrees-of-freedom (DOF) pose of a transparent object with a single RGBD image by replacing the noisy depth values with estimated values and training a DenseFusion-like network structure [10] to predict the object's 6-DOF pose. Sajjan et al [11] extend this and incorporate a neural network trained for 3D pose estimation of transparent objects in a robotic picking pipeline, while Zhou et al [12,13] train a grasp planner directly on raw images from a light-field camera.…”
Section: Related Work (mentioning)
confidence: 99%
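The DenseFusion-like structure [10] mentioned here fuses per-point color and geometry features and predicts a pose hypothesis with a confidence for each point, keeping the most confident one. A toy sketch of that pattern follows; the layer sizes and names are illustrative, not the published architecture:

```python
# Rough DenseFusion-style fusion: concatenate color and point-cloud features
# per point, predict a per-point pose plus confidence, pick the best point.
import torch
import torch.nn as nn

class DenseFusionSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.rgb_mlp   = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU())
        self.pose_head = nn.Linear(128, 7 + 1)  # pose + per-point confidence

    def forward(self, rgb_feats, points):
        # rgb_feats: (B, N, 32) color embeddings sampled at the N points
        # points:    (B, N, 3) camera-frame 3D coordinates
        fused = torch.cat([self.rgb_mlp(rgb_feats),
                           self.point_mlp(points)], dim=-1)   # (B, N, 128)
        out = self.pose_head(fused)                           # (B, N, 8)
        pose, conf = out[..., :7], out[..., 7]
        # Return the pose hypothesis from the most confident point.
        best = conf.argmax(dim=1)                             # (B,)
        return pose[torch.arange(pose.shape[0]), best]

net = DenseFusionSketch()
pose = net(torch.rand(2, 500, 32), torch.rand(2, 500, 3))
print(pose.shape)  # torch.Size([2, 7])
```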
“…Data-driven approaches learn a prior using labeled data [25,26] or through self-supervision over many trials in a simulated or physical environment [27,28] and generalize to grasping novel objects with unknown geometry. Both approaches rely on RGB and depth sensors to generate a sufficiently accurate observation of the target object surface, such as depth maps [29,30,31], point clouds [32,33,34,9], octrees [35], or a truncated signed distance function (TSDF) [36,37] from which it can compute the grasp pose. While various grasp-planning methods use different input geometry to compute grasps, in this paper we propose a method to render a high-quality depth map from a NeRF model to then pass to Dex-Net [29] to compute a grasp.…”
Section: Related Work (mentioning)
confidence: 99%
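Rendering a depth map from a NeRF, as the last sentence describes, amounts to computing the expected ray-termination distance under the volume-rendering weights: depth = sum_i w_i * t_i with w_i = T_i * (1 - exp(-sigma_i * delta_i)). A small NumPy sketch, with a toy density field standing in for a trained NeRF:

```python
# Depth from a NeRF-style density field: along each camera ray, depth is the
# expected termination distance under the standard volume-rendering weights.
import numpy as np

def render_depth(sigmas, t_vals):
    """Expected ray-termination depth from per-sample densities.

    sigmas: (n_rays, n_samples) volume densities along each ray
    t_vals: (n_samples,) sample distances along the ray
    """
    deltas = np.diff(t_vals, append=t_vals[-1] + 1e10)       # sample spacing
    alpha = 1.0 - np.exp(-sigmas * deltas)                   # opacity per bin
    trans = np.cumprod(1.0 - alpha + 1e-10, axis=-1)
    trans = np.concatenate([np.ones_like(trans[:, :1]),
                            trans[:, :-1]], axis=-1)         # transmittance T_i
    weights = alpha * trans                                  # w_i = T_i * alpha_i
    return (weights * t_vals).sum(axis=-1)                   # E[depth] per ray

t_vals = np.linspace(0.5, 4.0, 64)
sigmas = np.where(np.abs(t_vals - 2.0) < 0.1, 50.0, 0.0)     # a "surface" at 2 m
sigmas = np.tile(sigmas, (3, 1))                             # 3 toy rays
print(render_depth(sigmas, t_vals))                          # roughly 2 m per ray
```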
“…Works in this area generally fall into four main schemes: constraint-driven, monocular depth estimation, depth completion from a sparse point cloud, and depth completion given noisy RGB-D. Our work falls in the last scheme. Constraint-driven approaches assume a specific setup method with fixed viewpoint(s) and capturing procedure [12,13,14,15,16], sensor type [17,18,11], or known object models [19,20,21]. Our proposed depth completion method does not make any of these assumptions.…”
Section: Related Work (mentioning)
confidence: 99%
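For concreteness, the "depth completion given noisy RGB-D" scheme that this survey places the work in can be reduced to its simplest form: mask out unreliable (e.g. transparent) depth pixels and fill the hole from valid neighbors. The iterative fill below is a naive hand-written baseline for illustration only, not any of the cited learned methods:

```python
# Toy depth completion: invalid (e.g. transparent) depth pixels are filled
# by repeatedly averaging their already-valid 4-neighbors (a Jacobi-style
# harmonic fill). Measured pixels are held fixed throughout.
import numpy as np

def complete_depth(depth, valid, iters=200):
    d = np.where(valid, depth, 0.0).astype(np.float64)
    w = valid.astype(np.float64)
    for _ in range(iters):
        # Neighbor value and weight sums via shifted copies of the grids.
        ds = sum(np.roll(d, s, a) for s, a in [(1, 0), (-1, 0), (1, 1), (-1, 1)])
        ws = sum(np.roll(w, s, a) for s, a in [(1, 0), (-1, 0), (1, 1), (-1, 1)])
        fill = ds / np.maximum(ws, 1e-8)
        d = np.where(valid, depth, fill)           # keep measured pixels fixed
        w = np.maximum(w, (ws > 0).astype(np.float64))
    return d

depth = np.fromfunction(lambda y, x: 1.0 + 0.01 * x, (32, 32))
valid = np.ones((32, 32), dtype=bool)
valid[10:20, 10:20] = False                        # hole from a transparent object
filled = complete_depth(depth, valid)
print(np.abs(filled - depth)[10:20, 10:20].max())  # small residual error
```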