2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)
DOI: 10.1109/iccvw.2019.00338
HomebrewedDB: RGB-D Dataset for 6D Pose Estimation of 3D Objects

Abstract: Among the most important prerequisites for creating and evaluating 6D object pose detectors are datasets with labeled 6D poses. With the advent of deep learning, demand for such datasets is growing continuously. Although some exist, they are scarce and typically have restricted setups, such as a single object per sequence, or they focus on specific object types, such as textureless industrial parts. Besides, two significant components are often ignored: training using only available 3D models i…


Cited by 112 publications (87 citation statements)
References 30 publications (64 reference statements)
“…In T-LESS [25], this error was comparable to ours with 2.5 mm for a structured light RGB-D sensor and 5.6 mm for a ToF sensor, respectively. The same sensors were employed in HomebrewedDB [24] and comparable errors of 2.56 mm (structured light) and 9.12 mm (ToF) were reported.…”
Section: Discussion (mentioning)
confidence: 96%
“…This was attributed to the errors in the estimated intrinsic and extrinsic RGB-D camera parameters. Therefore, similar to the practice in RGB-D datasets reported in [24, 25], a depth correction was deemed necessary. For each specimen and each sensor, the median z-coordinate deviation was calculated over all 90 datapoints (15 push-pins × 6 viewpoints).…”
Section: Materials and Methods (mentioning)
confidence: 99%
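
The depth correction quoted above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration only: the function names, the (15, 6) array layout for push-pins × viewpoints, and the treatment of invalid pixels are assumptions, not details taken from the cited paper.

    import numpy as np

    def depth_correction_offset(measured_z, reference_z):
        """Median z-deviation (mm) for one specimen/sensor pair.

        measured_z, reference_z: arrays of shape (15, 6) holding the z-coordinate
        of each push-pin (rows) observed from each viewpoint (columns), i.e. 90 datapoints.
        """
        deviations = measured_z - reference_z      # per-datapoint depth error
        return float(np.median(deviations))        # median is robust to outliers

    def apply_depth_correction(depth_map_mm, offset_mm):
        """Subtract the per-sensor offset from every valid depth reading."""
        corrected = depth_map_mm.astype(np.float32) - offset_mm
        corrected[depth_map_mm == 0] = 0.0         # keep invalid (zero) pixels untouched
        return corrected

The constant offset returned by depth_correction_offset would then be subtracted from all depth maps captured by that sensor, mirroring the per-specimen, per-sensor correction described in the quote.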
“…The objects in the 3D scene were randomly sampled from publicly available datasets [40-43] and each scene was rendered by Blender to have 21 varifocal images [44]. The textures of the objects used in the training stage were randomly sampled from the CC0 texture library and the textures of the objects used in the evaluation stage were sampled from the "Benchmark for 6D Object Pose Estimation" datasets [40-43]. The colors, orientations, and intensities of the light sources were randomly sampled while the maximum intensity was restricted to prevent overexposure.…”
Section: Generation of the Training Dataset (mentioning)
confidence: 99%
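
The randomization described in that statement (random object textures, random light color/orientation/intensity with a cap to avoid overexposure) can be illustrated with a small parameter sampler. The texture paths, value ranges, and MAX_INTENSITY cap below are illustrative assumptions; the cited work's actual Blender pipeline is not reproduced here.

    import random

    CC0_TEXTURES = [
        "textures/wood_01.png",      # placeholder entries standing in for a CC0 texture library
        "textures/metal_03.png",
        "textures/fabric_07.png",
    ]
    MAX_INTENSITY = 1000.0           # assumed cap to prevent overexposed renders

    def sample_scene_parameters(num_lights=3):
        """Sample one scene's randomization parameters for the renderer."""
        lights = []
        for _ in range(num_lights):
            lights.append({
                "color": [random.uniform(0.7, 1.0) for _ in range(3)],        # RGB
                "euler_deg": [random.uniform(0.0, 360.0) for _ in range(3)],  # orientation
                "intensity": min(random.expovariate(1.0 / 400.0), MAX_INTENSITY),
            })
        return {"object_texture": random.choice(CC0_TEXTURES), "lights": lights}

    if __name__ == "__main__":
        print(sample_scene_parameters())   # parameters would be handed to the rendering script per scene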
“…With the development of data-driven methods designed for robotics applications [33], the importance of synthetic data has been highlighted. Recent works [8, 9, 34-36] combine real and synthetic data to generate 3D object datasets, rendering 3D object models on real backgrounds to produce synthesized images. The YCB-Video dataset [9] is the most widely used 3D object dataset for 6D object pose estimation.…”
Section: Related Work (mentioning)
confidence: 99%
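
The render-on-real-background strategy mentioned in that excerpt amounts to alpha-compositing an RGBA rendering of an object model over a real photograph. The sketch below shows one minimal way to do this with Pillow; the file names are placeholders and the cited works' exact compositing pipelines are not reproduced.

    from PIL import Image

    def composite_render_on_background(render_path, background_path, out_path):
        """Paste a rendered object (with alpha) onto a real background image."""
        render = Image.open(render_path).convert("RGBA")                  # rendered object, alpha = silhouette
        background = Image.open(background_path).convert("RGBA").resize(render.size)
        Image.alpha_composite(background, render).convert("RGB").save(out_path)

    # Hypothetical usage with placeholder file names:
    composite_render_on_background("render_0001.png", "real_bg_0421.jpg", "train_0001.jpg")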