CAD'21 Proceedings 2021
DOI: 10.14733/cadconfp.2021.319-323

labelCloud: A Lightweight Domain-Independent Labeling Tool for 3D Object Detection in Point Clouds

Citations: cited by 8 publications (4 citation statements)
References: 4 publications
“…Further, methods of active or reinforcement learning might, in the long run, reduce the amount of manual labelling. However, new media types, such as depth images or point cloud data, could lead to new challenges in labelling tasks (e.g., Sager et al., 2021). Economic considerations of labour costs (crowdsourcing) and opportunity costs (misclassification) could allow the calculation of an optimal cost-accuracy tradeoff.…”
Section: Discussion (citation type: mentioning, confidence: 99%)
“…To train and evaluate pose estimation on AVD we first provide pose annotations for the main object categories of sofa, table, desk, bed, and chair. We first obtain the dense 3D point cloud of each scene from the scene's RGB and depth images and annotate the 3D bounding boxes for objects using the labelCloud tool [21]; an example is shown in Figure 4c. The corners of the 3D bounding boxes in world coordinates are then projected back to the image plane using the world-to-camera transformation matrix (Figure 4d), where X_c = [X_c, Y_c, Z_c]^T is a point in the camera coordinate frame and X_w = [X_w, Y_w, Z_w]^T is the point in the world coordinate frame.…”
Section: Active Vision Dataset Pose Labeling (citation type: mentioning, confidence: 99%)
“…To obtain the 3D bounding box, region growing on the projected depth points of 2D image detections, followed by box estimation, was used to create ground truth automatically. This step was followed by 3D orientation annotation and manual error correction of pedestrians using labelCloud (Sager et al., 2021). A total of 855 pedestrian instances (3D bounding boxes) were annotated using the above-mentioned procedure, with the pedestrian orientation distribution for the complete dataset shown in Figure 4.…”
Section: Annotation (citation type: mentioning, confidence: 99%)