2021
DOI: 10.1109/mcg.2021.3082267

Collaborative VR-Based 3D Labeling of Live-Captured Scenes by Remote Users

Cited by 9 publications (3 citation statements)
References: 13 publications
“…Knowledge about the current state in turn would allow an adaptive acquisition in terms of guiding the user about where to look to acquire data for still missing scene parts or to densify data in areas of low point density (or large triangles, respectively). Besides an in-situ visualisation of virtual contents like the progress of geometry acquisition, information directly derived from the acquired data, or BIM data for the user on-site, it might be interesting to allow remotely immersed users to conduct distance measurements, select objects or perform annotations in the acquired data, similar to the work of Zingsheim et al. (2021), and to additionally stream this information to the user on-site performing data acquisition.…”
Section: Discussion (mentioning)
confidence: 99%
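The guidance idea described in this citation statement (steering the on-site user toward under-sampled scene regions) can be illustrated with a minimal sketch. This is not the cited authors' implementation; the voxel size, density threshold, and function name are assumptions chosen purely for illustration.

```python
# Minimal sketch (not the authors' method): flag low-density regions of a
# live-captured point cloud so an on-site user could be guided to rescan them.
# voxel_size and min_points_per_voxel are illustrative assumptions.
import numpy as np

def low_density_voxels(points: np.ndarray,
                       voxel_size: float = 0.1,
                       min_points_per_voxel: int = 20) -> np.ndarray:
    """Return centers of occupied voxels whose point count falls below a threshold."""
    # Quantize each 3D point (N x 3) to an integer voxel index.
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Count how many points fall into each occupied voxel.
    uniq, counts = np.unique(idx, axis=0, return_counts=True)
    sparse = uniq[counts < min_points_per_voxel]
    # Convert voxel indices back to world-space centers, e.g. to render "scan here" markers.
    return (sparse + 0.5) * voxel_size
```

In such a setup, the returned voxel centers could be streamed to the on-site user and rendered as in-situ guidance cues alongside the other virtual content mentioned in the statement.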
“…Saran et al. (2018) created an iOS application for simultaneous scanning and user-defined bounding box annotation. Furthermore, a collaborative VR system developed by Zingsheim et al. (2021) enables the labeling of live-captured scenes by remote users with sparse labels. Ramirez et al. (2019) convert the tedious task of annotating into a playful first-person shooter game, which we adopt as a partial basis for our system.…”
Section: Related Work (mentioning)
confidence: 99%
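To make the notion of labeling with sparse labels concrete, the following sketch propagates a handful of user-placed annotations to the rest of a captured point cloud by nearest-neighbor assignment. This is only one possible reading of "sparse labels", not Zingsheim et al.'s actual method; the KD-tree approach, distance cutoff, and function name are assumptions.

```python
# Minimal sketch, assuming sparse labels are propagated by nearest-neighbor lookup.
import numpy as np
from scipy.spatial import cKDTree

def propagate_sparse_labels(points: np.ndarray,
                            seed_points: np.ndarray,
                            seed_labels: np.ndarray,
                            max_dist: float = 0.25) -> np.ndarray:
    """Assign each point the label of its nearest annotated seed, or -1 if too far away."""
    tree = cKDTree(seed_points)
    dist, nearest = tree.query(points)    # nearest seed index and distance per point
    labels = seed_labels[nearest].copy()
    labels[dist > max_dist] = -1          # leave points far from any annotation unlabeled
    return labels
```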
“…Since 2010, when Microsoft, in cooperation with PrimeSense, released the first Kinect, consumer RGB-D cameras have gone through a democratization process, becoming very appealing to many areas of application, such as robotics [1, 2], automotive [3], industrial [4], augmented reality (AR) [5], object detection [6], 3D reconstruction [7], and the biomedical field [8]. All these applications thrived by receiving depth information in addition to color.…”
Section: Introduction (mentioning)
confidence: 99%