2020
DOI: 10.1016/j.aei.2020.101170
Intuitive robot teleoperation for civil engineering operations with virtual reality and deep learning scene reconstruction

Cited by 85 publications (12 citation statements)
References 87 publications
“…A future research direction in robotics is multi-robot coordination between the base and adhesion robots [29], [30], which enables navigation and path planning of the climbing robot, similar to the proposal in Section V. Moreover, the proposed robot system can be used in industrial applications, such as the crack inspection process for social infrastructure. Considering the characteristics of the inspection process, human-robot interaction methods can be proposed, such as a teleoperation methodology for performing precise operations in the air [31], [32], or inspection methods using various deep learning technologies or multimodal data fusion [33], [34]. Moreover, these methods can be applied to the proposed robot system to verify their effectiveness.…”
Section: Discussion
confidence: 99%
“…They proposed a novel approach for rendering a PCVE from robot sensor data. A number of authors have extended this idea using different approaches to render PCVEs (Schwarz et al., 2017; Lesniak and Tucker, 2018; Valenzuela-Urrutia et al., 2019), voxel virtual environments (VVEs) (Mossel and Kroeter, 2017; Zhou and Tuzel, 2018; Stotko et al., 2019; Li et al., 2022), and VEs composed of precomposed objects based on automated recognition of objects from the point cloud data (Zhou et al., 2020).…”
Section: Literature Review
confidence: 99%
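The VVE line of work cited in this excerpt shares a basic first step: discretizing the incoming point cloud into occupied voxels before rendering. Below is a minimal NumPy-only sketch of that step; it is an illustration under assumed inputs, not the implementation of any cited paper, and the function name, grid-origin convention, and 5 cm voxel size are all assumptions.

```python
import numpy as np

def occupancy_grid(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Convert an (N, 3) point cloud into a boolean voxel occupancy grid.

    A voxel is marked occupied if at least one point falls inside it.
    (Illustrative sketch; not the method of any paper cited above.)
    """
    origin = points.min(axis=0)                    # grid origin at the cloud's minimum corner
    idx = np.floor((points - origin) / voxel_size).astype(np.int64)
    dims = idx.max(axis=0) + 1                     # grid extent in voxels along each axis
    grid = np.zeros(dims, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True   # mark every voxel that contains a point
    return grid

# Illustrative use: 10,000 random points in a 2 m cube, 5 cm voxels.
cloud = np.random.rand(10_000, 3) * 2.0
grid = occupancy_grid(cloud, voxel_size=0.05)
print(grid.shape, int(grid.sum()), "occupied voxels")
```

A renderer can then draw one cube per occupied cell, which is the basic idea behind the voxel virtual environments discussed in the excerpt.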
“…The sensors typically used to capture environmental features for digital twins, laser scanners and RGB-D cameras, generate point clouds from which digital twins can be composed. The significant size of point cloud data makes transmission difficult and slow (Zhou et al., 2020). Further, many factors affect the density of the resultant point clouds, for example, scanner specifications, data-gathering time, communication bandwidth, and environmental features (smoke, radiation, etc.).…”
Section: Introduction
confidence: 99%
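A common way to ease the transmission burden this excerpt raises is voxel-grid downsampling, which replaces all points falling in one voxel with their centroid before the cloud is sent. The sketch below is a minimal NumPy-only illustration of that idea; it is not the pipeline of Zhou et al. (2020) or any other cited work, and the function name and 25 cm voxel size are assumptions.

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Reduce an (N, 3) point cloud to one centroid per occupied voxel.

    (Illustrative sketch; not the compression scheme of any cited paper.)
    """
    idx = np.floor(points / voxel_size).astype(np.int64)   # voxel index of each point
    # Group points that share a voxel and average their coordinates.
    _, inverse, counts = np.unique(idx, axis=0, return_inverse=True, return_counts=True)
    inverse = inverse.ravel()                               # keep a flat index across NumPy versions
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)                        # unbuffered scatter-add of coordinates
    return sums / counts[:, None]                           # centroid of each occupied voxel

# Illustrative use: 100,000 points in a 5 m cube collapse to at most 8,000 centroids.
dense = np.random.rand(100_000, 3) * 5.0
sparse = voxel_downsample(dense, voxel_size=0.25)
print(dense.shape[0], "->", sparse.shape[0], "points")
```

The voxel size sets the trade-off the excerpt hints at: larger voxels transmit faster but lower the density of the reconstructed cloud.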
“…Recently, 3D virtual environments have been built and used in fields that require learning or verification in various situations, such as autonomous driving or remote monitoring [1]-[4]. The virtual environment can be reconstructed to reproduce situations that can occur in the real environment using the 3D point cloud measured by light detection and ranging (LiDAR) [5]-[8].…”
Section: Introduction
confidence: 99%