2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)
DOI: 10.1109/ro-man47096.2020.9223445

Virtual Reality based Telerobotics Framework with Depth Cameras

Abstract: This work describes a virtual reality (VR) based robot teleoperation framework which relies on scene visualization from depth cameras and implements human-robot and human-scene interaction gestures. We suggest that mounting a camera on a slave robot's end-effector (an in-hand camera) allows the operator to achieve better visualization of the remote scene and improve task performance. We experimentally compared the operator's ability to understand the remote environment in different visualization modes: single …
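The excerpt does not detail the framework's implementation, but the core idea of rendering the remote scene from depth-camera data can be illustrated with a minimal, hypothetical sketch. The snippet below (Python with Open3D, not the authors' actual stack) back-projects one RGB-D frame into a colored point cloud and places it in the world frame using a camera pose, e.g. the end-effector pose of an in-hand camera obtained from forward kinematics; all names and parameters here are assumptions for illustration only.

```python
import numpy as np
import open3d as o3d  # assumed visualization library, not necessarily the paper's stack

# Placeholder RGB-D frame; in practice these arrays would be streamed from the depth camera.
color_np = np.zeros((480, 640, 3), dtype=np.uint8)     # 8-bit RGB image
depth_np = np.full((480, 640), 1500, dtype=np.uint16)  # depth in millimetres

color = o3d.geometry.Image(color_np)
depth = o3d.geometry.Image(depth_np)

# Fuse color and depth into one RGB-D image (depth_scale converts mm -> m,
# depth_trunc discards points beyond 3 m).
rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
    color, depth, depth_scale=1000.0, depth_trunc=3.0,
    convert_rgb_to_intensity=False)

# Camera intrinsics: a generic PrimeSense-style model as a stand-in for the
# real camera's calibration.
intrinsics = o3d.camera.PinholeCameraIntrinsic(
    o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)

# Back-project the RGB-D image into a colored point cloud.
pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsics)

# For an in-hand camera the pose would come from the robot's forward kinematics
# (end-effector pose); identity is used here as a placeholder.
T_world_camera = np.eye(4)
pcd.transform(T_world_camera)

# A VR front end would render this cloud each frame; a desktop viewer is used here.
o3d.visualization.draw_geometries([pcd])
```

Streaming such per-frame point clouds into a VR scene is one plausible way to realize the visualization modes compared in the paper, though the authors' actual pipeline may differ.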

Cited by 21 publications (19 citation statements) | References 19 publications
“…Future research will focus on integrating a multi-modal approach with visual patches and implementation of the proposed system on a teleoperated mobile manipulation system. We plan to demonstrate how automatic fracture characterization will be efficiently integrated with the mobile manipulator controller (Farkhatdinov and Ryu, 2008) and how the obtained tactile data can be visualized in a dedicated virtual reality-based human-operator interface (Omarali et al, 2020). Further studies will be performed on dimensionality reduction with principal component analysis which may increase the classification accuracy.…”
Section: Discussion
confidence: 99%
“…Augmented remote operation, or Augmented Telepresence (AT) [14], denotes applications where video-mediated communication is the enabling technology, but where additional data can be superimposed on or merged with the captured camera view as in AR. Such augmentation can be achieved on-site via using a see-through display and rendering only 3-Dimensional (3D) approximations of specific scene elements [15], or off-site via partial and full view rendering [16,17,9] for non-transparent displays. Unlike computer-generated imagery, this view rendering is at least partly based on the content of camera views.…”
Section: Related Work
confidence: 99%
“…View enhancement, or augmentation, is what distinguishes augmented remote operation from conventional VR headset based remote operation. Bejczy et al [10], Yun et al [5] and Omarali et al [16] are examples of conventional and augmented operation, all aiming to improve the remote control of a robotic arm. In [10,5], camera views of the scene are rendered to virtual display panels in a VR environment, without any change to the camera view content.…”
Section: A. Augmented Remote Operation in Non-Entertainment Contexts
confidence: 99%