2020
DOI: 10.1007/s41233-020-0031-7
Joint effects of depth-aiding augmentations and viewing positions on the quality of experience in augmented telepresence

Abstract: Virtual and augmented reality is increasingly prevalent in industrial applications, such as remote control of industrial machinery, due to recent advances in head-mounted display technologies and low-latency communications via 5G. However, the influence of augmentations and camera placement-based viewing positions on operator performance in telepresence systems remains unknown. In this paper, we investigate the joint effects of depth-aiding augmentations and viewing positions on the quality of experience for o…

Cited by 8 publications (6 citation statements)
References 67 publications
“…Participants can rate the sequences using a controller or touchpad of the HMD [50], avoiding removing the goggles. Additionally, it can be recorded on a paper [30], verbally [35], [36], [51], [52], or online using a web application [53].…”
Section: Assessment of Audiovisual Quality With 360° Videos
Mentioning confidence: 99%
“…In other non-entertainment contexts, augmented remote operation appears to benefit the operators in terms of their Quality of Experience (QoE) and task accomplishment, as demonstrated in [6,8,9,10,11]. The augmentation covered by our system is a relatively simple view manipulation, meant to explore the technical feasibility more than the specific operator experience.…”
Section: Discussion
Mentioning confidence: 99%
“…1a shows an example of a remote system in a mine, where operators are shown direct video feeds from on-machinery cameras, under various connection methods (5G, wired ethernet) and video compression levels. However, going beyond the direct presentation of camera views, there is potential in using AR, view synthesis, and range sensing from Time-of-Flight (ToF) sensors such as Light Detection And Ranging (lidar) to further enhance the operator awareness of the on-site environment [7,8]. The benefits of augmented and indirect views have been investigated in other contexts such as underwater robot operation [9], forestry [6], hazard exploration [10], and satellite repair [11], and are likely to be beneficial for mining as well.…”
Section: Introduction
Mentioning confidence: 99%
“…Entity Locations highlight the locations of entities within the robot's environment through rings, arrows, bounding boxes, etc. This VDE is especially useful when the location of an entity is occluded by walls or containers or outside of the user's field-of-view [19]. These can also be used to highlight task- and dialogue-relevant entities, either by allowing [49]…”
[Figure caption fragments: E: Sensed Spatial Regions [69], F: Robot Inherent Spatial Regions [23], G: User-Defined Spatial Regions [78], H: Entity Labels, Entity Locations, and Task Status [8]]
Section: Entity-Based Robot Comprehension Visualization
Mentioning confidence: 99%