2019 · DOI: 10.3389/frobt.2019.00005
The Effects of Sharing Awareness Cues in Collaborative Mixed Reality

Abstract: Augmented and Virtual Reality provide unique capabilities for Mixed Reality collaboration. This paper explores how different combinations of virtual awareness cues can provide users with valuable information about their collaborator's attention and actions. In a user study (n = 32, 16 pairs), we compared different combinations of three cues: Field-of-View (FoV) frustum, Eye-gaze ray, and Head-gaze ray against a baseline condition showing only virtual representations of each collaborator's head and hands. Throu…
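As context for the three cues named in the abstract, here is a minimal geometric sketch (in Python, not the authors' implementation) of how a FoV frustum, an eye-gaze ray, and a head-gaze ray could be derived from a collaborator's shared head pose. All names, the forward-axis convention, and the field-of-view angles are illustrative assumptions.

```python
# Minimal sketch of the three awareness cues compared in the study,
# derived from a collaborator's shared pose. Names and angles are
# placeholders, not values from the paper.
import numpy as np
from dataclasses import dataclass

@dataclass
class HeadPose:
    position: np.ndarray   # 3D head position in the shared world frame
    rotation: np.ndarray   # 3x3 rotation matrix (head orientation)

def head_gaze_ray(pose: HeadPose, length: float = 5.0):
    """Head-gaze ray: a line along the head's forward axis (-Z here)."""
    forward = pose.rotation @ np.array([0.0, 0.0, -1.0])
    return pose.position, pose.position + length * forward

def eye_gaze_ray(pose: HeadPose, gaze_dir_local: np.ndarray, length: float = 5.0):
    """Eye-gaze ray: same idea, but uses the eye tracker's direction."""
    direction = pose.rotation @ gaze_dir_local
    return pose.position, pose.position + length * direction

def fov_frustum_corners(pose: HeadPose, h_fov_deg: float = 52.0,
                        v_fov_deg: float = 29.0, depth: float = 2.0):
    """FoV frustum: the four corner points of the view frustum at `depth`;
    connecting each corner to the head position draws the frustum."""
    h = depth * np.tan(np.radians(h_fov_deg) / 2.0)
    v = depth * np.tan(np.radians(v_fov_deg) / 2.0)
    local = [np.array([sx * h, sy * v, -depth])
             for sx in (-1, 1) for sy in (-1, 1)]
    return [pose.position + pose.rotation @ c for c in local]
```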

Cited by 117 publications (94 citation statements) · References 46 publications (50 reference statements)
“…Enabling multiple simultaneous users to communicate and collaborate is useful in many applications (e.g., see Section 4.3). Collaborative scenarios require design reconsiderations both for visualization (e.g., who should see what when, can one share a view, point at objects, show the other person(s) something), and interaction (e.g., can two people have eye contact, can one user hand something to another user, can two people carry a virtual table together). These are not trivial challenges, and proposed solutions are largely experimental [122,123]. Ideally, in a collaborative XR, people should (1) experience the presence of others (e.g., using avatars of the full body, or parts of the body such as hands) [124,125]; (2) be able to detect the gaze direction of others [126], and eventually, experience 'eye contact' [127]; (3) have on-demand access to what the others see ('shared field of view') [128,129]; (4) be able to share spatial context [123], especially in the case of remote collaboration (i.e., does it 'rain or shine' in one person's location, are they on the move, is it dark or light, are they looking at a water body?…”
Section: Interaction Design (mentioning)
confidence: 99%
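The excerpt above enumerates what collaborators should be able to share: presence, gaze direction, field of view, and spatial context. Below is a hedged sketch of one possible message format for broadcasting such awareness state between clients; the schema and every field name are assumptions for illustration, not taken from the cited papers.

```python
# One possible way to bundle a user's awareness cues into a message that
# remote clients can use to render an avatar, gaze ray, and FoV frustum.
# Schema and field names are assumptions.
import json
import time

def make_awareness_update(user_id, head_pos, head_quat, gaze_dir,
                          hand_poses, fov_visible=True):
    """All geometric arguments are plain lists of floats in the shared
    world frame, so the message serializes directly to JSON."""
    return json.dumps({
        "user": user_id,
        "t": time.time(),          # timestamp, e.g. for interpolation
        "head": {"pos": head_pos, "quat": head_quat},
        "gaze": gaze_dir,          # unit direction vector
        "hands": hand_poses,       # e.g. {"left": [...], "right": [...]}
        "show_fov": fov_visible,   # whether to draw the FoV frustum
    })
```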
“…); (5) be able to use virtual gestures (handshake, wave, nod, other nonverbal communication) [129,130]; (6) be able to add proper annotations to scenes and objects and see others' annotations; and last but not least (7), be able to 'read' the emotional reactions of their collaboration partner [131]. To respond to these needs, a common paradigm that is currently used in the human-computer interaction (HCI) community for collaborative XR is the so-called awareness cues [132] (i.e., various visual elements added to the scene to signal what the other parties are doing (e.g., a representation of their hands), or what they are looking at (e.g., a cursor that shows their gaze) [122]).…”
Section: Interaction Design (mentioning)
confidence: 99%
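The "cursor that shows their gaze" mentioned in this excerpt is typically placed by intersecting the shared gaze ray with scene geometry. A small illustrative sketch, assuming a flat surface for simplicity:

```python
# Project a collaborator's gaze ray onto a plane and place the gaze
# cursor at the hit point. A flat surface is an assumption; real scenes
# would intersect against meshes or colliders.
import numpy as np

def gaze_cursor_on_plane(origin, direction, plane_point, plane_normal):
    """Return the point where the gaze ray hits the plane, or None if
    the ray is parallel to the plane or points away from it."""
    denom = np.dot(direction, plane_normal)
    if abs(denom) < 1e-6:
        return None                          # ray parallel to the plane
    t = np.dot(plane_point - origin, plane_normal) / denom
    return origin + t * direction if t > 0 else None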
“…First, using pointer, gaze, and/or sketch cues rather than using HiA gestures. The pointer [31], gaze [32], and sketch [33] cues are displayed on the surface of the task objects, and the gaze ray shows a line to the object [32], [15], so a viewer can easily tell which object they are referring to. Additionally, the pointer cue is simple and can show precise point information [31].…”
Section: B. Previous Solutions (mentioning)
confidence: 99%
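This excerpt notes that pointer and gaze cues let a viewer know which object is being referred to. One common way to resolve that is a nearest-hit ray test; the sketch below uses bounding spheres as a simplifying assumption, and the object representation is hypothetical, not from the cited work.

```python
# Resolve which object a collaborator's cue ray refers to: intersect the
# ray with each object's bounding sphere and keep the nearest hit.
# `direction` is assumed to be unit length.
import numpy as np

def referenced_object(origin, direction, objects):
    """objects: list of (name, center, radius). Returns the name of the
    nearest object hit by the ray, or None if nothing is hit."""
    best, best_t = None, float("inf")
    for name, center, radius in objects:
        oc = origin - center
        b = np.dot(oc, direction)
        c = np.dot(oc, oc) - radius * radius
        disc = b * b - c                 # ray-sphere discriminant
        if disc < 0:
            continue                     # ray misses this sphere
        t = -b - np.sqrt(disc)           # nearer of the two intersections
        if 0 < t < best_t:
            best, best_t = name, t
    return best
```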
“…CoVAR [28] is a remote collaborative system supporting VR and MR technologies. Participants can collaborate within the same local real-world environment or remotely.…”
Section: Collaborative Mixed Reality (mentioning)
confidence: 99%