25th ACM Symposium on Virtual Reality Software and Technology 2019
DOI: 10.1145/3359996.3364267
Exploring the Use of a Robust Depth-sensor-based Avatar Control System and its Effects on Communication Behaviors

Abstract: Figure 1: The virtual interview experiment in different avatar control conditions with first-person view (FPV), third-person view (TPV), and real-world view (RWV): (a) Controller-based interview session, task 1: answer the questions; (b) Controller-based interview session, task 2: route planning task; (c) Depth-sensor-based interview session, task 1: answer the questions; (d) Depth-sensor-based interview session, task 2: route planning task.

Cited by 17 publications (16 citation statements). References 35 publications (30 reference statements).
“…The findings in our first experiment indicate the impact of the control method for full upper limbs on the interaction between the user and the virtual environment. Using the depth-sensor-based avatar control system has been shown to result in a higher sense of body ownership and agency compared to controller-based avatar control [20]. Here, we show that our control method for full upper limbs can elicit higher feelings of body ownership, agency, and location of the body, as well as system usability, which supports the above hypothesis.…”
Section: Discussion (supporting)
confidence: 81%
“…In the experiment, system usability was measured with the System Usability Scale [20], and game performance was defined as the sum of the scores of the normal motion mode and the mirror motion mode.…”
Section: System Usability and Game Performance (mentioning)
confidence: 99%
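
The measures quoted above are straightforward to compute: the System Usability Scale follows the standard ten-item scoring rule (odd items contribute the response minus 1, even items contribute 5 minus the response, and the sum is scaled by 2.5), and the cited game-performance measure is simply the sum of the two motion-mode scores. A minimal Python sketch of that arithmetic; the variable names are illustrative and not taken from the cited study:

```python
def sus_score(responses):
    """Standard SUS scoring for 10 items rated 1-5.

    Odd-numbered items (1st, 3rd, ...) contribute (response - 1);
    even-numbered items contribute (5 - response); the total is
    scaled by 2.5 to give a 0-100 usability score.
    """
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses):
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5

# Game performance as defined in the citing study: the sum of the
# scores obtained in the normal and mirror motion modes.
# (Hypothetical values, for illustration only.)
normal_mode_score = 42
mirror_mode_score = 35
game_performance = normal_mode_score + mirror_mode_score

print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # 85.0
print(game_performance)                            # 77
```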
“…To compensate for these constraints, camera-based tracking devices, such as the Leap Motion controller (LMC), can capture natural hand gestures without using any controller. For example, Wu et al. (2019a) developed a multi-sensor system that integrates multiple Kinects and an LMC to control an avatar. Other nonverbal cues for avatar control are eye gaze and facial expression.…”
Section: Avatar Control Systems and Representation (mentioning)
confidence: 99%
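
The multi-sensor setup described above amounts to fusing per-sensor joint estimates into a single avatar pose. The sketch below shows one common approach, a confidence-weighted average of joint positions from several trackers in a shared frame; the data layout, joint names, and weighting are assumptions for illustration, not the actual system of Wu et al. (2019a) or any specific SDK API:

```python
import numpy as np

def fuse_joint_estimates(estimates):
    """Fuse one joint's position from several depth sensors.

    `estimates` is a list of (position, confidence) pairs, where
    position is an (x, y, z) array in a shared world frame and
    confidence is a non-negative tracking weight. Returns the
    confidence-weighted mean, or None if the joint is untracked.
    """
    positions = np.array([p for p, c in estimates if c > 0], dtype=float)
    weights = np.array([c for _, c in estimates if c > 0], dtype=float)
    if len(weights) == 0:
        return None
    return (positions * weights[:, None]).sum(axis=0) / weights.sum()

def fuse_skeleton(per_sensor_skeletons):
    """Fuse whole skeletons given {joint_name: (pos, conf)} per sensor."""
    joints = set().union(*(s.keys() for s in per_sensor_skeletons))
    return {
        j: fuse_joint_estimates([s[j] for s in per_sensor_skeletons if j in s])
        for j in joints
    }

# Example: the right hand seen by two sensors with different confidence.
sensor_a = {"hand_right": (np.array([0.30, 1.10, 1.95]), 0.9)}
sensor_b = {"hand_right": (np.array([0.32, 1.08, 2.05]), 0.4)}
print(fuse_skeleton([sensor_a, sensor_b]))
```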
“…In this case, the right elbow, right wrist, right hand, right thumb, and right hand tip key points were almost aligned on the same vector (directed toward the sensor), causing a possible occlusion issue. This may have altered the tracking of the hand/arm position during the pointing gesture and resulted in imprecise movements, as observed in previous studies using the Kinect [32]. This, in turn, may have made the interpretation of the pointing direction difficult for these participants.…”
Section: Interpreting Pointing Gestures of the Partner (mentioning)
confidence: 88%
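
The occlusion issue quoted above arises when the arm's joint chain lines up with the sensor's line of sight, so the distal joints hide behind one another in the depth image. One way such a configuration could be flagged is by checking how parallel the elbow-to-hand-tip vector is to the viewing direction; the sketch below is an illustration under that assumption, with hypothetical joint names and threshold, not the method used in the cited study:

```python
import numpy as np

def pointing_toward_sensor(joints, chain, sensor_pos, cos_threshold=0.95):
    """Flag a likely self-occlusion of an arm joint chain.

    `joints` maps joint names to 3D positions in a shared frame,
    `chain` is an ordered list such as ["elbow_right", "wrist_right",
    "hand_right", "hand_tip_right"], and `sensor_pos` is the camera
    origin. If the chain direction is nearly parallel to the line of
    sight (|cos angle| above the threshold), the distal joints are
    likely to occlude one another in the depth image.
    """
    start, end = joints[chain[0]], joints[chain[-1]]
    chain_dir = end - start
    line_of_sight = start - sensor_pos
    chain_dir /= np.linalg.norm(chain_dir)
    line_of_sight /= np.linalg.norm(line_of_sight)
    return abs(np.dot(chain_dir, line_of_sight)) >= cos_threshold

# Example: an arm extended almost straight at a sensor mounted at
# shoulder height (all coordinates are hypothetical).
sensor_pos = np.array([0.0, 1.2, 0.0])
joints = {
    "elbow_right":    np.array([0.05, 1.22, 1.60]),
    "hand_tip_right": np.array([0.04, 1.21, 1.20]),
}
print(pointing_toward_sensor(
    joints, ["elbow_right", "hand_tip_right"], sensor_pos))  # True
```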