CHI Conference on Human Factors in Computing Systems 2022
DOI: 10.1145/3491102.3502045
HybridTrak: Adding Full-Body Tracking to VR Using an Off-the-Shelf Webcam

Cited by 14 publications (2 citation statements)
References 27 publications
“…In addition, it has been argued for many years that HCI research should reposition itself away from traditional modes of interaction that focus on verbal, text-based, keyboard- and mouse-driven conversational interaction to embodied interaction, where the context of human physicality and physical objects supplements cognitive approaches [5]. Virtual reality systems are designed to support multimodal activities such as head and hand movements [6,7], and video capture systems process full-body motion [8–10]. While these and other prototypes continue to be evaluated for how they incorporate gesture and head movements [11,12], support for full-body interaction remains elusive.…”
Section: Literature Review
confidence: 99%
“…Various user input data from consumer-grade devices can be used to synthesize full-body animation for characters in real time. For example, some studies used optical data from egocentric cameras mounted in a baseball cap [XCZ*19], an HMD [TAP*20; YCQ*22], controllers [ASF*22], or glasses [ZWMF21] to estimate the body pose. Egocentric cameras suffer from extreme perspective distortion and self-occlusion, which lead to inadequate tracking information for the lower body.…”
Section: Related Work
confidence: 99%