2022 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)
DOI: 10.1109/aivr56993.2022.00010

Comparison of Data Encodings and Machine Learning Architectures for User Identification on Arbitrary Motion Sequences

Abstract: Recently emerged solutions demonstrate that the movements of users interacting with extended reality (XR) applications carry identifying information and can be leveraged for identification. While such solutions can identify XR users within a few seconds, current systems involve one of two trade-offs: either they apply simple distance-based approaches that can only be used for specific predetermined motions, or they use classification-based approaches that rely on more powerful machine learning models and thus…
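To make the trade-off sketched in the abstract concrete, the following minimal Python sketch contrasts the two families of approaches on synthetic data. It is not the paper's implementation: the feature dimensionality, sample counts, and choice of scikit-learn models are illustrative assumptions, standing in for whatever motion encoding and architecture a real system would use.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Hypothetical enrollment data: 34 users, 20 motion samples each,
# every sample already reduced to a 64-dimensional feature vector.
n_users, n_samples, n_features = 34, 20, 64
X = rng.normal(size=(n_users * n_samples, n_features))
y = np.repeat(np.arange(n_users), n_samples)

# Distance-based identification: compare a query against the enrolled
# templates and return the user of the nearest one. Simple, but only
# meaningful when query and templates show the same predetermined motion.
nn = NearestNeighbors(n_neighbors=1).fit(X)
query = rng.normal(size=(1, n_features))
_, idx = nn.kneighbors(query)
print("distance-based guess:", y[idx[0, 0]])

# Classification-based identification: a trained model maps the query
# directly to a user ID, which can generalize to arbitrary motions but
# requires fitting a more powerful model up front.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("classification-based guess:", clf.predict(query)[0])
```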

Cited by 7 publications (13 citation statements). References 45 publications.
“…Following this work, Moore et al. (2021) explored an e-learning scenario where VR users learned to troubleshoot medical robots in several stages. More recently, Rack et al. (2022) compared different classification architectures using a dataset of participants talking to each other for longer periods. In each of these cases, the general context of the scenario still limits possible user actions (e.g., to "listening" and "talking"), but the exact action and its starting and ending points become uncertain.…”
Section: Specific vs. Non-specific Actions
Mentioning confidence: 99%
“…Each repetition took between 5 and 10 s on average, resulting in a total recording time of 4 min per user per action. Rack et al. (2022) use the public "Talking With Hands" dataset of Lee et al. (2019) to identify 34 users. The dataset provides full-body motion tracking recordings of participants talking to each other about previously watched movies.…”
Section: Existing Datasets
Mentioning confidence: 99%