2023
DOI: 10.1038/s41597-023-01932-7

A Non-Laboratory Gait Dataset of Full Body Kinematics and Egocentric Vision

Abstract: In this manuscript, we describe a unique dataset of human locomotion captured in a variety of out-of-the-laboratory environments using Inertial Measurement Unit (IMU) based wearable motion capture. The data contain full-body kinematics for walking, with and without stops, stair ambulation, obstacle course navigation, dynamic movements intended to test agility, and negotiating common obstacles in public spaces such as chairs. The dataset contains 24.2 total hours of movement data from a college student…

Cited by 7 publications (7 citation statements) | References 35 publications
“…We used the multimodal dataset by [6] that includes full-body kinematics from inertial sensors and RGB images with gaze from smart glasses. A detailed description is provided in [7]. Data was collected with 23 healthy young adults walking in 4 different annotated environments, including transitions, for a total of ∼12 hours.…”
Section: Methods (mentioning)
confidence: 99%
“…Computer vision systems like smart glasses can inform about the walking environment and sense obstacles before they are encountered, which other technologies like inertial measurement units (IMUs) are not fully capable of. Recent research [6]-[7] has shown that adding computer vision data can result in 7.9% and 7.0% improvements in the root mean squared error (RMSE) of knee and ankle joint angle predictions, respectively, compared to using only inertial sensors; this was possible by pairing the inertial data with optical flow [8] mappings from the smart glasses. However, due to the high computational requirements of the proposed long short-term memory (LSTM)-based networks, their model was not capable of running real-time inference on an embedded system.…”
Section: Introduction (mentioning)
confidence: 99%
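A minimal sketch of the fusion idea in the excerpt above, assuming dense Farneback optical flow (OpenCV) pooled into a fixed-length descriptor and concatenated with an IMU window before an LSTM regressor. The pooling grid, feature dimensions, and module names are illustrative assumptions, not the cited authors' implementation.

import cv2
import numpy as np
import torch
import torch.nn as nn

def flow_features(prev_gray, gray, grid=4):
    # Dense optical flow between consecutive egocentric frames,
    # average-pooled to a (grid x grid x 2) descriptor.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = flow.shape[:2]
    flow = flow[:h - h % grid, :w - w % grid]  # crop so pooling divides evenly
    hh, ww = flow.shape[:2]
    pooled = flow.reshape(grid, hh // grid, grid, ww // grid, 2).mean(axis=(1, 3))
    return pooled.reshape(-1).astype(np.float32)  # grid*grid*2 = 32 values for grid=4

class FlowImuLSTM(nn.Module):
    # Two-layer LSTM over concatenated [IMU, optical-flow] features,
    # regressing per-timestep joint angles (e.g., knee and ankle).
    def __init__(self, imu_dim=24, flow_dim=32, hidden=128, n_angles=2):
        super().__init__()
        self.lstm = nn.LSTM(imu_dim + flow_dim, hidden,
                            num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_angles)

    def forward(self, x):  # x: (batch, time, imu_dim + flow_dim)
        out, _ = self.lstm(x)
        return self.head(out)

Under these assumptions, the dense flow computation and the recurrent core are the plausible bottlenecks behind the embedded-inference limitation noted in the excerpt.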
“…Subjects provided verbal and written informed consent approved by the University of Nebraska Medical Center's Institutional Review Board (# 0762-21-EP). This study does not contain sensitive data (i.e., from children) and all participants were between 19 and 31 years old [72]-[85].…”
Section: Methods (mentioning)
confidence: 99%
“…However, the methods are application agnostic. With the recent rise in multimodal gait datasets [8], [14]-[19], we can also learn representations of hard-to-define factors, in a similar way to how the representations of style were learned in this manuscript. Similarly, we could encode terrain in an appropriate latent space and then compose different representations using the decoder.…”
Section: Beyond Personalization (mentioning)
confidence: 97%
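A hedged sketch of the composition idea in this last excerpt: independently encoded latent factors (here, hypothetical gait-style and terrain codes) are concatenated and mapped through a single decoder, so swapping one latent re-synthesizes kinematics under a different factor. The dimensions and names are assumptions for illustration, not the manuscript's architecture.

import torch
import torch.nn as nn

class ComposingDecoder(nn.Module):
    # Decodes a concatenation of independent latent factors into a pose vector.
    def __init__(self, style_dim=16, terrain_dim=8, pose_dim=51, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(style_dim + terrain_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, pose_dim),
        )

    def forward(self, z_style, z_terrain):
        # Concatenate the factors and decode to full-body kinematics.
        return self.net(torch.cat([z_style, z_terrain], dim=-1))

decoder = ComposingDecoder()
z_style = torch.randn(1, 16)            # style code learned for one subject
z_new_terrain = torch.randn(1, 8)       # terrain code taken from another sample
pose = decoder(z_style, z_new_terrain)  # same style rendered on a new terrain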