The difficulty of estimating joint kinematics remains a critical barrier to widespread use of inertial measurement units (IMUs) in biomechanics. Traditional sensor-fusion filters rely largely on magnetometer readings, which may be disturbed in uncontrolled environments. Careful sensor-to-segment alignment and calibration strategies are also necessary, which may burden users and introduce further error in uncontrolled settings. We introduce a new framework that combines deep learning and top-down optimization to accurately predict lower extremity joint angles directly from inertial data, without relying on magnetometer readings. We trained deep neural networks on a large set of synthetic inertial data derived from a clinical marker-based motion-tracking database of hundreds of subjects. We used data augmentation techniques and an automated calibration approach to reduce error due to variability in sensor placement and limb alignment. On left-out subjects, lower extremity kinematics could be predicted with a mean (± STD) root mean squared error of less than 1.27° (± 0.38°) in flexion/extension, less than 2.52° (± 0.98°) in adduction/abduction, and less than 3.34° (± 1.02°) in internal/external rotation, across walking and running trials. Errors decreased exponentially with the amount of training data, confirming the need for large datasets when training deep neural networks. While this framework remains to be validated with true IMU data, the results presented here are a promising advance toward convenient estimation of gait kinematics in natural environments. Progress in this direction could enable large-scale studies and offer an unprecedented view into disease progression, patient recovery, and sports biomechanics.
An ageing society creates a growing need for gait monitoring in daily life to preserve the mobility of older people. This calls for flexible, low-cost, and easy-to-apply sensor networks. This study evaluates two in-house-developed inertial sensors for their ability to detect spatial and temporal gait parameters using a time-warping algorithm for stride segmentation. With a patient-specific template, impaired or wheeled-walker-supported gait can also be analysed. The results demonstrate that the method reliably excludes motion sequences that do not correspond to strides and detects spatiotemporal parameters. For application to geriatric patients, however, further research is necessary.
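The template-based stride segmentation described above can be illustrated with a minimal sketch: a dynamic time warping (DTW) distance is computed between a patient-specific stride template and each candidate window of the sensor signal, and windows whose distance falls below a threshold are accepted as strides while dissimilar sequences are excluded. The function names, the sliding-window scheme, and the threshold are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of template-based stride segmentation with
# dynamic time warping (DTW). The window scheme and threshold are
# illustrative assumptions, not the method from the study above.

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) DTW distance between two 1-D sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = cost of best warping path aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

def segment_strides(signal, template, threshold):
    """Slide a template-length window over the signal; keep windows whose
    DTW distance to the patient-specific template is below threshold."""
    w = len(template)
    strides = []
    i = 0
    while i + w <= len(signal):
        if dtw_distance(signal[i:i + w], template) < threshold:
            strides.append((i, i + w))
            i += w  # jump past an accepted stride
        else:
            i += 1  # non-stride motion is simply skipped
    return strides

# Toy example: two template-shaped strides embedded in non-stride motion.
template = [0, 1, 2, 1, 0]
signal = [5, 5] + template + [5, 5, 5] + template + [5]
print(segment_strides(signal, template, 0.5))  # [(2, 7), (10, 15)]
```

In practice the template would come from manually labelled strides of the individual patient, and a normalised or multi-axis DTW cost would replace the simple absolute difference used here.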