2022
DOI: 10.1109/jbhi.2022.3165383

DeepBBWAE-Net: A CNN-RNN Based Deep SuperLearner for Estimating Lower Extremity Sagittal Plane Joint Kinematics Using Shoe-Mounted IMU Sensors in Daily Living

Cited by 23 publications (29 citation statements)
References 41 publications

“…However, to ensure practicality and users' comfort, we need to minimize the number of IMU sensors. More specifically, if we can implement only shoe-mounted sensors similar to kinematics estimation in [54], it would be more helpful to maintain sensors. Since single limb joint moments and GRFs are affected by contralateral limb during early stance phase and terminal stance phase, incorporating IMU information on both limbs would gather more meaningful knowledge of walking dynamics, and this can improve the prediction further.…”
Section: Discussion
confidence: 99%
“…In [69], LSTMs are combined with CNN to estimate the joint angles. CNNs are also used to obtain the joint angles only using the accelerometer data [21] or fusing the gyroscope and accelerometer data [56], [111], [114]. In [34], Mundt et al made a comparison of these previous methods, CNNs and LSTMs, together with MLPs for the estimation of joint orientation.…”
Section: Adopted Algorithms
confidence: 99%
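The excerpt above describes combining CNNs with LSTMs to estimate joint angles from accelerometer and gyroscope data. As a rough illustration of that general CNN-RNN pattern (not the published DeepBBWAE-Net architecture), a minimal sketch of such a regressor is shown below; the layer sizes, the 12-channel input (two shoe-mounted IMUs, each with a 3-axis accelerometer and gyroscope), and the three-joint sagittal-plane output are all illustrative assumptions.

```python
# Hedged sketch of a CNN + LSTM regressor mapping IMU windows to sagittal-plane
# joint angles. Illustrates the CNN-RNN pattern discussed above; it is NOT the
# published DeepBBWAE-Net model. All hyperparameters are assumed for the example.
import torch
import torch.nn as nn

class CnnLstmJointAngleRegressor(nn.Module):
    def __init__(self, n_channels=12, n_joints=3, hidden=64):
        # n_channels: e.g., 2 shoe-mounted IMUs x (3-axis accel + 3-axis gyro) = 12
        # n_joints: e.g., hip, knee, and ankle sagittal-plane angles
        super().__init__()
        # 1-D convolutions extract short-range temporal features from the raw signals
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # LSTM models longer-range dependencies across the gait cycle
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        # Linear head maps each time step's hidden state to joint angles
        self.head = nn.Linear(hidden, n_joints)

    def forward(self, x):
        # x: (batch, time, channels); Conv1d expects (batch, channels, time)
        feats = self.cnn(x.transpose(1, 2))        # (batch, 64, time)
        seq, _ = self.lstm(feats.transpose(1, 2))  # (batch, time, hidden)
        return self.head(seq)                      # (batch, time, n_joints)

if __name__ == "__main__":
    model = CnnLstmJointAngleRegressor()
    window = torch.randn(8, 200, 12)  # 8 windows of 200 samples, 12 IMU channels
    angles = model(window)
    print(angles.shape)               # torch.Size([8, 200, 3])
```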
“…the joint or segment orientation or location. In this review, we found 26 works that use reference data, that can be obtained from a stereophotogrammetric system (17/26) [21], [27], [34], [49], [53], [54], [56], [58], [68], [69], [86], [90], [91], [103], [111], [114], [115], electro-goniometer and encoders (2/26) [71], [84] or inertial sensors (7/26) [29], [100], [109], [134], [147], [148], [155]. Fig.…”
Section: Adopted Algorithms
confidence: 99%
“…Therefore, our aim is to create an IMU-based motion analysis approach that is as unobtrusive as possible by reducing the number of necessary sensors. Such systems have been developed to provide spatiotemporal (e.g., [1]), sagittal plane kinematic [24][25][26], three dimensional kinematic [27] or sagittal plane kinetic [28] gait variables. These methods do not provide a comprehensive analysis considering spatiotemporal, kinematic, and kinetics characteristics of gait.…”
Section: Introduction
confidence: 99%
“…These methods do not provide a comprehensive analysis considering spatiotemporal, kinematic, and kinetics characteristics of gait. On the one hand, neural networks have been investigated to estimate specific quantities of interest using only two IMUs, mounted on the left and right shank [24] or the feet [1, 26] without considering physical correctness. On the other hand, physics-based optimization has been proposed to track the orientation of shank-mounted sensors with a torque-driven model [28] or to track the signals of six IMUs with a human body shape model [27].…”
Section: Introduction
confidence: 99%