2022
DOI: 10.20944/preprints202212.0167.v1
Preprint

Inertia-Constrained Reinforcement Learning to Enhance Human Motor Control Modeling

Abstract: Locomotor impairment is a highly prevalent source of disability and significantly impacts quality of life for a large population. Despite decades of research on human locomotion, simulating human movement to study musculoskeletal drivers and clinical conditions remains challenging. Recent efforts using reinforcement learning (RL) techniques show promise for simulating human locomotion and revealing musculoskeletal drivers. However, these simulations often fail to mimic …


Cited by 2 publications (3 citation statements)
References 50 publications
“…The physics-based reward component (the evolution of the population density / mean-field state) is approximated using a PINN. To better mimic natural human locomotion, [68] designed a reward function based on physical and experimental information: trajectory optimization rewards and bio-inspired rewards. In a similar task of imitating human motion, but from a motion clip, [20] proposes a physics-based controller using DRL.…”
Section: R(s ϕ)
Mentioning confidence: 99%
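The composite reward described in the statement above (a trajectory-optimization term plus a bio-inspired term) can be sketched as follows. All function and variable names, the weights, and the exponential shaping are illustrative assumptions, not the cited papers' exact formulation:

```python
import numpy as np

def composite_reward(sim_traj, ref_traj, muscle_activations,
                     w_track=1.0, w_effort=0.1):
    """Sketch of a composite locomotion reward: a trajectory-tracking
    term plus a bio-inspired effort penalty. Weights and signal names
    are hypothetical, chosen only to illustrate the structure."""
    # Trajectory-optimization term: penalize deviation from a
    # reference (e.g. motion-capture) trajectory; exp(-error) maps
    # the error into (0, 1], with 1 at a perfect match.
    tracking_error = np.linalg.norm(np.asarray(sim_traj) - np.asarray(ref_traj))
    r_track = np.exp(-tracking_error)
    # Bio-inspired term: penalize squared muscle activation, a common
    # proxy for metabolic effort in musculoskeletal simulation.
    r_effort = -np.sum(np.asarray(muscle_activations) ** 2)
    return w_track * r_track + w_effort * r_effort
```

With identical trajectories and zero activations the reward is maximal (1.0); larger tracking error or higher activation lowers it.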
“…A well-defined reward function is crucial for successful reinforcement learning; PIRL approaches therefore also seek to incorporate physical constraints into the reward design for safe and more efficient learning. For example, in [68] the designed reward incorporates IMU sensor data, thereby embedding inertial constraints, while in [75] the physics-informed reward is designed to satisfy explicit operational targets. To ensure safe exploration during training and deployment, works such as [133,141] learn a data-driven barrier certificate based on physical property-based losses and a set of unsafe state vectors.…”
Section: Introduction
Mentioning confidence: 99%
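An inertia-constrained reward term of the kind described above (rewarding agreement between simulated segment accelerations and measured IMU accelerations) might look like the following minimal sketch. The function name, the `scale` parameter, and the Gaussian shaping are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def inertial_constraint_reward(sim_accel, imu_accel, scale=1.0):
    """Sketch of an IMU-based inertial-constraint reward term: the
    reward approaches 1 when simulated accelerations match the sensed
    (IMU) accelerations and decays toward 0 as they diverge."""
    # Mismatch between simulated and measured acceleration vectors.
    err = np.linalg.norm(np.asarray(sim_accel) - np.asarray(imu_accel))
    # Gaussian shaping keeps the term bounded in (0, 1].
    return float(np.exp(-scale * err ** 2))
```

Such a term would typically be added to a tracking reward so the policy is penalized for kinematically plausible but dynamically inconsistent motion.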
“…However, the field is not without its limitations. Previous research has predominantly adopted a transient, cross‐sectional approach, often sidelining the significance of real‐world tasks and their implications for motor learning (Krakauer et al., 2019; Korivand et al., 2023). Our study seeks to mitigate these shortcomings through a longitudinal exploration, aiming to unravel the temporal dynamics of cortical activation across various stages of motor learning, with a pronounced emphasis on real‐world tasks.…”
Section: Introduction
Mentioning confidence: 99%