Abstract: This paper presents the first method for full-body trajectory optimization of physics-based human motion that does not rely on motion capture, specified key-poses, or periodic motion. Optimization is performed using a small set of simple goals, e.g., one hand should be on the ground, or the center-of-mass should be above a particular height. These objectives are applied to short spacetime windows which can be composed to express goals over an entire animation. Specific contact locations needed to achieve objectives are not required by our method. We show that the method can synthesize many different kinds of movement, including walking, hand walking, breakdancing, flips, and crawling. Most of these movements have never been previously synthesized by physics-based methods.
Background: The ability to measure joint kinematics in natural environments over long durations using inertial measurement units (IMUs) could enable at-home monitoring and personalized treatment of neurological and musculoskeletal disorders. However, drift, the accumulation of error over time, inhibits the accurate measurement of movement over long durations. We sought to develop an open-source workflow to estimate lower-extremity joint kinematics from IMU data that was accurate and capable of assessing and mitigating drift.
Methods: We computed IMU-based estimates of kinematics using sensor fusion and an inverse kinematics approach with a constrained biomechanical model. We measured kinematics for 11 subjects as they performed two 10-min trials: walking and a repeated sequence of varied lower-extremity movements. To validate the approach, we compared the joint angles computed from IMU orientations to the joint angles computed from optical motion capture using root-mean-square (RMS) difference and Pearson correlations, and estimated drift using a linear regression on each subject's RMS differences over time.
Results: IMU-based kinematic estimates agreed with optical motion capture; median RMS differences over all subjects and all minutes were between 3 and 6 degrees for all joint angles except hip rotation, and correlation coefficients were moderate to strong (r = 0.60–0.87). We observed minimal drift in the RMS differences over 10 min; the average slopes of the linear fits to these data were near zero (−0.14 to 0.17 deg/min).
Conclusions: Our workflow produced joint kinematics consistent with those estimated by optical motion capture, and could mitigate kinematic drift even in trials of continuous walking without rest, which may obviate the need for the explicit sensor recalibration (e.g., sitting or standing still for a few seconds, or zero-velocity updates) used in current drift-mitigation approaches when studying similar activities. This could enable long-duration measurements, bringing the field one step closer to estimating kinematics in natural environments.
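The drift-assessment step described in this abstract, per-interval RMS differences between IMU-based and optical joint angles followed by a linear fit whose slope quantifies drift, can be sketched as follows. This is a minimal NumPy illustration; the function names are ours and are not drawn from the released workflow.

```python
import numpy as np

def rms_difference(imu_angles, mocap_angles):
    """RMS difference (deg) between two joint-angle traces of equal length."""
    diff = np.asarray(imu_angles, dtype=float) - np.asarray(mocap_angles, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))

def drift_rate(rms_per_minute):
    """Slope (deg/min) of a least-squares line fit to per-minute RMS differences.

    A slope near zero indicates minimal drift over the trial, matching the
    near-zero slopes (−0.14 to 0.17 deg/min) reported in the abstract.
    """
    minutes = np.arange(len(rms_per_minute), dtype=float)
    slope, _intercept = np.polyfit(minutes, np.asarray(rms_per_minute, dtype=float), 1)
    return float(slope)
```

For example, per-minute RMS differences of [3.0, 3.1, 3.2, 3.3] degrees yield a drift rate of 0.1 deg/min.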
A long-standing challenge in motor neuroscience is to understand the relationship between movement speed and accuracy, known as the speed-accuracy tradeoff. Here, we introduce a biomechanically realistic computational model of three-dimensional upper extremity movements that reproduces well-known features of reaching movements. This model revealed that the speed-accuracy tradeoff, as described by Fitts' law, emerges even in the absence of motor noise, which is commonly believed to underlie the speed-accuracy tradeoff. Next, we analyzed motor cortical neural activity from monkeys reaching to targets of different sizes. We found that the contribution of preparatory neural activity to movement duration (MD) variability is greater for smaller targets than larger targets, and that movements to smaller targets exhibit less variability in population-level preparatory activity but greater MD variability. These results suggest a new theory underlying the speed-accuracy tradeoff: Fitts' law emerges from greater task demands constraining the optimization landscape in a fashion that reduces the number of 'good' control solutions (i.e., faster reaches). Thus, contrary to current beliefs, the speed-accuracy tradeoff could be a consequence of motor planning variability and not exclusively signal-dependent noise.
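Fitts' law, referenced above, predicts that movement time grows with the index of difficulty ID = log2(2D/W), where D is the distance to the target and W its width, so smaller targets take longer to reach. A minimal sketch follows; the coefficients a and b are task-specific constants, and the values here are illustrative, not fit to this paper's data.

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time (s) under Fitts' law: MT = a + b * log2(2D/W).

    a, b are empirical constants fit per task; defaults here are illustrative.
    """
    index_of_difficulty = math.log2(2.0 * distance / width)
    return a + b * index_of_difficulty
```

Halving the target width raises the index of difficulty by one bit and therefore lengthens the predicted movement time, which is the tradeoff the abstract discusses.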
Determining effective control strategies and solutions for high degree-of-freedom humanoid characters has been a difficult, ongoing problem. A controller is only valid for a certain set of states of the character, known as the domain of attraction (DOA). This paper shows how states that are initially outside the DOA can be brought inside it. Our first contribution is to show how DOA expansion can be performed for a high-dimensional simulated character. Our second contribution is to present an algorithm that efficiently increases the DOA using random trees that provide denser coverage than the trees produced by typical sampling-based motion planning algorithms. The trees are constructed offline, but can be queried fast enough for near real-time control. We show the effect of DOA expansion on getting-up, crouch-to-stand, jumping, and standing-twist controllers. We also show how DOA expansion can be used to connect controllers together.
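The sampling-based tree growth this abstract builds on can be illustrated with a toy RRT-style expansion in a 2D unit-square "state space": repeatedly sample a random state, find the nearest tree node, and step toward the sample. This sketch shows only the generic coverage mechanism; the paper's algorithm grows denser trees in the character's full high-dimensional state space, and all names below are ours.

```python
import random

def grow_tree(start, n_samples=200, step=0.1, seed=0):
    """Toy RRT-style tree in [0, 1]^2: sample, extend nearest node toward sample.

    Returns (nodes, edges); each new node adds one edge, so the tree stays connected.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    nodes = [start]
    edges = []
    for _ in range(n_samples):
        target = (rng.random(), rng.random())
        nearest = min(nodes, key=lambda n: (n[0] - target[0]) ** 2 + (n[1] - target[1]) ** 2)
        dx, dy = target[0] - nearest[0], target[1] - nearest[1]
        dist = (dx * dx + dy * dy) ** 0.5
        if dist < 1e-9:
            continue  # sample coincides with an existing node
        scale = min(1.0, step / dist)  # move at most `step` toward the sample
        new = (nearest[0] + dx * scale, nearest[1] + dy * scale)
        nodes.append(new)
        edges.append((nearest, new))
    return nodes, edges
```

In the paper's setting, each tree node would correspond to a full character state brought inside the DOA, and the offline-built tree is queried at control time.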
Synthesizing controllers for rotational movements in feature space is an open research problem and is particularly challenging because of the need to precisely regulate the character's global orientation, angular momentum and inertia. This paper presents feature-based controllers for a wide variety of rotational movements, including cartwheels, dives and flips. We show that the controllers can be made robust to large external disturbances by using a time-invariant control scheme. The generality of the control laws is demonstrated by providing examples of the flip controller with different apexes, the diving controller with different heights and styles, the cartwheel controller with different speeds and straddle widths, etc. The controllers do not rely on any input motion or offline optimization.
Motion capture is often retargeted to new, and sometimes drastically different, characters. When the characters take on realistic human shapes, however, we become more sensitive to the motion looking right. This means adapting it to be consistent with the physical constraints imposed by different body shapes. We show how to take realistic 3D human shapes, approximate them using a simplified representation, and animate them so that they move realistically using physically-based retargeting. We develop a novel spacetime optimization approach that learns and robustly adapts physical controllers to new bodies and constraints. The approach automatically adapts the motion of the mocap subject to the body shape of a target subject. This motion respects the physical properties of the new body, and every body shape results in a different and appropriate movement. This makes it easy to create a varied set of motions from a single mocap sequence by simply varying the characters. In an interactive environment, successful retargeting requires adapting the motion to unexpected external forces. We achieve robustness to such forces using a novel LQR-tree formulation. We show that the simulated motions look appropriate to each character's anatomy and their actions are robust to perturbations.