One major challenge for 3D pose estimation from a single RGB image is the acquisition of sufficient training data. In particular, collecting large amounts of training data that contain unconstrained images annotated with accurate 3D poses is infeasible. We therefore propose to use two independent training sources: the first consists of images with annotated 2D poses, and the second consists of accurate 3D motion capture data. To integrate both sources, we propose a dual-source approach that combines 2D pose estimation with efficient and robust 3D pose retrieval. In our experiments, we show that our approach achieves state-of-the-art results and remains competitive even when the skeleton structures of the two sources differ substantially.
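The retrieval step lends itself to a compact illustration. Below is a minimal Python sketch of the general idea of matching an estimated 2D pose against 2D projections of motion capture poses to retrieve 3D candidates; the normalization, the orthographic-projection stand-in, and all array shapes are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def normalize_2d(pose2d):
    """Center on the mean joint position and scale to unit size so that
    retrieval is invariant to image position and subject scale."""
    centered = pose2d - pose2d.mean(axis=0)
    scale = np.linalg.norm(centered)
    return centered / (scale + 1e-8)

def retrieve_3d_candidates(query2d, mocap3d, projections2d, k=5):
    """Return the k mocap poses whose 2D projections are closest to the
    estimated 2D pose (simple nearest-neighbor retrieval)."""
    q = normalize_2d(query2d).ravel()
    db = np.stack([normalize_2d(p).ravel() for p in projections2d])
    dists = np.linalg.norm(db - q, axis=1)
    idx = np.argsort(dists)[:k]
    return mocap3d[idx], dists[idx]

# Toy example: 100 mocap poses with 15 joints, "projected" by dropping depth.
rng = np.random.default_rng(0)
mocap3d = rng.standard_normal((100, 15, 3))
projections2d = mocap3d[:, :, :2]
query2d = projections2d[42] + 0.01 * rng.standard_normal((15, 2))
candidates, d = retrieve_3d_candidates(query2d, mocap3d, projections2d)
print(d)
```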
The development of methods and tools for the generation of visually appealing motion sequences using prerecorded motion capture data has become an important research area in computer animation. In particular, data-driven approaches have been used for reconstructing high-dimensional motion sequences from low-dimensional control signals. In this article, we contribute to this strand of research by introducing a novel framework for generating full-body animations controlled by only four 3D accelerometers that are attached to the extremities of a human actor. Our approach relies on a knowledge base that consists of a large number of motion clips obtained from marker-based motion capturing. Based on the sparse accelerometer input, a cross-domain retrieval procedure is applied to build up a lazy neighborhood graph in an online fashion. This graph structure points to suitable motion fragments in the knowledge base, which are then used in the reconstruction step. Supported by a kd-tree index structure, our procedure scales even to large datasets consisting of millions of frames. Our combined approach allows for reconstructing visually plausible continuous motion streams.
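As a rough illustration of the kd-tree-supported retrieval, the following Python sketch indexes a synthetic knowledge base of per-frame accelerometer features and queries nearest neighbors per control frame. The 12-D feature layout (four 3-axis accelerometers) matches the setup described above, but the data, the choice of k, and the downstream reconstruction step are placeholders, not the paper's implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

# Assume a knowledge base of mocap frames, each annotated with simulated
# readings of four 3-axis accelerometers (a 12-D feature per frame).
rng = np.random.default_rng(1)
db_features = rng.standard_normal((100_000, 12))
tree = cKDTree(db_features)   # kd-tree index over the knowledge base

def neighbors_per_frame(control_stream, k=16):
    """For each incoming 12-D accelerometer frame, retrieve the k most
    similar database frames; consecutive hit lists form the columns of a
    lazily built neighborhood graph that the reconstruction step can
    search for temporally coherent motion fragments."""
    dists, idx = tree.query(control_stream, k=k)
    return dists, idx

# Online query with a short stream of 30 control frames.
stream = rng.standard_normal((30, 12))
dists, idx = neighbors_per_frame(stream)
print(idx.shape)   # (30, 16): candidate database frames per input frame
```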
Freezing of gait (FoG) is a disabling symptom associated with falls, and it shows little or no responsiveness to pharmacological treatment. Current rehabilitation protocols are based on the use of external sensory cues. However, cued strategies may generate a strong dependence on the environment. Teaching motor strategies without cues [i.e., action observation (AO) plus Sonification] could represent an alternative, innovative approach to rehabilitation, one that relies on an appropriate allocation of attention and a lightened cognitive load. We aimed to test the effects of a novel experimental protocol, based on AO plus Sonification, to treat patients with Parkinson's disease (PD) and FoG, using functional and clinical scales. Twelve patients were treated with eight motor gestures: they watched eight videos showing an actor performing the same eight gestures and then tried to repeat each gesture. Each video was composed of images and sounds of the gestures. By means of the Sonification technique, the sounds of the gestures were obtained by transforming kinematic data (velocity) recorded during gesture execution into pitch variations. The same eight motor gestures were also used in a second group of 10 patients, who were treated with a standard protocol based on a common sensory stimulation method. All patients were tested with functional and clinical scales before, immediately after, 1 month after, and 3 months after the treatment. The data showed that the experimental protocol had positive effects on functional and clinical tests. In comparison with the baseline evaluations, significant performance improvements were seen in the NFOG questionnaire and the UPDRS (parts II and III). Importantly, all these improvements were consistently observed at the end of treatment and at the 1-month and 3-month follow-ups. No improvement was found in the group of patients treated with the standard protocol. These data suggest that a multisensory approach based on AO plus Sonification, with the two stimuli semantically related, could help PD patients with FoG to relearn gait movements and reduce freezing episodes, and that these effects could persist over time.
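The velocity-to-pitch mapping at the heart of the Sonification technique can be sketched in a few lines. The Python example below is a hypothetical illustration, not the study's actual implementation: the frequency range, the normalization, and the sine synthesis are all assumptions.

```python
import numpy as np

def sonify_velocity(velocity, sr=44_100, f_min=220.0, f_max=880.0):
    """Map a velocity profile to pitch: faster movement -> higher tone.
    The per-sample frequency curve is integrated into a phase so the
    pitch varies continuously without clicks."""
    v = np.abs(velocity)
    v = (v - v.min()) / (v.max() - v.min() + 1e-8)   # normalize to [0, 1]
    freq = f_min + v * (f_max - f_min)               # per-sample frequency
    phase = 2 * np.pi * np.cumsum(freq) / sr         # integrate frequency
    return np.sin(phase)

# Toy gesture: a reach that accelerates and decelerates over one second.
t = np.linspace(0, 1, 44_100)
velocity = np.sin(np.pi * t)          # bell-shaped speed profile
audio = sonify_velocity(velocity)     # write out with e.g. soundfile.write
```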
We present MotionExplorer, an exploratory search and analysis system for sequences of human motion in large motion capture data collections. This special type of multivariate time series data is relevant in many research fields, including medicine, sports, and animation. Key tasks in working with motion data include the analysis of motion states and transitions, and the synthesis of motion vectors by interpolation and combination. In research and application practice, challenges exist in providing visual summaries and drill-down functionality for handling large motion data collections, and we find that this domain can benefit from appropriate visual retrieval and analysis support for these tasks. To address this need, we developed MotionExplorer together with domain experts as an exploratory search system based on the interactive aggregation and visualization of motion states, which serve as the basis for data navigation, exploration, and search. Starting from an overview-first visualization, users can search for interesting sub-sequences of motion using a query-by-example metaphor and explore the search results via details on demand. We developed MotionExplorer in close collaboration with the targeted users, researchers working on human motion synthesis and analysis, and evaluated it in a summative field study. Additionally, we conducted a laboratory design study to substantially improve MotionExplorer towards an intuitive, usable, and robust design. MotionExplorer enables search in human motion capture data with only a few mouse clicks. The researchers unanimously confirm that the system can efficiently support their work.
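To make the idea of aggregating frames into motion states concrete, here is a hypothetical Python sketch that clusters synthetic pose features into states with k-means and counts state-to-state transitions; the cluster count, features, and clustering method are assumptions, and MotionExplorer's actual interactive aggregation and visualization are far richer than this stand-in.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy stand-in for mocap frames: 2,000 frames of 30-D pose features.
rng = np.random.default_rng(2)
frames = rng.standard_normal((2_000, 30))

# Aggregate frames into a small set of motion "states".
n_states = 8
labels = KMeans(n_clusters=n_states, n_init=10, random_state=0).fit_predict(frames)

# Count state-to-state transitions along the time axis; the resulting
# matrix is the adjacency of a state-transition graph suitable for an
# overview-first visualization.
transitions = np.zeros((n_states, n_states), dtype=int)
for a, b in zip(labels[:-1], labels[1:]):
    transitions[a, b] += 1
print(transitions)
```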
A number of previous works have shown that information about a subject is encoded in sparse kinematic information, such as that revealed by so-called point-light walkers. With the work at hand, we extend these results to the classification of soft biometrics from inertial sensor recordings taken at a single body location during a single step. We recorded the accelerations and angular velocities of 26 subjects using inertial measurement units (IMUs) attached at four locations (chest, lower back, right wrist, and left ankle) while they performed standardized gait tasks. The collected data were segmented into individual walking steps. We trained random forest classifiers to estimate soft biometrics (gender, age, and height). We applied two different validation methods: 10-fold cross-validation and subject-wise cross-validation. For all three classification tasks, we achieve high accuracy values for all four sensor locations. From these results, we conclude that the data of a single walking step (6D: accelerations and angular velocities) allow for a robust estimation of the gender, height, and age of a person.
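This evaluation setup maps directly onto standard tooling. The following Python sketch contrasts the two validation schemes with scikit-learn on synthetic stand-in data; the feature dimensionality, step counts, and fold counts are illustrative assumptions, and with random data the accuracies will hover near chance.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, KFold, GroupKFold

# Toy stand-in: 1,300 single-step feature vectors (e.g., statistics of the
# 6-D acceleration/angular-velocity signals) from 26 subjects.
rng = np.random.default_rng(3)
X = rng.standard_normal((1_300, 40))
gender = rng.integers(0, 2, size=1_300)        # binary label per step
subjects = np.repeat(np.arange(26), 50)        # 50 steps per subject

clf = RandomForestClassifier(n_estimators=200, random_state=0)

# 10-fold CV: steps of the same subject may appear in train and test folds.
acc_10fold = cross_val_score(clf, X, gender,
                             cv=KFold(10, shuffle=True, random_state=0))

# Subject-wise CV: all steps of a held-out subject stay in the test fold,
# the stricter estimate of generalization to unseen people.
acc_subject = cross_val_score(clf, X, gender, groups=subjects,
                              cv=GroupKFold(5))

print(acc_10fold.mean(), acc_subject.mean())
```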
We introduce a method for automated temporal segmentation of human motion data into distinct actions and compositing motion primitives based on self-similar structures in the motion sequence. We use neighbourhood graphs for the partitioning, and the similarity information in the graph is further exploited to cluster the motion primitives into larger entities of semantic significance. The method requires no assumptions about the motion sequences at hand, and no user interaction is required for the segmentation or clustering. In addition, we introduce a feature bundling preprocessing technique to make the segmentation more robust to noise, as well as a notion of motion symmetry for more refined primitive detection. We test our method on several sensor modalities, including marker-based and markerless motion capture as well as electromyograph and accelerometer recordings. The results highlight our system's capabilities for both segmentation and analysis of the finer structures of motion data, all in a completely unsupervised manner.
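As one way to make boundary detection from self-similar structure concrete, the Python sketch below uses a standard Foote-style novelty score over a self-similarity matrix rather than the paper's neighborhood-graph method; the kernel size, the synthetic features, and the peak threshold are all assumptions.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.spatial.distance import cdist

def novelty_boundaries(features, kernel_size=32):
    """Slide a checkerboard kernel along the diagonal of the sequence's
    self-similarity matrix (Foote-style novelty); peaks in the resulting
    curve mark candidate action boundaries."""
    sim = -cdist(features, features)          # similarity = negative distance
    half = kernel_size // 2
    v = np.r_[np.ones(half), -np.ones(half)]
    kernel = np.outer(v, v)                   # checkerboard of +1 / -1 blocks
    novelty = np.zeros(len(features))
    for i in range(half, len(features) - half):
        block = sim[i - half:i + half, i - half:i + half]
        novelty[i] = np.sum(block * kernel)
    peaks, _ = find_peaks(novelty, height=0.5 * novelty.max(),
                          distance=kernel_size)
    return peaks

# Toy sequence: three 200-frame "actions" with distinct feature means.
rng = np.random.default_rng(4)
seq = np.concatenate([m + 0.1 * rng.standard_normal((200, 10))
                      for m in (0.0, 1.0, 2.0)])
print(novelty_boundaries(seq))   # boundaries expected near frames 200 and 400
```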