2017
DOI: 10.1016/j.jbiomech.2017.01.028

A learning-based markerless approach for full-body kinematics estimation in-natura from a single image

Abstract: We present a supervised machine learning approach for markerless estimation of human full-body kinematics for a cyclist from an unconstrained colour image. This approach is motivated by the limitations of existing marker-based approaches restricted by infrastructure, environmental conditions, and obtrusive markers. By using a discriminatively learned mixture-of-parts model, we construct a probabilistic tree representation to model the configuration and appearance of human body joints. During the learning stage…
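The abstract describes a discriminatively learned mixture-of-parts model with a probabilistic tree over body joints. As a rough illustration only (not the paper's actual implementation), tree-structured pose models of this kind are commonly solved by max-sum dynamic programming: each joint contributes a per-pixel appearance score, each parent-child edge a deformation cost, and messages are passed from the leaves to the root. A minimal Python sketch with hypothetical score maps and a brute-force deformation step:

import numpy as np

def infer_pose(appearance, tree, deform_weight=0.1):
    """appearance: dict joint -> (H, W) appearance score map.
    tree: list of (child, parent, ideal_offset) edges, ordered leaves first.
    Returns dict joint -> (row, col) of the maximising configuration."""
    H, W = next(iter(appearance.values())).shape
    rows, cols = np.mgrid[0:H, 0:W]
    score = {j: a.copy() for j, a in appearance.items()}
    best_child_pos = {}

    # Upward pass: each child sends a max-sum message to its parent.
    for child, parent, (dr, dc) in tree:
        msg = np.full((H, W), -np.inf)
        argmax = np.zeros((H, W, 2), dtype=int)
        for pr in range(H):
            for pc in range(W):
                # Quadratic deformation cost of placing the child away from
                # the parent's ideal relative offset (dr, dc); brute force
                # here purely for clarity.
                cost = deform_weight * ((rows - (pr + dr)) ** 2 +
                                        (cols - (pc + dc)) ** 2)
                total = score[child] - cost
                idx = np.unravel_index(np.argmax(total), total.shape)
                msg[pr, pc] = total[idx]
                argmax[pr, pc] = idx
        score[parent] = score[parent] + msg
        best_child_pos[child] = argmax

    # Downward pass: place the root, then backtrack the children.
    root = tree[-1][1]
    pose = {root: np.unravel_index(np.argmax(score[root]), (H, W))}
    for child, parent, _ in reversed(tree):
        pr, pc = pose[parent]
        pose[child] = tuple(best_child_pos[child][pr, pc])
    return pose

# Tiny hypothetical example: three joints on a 16x16 grid with random score maps.
H, W = 16, 16
rng = np.random.default_rng(0)
appearance = {j: rng.standard_normal((H, W)) for j in ("ankle", "knee", "hip")}
edges = [("ankle", "knee", (4, 0)), ("knee", "hip", (4, 0))]  # leaves first
print(infer_pose(appearance, edges))

Real systems replace the brute-force inner loop with a generalised distance transform, which makes each message computation linear in the number of pixels rather than quadratic.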

Cited by 9 publications (3 citation statements); References 48 publications.
“…That study used a numerical mannequin with well-controlled poses to evaluate the tracking accuracy of human limb motion, with an error that increased to exceed 40° for shoulder angle measurement during a few specific body configurations when the mannequin’s body was partially occluded by other segments in the Kinect axis [ 27 ]. Another study [ 46 ] also showed that it was challenging to estimate human kinematics through Kinect sensors when occlusions were present, due to human–object interactions. By using a Kinect-based system to compute major joint angles during various tasks with/without intended occlusions, the mean error values were 13.4° and 18.3° for the tasks without intended occlusions and the tasks with intended occlusions, respectively [ 30 ].…”
Section: Discussion (citation type: mentioning; confidence: 99%)
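For context on what computing "major joint angles" from a tracked skeleton involves (a generic illustration, not the cited studies' pipeline), a joint angle is usually taken as the angle between the two segment vectors that meet at the joint:

import numpy as np

def joint_angle_deg(proximal, joint, distal):
    """Angle (degrees) at `joint` between proximal->joint and joint->distal."""
    u = np.asarray(proximal, float) - np.asarray(joint, float)
    v = np.asarray(distal, float) - np.asarray(joint, float)
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Example with hypothetical hip, knee, and ankle positions in metres.
print(joint_angle_deg([0.0, 1.0, 0.0], [0.0, 0.5, 0.05], [0.0, 0.0, 0.0]))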
“…Additional cameras were not necessary to overcome this issue, and a simple filtering procedure combined with a robust deep neural network was sufficient to produce consistent kinematic results. Nonetheless, implementing this method in 3D may help to reduce the effect of marker occlusion, due to the redundancy provided by additional cameras (see Drory et al, 2017 for a similar approach based on single images). Other difficulties included the occasional placement of a marker on the wrong limb by the neural network.…”
Section: Discussion (citation type: mentioning; confidence: 99%)
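The "simple filtering procedure" mentioned above is not specified in the excerpt; a common choice for smoothing per-frame keypoint trajectories produced by a pose network is a zero-lag low-pass Butterworth filter. A hedged sketch with assumed sampling rate and cut-off frequency:

import numpy as np
from scipy.signal import butter, filtfilt

def smooth_trajectory(xy, fs=100.0, cutoff=6.0, order=4):
    """xy: (n_frames, 2) pixel coordinates of one keypoint over time.
    fs, cutoff, and order are illustrative assumptions, not values from the study."""
    b, a = butter(order, cutoff / (fs / 2.0), btype="low")
    return filtfilt(b, a, xy, axis=0)  # forward-backward pass => no phase lag

# Example on a noisy synthetic trajectory.
t = np.linspace(0, 2, 200)
noisy = np.column_stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
noisy += 0.05 * np.random.randn(*noisy.shape)
smoothed = smooth_trajectory(noisy)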
“…To overcome this issue, other studies have used information about spatial relations between markers (e.g. the hip is always an approximately constant distance from the knee) to better inform predictions (Drory et al, 2017), and these techniques could have helped to improve accuracy in the present study. It should also be noted that camera-based methods are not the only possible solution for kinematic analysis.…”
Section: Discussion (citation type: mentioning; confidence: 99%)
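One simple way to exploit a spatial relation such as an approximately constant hip-knee distance (a hypothetical sketch, not the approach of Drory et al., 2017) is to rescale an implausible per-frame prediction back towards a reference segment length:

import numpy as np

def enforce_segment_length(hip, knee, ref_length, tol=0.15):
    """Pull the knee estimate onto a sphere of radius ref_length around the hip
    when the predicted thigh length deviates by more than tol.
    ref_length and tol are illustrative assumptions."""
    hip, knee = np.asarray(hip, float), np.asarray(knee, float)
    seg = knee - hip
    length = np.linalg.norm(seg)
    if abs(length - ref_length) / ref_length > tol:
        knee = hip + seg * (ref_length / length)  # rescale along the same direction
    return knee

# Example: a knee predicted too close to the hip gets pushed back out.
corrected = enforce_segment_length([0, 1.0, 0], [0, 0.8, 0], ref_length=0.45)
print(corrected)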