2012
DOI: 10.1007/978-3-642-34014-7_14
Automatic Imitation Assessment in Interaction

Cited by 19 publications (22 citation statements)
References 26 publications
“…We consider face and head movements tracked by state-of-the-art trackers (i.e. the face tracker described in [25] and the head pose estimator described in [17]) and report on binary classification of video sequences into mimicry and non-mimicry categories based on the following widely-used methodology: two similarity-based methods (cross correlation as used in [22] and Generalised Time Warping [40]), and the state-of-the-art temporal classifier, Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) [32]. Performance of the methods is evaluated against the ground truth, representing human annotations of motor mimicry behaviour.…”
Section: Accepted Manuscript (mentioning)
Confidence: 99%
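The cross-correlation baseline named in the quote above can be sketched as follows. This is a minimal illustration, not the cited authors' implementation: the z-normalisation, the `max_lag` search window, and the 0.8 decision threshold are assumptions made for this example.

```python
import numpy as np

def max_norm_xcorr(a, b, max_lag=30):
    """Maximum normalized cross-correlation between two 1-D motion
    signals (e.g. per-frame head-pose angles) over a range of lags."""
    a = (a - a.mean()) / (a.std() + 1e-8)  # z-normalise each signal
    b = (b - b.mean()) / (b.std() + 1e-8)
    n = len(a)
    best = -1.0
    for lag in range(-max_lag, max_lag + 1):
        # Align the overlapping portions of the two signals at this lag.
        if lag >= 0:
            x, y = a[lag:], b[:n - lag]
        else:
            x, y = a[:n + lag], b[-lag:]
        if len(x) > 1:
            best = max(best, float(np.mean(x * y)))
    return best

# Toy check: a motion signal and a copy of it delayed by 10 frames
# should score near 1, so the sequence is flagged as mimicry.
t = np.linspace(0, 4 * np.pi, 200)
leader = np.sin(t)
follower = np.roll(leader, 10)
score = max_norm_xcorr(leader, follower)
is_mimicry = score > 0.8  # threshold is an assumption for this sketch
```

A lag-searched, normalized correlation like this captures delayed imitation of one signal by another; the quoted work pairs such similarity scores with Generalised Time Warping and an LSTM-RNN classifier.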
“…None of the works mentioned above attempt to compare their methods to ground truth of mimicry behaviour. Michelet et al [22] used a proprietary dataset for sequence classification into mimicry and non-mimicry classes. Their dataset contains 256 clips of posed, gross body movements, set against a uniform, static background.…”
Section: Accepted Manuscript (mentioning)
Confidence: 99%
“…In the literature, we have encountered only a few methods for automatic action quality assessment. These prior works have used Motion Capture (MoCap) data [6,10] or RGB video sequences [13]. A few approaches have recently been proposed based on skeleton tracking.…”
Section: Introduction (mentioning)
Confidence: 99%