Proceedings 10th IEEE International Workshop on Robot and Human Interactive Communication. ROMAN 2001 (Cat. No.01TH8591)
DOI: 10.1109/roman.2001.981889

Method of generating coded description of human body motion from motion-captured data

Cited by 37 publications (28 citation statements)
References: 1 publication
“…In the same context, the Labanwriter graphical user interface has been developed in (Wilke et al, 1932). To address the limitations of the manual annotation, the work of (Hachimura and Nakamura, 2001) introduces an automatic generation of Laban notation, exploiting motion data properties, while the work of (Chen et al, 2005) proposes a scoring system using a marker-based motion capturing architecture.…”
Section: Related Work (mentioning)
confidence: 99%
“…The rest of this section summarizes some of the outstanding research in the field. Hachimura and Nakamura [11] segmented motion data and quantized motion direction and duration for Labanotation. A Laban editor with limited scope was developed for dance education.…”
Section: Literature Review (mentioning)
confidence: 99%
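The direction/duration quantization mentioned in the excerpt above can be pictured with a small sketch. This is not Hachimura and Nakamura's actual procedure; the coordinate convention, the reduction of Labanotation's direction symbols to 8 horizontal directions plus 3 levels, and the 30-degree level threshold are all illustrative assumptions.

```python
# Hedged sketch: quantizing a limb-direction vector into a coarse
# Labanotation-style (direction, level) pair. The 8 horizontal directions
# and 3 levels follow standard Labanotation conventions; the names and
# thresholds below are illustrative assumptions, not the cited method.
import numpy as np

HORIZONTAL = ["forward", "right-forward", "right", "right-backward",
              "backward", "left-backward", "left", "left-forward"]

def quantize_direction(v, level_threshold_deg=30.0):
    """Map a 3D direction vector (x=right, y=forward, z=up) to a
    Labanotation-style (horizontal direction, level) label."""
    v = np.asarray(v, dtype=float)
    n = np.linalg.norm(v)
    if n < 1e-8:
        return ("place", "middle")              # no displacement
    x, y, z = v / n
    # Level from the elevation angle above/below the horizontal plane.
    elevation = np.degrees(np.arcsin(np.clip(z, -1.0, 1.0)))
    if elevation > level_threshold_deg:
        level = "high"
    elif elevation < -level_threshold_deg:
        level = "low"
    else:
        level = "middle"
    # Horizontal direction from the azimuth, quantized into 45-degree bins.
    azimuth = np.degrees(np.arctan2(x, y)) % 360.0
    idx = int(round(azimuth / 45.0)) % 8
    return (HORIZONTAL[idx], level)

print(quantize_direction([0.2, 0.9, 0.5]))      # ('forward', 'middle')
```

Duration would be handled analogously, by binning how long the quantized label stays constant between segment boundaries.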
“…In an effort to convert between Labanotation and motion capture data, Hachimura and Nakamura [11] attempted to extract a fundamental unit of physical motion based on Labanotation from motion capture data. They were able to convert the horizontal and vertical motion of a child joint but failed to convert the twisting motion.…”
Section: Related Work (mentioning)
confidence: 99%
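The distinction drawn in the excerpt above, between the horizontal/vertical motion of a child joint and the twisting motion, can be illustrated with a short sketch. The function names and the shortest-arc construction are illustrative assumptions, not the cited conversion method.

```python
# Hedged sketch: why swing (horizontal/vertical motion of a child joint)
# is recoverable from joint positions while twist is not. The shortest-arc
# rotation between two bone directions captures the swing; any extra
# rotation about the bone axis moves no joint position, so it cannot be
# read off position data alone. Illustrative only.
import numpy as np

def bone_direction(parent_pos, child_pos):
    v = np.asarray(child_pos, float) - np.asarray(parent_pos, float)
    return v / np.linalg.norm(v)

def swing_between(d0, d1):
    """Axis-angle of the shortest-arc rotation taking direction d0 to d1."""
    axis = np.cross(d0, d1)
    s = np.linalg.norm(axis)
    c = np.clip(np.dot(d0, d1), -1.0, 1.0)
    angle = np.arctan2(s, c)
    axis = axis / s if s > 1e-8 else np.array([0.0, 0.0, 1.0])
    return axis, angle

# Two frames of an elbow->wrist bone: the swing is fully determined...
d0 = bone_direction([0, 0, 0], [1, 0, 0])
d1 = bone_direction([0, 0, 0], [0, 0, 1])
axis, angle = swing_between(d0, d1)
print(axis, np.degrees(angle))                  # rotation axis, 90-degree swing
# ...but rotating the forearm about d1 (twist) leaves the wrist position
# unchanged, so positions alone cannot distinguish different twist angles.
```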
“…In our previous work [11], we introduced a method of key-frame selection using a threshold on the magnitude of the joint speed. Our assumption is that a key-frame pose is a frame in which some body parts are momentarily paused.…”
Section: Motion Capture Data Acquisition From Microsoft Kinect (mentioning)
confidence: 99%
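The key-frame selection rule described in the excerpt above (frames where joint speed drops below a threshold) can be sketched as follows. The threshold, frame rate, and minimum-gap parameters are illustrative assumptions rather than the authors' published values.

```python
# Hedged sketch of threshold-based key-frame selection: frames where a
# joint's speed magnitude falls below a threshold are treated as candidate
# key frames (the joint is momentarily paused). Parameter values are
# illustrative assumptions only.
import numpy as np

def select_keyframes(positions, fps=30.0, speed_thresh=0.05, min_gap=5):
    """positions: (T, 3) array of one joint's 3D positions per frame.
    Returns indices of frames whose speed (units/s) is below the threshold,
    keeping at least `min_gap` frames between selected key frames."""
    positions = np.asarray(positions, dtype=float)
    velocity = np.gradient(positions, axis=0) * fps     # units per second
    speed = np.linalg.norm(velocity, axis=1)
    keyframes, last = [], -min_gap
    for t, s in enumerate(speed):
        if s < speed_thresh and t - last >= min_gap:
            keyframes.append(t)
            last = t
    return keyframes

# Example: a joint that moves along a circle, pauses near frame 30, then resumes.
t = np.linspace(0, 2 * np.pi, 60)
traj = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
traj[25:35] = traj[25]                                  # hold still for 10 frames
print(select_keyframes(traj, speed_thresh=0.2))         # frames inside the pause
```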