Learning, Generation and Recognition of Motions by Reference-Point-Dependent Probabilistic Models (2011)
DOI: 10.1163/016918611x563328

Cited by 35 publications (30 citation statements). References 17 publications.

“…An option is to encode both static and dynamic features in the mixture model to retrieve continuous behaviors [51,39,22]. An alternative option is to encode time as an additional feature in the GMM, and use Gaussian mixture regression (GMR) [18] to retrieve continuous behaviors.…”
Section: Example With a Single Gaussian
confidence: 99%
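The GMR option mentioned in the statement above can be illustrated by fitting a joint GMM over [time, output] pairs and then conditioning each Gaussian component on the query time, blending the conditional means and covariances with the component responsibilities. The following Python sketch assumes scikit-learn and SciPy and a synthetic 2-D demonstration; the component count, data and function names are illustrative, not the implementation cited as [18].

# Minimal sketch of time-based Gaussian mixture regression (GMR):
# fit a joint GMM over [t, x] and condition on t to retrieve E[x|t], Cov[x|t].
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def fit_time_gmm(times, outputs, n_components=5, seed=0):
    # Encode time as an additional feature alongside the output dimensions.
    data = np.column_stack([times, outputs])          # shape (N, 1 + d)
    return GaussianMixture(n_components=n_components, covariance_type="full",
                           random_state=seed).fit(data)

def gmr(gmm, t_query):
    # Condition the joint model on time and blend components.
    d_out = gmm.means_.shape[1] - 1
    mu_hat = np.zeros((len(t_query), d_out))
    sigma_hat = np.zeros((len(t_query), d_out, d_out))
    for n, t in enumerate(t_query):
        h = np.zeros(gmm.n_components)                # responsibilities h_k(t)
        mu_k = np.zeros((gmm.n_components, d_out))
        sig_k = np.zeros((gmm.n_components, d_out, d_out))
        for k in range(gmm.n_components):
            mu, S = gmm.means_[k], gmm.covariances_[k]
            S_tt, S_tx, S_xt, S_xx = S[0, 0], S[0:1, 1:], S[1:, 0:1], S[1:, 1:]
            h[k] = gmm.weights_[k] * multivariate_normal.pdf(t, mu[0], S_tt)
            mu_k[k] = mu[1:] + (S_xt[:, 0] / S_tt) * (t - mu[0])
            sig_k[k] = S_xx - S_xt @ S_tx / S_tt
        h /= h.sum()
        mu_hat[n] = h @ mu_k
        for k in range(gmm.n_components):             # law of total covariance
            diff = (mu_k[k] - mu_hat[n])[:, None]
            sigma_hat[n] += h[k] * (sig_k[k] + diff @ diff.T)
    return mu_hat, sigma_hat

# Usage: reproduce a noisy 2-D demonstration as a smooth time-indexed behavior.
t = np.linspace(0.0, 1.0, 200)
demo = np.column_stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
demo += 0.05 * np.random.default_rng(0).standard_normal(demo.shape)
mean_traj, cov_traj = gmr(fit_time_gmm(t, demo), t)

Queried over a dense time grid, this retrieves a continuous mean trajectory together with a full covariance at every step, which is what the statement refers to as retrieving continuous behaviors.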
“…In speech processing, these parameters usually correspond to the evolution of mel-frequency cepstral coefficients characterizing the power spectrum of a sound, but the same approach can be used with any form of continuous signals. In robotics, this approach has rarely been exploited, with the exception of the work by Sugiura et al. (2011), which employs it to represent object manipulation movements. We take advantage of this formulation for retrieving a reference trajectory with associated covariance that will govern the robot motions according to the behavior determined by the ADHSMM.…”
Section: Trajectory Retrieval Using Dynamic Features
confidence: 99%
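The dynamic-feature formulation referred to above can be read as a weighted least-squares problem: stack static positions and their delta (velocity-like) features as y = W x, then solve for the trajectory x whose features best match per-frame target means under the target precisions, which also yields the associated covariance mentioned in the statement. The sketch below is a generic reconstruction under that reading, with 1-D positions, central-difference deltas and placeholder targets; it is not the ADHSMM implementation, and the function names are hypothetical.

# Minimal sketch of trajectory retrieval from static + dynamic (delta) targets.
import numpy as np

def delta_matrix(T):
    # W maps a position trajectory x (T,) to stacked features [x; dx] (2T,).
    I = np.eye(T)
    D = np.zeros((T, T))
    for t in range(T):
        tm, tp = max(t - 1, 0), min(t + 1, T - 1)
        D[t, tp] += 0.5
        D[t, tm] -= 0.5
    return np.vstack([I, D])

def retrieve_trajectory(mu, precision):
    # x* = argmin (W x - mu)^T P (W x - mu); its covariance is (W^T P W)^-1.
    T = mu.shape[0] // 2
    W = delta_matrix(T)
    cov = np.linalg.inv(W.T @ precision @ W)
    x_star = cov @ (W.T @ precision @ mu)
    return x_star, cov

# Usage with placeholder per-frame targets: position means ramp from 0 to 1,
# delta means are zero, and positions are trusted more than deltas. In the
# cited setting these targets would come from the model's state sequence.
T = 50
mu = np.concatenate([np.linspace(0.0, 1.0, T), np.zeros(T)])
precision = np.diag(np.concatenate([np.full(T, 100.0), np.full(T, 10.0)]))
reference_traj, reference_cov = retrieve_trajectory(mu, precision)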
“…There has been a lot of work in the field of intelligent systems on developing formalisms for learning and representing actions, ranging from task-space representations [1] to high-level symbolic description of actions and their effects on objects [2]. Many approaches work directly on raw video streams, which is a challenging problem due to high variance in the video emanating from e.g.…”
Section: Related Work
confidence: 99%