2012 IEEE International Conference on Robotics and Biomimetics (ROBIO) 2012
DOI: 10.1109/robio.2012.6491172
Context-GMM: Incremental learning of sparse priors for Gaussian mixture regression

Abstract: Gaussian mixture models have been widely used in robotic control and in sensory anticipation applications. A mixture model is learnt from demonstrations and later used either to infer the most likely control signals, or as a forward model to predict the change in sensory signals over time. However, such models are often too big to be tractable in real-time applications. In this paper we introduce the Context-GMM, a method to learn sparse priors over the mixture components. Such priors are stable…
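As the abstract notes, the learnt mixture is used to infer likely outputs from observed inputs. A minimal sketch of standard Gaussian mixture regression — predicting E[y|x] from a joint GMM over (x, y) — might look as follows; the two-component parameters are purely illustrative, not taken from the paper:

```python
import numpy as np

# Hypothetical 2-component joint GMM over (x, y); all numbers are
# illustrative stand-ins, not parameters from the paper.
priors = np.array([0.5, 0.5])
means = np.array([[0.0, 1.0],    # [mu_x, mu_y] of component 1
                  [2.0, 3.0]])   # [mu_x, mu_y] of component 2
covs = np.array([[[1.0, 0.5], [0.5, 1.0]],
                 [[1.0, -0.3], [-0.3, 1.0]]])

def gmr_predict(x):
    """E[y | x] under the joint GMM (standard Gaussian mixture regression)."""
    # Responsibility h_k of each component for the query x (marginal over x).
    var_x = covs[:, 0, 0]
    lik = priors * np.exp(-0.5 * (x - means[:, 0])**2 / var_x) \
          / np.sqrt(2.0 * np.pi * var_x)
    h = lik / lik.sum()
    # Conditional mean of y given x within each component.
    cond = means[:, 1] + covs[:, 1, 0] / var_x * (x - means[:, 0])
    # Blend the per-component conditional means by responsibility.
    return float(h @ cond)
```

The paper's contribution addresses the cost of this blend when the mixture is large: sparse priors let the sum run over only a few active components.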

Cited by 2 publications (4 citation statements) · References 17 publications
“…For the creating step, if (22) does not hold, then a new Gaussian component should be created to accommodate the new information carried by the new data. The parameters θ^new_{j,m} of this new component are initialized by (27)-(30), where ∆_ini is a preset parameter.…”
Section: Modified IGMM Algorithm
Confidence: 99%
“…The first step is to decide whether a piece of new data belongs to an existing Gaussian component in the GMM. Different criteria are utilized: the likelihood value of a component when substituting the latest data into it [30], the distance between the latest data and the expected value of the components [29], or the Mahalanobis distance between the latest data and the components [28], [31], [32]. If the check passes, i.e., the new data belongs to an existing Gaussian component, then the parameters of the GMM are updated based on the new data in the second step.…”
Section: Introduction
Confidence: 99%
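The two-step scheme quoted above — assign new data to a component via a distance criterion, then either update that component or create a new one — can be sketched as below. The Mahalanobis threshold, the mean-update learning rate, and the preset initial covariance are illustrative stand-ins (the role ∆_ini plays in the quoted algorithm), not the cited papers' actual rules:

```python
import numpy as np

def incremental_update(priors, means, covs, x, thresh=9.0, var_ini=1.0):
    """One incremental-GMM step: assign x to a component or create a new one.

    thresh and var_ini are illustrative preset parameters, not values
    from the cited algorithm.
    """
    # Mahalanobis distance from x to each existing component.
    d = np.array([(x - m) @ np.linalg.inv(S) @ (x - m)
                  for m, S in zip(means, covs)])
    j = int(np.argmin(d))
    if d[j] <= thresh:
        # x is explained by component j: nudge its mean toward x with a
        # small learning rate (a stand-in for the full update equations).
        eta = 0.05
        means[j] = means[j] + eta * (x - means[j])
    else:
        # No component accounts for x: create a new one centred at x with
        # a preset isotropic covariance, then renormalize the priors.
        means = np.vstack([means, x])
        covs = np.concatenate([covs, var_ini * np.eye(len(x))[None]], axis=0)
        priors = np.append(priors, 1.0 / (len(priors) + 1))
        priors = priors / priors.sum()
    return priors, means, covs
```

A point far from every component (large Mahalanobis distance) triggers creation; a nearby point only refines the closest component's parameters.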