2010 20th International Conference on Pattern Recognition
DOI: 10.1109/icpr.2010.177

Aggregation of Probabilistic PCA Mixtures with a Variational-Bayes Technique Over Parameters

Abstract: This paper proposes a solution to the problem of aggregating versatile probabilistic models, namely mixtures of probabilistic principal component analyzers. These models are a powerful generative form for capturing high-dimensional, non-Gaussian data. They simultaneously perform mixture adjustment and dimensionality reduction. We demonstrate how such models may be advantageously aggregated by accessing mixture parameters only, rather than the original data. Aggregation is carried out through Bayesian estimation w…
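For orientation, the sketch below spells out the standard MPPCA parameterization such models follow (Tipping and Bishop's formulation, cited as [12] below): each component k carries a weight pi_k, a mean mu_k, a d x q factor matrix W_k, and an isotropic noise variance sigma2_k. All function and variable names here are illustrative, not taken from the paper.

```python
# Minimal sketch of an MPPCA density under the standard parameterization
# (illustrative names; not the paper's code). Each component k contributes
# pi_k * N(x; mu_k, C_k) with C_k = W_k W_k^T + sigma2_k * I.
import numpy as np
from scipy.stats import multivariate_normal

def mppca_density(x, weights, means, factors, sigma2s):
    """Evaluate the mixture density at a single point x of dimension d."""
    d = x.shape[0]
    total = 0.0
    for pi_k, mu_k, W_k, s2_k in zip(weights, means, factors, sigma2s):
        C_k = W_k @ W_k.T + s2_k * np.eye(d)   # low-rank plus isotropic noise
        total += pi_k * multivariate_normal.pdf(x, mean=mu_k, cov=C_k)
    return total
```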

Cited by 2 publications (5 citation statements) | References 8 publications

“…This is the case of [9,11], unlike [6,10], which require multiple accesses to the X_i because of their iterative optimization scheme. Temporary access to data is frequently considered in online approaches like [4] to handle stream data under short-time ergodicity and stationarity assumptions.…”
Section: Multiple Passes Over the Data vs. Model Aggregation
confidence: 99%
“…In [11], the authors propose an operator to aggregate Mixtures of Probabilistic PCA models (MPPCA, [12]) in a maximum-likelihood sense, without resorting to the original data used to train the models. Multiple models can thus be trained in a first phase, and aggregated in a second phase.…”
Section: Multiple Passes Over the Data vs. Model Aggregation
confidence: 99%
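To make the two-phase pattern concrete, here is a hypothetical sketch of the parameter-only access it implies. The naive union below (concatenating components and reweighting by local training-set sizes) only illustrates that the original data is never touched; the aggregation operator of [11] instead produces a compact model via Bayesian estimation, which this sketch does not attempt.

```python
# Hypothetical two-phase pattern: local MPPCA models are trained
# independently, then combined using their parameters only. This naive
# union is a baseline illustration, not the cited aggregation operator.
from dataclasses import dataclass
import numpy as np

@dataclass
class MPPCA:                 # illustrative container, not the paper's API
    weights: np.ndarray      # (K,) mixture weights
    means: np.ndarray        # (K, d) component means
    factors: np.ndarray      # (K, d, q) factor matrices
    sigma2s: np.ndarray      # (K,) isotropic noise variances
    n_samples: int           # size of the local training set

def naive_union(models):
    """Concatenate components, reweighting by local training-set sizes."""
    n_total = sum(m.n_samples for m in models)
    return MPPCA(
        weights=np.concatenate([m.weights * m.n_samples / n_total for m in models]),
        means=np.concatenate([m.means for m in models]),
        factors=np.concatenate([m.factors for m in models]),
        sigma2s=np.concatenate([m.sigma2s for m in models]),
        n_samples=n_total,
    )
```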
“…The algorithm starts with the initialization of the model parameters. All component weights ω_k are initialized to 1/K, the component means µ are scattered randomly in the data domain, and the factor matrices Λ are set with random orthogonal vectors.…”
Section: EM Algorithm for Maximum Likelihood Solution
confidence: 99%
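A small sketch of the initialization step this excerpt describes: uniform weights 1/K, means drawn uniformly within the data's bounding box, and factor matrices with random orthonormal columns. The helper name and the choice of QR for orthonormalization are assumptions for illustration.

```python
# Sketch of the described MPPCA initialization (illustrative names):
# omega_k = 1/K, means scattered in the data domain, and factor matrices
# Lambda_k built from random orthonormal columns via reduced QR.
import numpy as np

def init_mppca_params(X, K, q, rng=np.random.default_rng(0)):
    n, d = X.shape
    omega = np.full(K, 1.0 / K)                 # uniform component weights
    lo, hi = X.min(axis=0), X.max(axis=0)
    mu = rng.uniform(lo, hi, size=(K, d))       # means scattered in data domain
    # random d x q matrices with orthonormal columns, one per component
    Lam = np.stack([np.linalg.qr(rng.standard_normal((d, q)))[0] for _ in range(K)])
    return omega, mu, Lam
```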
“…redundant with regard to a model estimated on the union of the data sets. In this section, we first show how the addition of input MPPCA models can be seen as the limit representation of a virtual data set [1]. This representation is then inserted into the model described in section 3, as a substitute for an ordinary data set.…”
Section: Aggregating Mixtures of PPCA
confidence: 99%
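The "virtual data set" view can be illustrated by actually sampling from an input MPPCA, as in the hedged sketch below. The cited construction works with the limit representation analytically; drawing a finite sample is only a stand-in showing what the mixture is treated as having generated. All names here are hypothetical.

```python
# Hedged sketch: draw a finite "virtual" sample from an input MPPCA, as a
# stand-in for the limit representation used analytically in the cited work.
# Assumes weights sum to 1 and factors[k] has shape (d, q).
import numpy as np

def draw_virtual_sample(weights, means, factors, sigma2s, n, rng=np.random.default_rng(0)):
    d, q = factors[0].shape
    counts = rng.multinomial(n, weights)        # samples drawn per component
    chunks = []
    for k, n_k in enumerate(counts):
        z = rng.standard_normal((n_k, q))       # latent low-dimensional coordinates
        eps = np.sqrt(sigma2s[k]) * rng.standard_normal((n_k, d))
        chunks.append(means[k] + z @ factors[k].T + eps)
    return np.concatenate(chunks)
```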