2009
DOI: 10.1109/tnn.2009.2025888
Probabilistic PCA Self-Organizing Maps

Abstract: In this paper, we present a probabilistic neural model which extends Kohonen's self-organizing map (SOM) by performing a probabilistic principal component analysis (PPCA) at each neuron. Several SOMs have been proposed in the literature to capture the local principal subspaces, but our approach offers a probabilistic model while keeping a low complexity with respect to the dimensionality of the input space. This makes it possible to process very high-dimensional data and obtain reliable estimates of the probability densities which a…
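By way of illustration only (this is not the authors' code), the kind of per-neuron PPCA evaluation the abstract describes can be sketched as follows, assuming the standard PPCA marginal p(t) = N(t | mu, W W^T + sigma^2 I) of Tipping and Bishop; the function names, the argument layout, and the winner-selection rule below are illustrative assumptions.

import numpy as np

def ppca_log_density(t, mu, W, sigma2):
    # Log of N(t | mu, C) with C = W W^T + sigma2 * I, evaluated in
    # O(d q^2) via the Woodbury identity and the matrix determinant lemma,
    # which is what keeps the cost low in the input dimension d.
    d, q = W.shape
    diff = t - mu
    M = W.T @ W + sigma2 * np.eye(q)                      # q x q
    M_inv = np.linalg.inv(M)
    # Woodbury: C^{-1} = (I - W M^{-1} W^T) / sigma2
    maha = (diff @ diff - diff @ W @ M_inv @ (W.T @ diff)) / sigma2
    # Determinant lemma: log|C| = (d - q) * log(sigma2) + log|M|
    logdet = (d - q) * np.log(sigma2) + np.linalg.slogdet(M)[1]
    return -0.5 * (d * np.log(2.0 * np.pi) + logdet + maha)

def best_matching_unit(t, units):
    # units: list of (mu, W, sigma2) tuples, one per SOM neuron (illustrative).
    scores = [ppca_log_density(t, mu, W, s2) for (mu, W, s2) in units]
    return int(np.argmax(scores))

For instance, with units = [(np.zeros(50), np.random.randn(50, 3), 0.1)] and t = np.random.randn(50), best_matching_unit(t, units) returns 0; the cost of each evaluation grows only linearly in the input dimension for a fixed latent dimension q.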

Cited by 31 publications (9 citation statements)
References 57 publications
“…can also be seen in [17], which proposed an online PPCA. Since all the parameters can be obtained by summing information from each data point in the summing variant, we can develop a DEM for the MPPCA model.…”
Section: A Summing Variant of EM in MPPCA
confidence: 90%
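For reference, the "summing" structure alluded to above is visible in the standard batch EM updates for a mixture of PPCA (Tipping and Bishop's MPPCA; the notation below is generic and not taken from the cited derivation):

\[
\pi_i = \frac{1}{N}\sum_{n=1}^{N} R_{ni}, \qquad
\mu_i = \frac{\sum_{n=1}^{N} R_{ni}\, t_n}{\sum_{n=1}^{N} R_{ni}}, \qquad
R_{ni} = p(i \mid t_n).
\]

Since every update depends on the data only through such sums over the samples, partial sums can be accumulated at each node and then combined, which is what makes a distributed EM (DEM) for the MPPCA model possible.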
“…The basic idea of DEM is that in node $m$, $Q_m(\theta, \theta^t)$ will be increased or kept unchanged (when convergence is reached), while all the other parts of the summation $Q_i(\theta, \theta^t)$, $i \neq m$, remain fixed [17]. Concretely, the expectation of the latent variable is computed first, together with the conditional second moment:…”
Section: A Summing Variant of EM in MPPCA
confidence: 99%
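The latent-variable moments mentioned in this excerpt are presumably the standard PPCA E-step quantities (Tipping and Bishop); in generic notation, with $M = W^\top W + \sigma^2 I$,

\[
\langle x_n \rangle = M^{-1} W^\top (t_n - \mu), \qquad
\langle x_n x_n^\top \rangle = \sigma^2 M^{-1} + \langle x_n \rangle \langle x_n \rangle^\top .
\]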
“…3) Pattern Recognition: Our last set of experiments is devoted to the well-known problem of handwritten digit recognition, which is also addressed with the help of self-organizing models [7], [43], [50], [67], [69]. We selected a standard benchmark, namely the MNIST handwritten digit database [35].…”
Section: B Real Data
confidence: 99%
“…Let $\theta_i = (\pi_i, \mu_i, C_i)$ be a vector comprising the parameters for mixture component $i$. The following derivation relies on the methodology proposed in [41] and [42]. Let $\varphi(\theta_i, t)$ be an arbitrary function of $\theta_i$ and the input sample $t$. Then we define the weighted mean of $\varphi(\theta_i, t)$ as…”
Section: Appendix Stochastic Approximation
confidence: 99%
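A common recursive form for such a weighted mean in stochastic approximation (a generic Robbins-Monro style update, offered here only as an illustration and not necessarily the exact definition used in the cited appendix) is

\[
\overline{\varphi}_n(\theta_i) = (1 - \epsilon_n)\,\overline{\varphi}_{n-1}(\theta_i) + \epsilon_n\, \varphi(\theta_i, t_n),
\]

with step sizes satisfying $\sum_n \epsilon_n = \infty$ and $\sum_n \epsilon_n^2 < \infty$, so that the running mean converges while remaining responsive to new samples.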