2009
DOI: 10.1142/s0129065709001860

Dynamic Competitive Probabilistic Principal Components Analysis

Abstract: We present a new neural model which extends the classical competitive learning (CL) by performing a Probabilistic Principal Components Analysis (PPCA) at each neuron. The model also has the ability to learn the number of basis vectors required to represent the principal directions of each cluster, so it overcomes a drawback of most local PCA models, where the dimensionality of a cluster must be fixed a priori. Experimental results are presented to show the performance of the network with multispectral image da…
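The building block the abstract refers to is classical PPCA (Tipping & Bishop, 1999), fitted locally at each neuron. As a point of reference, here is the generic closed-form maximum-likelihood PPCA fit — a sketch of the standard technique only, not the paper's dynamic competitive model; the function name and dimensions are illustrative:

```python
import numpy as np

def ppca_fit(X, q):
    """Closed-form ML estimate for PPCA (Tipping & Bishop, 1999).

    Returns the factor loading matrix W (d x q) and the isotropic
    noise variance sigma^2, estimated from the sample covariance.
    """
    cov = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    vals, vecs = vals[::-1], vecs[:, ::-1]   # sort eigenpairs in descending order
    # Noise variance: mean of the discarded (smallest) eigenvalues.
    sigma2 = vals[q:].mean()
    # ML loadings: top-q eigenvectors scaled by the excess variance.
    W = vecs[:, :q] * np.sqrt(np.maximum(vals[:q] - sigma2, 0.0))
    return W, sigma2
```

With this parameterization the model covariance W Wᵀ + σ²I reproduces the top-q eigenvalues of the sample covariance exactly, which is the property a per-neuron PPCA exploits.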

Cited by 15 publications (7 citation statements)
References 19 publications
“…The Probabilistic Neural Network (PNN) has been used in many applications as a simple and efficient classifier (Lopez-Rubio and Ortiz-de-Lazcano-Lobato, 2009; Adeli and Panakkat, 2009). PNN assigns the test data to the class with the maximum likelihood compared with the other classes.…”
Section: Introduction
confidence: 99%
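The PNN decision rule described in this citation statement — assign a test point to the class whose likelihood is highest — can be sketched with a Gaussian Parzen-window density per class. This is a minimal illustration of the rule, not any cited paper's implementation; the function name and the smoothing parameter `sigma` are assumptions:

```python
import numpy as np

def pnn_classify(X_train, y_train, x, sigma=0.5):
    """Assign x to the class with the highest Parzen-window
    likelihood estimate (the basic PNN decision rule)."""
    classes = np.unique(y_train)
    likelihoods = []
    for c in classes:
        Xc = X_train[y_train == c]
        # Gaussian-kernel estimate of p(x | class c), averaged over
        # the training patterns of that class.
        d2 = np.sum((Xc - x) ** 2, axis=1)
        likelihoods.append(np.mean(np.exp(-d2 / (2.0 * sigma ** 2))))
    return classes[int(np.argmax(likelihoods))]
```

Each class contributes one kernel per training pattern, so the classifier needs no iterative training — which is what makes PNN "simple and efficient" in the sense quoted above.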
“…Using PCA, we reduce the dimensionality to 500, which for this problem typically retains around 95% of the variance. PCA has been successfully applied in a variety of domains [20][21][22], and a large number of biologically plausible learning rules can be used to perform it [23,24]. For simplicity, and due to the high dimensionality of the population response, we use the memory-efficient iterative implicitly restarted Lanczos method [25] that is provided by Matlab's eigs function.…”
Section: Decorrelation and Dimensionality Reduction Stage
confidence: 99%
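The approach quoted above — computing only the leading principal components with an implicitly restarted Lanczos solver rather than a full eigendecomposition — can be sketched in Python with `scipy.sparse.linalg.eigsh`, which plays the same role as Matlab's `eigs`. The function name and dimensions are illustrative, not taken from the cited work:

```python
import numpy as np
from scipy.sparse.linalg import eigsh  # implicitly restarted Lanczos, like Matlab's eigs

def pca_lanczos(X, k):
    """Project X onto its top-k principal components using a Lanczos
    eigensolver on the sample covariance, and report the fraction of
    total variance those components retain."""
    Xc = X - X.mean(axis=0)                    # center the data
    cov = (Xc.T @ Xc) / (len(Xc) - 1)          # sample covariance matrix
    vals, vecs = eigsh(cov, k=k, which='LM')   # k largest-magnitude eigenpairs
    order = np.argsort(vals)[::-1]             # eigsh does not guarantee ordering
    vals, vecs = vals[order], vecs[:, order]
    explained = vals.sum() / np.trace(cov)     # e.g. ~0.95 in the quoted setting
    return Xc @ vecs, explained
```

For very high-dimensional responses the covariance matrix itself may be too large to form; `eigsh` also accepts a `LinearOperator`, so only matrix-vector products with the (implicitly centered) data are needed — which is the memory advantage the quoted passage relies on.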
“…In the PCA approach, much of the information in a dataset is placed into a reduced-dimension data structure by projecting the entire dataset onto a sub-space generated by an orthonormal axes system (Ye & Li, 2004). The optimal axes system can be evaluated using Singular Value Decomposition (SVD) (Golub & Van Loan, 1996; Wu, Warwick, Ma, Gasson, Burgess, Pan, & Aziz, 2010; López-Rubio & Lazcano-Lobato, 2009; Tipping & Bishop, 1999a, 1999b). The reduced-dimension data structure is chosen so that important features of the data are captured with low loss of information (Ye & Li, 2004).…”
Section: Data Dimensionality Reduction With Principal Component Analysis
confidence: 99%
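The SVD route to PCA described in this last statement — projecting the centered data onto the orthonormal axes given by the top right singular vectors — is a one-liner in numpy. A minimal sketch, with an illustrative function name; the returned component matrix lets the projection be inverted to check how much information was lost:

```python
import numpy as np

def svd_project(X, k):
    """Project centered X onto the orthonormal sub-space spanned by
    its top-k right singular vectors (the SVD route to PCA)."""
    Xc = X - X.mean(axis=0)
    # Rows of Vt are the orthonormal principal axes, ordered by
    # singular value (i.e. by the variance they capture).
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]
```

When the data actually lie in a k-dimensional sub-space, `Z @ V` reconstructs the centered data exactly; otherwise the reconstruction error is the "low loss of information" the quoted passage refers to.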