1994
DOI: 10.1016/0893-6080(94)90060-4

Representation and separation of signals using nonlinear PCA type learning


Cited by 328 publications (151 citation statements) · References 15 publications
“…This enables the use of non-linear generative models, such as the Helmholtz machine for binary stochastic systems and non-linear PCA for parametric deterministic models (e.g. Dong & McAvoy, 1996;Friston et al, 2000;Karhunen & Joutsensalo, 1994;Kramer, 1991;Taleb & Jutten, 1997). The latter schemes typically employ a 'bottleneck' architecture that forces the inputs through a small number of nodes.…”
Section: Non-invertible Models
confidence: 99%
“…This enables the use of nonlinear generative models, such as nonlinear PCA (e.g. Kramer, 1991;Karhunen and Joutsensalo, 1994;Dong and McAvoy, 1996;Taleb and Jutten, 1997). These schemes typically employ a 'bottleneck' architecture that forces the inputs through a small number of nodes.…”
Section: Information Theory
confidence: 99%
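The 'bottleneck' architecture mentioned in the statements above can be illustrated with a minimal sketch (not the cited authors' implementation): an autoencoder whose inputs are forced through a small number of hidden nodes, the standard neural-network realization of nonlinear PCA. All layer sizes, the learning rate, and the toy data below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 8, 2          # bottleneck: 2 hidden nodes for 8-dim inputs
W_enc = rng.normal(scale=0.1, size=(n_hid, n_in))
W_dec = rng.normal(scale=0.1, size=(n_in, n_hid))
lr = 0.05

# Toy data: 8-dim observations lying near a 2-dim nonlinear manifold.
t = rng.uniform(-1, 1, size=(200, 2))
X = np.tanh(t @ rng.normal(size=(2, n_in)))
X -= X.mean(axis=0)         # zero-mean preprocessing

for _ in range(500):
    H = np.tanh(X @ W_enc.T)        # encode through the bottleneck
    X_hat = H @ W_dec.T             # linear decode
    E = X_hat - X                   # reconstruction error
    # Full-batch gradients of the mean squared reconstruction error
    g_dec = E.T @ H / len(X)
    g_hid = (E @ W_dec) * (1 - H**2)
    g_enc = g_hid.T @ X / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

mse = float(np.mean(E**2))
```

Because the hidden layer has fewer nodes than the input, the network can only reconstruct its inputs by learning a compact nonlinear representation of them.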
“…- Preprocessing: we first calculate and subtract the average pattern to obtain a zero-mean process (Karhunen & Joutsensalo, 1994). - Neural computing: the fundamental learning parameters are: i) the initial weight matrix; ii) the number of input neurons L and the number of output neurons p, which is the number of principal eigenvectors that we need, and is therefore equal to twice the number of signal periodicities (for real signals); iii) α, the nonlinear learning function parameter; iv) µ, the learning rate.…”
Section: Autocorrelation Matrix Based Analysis
confidence: 99%
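The preprocessing step and learning parameters listed in the statement above (zero-mean data, an initial weight matrix, L input and p output neurons, the nonlinearity parameter α, and the learning rate µ) can be sketched as a nonlinear PCA subspace learning rule of the Karhunen–Joutsensalo type, here with g(y) = tanh(αy). The toy mixtures and all parameter values are illustrative assumptions, not values from the cited work.

```python
import numpy as np

rng = np.random.default_rng(1)
L, p = 6, 2                 # number of input / output neurons
alpha, mu = 1.0, 0.01       # nonlinearity parameter and learning rate

# Toy observations: p sources mixed into L channels, plus noise.
S = rng.uniform(-1, 1, size=(1000, p))
A = rng.normal(size=(p, L))
X = S @ A + 0.05 * rng.normal(size=(1000, L))
X -= X.mean(axis=0)         # subtract the average pattern (zero mean)

W = rng.normal(scale=0.1, size=(p, L))   # initial weight matrix
for x in X:
    y = W @ x                            # outputs of the p neurons
    g = np.tanh(alpha * y)               # nonlinear learning function
    # Nonlinear PCA subspace rule: W += mu * g(y) (x - W^T g(y))^T
    W += mu * np.outer(g, x - W.T @ g)
```

After one pass over the data, the rows of W approximately span the dominant signal subspace; more passes, or an annealed µ, would typically be used in practice.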