2005
DOI: 10.1007/s11063-005-5265-0
On the Effect of the Form of the Posterior Approximation in Variational Learning of ICA Models

Abstract: We show that the choice of posterior approximation of sources affects the solution found in Bayesian variational learning of linear independent component analysis models. Assuming the sources to be independent a posteriori favours a solution which has an orthogonal mixing matrix. A linear dynamic model which uses second-order statistics is considered but the analysis extends to nonlinear mixtures and non-Gaussian source models as well.
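The abstract's claim can be illustrated with a minimal numerical sketch (not from the paper; all function names here are hypothetical). For a linear Gaussian model x = As + n with unit-variance sources, the exact source posterior covariance is (I + AᵀA/σ²)⁻¹, which is diagonal exactly when the columns of A are orthogonal. A factorial (independent) posterior approximation therefore fits perfectly only for orthogonal mixing matrices, and variational learning penalizes the mismatch otherwise:

```python
import numpy as np

def source_posterior_cov(A, noise_var=0.1):
    # For x = A s + n, with s ~ N(0, I) and n ~ N(0, noise_var * I),
    # the exact posterior covariance of s given x is
    # (I + A^T A / noise_var)^{-1}: diagonal iff A^T A is diagonal.
    return np.linalg.inv(np.eye(A.shape[1]) + A.T @ A / noise_var)

def kl_to_factorial(cov):
    # KL divergence from N(0, cov) to its closest factorial (diagonal)
    # Gaussian N(0, diag(cov)); zero exactly when cov is already diagonal.
    d = np.diag(cov)
    return 0.5 * (np.sum(np.log(d)) - np.linalg.slogdet(cov)[1])

A_orth = np.eye(2)                              # orthogonal columns
A_skew = np.array([[1.0, 0.9],
                   [0.0, 0.435]])               # correlated columns

print(kl_to_factorial(source_posterior_cov(A_orth)))  # 0: factorial exact
print(kl_to_factorial(source_posterior_cov(A_skew)))  # > 0: penalized
```

The KL term printed here is exactly the extra cost a factorial variational approximation pays for a non-orthogonal mixing matrix, which is why the optimum is biased toward orthogonal solutions.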

Cited by 15 publications (17 citation statements) · References 13 publications
“…The main reason for using this suboptimal process is practical: even with a non-Gaussian source prior the factorial source posterior approximation would not allow determining the proper rotation of the independent sources [34]. This could be corrected by more advanced approximation techniques [5].…”
Section: Discussion
confidence: 99%
“…The disadvantage of HNFA is that the approximation of the posterior density is farther away from the true posterior density. This may occasionally lead to inferior performance [34]. The linear shortcut is included in the model to partially help this, because representing nearly linear mappings would otherwise be significantly more difficult than with the MLP model.…”
Section: S(t) H(t) X(t)
confidence: 99%
“…To elaborate more, regardless of whatever the priors are, stochastic variables have independent representational aptitude and thus can add information at the time of reconstruction. In stark contrast, deterministic variables cannot add any information at reconstruction time [16].…”
Section: B Stochastic Latent Variable Models and Deterministic Autoe…
confidence: 99%
“…For instance Ilin and Valpola (2003) have shown that it can compromise the quality of separation in ICA. This finding is relevant since we are using similar types of latent variable models with linear mappings.…”
Section: Consequences of the Posterior Approximation
confidence: 99%