2007
DOI: 10.1016/j.csda.2007.01.011

On Bayesian principal component analysis

Cited by 38 publications (33 citation statements)
References 10 publications
“…Tipping and Bishop [21] showed that the result was equivalent to standard PCA under certain choices of hyperparameters, and generalized PPCA to incorporate priors on the weights. Šmídl and Quinn [19] extended this work to a full Bayesian treatment which included priors on both components and weights, and considered the use of appropriate priors on the components to enforce orthogonality. Beyond standard PCA, several authors have proposed additional priors to encourage sparsity or non-negativity on the components [3,18].…”
Section: Background and Related Work
confidence: 99%
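The excerpt above traces the lineage from Tipping and Bishop's probabilistic PCA to fully Bayesian variants. For orientation, here is a minimal numpy sketch of the closed-form maximum-likelihood PPCA solution of Tipping and Bishop [21]; the function name and toy dimensions are ours for illustration, and this is the deterministic ML baseline, not the variational Bayesian treatment of Šmídl and Quinn [19].

```python
import numpy as np

def ppca_ml(X, q):
    """Closed-form maximum-likelihood PPCA fit (Tipping & Bishop).

    X: (n, d) data matrix; q: number of latent dimensions, q < d.
    Returns the (d, q) loading matrix W and the isotropic noise variance.
    """
    n, d = X.shape
    Xc = X - X.mean(axis=0)                     # center the data
    S = Xc.T @ Xc / n                           # sample covariance, (d, d)
    evals, evecs = np.linalg.eigh(S)            # eigenvalues ascending
    evals, evecs = evals[::-1], evecs[:, ::-1]  # sort descending
    sigma2 = evals[q:].mean()                   # ML noise = mean discarded eigenvalue
    # W_ML = U_q (Lambda_q - sigma2 I)^(1/2), unique up to rotation
    W = evecs[:, :q] * np.sqrt(np.maximum(evals[:q] - sigma2, 0.0))
    return W, sigma2

# toy usage: 10-dimensional data with 2-dimensional latent structure
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2)) @ rng.standard_normal((2, 10)) \
    + 0.1 * rng.standard_normal((500, 10))
W, sigma2 = ppca_ml(X, q=2)
```

Placing priors on the components and weights, as in Šmídl and Quinn's full Bayesian treatment, replaces this point estimate with posterior inference; that is where the orthogonality, sparsity, and non-negativity priors mentioned in the excerpt enter.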
“…We refer to prior work in the Bayesian literature evaluated on data with dimensionality < O(10³), e.g. [19] used data of dimensionality 35, while deterministic approaches are routinely applied to > O(10⁴) data. Our proposal pushes the scalability of probabilistic methods on par with their deterministic counterparts.…”
Section: Introduction
confidence: 99%
“…Proof: Since the covariance of vec(Z) is I, the covariance of X̃ is given by (17), where Σ_r = RRᵀ and Σ_c = CCᵀ. Conversely, if the covariance matrix of X̃ is separable as in (17), there exist C and R such that Σ_c = CCᵀ and Σ_r = RRᵀ. Then (16) holds with Z = C⁻¹ X̃ R⁻ᵀ.…”
Section: B. Bilinear Transformation and Separable Covariance
confidence: 99%
“…Note that a similar modeling assumption is also used in (13) for the PSOPCA model. Proposition 1: The use of the bilinear transformation (16) is equivalent to the assumption of a separable (Kronecker product) covariance matrix on X̃: cov(vec(X̃)) = Σ_r ⊗ Σ_c (17), where Σ_r ∈ ℝ^(d_r×d_r) and Σ_c ∈ ℝ^(d_c×d_c) are the row and column covariance matrices of X̃, respectively. Proof: Since the covariance of vec(Z) is I, the covariance of X̃ is given by (17), where Σ_r = RRᵀ and Σ_c = CCᵀ.…”
Section: B. Bilinear Transformation and Separable Covariance
confidence: 99%
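The algebra behind Proposition 1 in the excerpt above is the standard vec/Kronecker identity vec(C Z Rᵀ) = (R ⊗ C) vec(Z). Here is a short numpy sketch checking that identity and the resulting separable covariance; the dimensions and seed are arbitrary, and this is our verification aid, not code from the citing paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d_c, d_r = 4, 3                           # column/row dimensions (arbitrary)

C = rng.standard_normal((d_c, d_c))       # column transform
R = rng.standard_normal((d_r, d_r))       # row transform
Z = rng.standard_normal((d_c, d_r))       # latent matrix with cov(vec(Z)) = I

vec = lambda M: M.flatten(order="F")      # column-stacking vectorization

# identity behind (16): vec(C Z R^T) = (R kron C) vec(Z)
X = C @ Z @ R.T
assert np.allclose(vec(X), np.kron(R, C) @ vec(Z))

# hence cov(vec(X)) = (R kron C)(R kron C)^T = (R R^T) kron (C C^T),
# i.e. the separable covariance Sigma_r kron Sigma_c of equation (17)
Sigma_r, Sigma_c = R @ R.T, C @ C.T
K = np.kron(R, C)
assert np.allclose(K @ K.T, np.kron(Sigma_r, Sigma_c))

# converse direction of the proof: invertible C, R recover the latent Z
assert np.allclose(Z, np.linalg.inv(C) @ X @ np.linalg.inv(R).T)
```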