2017
DOI: 10.5351/csam.2017.24.2.143

Probabilistic penalized principal component analysis

Abstract: A variable selection method based on probabilistic principal component analysis (PCA) using a penalized likelihood method is proposed. The proposed method is a two-step variable reduction method. The first step uses the probabilistic principal component idea to identify principal components. A penalty function is then used to identify important variables in each component. We then build a model on the original data space instead of on the rotated data space obtained through latent variables (principal compon…
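The core idea of penalizing loadings so that unimportant variables drop out of a component can be illustrated with a minimal sparse-PCA sketch. This is an analogous thresholded power method with an L1 penalty, not the paper's probabilistic EM estimator; the function names and the toy data are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, lam):
    # Elementwise soft-thresholding: the proximal operator of the L1 penalty.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def sparse_pc(X, lam=0.1, n_iter=200):
    # One penalized principal component via a thresholded power method
    # (an illustrative analogue of penalized loadings, not the paper's method).
    Xc = X - X.mean(axis=0)             # center the data
    S = Xc.T @ Xc / len(Xc)             # sample covariance matrix
    w = np.ones(S.shape[0]) / np.sqrt(S.shape[0])
    for _ in range(n_iter):
        w = soft_threshold(S @ w, lam)  # power step + L1 shrinkage
        norm = np.linalg.norm(w)
        if norm == 0.0:
            break
        w /= norm                       # keep the loading vector unit-length
    return w

rng = np.random.default_rng(0)
# Two informative, correlated variables plus three pure-noise variables.
z = rng.normal(size=(500, 1))
X = np.hstack([z + 0.1 * rng.normal(size=(500, 2)),
               rng.normal(size=(500, 3))])
w = sparse_pc(X, lam=0.2)
# Typically only the two informative variables keep nonzero loadings.
print(np.nonzero(np.abs(w) > 1e-8)[0])
```

With the penalty set to zero this reduces to the ordinary power method for the leading eigenvector; increasing `lam` shrinks small loadings exactly to zero, which is what turns the component into a variable selection device.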

Cited by 2 publications (2 citation statements)
References 25 publications (26 reference statements)
“…The estimation can deal with missing data, while still being computationally efficient (Tipping & Bishop, 1999; Zheng et al, 2016). Moreover, covariates can be included in the model (Chiquet, Mariadassou, & Robin, 2017), and penalized versions of the likelihood can be used to encourage sparsity or structured sparsity, for example, in a high‐dimensional framework (Guan & Dy, 2009; Park, Wang, & Mo, 2017; Zeng, Liu, Huang, & Liang, 2017). Finally, the probabilistic formulation is very versatile, and turns several complex settings into natural extensions of the simple Gaussian ones mentioned above.…”
Section: Introduction
confidence: 99%
“…The estimation can be computationally more efficient, and can deal with missing data (Tipping and Bishop, 1999; Zheng et al, 2016). Moreover, covariates can be included in the model (Chiquet et al, 2017), and penalized versions of the likelihood can be used to encourage sparsity or structured sparsity, in particular in a high-dimensional framework (Guan and Dy, 2009; Park et al, 2017; Zeng et al, 2017). Finally, the probabilistic formulation is very versatile, and turns several complex settings into natural extensions of the simple Gaussian ones mentioned above.…”
Section: Introduction
confidence: 99%