2016
DOI: 10.1080/01621459.2015.1100620
Fast Bayesian Factor Analysis via Automatic Rotations to Sparsity

Abstract: Rotational transformations have traditionally played a key role in enhancing the interpretability of factor analysis via post-hoc modifications of the factor model orientation. Regularization methods also serve this goal by prioritizing sparse loading matrices. In this work, we cross-fertilize these two paradigms within a unifying Bayesian framework. Our approach deploys intermediate factor rotations throughout the learning process, greatly enhancing the effectiveness of sparsity-inducing priors. Th…
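The abstract sketches the core idea: interleave factor rotations with sparsity-inducing priors during estimation, rather than rotating only after the fact. The Python sketch below is only a toy analogue of that idea, with varimax standing in for the paper's automatic rotations and soft-thresholding standing in for its sparsity-inducing (spike-and-slab) prior. All names (`varimax`, `sparse_fa_sketch`, `lam`) are illustrative, not the authors' code.

```python
# Toy analogue (NOT the paper's algorithm): alternate an E-step for
# Gaussian factors, an explicit rotation of the loadings toward simple
# structure, and a soft-threshold standing in for the sparsity prior.
import numpy as np

def varimax(Phi, gamma=1.0, n_iter=100, tol=1e-6):
    """Classic varimax rotation of a p x k loading matrix."""
    p, k = Phi.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(n_iter):
        L = Phi @ R
        u, s, vh = np.linalg.svd(
            Phi.T @ (L**3 - (gamma / p) * L @ np.diag((L**2).sum(axis=0))))
        R = u @ vh
        d_new = s.sum()
        if d_new < d * (1 + tol):
            break
        d = d_new
    return Phi @ R

def sparse_fa_sketch(Y, k, lam=0.1, n_iter=50, seed=0):
    """EM-like loop with intermediate rotations; Y is n x p."""
    n, p = Y.shape
    rng = np.random.default_rng(seed)
    B = 0.1 * rng.normal(size=(p, k))            # loadings
    psi = np.ones(p)                             # idiosyncratic variances
    for _ in range(n_iter):
        # E-step: posterior moments of the factors given (B, psi)
        Prec = B.T @ (B / psi[:, None]) + np.eye(k)
        F = Y @ np.linalg.solve(Prec, (B / psi[:, None]).T).T   # n x k means
        EFF = F.T @ F + n * np.linalg.inv(Prec)                 # E[F'F]
        # M-step for the loadings, then rotate BEFORE thresholding,
        # so the rotation makes the sparsity step more effective
        B = np.linalg.solve(EFF, F.T @ Y).T
        B = varimax(B)
        B = np.sign(B) * np.maximum(np.abs(B) - lam, 0.0)
        psi = np.maximum(((Y - F @ B.T) ** 2).mean(axis=0), 1e-6)
    return B, psi
```

The rotate-then-threshold ordering inside the loop is the point of contact with the paper: orienting the loadings toward simple structure first makes a zero-forcing step far more effective than applying it in an arbitrary orientation.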

Cited by 92 publications (156 citation statements: 1 supporting, 155 mentioning, 0 contrasting)
References 39 publications
“…For comparison, we also implemented inference under the sparse latent factor model (SLFM; Ročková & George 2016) applied to { Z, Y }. SLFM assumes a sparse loading matrix and unstructured latent factors.…”
Section: Simulation Study (mentioning)
Confidence: 99%
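For context on this excerpt, here is a minimal sketch of data generated under a sparse-loading factor model with unstructured (independent standard normal) factors; the names `n`, `p`, `k`, `B`, `F` are generic, not the notation of the cited paper:

```python
import numpy as np

rng = np.random.default_rng(42)
n, p, k = 200, 30, 3

B = np.zeros((p, k))                      # sparse loading matrix
for j in range(k):                        # each factor loads on one block of 10 variables
    B[10 * j:10 * (j + 1), j] = rng.normal(1.0, 0.2, size=10)

F = rng.normal(size=(n, k))               # unstructured latent factors
Y = F @ B.T + rng.normal(scale=0.5, size=(n, p))
print(Y.shape, (B == 0).mean())           # (200, 30) 0.666...: two thirds of the loadings are zero
```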
“…Latent factor models assume that the variables are continuous and follow independent normal distributions centered on the latent factors multiplied by factor loadings. With sparsity constraints imposed (Bhattacharya & Dunson, 2011; Ročková & George, 2016), latent factor models can potentially be adopted for the discovery of latent causes. However, the assumptions of normality and linear structure are often violated in practice, and it is not straightforward to incorporate known diagnostic information into latent factor models.…”
Section: Introduction (mentioning)
Confidence: 99%
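The quoted description is the standard Gaussian factor model: conditionally x | η ~ N(Λη, Ψ) with Ψ diagonal, hence marginally x ~ N(0, ΛΛᵀ + Ψ). A minimal likelihood sketch under that reading (generic symbols, not the cited papers' notation):

```python
import numpy as np
from scipy.stats import multivariate_normal

def marginal_loglik(X, Lambda, psi):
    """Log-likelihood of the rows of X under x ~ N(0, Lambda Lambda' + diag(psi))."""
    Sigma = Lambda @ Lambda.T + np.diag(psi)
    return multivariate_normal(np.zeros(X.shape[1]), Sigma).logpdf(X).sum()
```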
“…Importantly, the latter authors assume that every observed variable has a nonzero loading on only one latent trait; this assumption is restrictive and results in a model that, using the definitions in this paper, would be called “confirmatory” instead of “exploratory.” Finally, Ročková and George (2014) propose an alternative method whereby m is set to a large value and “spike-and-slab” priors are used to enforce simplicity of the loading matrix. These prior distributions fix weak loadings to zero, so that smaller values of m are obtained by fixing to zero all loadings associated with a latent trait.…”
Section: Discussion (mentioning)
Confidence: 99%
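The pruning logic described in this excerpt can be made concrete with a small sketch: loadings estimated to lie in the spike are set to zero, and any latent trait whose entire column vanishes is dropped, reducing the effective m. The hard threshold below is a simple stand-in, not the cited method's rule:

```python
import numpy as np

def prune_traits(Lambda_hat, spike_threshold=0.05):
    """Zero out weak loadings; drop traits with all-zero columns."""
    L = np.where(np.abs(Lambda_hat) > spike_threshold, Lambda_hat, 0.0)
    keep = np.any(L != 0.0, axis=0)       # traits with at least one surviving loading
    return L[:, keep]                     # effective m = keep.sum()
```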
“…One of the most commonly used assumptions to remedy this issue is that the precision matrix is sparse, i.e., a large majority of its entries are zero (Dempster, 1972), which turns out to be quite useful in practice in the aforementioned GGM owing to its interpretability. Another possibility is to assume a sparse structure on the covariance matrix through, for example, a sparse factor model (Carvalho et al., 2008; Fan et al., 2008, 2011; Bühlmann and Van De Geer, 2011; Pourahmadi, 2013; Ročková and George, 2016a), obtain a sparse covariance matrix estimator, and then invert it to estimate the precision matrix. However, the precision matrix estimator obtained from this strategy is not guaranteed to be sparse, which is important for interpretability in our context.…”
Section: Introduction (mentioning)
Confidence: 99%
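The last point of this excerpt, that inverting a sparse covariance estimator need not yield a sparse precision matrix, is easy to verify numerically. The example below uses a generic sparse (tridiagonal) covariance rather than one fitted by a factor model:

```python
import numpy as np

p = 6
Sigma = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))  # tridiagonal, positive definite
Theta = np.linalg.inv(Sigma)
print((Sigma == 0).sum())              # 20 exact zeros in the covariance
print((np.abs(Theta) < 1e-10).sum())   # 0: every entry of the precision matrix is nonzero
```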
“…In this paper, we propose a new Bayesian approach for estimation and structure recovery for GGMs. Specifically, to achieve adaptive shrinkage, we model the off-diagonal elements of Θ using a continuous spike-and-slab prior with a mixture of two Laplace distributions, known as the spike-and-slab Lasso prior in Ročková (2016), Ročková and George (2016a), and Ročková and George (2016b). Continuous spike-and-slab priors are commonly used for high-dimensional regression (George and McCulloch, 1993; Ishwaran and Rao, 2005; Narisetty and He, 2014), and a Gibbs sampling algorithm is often used for posterior computation.…”
Section: Introduction (mentioning)
Confidence: 99%
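The spike-and-slab Lasso prior named here is a continuous mixture of two Laplace densities: a sharp spike (large rate) and a diffuse slab (small rate). A minimal sketch of the density follows; the mixing weight `w` and the rates `lam0`, `lam1` are generic names:

```python
import numpy as np

def laplace_pdf(theta, lam):
    """Laplace density with rate lam: (lam / 2) * exp(-lam * |theta|)."""
    return 0.5 * lam * np.exp(-lam * np.abs(theta))

def ssl_prior_pdf(theta, w=0.5, lam0=20.0, lam1=1.0):
    """Spike-and-slab Lasso density: (1 - w) * spike + w * slab."""
    return (1 - w) * laplace_pdf(theta, lam0) + w * laplace_pdf(theta, lam1)

print(ssl_prior_pdf(0.0), ssl_prior_pdf(2.0))  # tall near zero, heavier tails away from it
```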