2011
DOI: 10.1007/s11460-011-0150-2
Parameterizations make different model selections: Empirical findings from factor analysis

Abstract: How parameterizations affect model selection performance is an issue that has been ignored or seldom studied since traditional model selection criteria, such as Akaike's information criterion (AIC), Schwarz's Bayesian information criterion (BIC), difference of negative log-likelihood (DNLL), etc., perform equivalently on different parameterizations that have equivalent likelihood functions. For factor analysis (FA), in addition to one traditional model (shortly denoted by FA-a), it was previously found that th…
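The criteria named in the abstract can be made concrete with a small sketch. The following is not the paper's FA-a/FA-b comparison; it is a minimal illustration (all names ours) of selecting the number of factors with AIC and BIC under a probabilistic-PCA likelihood, i.e., factor analysis constrained to isotropic noise, for which Tipping and Bishop's closed-form maximum-likelihood solution makes the log-likelihood cheap to evaluate from the eigenvalues of the sample covariance.

```python
import numpy as np

def ppca_loglik(S, n, m):
    """Maximized log-likelihood of probabilistic PCA (FA with isotropic
    noise) with m factors, computed from the sample covariance S via the
    closed-form ML solution: the model covariance C = W W^T + sigma^2 I
    keeps the top-m eigenvalues of S and pools the rest into sigma^2."""
    d = S.shape[0]
    evals = np.sort(np.linalg.eigvalsh(S))[::-1]
    sigma2 = evals[m:].mean()                        # ML noise variance
    logdet_C = np.log(evals[:m]).sum() + (d - m) * np.log(sigma2)
    trace_term = m + evals[m:].sum() / sigma2        # tr(C^{-1} S) at the ML point
    return -0.5 * n * (d * np.log(2 * np.pi) + logdet_C + trace_term)

# Synthetic data with a known 2-factor structure (illustrative choice).
rng = np.random.default_rng(0)
n, d, true_m = 500, 8, 2
A = rng.normal(size=(d, true_m))
X = rng.normal(size=(n, true_m)) @ A.T + 0.5 * rng.normal(size=(n, d))
S = np.cov(X, rowvar=False, bias=True)

scores = {}
for m in range(1, d):
    ll = ppca_loglik(S, n, m)
    # Free parameters: loadings + one noise variance, minus the m(m-1)/2
    # rotational degrees of freedom of the loading matrix.
    k = d * m + 1 - m * (m - 1) // 2
    scores[m] = {"AIC": 2 * k - 2 * ll, "BIC": k * np.log(n) - 2 * ll}

best = min(scores, key=lambda m: scores[m]["BIC"])
print("BIC selects", best, "factors")
```

Because AIC and BIC depend on the model only through the maximized likelihood and the parameter count, two parameterizations with equivalent likelihoods score identically here; this is exactly the equivalence the paper's abstract points out, and the reason criteria-level comparisons cannot separate FA parameterizations on their own.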

Cited by 12 publications (10 citation statements) | References 24 publications
“…Beyond this procedure, there are numerous automated search algorithms that can determine the optimal number of latent variables and system of relations between them (e.g., Landsheer, 2010; Marcoulides & Ing, 2012; Shimizu et al., 2011; Spirtes, Glymour, Scheines, & Tillman, 2010; Tu & Xu, 2011; Xu, 2010, 2012; Zheng & Pavlou, 2010). These techniques will almost invariably return several models that provide a good fit to the data.…”
Section: Concerning Henseler et al.'s Claim That PLS-PM Is an Explora…
confidence: 99%
“…• Algorithms and applications: see the roadmaps in Figure 3 of Xu (2011) and Sect. 5 of Xu (2012a), plus recent applications in (Pang et al. 2013; Shi et al. 2011a,b,c, 2014; Tu and Xu 2011a; Tu et al. 2011, 2012a; Tu and Xu 2014; Wang et al. 2011).…”
Section: Results
confidence: 99%
“…Such a BYY harmony sparse learning comes from q(A|ρ), which takes a dual role to q(Y|φ). It differs from the existing sparse-learning studies (Shi et al. 2011a, 2014; Tu and Xu 2011a; Xu 2012b), which consider either q(A|ρ) in a long-tailed distribution with extensive computing cost, or q(A|ρ) in Equation 56 with the help of one additional q(ρ) (see Sect. III of Xu (2012b)). Of course, we may progress to consider a prior q(ρ) and also some priors about , , which will lead to another layer of integrals over q(ρ), , . Readers are referred to Sect. 2.3 in Xu (2011).…”
Section: First It Follows From the Second Line In Equation 55 With H…
confidence: 91%
“…(8) and (9), proper priors can be incorporated under the general guideline of BYY learning in [12]. Such efforts have been made on factor analysis in [16] and on the Gaussian mixture model in [17]. The prior term ln q(Θ|Ξ) in Eq.…”
Section: Priors Over Parameters Affect Model Selection
confidence: 99%
“…Such comparisons have been made on factor analysis in [16] and the Gaussian mixture model in [17], but not yet on BFA. We simplify the VB-ICA algorithm [18,19] to obtain a VB algorithm for BFA.…”
Section: Introduction
confidence: 99%