2010
DOI: 10.1007/s11222-010-9175-2

Extending mixtures of multivariate t-factor analyzers

Abstract: Model-based clustering typically involves the development of a family of mixture models and the imposition of these models upon data. The best member of the family is then chosen using some criterion and the associated parameter estimates lead to predicted group memberships, or clusterings. This paper describes the extension of the mixtures of multivariate t-factor analyzers model to include constraints on the degrees of freedom, the factor loadings, and the error variance matrices. The result is a family of s…
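For reference, the family described in the abstract extends the mixture of multivariate t-factor analyzers. A standard statement of that model's density (a sketch based on the usual formulation, not text quoted from the paper) is

$$
f(\mathbf{x}) = \sum_{g=1}^{G} \pi_g \, f_t\!\left(\mathbf{x} \mid \boldsymbol{\mu}_g,\; \boldsymbol{\Lambda}_g\boldsymbol{\Lambda}_g' + \boldsymbol{\Psi}_g,\; \nu_g\right),
$$

where $f_t$ denotes the multivariate t density, $\pi_g$ are mixing proportions, $\boldsymbol{\Lambda}_g$ are factor loading matrices, $\boldsymbol{\Psi}_g$ are diagonal error variance matrices, and $\nu_g$ are degrees of freedom; the constraints mentioned in the abstract are imposed on $\nu_g$, $\boldsymbol{\Lambda}_g$, and $\boldsymbol{\Psi}_g$.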

Cited by 109 publications (46 citation statements). References 41 publications (63 reference statements).
“…Among the related works, we may cite the robust versions [2,66] of the MFA models, which rely on t-distributions, and the GMMDR technique [89], which iteratively looks for the subspace containing the most relevant information for clustering. Let us finally notice that some subspace clustering methods have been modified to perform variable selection through ℓ₁ penalization.…”
Section: Discussion On Subspace Clustering Methods and Related Work
confidence: 99%
“…As such, the teigen package will scale variables to have mean 0 and variance 1 by default (see Section 3 for controlling this option). In addition, constraints are imposed on the degrees of freedom ν_g, following Andrews and McNicholas (2011a). Thus, taking all possible combinations of these constraints into consideration would result in a 28-model family (see Table 1), 24 of which were originally developed for the tEIGEN family (Andrews and McNicholas 2012).…”
Section: The Teigen Family
confidence: 99%
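As a small illustration of the default preprocessing this statement describes, the following is a minimal Python sketch of column-wise standardization; it is hypothetical and not taken from the teigen package (which is written in R).

```python
# Minimal sketch of the preprocessing described above: scale each variable to
# have mean 0 and variance 1 before model fitting (teigen's stated default).
import numpy as np

def standardize_columns(X):
    """Return a copy of X with each column scaled to mean 0 and variance 1."""
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
```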
“…The mixture of t-distributions model has only G more free parameters than the mixture of Gaussian distributions. Andrews and McNicholas (2011a) consider the option to constrain degrees of freedom to be equal across components, i.e., ν_g = ν, which can lead to improved classification performance by effectively allowing the borrowing of information across components to estimate the degrees of freedom. Andrews and McNicholas (2012) introduce a t-analogue of 12 members of the GPCM family of models by imposing the constraints in Table 1 on the component scale matrices Σ_g, while also allowing the constraint ν_g = ν.…”
Section: Mixtures Of Components With Varying Tailweight
confidence: 99%
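To make the ν_g = ν constraint discussed in this statement concrete, here is a minimal Python sketch (using scipy; it is not the implementation from the cited papers or the teigen package) of a multivariate-t mixture log-likelihood in which a single shared degrees-of-freedom value can be supplied for all components:

```python
# Minimal sketch of the log-likelihood of a G-component multivariate-t mixture.
# Passing np.full(G, nu) for `nus` corresponds to the constrained model
# nu_g = nu, which adds one degrees-of-freedom parameter instead of G of them.
import numpy as np
from scipy.stats import multivariate_t
from scipy.special import logsumexp

def mixture_t_loglik(X, pis, mus, Sigmas, nus):
    """Log-likelihood of a mixture of multivariate t distributions.

    X      : (n, p) data matrix
    pis    : (G,) mixing proportions summing to 1
    mus    : list of G location vectors, each (p,)
    Sigmas : list of G scale matrices, each (p, p)
    nus    : (G,) degrees of freedom (a constant vector gives nu_g = nu)
    """
    n, G = X.shape[0], len(pis)
    log_comp = np.empty((n, G))
    for g in range(G):
        dist = multivariate_t(loc=mus[g], shape=Sigmas[g], df=nus[g])
        log_comp[:, g] = np.log(pis[g]) + dist.logpdf(X)
    # Sum over observations of log sum_g pi_g f_t(x_i | mu_g, Sigma_g, nu_g)
    return float(logsumexp(log_comp, axis=1).sum())
```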