2015
DOI: 10.1016/j.stamet.2015.02.004

A macro-DAG structure based mixture model

Highlights:
• A two-level DAG structure is proposed for modeling multidimensional mixtures.
• Unsupervised classification is revisited for this two-level DAG structure.
• A dedicated EM algorithm, called EM-mDAG, is described.
• This algorithm favors the selection of a small number of classes.
• The method aids the semantic interpretation of the classes.

Abstract: In the context of unsupervised classification of multidimensional data, we revisit the classical mixture model in the case whe…
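The EM-mDAG algorithm itself is not detailed in this excerpt. As background, here is a minimal sketch of the classical EM algorithm for a Gaussian mixture, i.e. the baseline the abstract says is revisited. Everything in it (the name em_gmm, diagonal covariances, the initialization scheme) is an illustrative assumption, not the authors' method.

```python
# Minimal EM for a classical Gaussian mixture with diagonal covariances.
# Generic baseline sketch, NOT the paper's EM-mDAG algorithm; it only
# illustrates the mixture-model setting the abstract revisits.
import numpy as np

def em_gmm(X, K, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(K, 1.0 / K)                   # mixing proportions
    mu = X[rng.choice(n, K, replace=False)]    # means initialized from data
    var = np.ones((K, d))                      # diagonal variances
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] = P(class k | x_i)
        log_r = np.stack([
            np.log(pi[k])
            - 0.5 * np.sum(np.log(2 * np.pi * var[k]))
            - 0.5 * np.sum((X - mu[k]) ** 2 / var[k], axis=1)
            for k in range(K)
        ], axis=1)
        log_r -= log_r.max(axis=1, keepdims=True)   # stabilize before exp
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances from responsibilities
        Nk = r.sum(axis=0)
        pi = Nk / n
        mu = (r.T @ X) / Nk[:, None]
        var = np.stack([(r[:, k, None] * (X - mu[k]) ** 2).sum(0) / Nk[k]
                        for k in range(K)]) + 1e-6   # variance floor
    return pi, mu, var, r

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
    pi, mu, var, r = em_gmm(X, K=2)
    print("weights:", pi)
```

Per the highlights, EM-mDAG differs from this baseline by structuring the mixture through a two-level DAG, which reportedly favors a small number of classes and eases their semantic interpretation.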

Cited by 1 publication (1 citation statement, published 2018). References 22 publications.
“…Within the context of clustering and Gaussian mixture models, seminal work can be found in Thiesson et al (1997), where the authors parameterize each component density in terms of conditional distributions and a related directed acyclic graph (DAG, Whittaker, 1990). Recent work on mixtures of DAGs is in Chalmond (2015). Rodríguez et al (2011) and Talluri et al (2014) develop a Bayesian framework for estimating infinite mixtures of sparse Gaussian graphical models where different prior distributions on the inverse covariance are employed.…”
Section: Discussion
confidence: 99%
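As a concrete reading of "parameterize each component density in terms of conditional distributions and a related directed acyclic graph", the toy sketch below evaluates a log-density that factorizes along a small DAG with linear-Gaussian conditionals. The graph, weights, and parameterization are illustrative assumptions, not the actual models of Thiesson et al. (1997) or Chalmond (2015).

```python
# Sketch: evaluating a component log-density factorized along a DAG.
# The specific graph and linear-Gaussian conditionals are illustrative
# assumptions, not the exact parameterization of the cited papers.
import math

# DAG over 3 variables: x0 -> x1, x0 -> x2, x1 -> x2.
# Each node: x_j ~ N(bias_j + weights_j . parents(x_j), sigma_j^2)
parents = {0: [], 1: [0], 2: [0, 1]}
bias    = {0: 0.0, 1: 1.0, 2: -0.5}
weights = {0: [], 1: [0.8], 2: [0.3, 0.5]}
sigma   = {0: 1.0, 1: 0.7, 2: 0.5}

def log_density(x):
    """log p(x) = sum_j log p(x_j | parents(x_j)), following the DAG."""
    total = 0.0
    for j in sorted(parents):   # 0, 1, 2 is a topological order here
        mean = bias[j] + sum(w * x[p] for w, p in zip(weights[j], parents[j]))
        total += (-0.5 * math.log(2 * math.pi * sigma[j] ** 2)
                  - 0.5 * ((x[j] - mean) / sigma[j]) ** 2)
    return total

print(log_density([0.2, 1.1, 0.4]))
```

In a mixture of DAGs, each class would carry its own graph and conditional parameters, and the mixture density would combine such per-component factorized densities.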