Machine Learning and Data Mining in Pattern Recognition
DOI: 10.1007/3-540-45065-3_15
Novel Mixtures Based on the Dirichlet Distribution: Application to Data and Image Classification

Cited by 42 publications (18 citation statements)
References 7 publications
“…Although model-based classification is usually appreciated for its multiple advantages, model-based discriminant analysis methods are mostly limited to quantitative data. Some extensions exist to handle categorical data using multinomial (Celeux and Govaert 1991) or Dirichlet (Bouguila et al 2003) distributions, for instance. In addition, even in the case of quantitative data, the Gaussian assumption may not be well-suited for the data at hand.…”
Section: Model-based Classification
Mentioning confidence: 99%
“…Indeed, many researchers consider finite Gaussian mixtures for data modeling. The Dirichlet mixture, however, could offer better modeling capabilities, as shown in [3-5, 9, 10], where it was used as a parent distribution and not as a prior for different image-processing tasks. In [3, 10] finite Dirichlet mixture models were introduced as a flexible approach for data modeling, where a deterministic approach for the estimation of its parameters was proposed.…”
Section: Introduction
Mentioning confidence: 98%
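The excerpt above refers to finite Dirichlet mixtures used as the parent distribution for data modeling. As a minimal sketch of what such a model evaluates (with illustrative parameter values that are not taken from the cited paper), the mixture density at a point of the probability simplex is a weighted sum of Dirichlet densities:

import numpy as np
from scipy.stats import dirichlet

# Illustrative two-component finite Dirichlet mixture on the 2-simplex
# (hypothetical parameters, for exposition only).
weights = np.array([0.6, 0.4])              # mixing proportions, sum to 1
alphas = [np.array([2.0, 5.0, 3.0]),        # Dirichlet parameters, component 1
          np.array([8.0, 1.5, 1.5])]        # Dirichlet parameters, component 2

def dirichlet_mixture_pdf(x, weights, alphas):
    # Density of the finite Dirichlet mixture at a point x on the simplex.
    return sum(w * dirichlet.pdf(x, a) for w, a in zip(weights, alphas))

# Evaluate the mixture density at one point whose components sum to 1.
x = np.array([0.2, 0.5, 0.3])
print(dirichlet_mixture_pdf(x, weights, alphas))

Parameter estimation in the cited works is more involved (deterministic or stochastic EM-type procedures); the snippet only illustrates the mixture density itself.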
“…, d − 1 and γ_{jd} = β_{jd} − 1. This distribution is a generalization of the Dirichlet distribution widely studied in [25, 26, 21]. In [20], the authors developed a hybrid stochastic expectation maximization algorithm (HSEM) to estimate the parameters of the generalized Dirichlet mixture.…”
Section: Introduction
Mentioning confidence: 99%
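For context, the parameterization quoted above is consistent with the standard generalized Dirichlet density; a sketch of that form (assuming the usual Connor–Mosimann-style definition, with j indexing the mixture component) is

p(x_1, \dots, x_d \mid \alpha_j, \beta_j) = \prod_{l=1}^{d} \frac{\Gamma(\alpha_{jl} + \beta_{jl})}{\Gamma(\alpha_{jl})\,\Gamma(\beta_{jl})}\, x_l^{\alpha_{jl}-1} \Bigl(1 - \sum_{k=1}^{l} x_k\Bigr)^{\gamma_{jl}},

with \gamma_{jl} = \beta_{jl} - \alpha_{j,l+1} - \beta_{j,l+1} for l = 1, \dots, d − 1 and \gamma_{jd} = \beta_{jd} − 1, which reduces to the ordinary Dirichlet for a particular choice of the \beta_{jl}.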