2011
DOI: 10.1007/s10044-011-0236-8
On the smoothing of multinomial estimates using Liouville mixture models and applications

Abstract: There has been major progress in recent years in statistical model-based pattern recognition, data mining and knowledge discovery. In particular, generative models are widely used and are very reliable in terms of overall performance. The success of these models hinges on their ability to construct a representation that captures the underlying statistical distribution of the data. In this article we focus on count data modeling. Indeed, this kind of data is naturally generated in many contexts and in different applic…

Cited by 9 publications (4 citation statements) · References 68 publications (72 reference statements)
“…The strength of the push is proportional to the value of α. The process of Bayesian probability estimation is sometimes called smoothing or flattening (Flach, 2012; Fienberg and Holland, 1972; Bouguila, 2013). The main role of smoothing is to push the estimated probability away from the extreme value obtained with relative frequency (1) towards some sort of average value (1/2 for Laplace's and Piegat's formulae, p_a for the m-estimate).…”
Section: Methods for Probability Estimation
confidence: 99%
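The push toward an average value described in this statement can be sketched with a minimal m-estimate. This is an illustrative sketch, not code from any of the cited works; the function name and default parameters are assumptions:

```python
def m_estimate(successes, trials, m=2.0, prior=0.5):
    """m-estimate of probability: pushes the relative frequency
    successes/trials toward the prior value; m controls the
    strength of the push. With m=2 and prior=0.5 this reduces to
    Laplace's rule of succession, (successes + 1) / (trials + 2)."""
    return (successes + m * prior) / (trials + m)
```

With no observations the estimate is just the prior (0.5 by default), and as `trials` grows the estimate converges to the raw relative frequency, which is exactly the smoothing behavior the statement describes.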
“…Unlike k-means, a hard-clustering algorithm, GMMs [MMA20] are soft-clustering probabilistic models that describe each cluster in a dataset by a Gaussian distribution characterized by its mean and variance. Note that when working with two independent random variables, GMMs output bivariate Gaussian distributions characterized by their mean, variance, and covariance.…”
Section: Feature Engineering
confidence: 99%
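The soft-clustering contrast with k-means can be made concrete with the responsibility computation of a GMM. A minimal univariate sketch (the helper names and parameterization are mine, not from the cited work):

```python
import math

def gaussian_pdf(x, mean, var):
    """Univariate normal density."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def responsibilities(x, components):
    """Soft-clustering step of a GMM: posterior membership
    probability of point x under each (weight, mean, var)
    component. Unlike k-means' hard assignment, every component
    receives a nonzero share, and the shares sum to 1."""
    joint = [w * gaussian_pdf(x, mu, var) for (w, mu, var) in components]
    total = sum(joint)
    return [j / total for j in joint]
```

A point near one component's mean gets a membership close to 1 for that component but never exactly 1, which is what distinguishes soft from hard clustering.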
“…In addition, by forcing the constellation points to be modeled as bivariate Gaussian distributions and initializing the GMM fitting with the expected constellation centroids, the obtained features are strongly conditioned. For this reason, we compute the log-likelihood value ℒ as a goodness-of-fit metric to estimate GMM fitting accuracy [MMA20].…”
Section: IQ OC Features Engineering
confidence: 99%
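The cited thesis fits bivariate constellations; as a simplified univariate sketch under assumed helper names, the log-likelihood ℒ used as a goodness-of-fit score is the sum of log mixture densities over the data:

```python
import math

def gaussian_pdf(x, mean, var):
    """Univariate normal density."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def gmm_log_likelihood(data, components):
    """Log-likelihood of the data under a univariate GMM with
    (weight, mean, var) components. Higher values indicate a
    better fit, so this serves as a goodness-of-fit score."""
    return sum(
        math.log(sum(w * gaussian_pdf(x, mu, var) for (w, mu, var) in components))
        for x in data
    )
```

A mixture whose components sit near the data yields a higher ℒ than one whose components are displaced, which is how the score flags a poorly conditioned fit.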