Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP '98 (Cat. No.98CH36181)
DOI: 10.1109/icassp.1998.675347
Clustering via the Bayesian information criterion with applications in speech recognition

Cited by 91 publications (19 citation statements). References 4 publications.
“…This tool for identifying the optimal number of clusters has already been used in many applications and theoretical studies (e.g. Chen and Gopalakrishnan, 1998; Fraley and Raftery, 1998; Cobos et al., 2014). Besides identifying the optimal number of clusters k, an information criterion also allows finding the best constraint value C when persistence is incorporated in the clustering procedure.…”
Section: Information Criteria
Confidence: 99%
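The excerpt above describes using an information criterion such as BIC to pick the number of clusters k. A minimal sketch of that idea, using scikit-learn's Gaussian mixture model and its built-in BIC score on synthetic data (this is illustrative only, not the cited authors' code; the data, candidate range, and random seeds are assumptions):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic data: three well-separated 2-D Gaussian clusters (illustrative only).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(100, 2)) for c in (0.0, 3.0, 6.0)])

# Fit a GMM for each candidate k and score it with BIC; lower BIC is better.
bic_scores = {}
for k in range(1, 7):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(X)
    bic_scores[k] = gmm.bic(X)

# Select the k whose model attains the minimum BIC.
best_k = min(bic_scores, key=bic_scores.get)
print(best_k)
```

With clearly separated clusters like these, the BIC minimum typically lands on the true cluster count; in noisier data the criterion trades fit against the penalty for extra parameters.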
“…Hence, the problem of detecting 0-indegree genes can be solved efficiently by performing a set of transformations to convert a power-law distribution into a Gaussian distribution. It was shown in [48,49] that a univariate distribution, i.e. samples generated from a function of a single variable (f(b) = e^(−λb) in our case), can be turned into a normal distribution.…”
Section: Main Regularization Steps
Confidence: 95%
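One standard way to turn samples from a known univariate distribution into approximately normal ones is the probability integral transform: apply the distribution's CDF to get uniforms, then the inverse normal CDF. Whether this is the exact construction in [48,49] is an assumption; the sketch below applies it to exponential samples matching the f(b) = e^(−λb) case mentioned above:

```python
import numpy as np
from scipy import stats

# Illustrative sketch (not necessarily the method of [48,49]): map exponential
# samples to approximately standard-normal ones via the probability integral
# transform. lambda and sample size are arbitrary choices for the demo.
rng = np.random.default_rng(1)
lam = 2.0
samples = rng.exponential(scale=1.0 / lam, size=5000)

# Step 1: the exponential CDF maps the samples to Uniform(0, 1).
u = stats.expon.cdf(samples, scale=1.0 / lam)
# Step 2: the inverse normal CDF maps the uniforms to a standard normal.
z = stats.norm.ppf(np.clip(u, 1e-12, 1 - 1e-12))  # clip to avoid +/- inf

# The transformed samples should have mean near 0 and std near 1.
print(round(float(z.mean()), 2), round(float(z.std()), 2))
```

The same two-step recipe works for any distribution whose CDF is known (or can be estimated empirically), which is what makes it useful as a normalizing preprocessing step.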
“…This method provided a score used to choose the best number of Gaussians N: the lower the score, the better the model fit [28].…”
Section: Know-How Transferring
Confidence: 99%