IEEE International Conference on Acoustics, Speech, and Signal Processing, 1993
DOI: 10.1109/icassp.1993.319405
Vector quantization for the efficient computation of continuous density likelihoods

Cited by 120 publications (81 citation statements); references 3 publications.
“…If we could combine them efficiently, further process acceleration could be expected. Based on a similar idea, the combination of Gaussian selection [4] and batch calculation [10] is proposed in [12], and complementary process acceleration is reported. However, to the best of our knowledge, there have been no studies investigating the combination of recycling [5], [6] and batch calculation [10].…”

Section: Introduction (confidence: 99%)
“…The first category consists of purely algorithmic techniques, such as Gaussian reduction [1], model parameter tying [2], scalar quantization of feature vectors [3], Gaussian selection [4], and state selection (or state likelihood recycling) [5], [6]. All of these techniques are based on approximations, i.e.…”

Section: Introduction (confidence: 99%)
“…For small and medium vocabulary tasks, the state likelihood computation can represent a significant portion of the overall computation. One way to speed up this computation is to reduce the number of Gaussians that must be considered when computing the likelihood for a state, by preparing a Gaussian shortlist for each HMM state and each region of the quantized feature space [5]. In this way, only a fraction of the Gaussians of each mixture is considered during decoding.…”

Section: Decoder (confidence: 99%)
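To make the shortlist idea concrete, here is a minimal sketch of VQ-based Gaussian selection for a single HMM state with a diagonal-covariance mixture; in a full decoder the offline step would be repeated per state. All names (log_gauss_diag, build_shortlists, state_likelihood, shortlist_size) are illustrative rather than taken from the cited papers, and the codebook is assumed to come from an ordinary k-means run over training frames.

```python
import numpy as np

def log_gauss_diag(x, means, log_dets, inv_vars):
    # Log-density of frame x under each diagonal-covariance Gaussian,
    # up to the common -0.5*dim*log(2*pi) constant (irrelevant for ranking).
    diff = x - means                                  # (n_gauss, dim)
    mahal = np.sum(diff * diff * inv_vars, axis=1)    # Mahalanobis terms
    return -0.5 * (log_dets + mahal)

def build_shortlists(codebook, means, log_dets, inv_vars, shortlist_size):
    # Offline step: for each VQ codeword (a region of feature space),
    # keep only the indices of the best-scoring Gaussians.
    return [np.argsort(log_gauss_diag(cw, means, log_dets, inv_vars))[-shortlist_size:]
            for cw in codebook]

def state_likelihood(x, codebook, shortlists, log_weights, means, log_dets, inv_vars):
    # Online step: quantize the frame to its nearest codeword, then evaluate
    # only the shortlisted Gaussians instead of the whole mixture.
    cw = np.argmin(np.sum((codebook - x) ** 2, axis=1))
    idx = shortlists[cw]
    lls = log_gauss_diag(x, means[idx], log_dets[idx], inv_vars[idx])
    return np.logaddexp.reduce(log_weights[idx] + lls)
```

With, say, a 256-word codebook and shortlists of 8 to 16 Gaussians per region, only a small fraction of each mixture is evaluated per frame, which is exactly the effect the quoted passage describes.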
“…Another way to reduce the search area is to cluster the Gaussians after training and to use the obtained cluster centers to determine the exact search area [2]. This can also be done during training, by organizing the Gaussians into a tree structure from the start and maintaining the topological ordering of the structure while training the models with the Tree-search SOM [6].…”

Section: Introduction (confidence: 99%)
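A small sketch of the cluster-center approach, under the same assumptions as above; plain k-means stands in for the clustering (a SOM-trained tree would replace it in the cited setup), and all names here are illustrative.

```python
import numpy as np

def cluster_gaussians(means, n_clusters, n_iter=20, seed=0):
    # Offline: k-means over the Gaussian mean vectors; returns cluster
    # centers plus, for each cluster, the indices of its member Gaussians.
    rng = np.random.default_rng(seed)
    centers = means[rng.choice(len(means), n_clusters, replace=False)]
    for _ in range(n_iter):
        assign = np.argmin(((means[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        for c in range(n_clusters):
            if np.any(assign == c):
                centers[c] = means[assign == c].mean(axis=0)
    members = [np.where(assign == c)[0] for c in range(n_clusters)]
    return centers, members

def search_area(x, centers, members, n_nearest=2):
    # Online: the search area is the union of the Gaussians belonging to
    # the n_nearest cluster centers around the observation x.
    nearest = np.argsort(((centers - x) ** 2).sum(axis=1))[:n_nearest]
    return np.concatenate([members[c] for c in nearest])
```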
“…A considerable part of the whole recognition time (clearly over 50% for the current system) is consumed by the search for the best-matching mixtures, and for larger systems this portion increases even more. With large mixture density codebooks it is often a sufficient density approximation to compute only a few mixtures closest to the current observation (the K best), because the other mixtures produce such low and inaccurate likelihood values that they have practically no effect on the search for the optimal path [7,1,2].…”

Section: Introduction (confidence: 99%)
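A minimal sketch of the K-best approximation described in this passage, again with illustrative names and a crude nearest-mean preselection standing in for whatever search structure the cited systems actually use:

```python
import numpy as np

def k_best_loglik(x, log_weights, means, log_dets, inv_vars, k=4):
    # Evaluate only the k mixture components whose means are closest to x;
    # the remaining components are dropped entirely, on the assumption that
    # their likelihoods are too small to affect the optimal-path search.
    d = ((means - x) ** 2).sum(axis=1)        # requires k < number of mixtures
    idx = np.argpartition(d, k)[:k]
    diff = x - means[idx]
    lls = -0.5 * (log_dets[idx] + (diff * diff * inv_vars[idx]).sum(axis=1))
    return np.logaddexp.reduce(log_weights[idx] + lls)  # log-sum over the k terms
```

In practice the dropped components are sometimes replaced by a small floor value rather than omitted outright, but for sufficiently peaked densities the effect on the decoded path is the same.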