2001
DOI: 10.1109/89.906000

Subspace distribution clustering hidden Markov model

Cited by 87 publications (51 citation statements)
References 33 publications
Citation types: 2 supporting, 49 mentioning, 0 contrasting

“…Despite the size of the mixture weight arrays, in our experience, evaluation of semi-continuous models is more efficient than evaluation of similar types of compact acoustic models, such as subspace distribution clustered HMMs [3]. There are two reasons for this.…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
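To make the comparison in the excerpt above concrete, here is a minimal sketch of per-frame state evaluation in a semi-continuous HMM, which shares one Gaussian codebook across all states. All names, shapes, and sizes are illustrative assumptions, not taken from the cited papers. The shared codebook is scored once per frame, after which each state's likelihood reduces to a single dot product with its mixture-weight row; a subspace distribution clustered HMM instead shares per-stream prototypes, so a state score becomes a sum of table lookups (sketched after a later excerpt).

```python
import numpy as np

def log_gauss_diag(x, means, variances):
    """Log-density of x under K diagonal-covariance Gaussians.

    means, variances: (K, D) arrays; returns shape (K,)."""
    diff = x - means
    return -0.5 * (np.sum(np.log(2 * np.pi * variances), axis=1)
                   + np.sum(diff * diff / variances, axis=1))

def semi_continuous_state_scores(x, means, variances, weights):
    """Per-frame log-likelihoods for all S states of a semi-continuous HMM.

    weights: (S, K) mixture weights over the K shared codebook Gaussians.
    The codebook is evaluated once; each state is then one dot product."""
    log_dens = log_gauss_diag(x, means, variances)   # (K,): shared work
    shift = log_dens.max()                           # stabilise the exp
    dens = np.exp(log_dens - shift)
    return np.log(weights @ dens) + shift            # (S,)

# Illustrative sizes: 39-dim features, 256 shared Gaussians, 2000 states.
rng = np.random.default_rng(0)
D, K, S = 39, 256, 2000
means = rng.normal(size=(K, D))
variances = rng.uniform(0.5, 1.5, size=(K, D))
weights = rng.dirichlet(np.ones(K), size=S)          # each row sums to 1
x = rng.normal(size=D)
print(semi_continuous_state_scores(x, means, variances, weights).shape)
```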
“…In most cases, however, the choice of the sub-vectors uses knowledge only of the type of features used; for example, clustering the Mel-frequency cepstral coefficients (MFCCs) as one sub-vector, the first derivatives as a second sub-vector, or grouping each MFCC with its 1st and 2nd derivatives. In [2], 2-dimensional sub-vectors are formed using a greedy algorithm that chooses the pairs that are most strongly correlated. In [3], the approach was extended to higher-dimensional sub-vectors using a multiple correlation coefficient.…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
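The greedy pairing idea in the excerpt above can be sketched roughly as follows. This is our own illustration under simple assumptions (Pearson correlation computed on a data matrix), not the exact algorithm of [2]: repeatedly take the still-unassigned pair of feature dimensions with the largest absolute correlation.

```python
import numpy as np

def greedy_correlated_pairs(features):
    """Group feature dimensions into 2-dimensional sub-vectors.

    features: (N, D) data matrix with D even. Repeatedly picks the
    unassigned pair of dimensions with the largest |correlation|."""
    corr = np.abs(np.corrcoef(features, rowvar=False))  # (D, D)
    np.fill_diagonal(corr, -1.0)                        # ignore self-pairs
    unassigned = set(range(features.shape[1]))
    pairs = []
    while len(unassigned) >= 2:
        idx = sorted(unassigned)
        sub = corr[np.ix_(idx, idx)]                    # remaining dims only
        i, j = np.unravel_index(np.argmax(sub), sub.shape)
        a, b = idx[i], idx[j]
        pairs.append((a, b))
        unassigned -= {a, b}
    return pairs

rng = np.random.default_rng(1)
data = rng.normal(size=(500, 6))
data[:, 1] = data[:, 0] + 0.1 * rng.normal(size=500)    # dims 0, 1 correlate
print(greedy_correlated_pairs(data))                    # (0, 1) comes out first
```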
“…In [2], 2-dimensional sub-vectors are formed using a greedy algorithm that chooses pairs that are most strongly correlated. In [3], the approach was expanded to higher-dimensional sub-vectors using a multiple correlation coefficient. There is clearly a trade-off between the extent of parameter quantization and how much recognition performance degrades because of the quantization noise, but there is also another trade-off, involving computation time and memory, that arises from the possibility of pre-computing quantities such as Mahalanobis distances and state log-likelihoods (see for example [16, 20, 3]).…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
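The pre-computation trade-off mentioned in the excerpt above can be sketched as follows, under assumed shapes and diagonal-covariance subspace prototypes (our own illustration, not code from the cited papers): score every prototype in every subspace stream once per frame, paying the memory for a small table, and each clustered Gaussian's log-density then costs only a few lookups.

```python
import numpy as np

def precompute_stream_scores(x, proto_means, proto_vars, streams):
    """Per-frame table of prototype log-densities, one row per stream.

    streams: list of index arrays partitioning the feature dimensions.
    proto_means, proto_vars: (num_streams, P, d) prototype parameters."""
    table = np.empty((len(streams), proto_means.shape[1]))
    for s, dims in enumerate(streams):
        diff = x[dims] - proto_means[s]                 # (P, d)
        table[s] = -0.5 * (np.sum(np.log(2 * np.pi * proto_vars[s]), axis=1)
                           + np.sum(diff * diff / proto_vars[s], axis=1))
    return table                                        # (num_streams, P)

def sdchmm_gaussian_score(table, labels):
    """Log-density of one clustered Gaussian: a sum of table lookups.

    labels[s] is the prototype index this Gaussian uses in stream s."""
    return sum(table[s, labels[s]] for s in range(table.shape[0]))

rng = np.random.default_rng(2)
streams = [np.arange(0, 2), np.arange(2, 4)]            # two 2-dim sub-vectors
proto_means = rng.normal(size=(2, 16, 2))               # 16 prototypes/stream
proto_vars = rng.uniform(0.5, 1.5, size=(2, 16, 2))
table = precompute_stream_scores(rng.normal(size=4),
                                 proto_means, proto_vars, streams)
print(sdchmm_gaussian_score(table, labels=[3, 7]))
```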
“…By sharing the distribution of multiple subspaces of the parameters, the dictionary of the MQDF classifier can be compressed greatly with only a slight loss in recognition accuracy. A similar technique, called split vector quantization, was originally developed in automatic speech recognition (ASR) [5, 6], and has also been used successfully to compress the parameters of continuous-density hidden Markov models (CDHMMs) for handwritten Chinese character recognition [7]. The experimental results in this paper show that even when a shared distribution model is used for all subspaces of the parameters, the recognition accuracy decreases by less than 0.2% for handwritten Chinese characters.…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
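As a rough illustration of the split-vector-quantization idea in the excerpt above (our own sketch with hypothetical sizes, not the method of [5-7]): cut each Gaussian mean into sub-vectors, learn one codebook per sub-vector position with a few Lloyd iterations, and store only the codebook indices, so many means share the same sub-vector prototypes.

```python
import numpy as np

def split_vq(means, num_subvectors, codebook_size, iters=10, seed=0):
    """Compress (M, D) Gaussian means by split vector quantization.

    Each mean is cut into num_subvectors equal pieces; each piece
    position gets its own codebook learned by a few Lloyd iterations.
    Returns the codebooks and an (M, num_subvectors) index array."""
    rng = np.random.default_rng(seed)
    pieces = np.split(means, num_subvectors, axis=1)
    codebooks, indices = [], []
    for piece in pieces:                                # (M, d) sub-vectors
        cb = piece[rng.choice(len(piece), codebook_size, replace=False)]
        for _ in range(iters):
            d2 = ((piece[:, None, :] - cb[None, :, :]) ** 2).sum(axis=2)
            assign = d2.argmin(axis=1)                  # nearest codeword
            for k in range(codebook_size):              # update non-empty cells
                if np.any(assign == k):
                    cb[k] = piece[assign == k].mean(axis=0)
        codebooks.append(cb)
        indices.append(assign)
    return codebooks, np.stack(indices, axis=1)

rng = np.random.default_rng(3)
means = rng.normal(size=(1000, 8))                      # 1000 Gaussian means
codebooks, idx = split_vq(means, num_subvectors=4, codebook_size=16)
recon = np.concatenate([cb[idx[:, s]] for s, cb in enumerate(codebooks)], axis=1)
print(f"mean squared reconstruction error: {((recon - means) ** 2).mean():.4f}")
```

The compression comes from the storage swap: instead of M full D-dimensional means, one stores a few small codebooks plus one small integer index per mean per sub-vector, at the cost of the reconstruction error printed above.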