Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP '98 (Cat. No.98CH36181)
DOI: 10.1109/icassp.1998.675351
Maximum likelihood modeling with Gaussian distributions for classification

Cited by 222 publications (102 citation statements)
References 4 publications
“…The number of classes was 43, corresponding to the number of monophones. MLLT [15] was applied after LDA, HDA, ODA and HLDAC. For PLDA, we assumed that projected class covariance matrices in Eq.…”
Section: Feature Transformation Procedures
confidence: 99%
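The MLLT cited in this excerpt (the maximum likelihood linear transform of the indexed paper) chooses a square transform that maximizes the likelihood of the class-conditional data under diagonal-covariance Gaussians after projection. The sketch below is a minimal illustration of that objective via generic numerical optimization; the names `mllt_objective` and `estimate_mllt` and the use of `scipy.optimize` are assumptions for illustration, not the estimation procedure used in the cited works.

```python
import numpy as np
from scipy.optimize import minimize

def mllt_objective(a_flat, class_covs, class_counts, dim):
    """Negative MLLT objective for a square transform A:
    F(A) = N*log|det A| - 1/2 * sum_j N_j * log det diag(A Sigma_j A^T)."""
    A = a_flat.reshape(dim, dim)
    sign, logdet = np.linalg.slogdet(A)
    if sign <= 0:
        return 1e10  # penalize singular or orientation-flipping transforms
    total = np.sum(class_counts) * logdet
    for cov, n in zip(class_covs, class_counts):
        diag = np.diag(A @ cov @ A.T)
        total -= 0.5 * n * np.sum(np.log(diag))
    return -total

def estimate_mllt(class_covs, class_counts):
    """Fit an MLLT-style decorrelating transform from per-class covariances."""
    dim = class_covs[0].shape[0]
    a0 = np.eye(dim).ravel()  # start from the identity transform
    res = minimize(mllt_objective, a0,
                   args=(class_covs, class_counts, dim), method="L-BFGS-B")
    return res.x.reshape(dim, dim)
```

In a pipeline like the one quoted above, the per-class covariances would be computed in the already-projected (e.g. LDA) feature space, and the resulting transform composed with that projection.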
“…The two transformed feature vectors, u t and v t , are concatenated for feature-level association. Since continuous HMMs having Gaussian mixture models with diagonal covariance matrices are used for recognition, the concatenated features are transformed further by using the maximum likelihood linear transform (MLLT) [18] so that each component of the feature vector is uncorrelated with each other.…”
Section: Feature-level Association
confidence: 99%
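As a rough illustration of the feature-level association described in this excerpt, two synchronized feature streams can be concatenated per frame and then mapped through a decorrelating linear transform (for instance one estimated as in the MLLT sketch above) before modeling with diagonal-covariance Gaussian mixtures. The helper name below is hypothetical.

```python
import numpy as np

def concat_and_decorrelate(u, v, transform):
    """Concatenate two per-frame feature streams (T x d1 and T x d2) and apply
    a decorrelating linear transform A, i.e. y_t = A [u_t; v_t]."""
    x = np.hstack([u, v])       # feature-level association by concatenation
    return x @ transform.T      # rows are the transformed frames y_t
```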
“…Linear discriminant analysis (LDA) [13] for instance is a standard technique to minimize intra-class discriminability, to maximize inter-class discriminability and to extract relevant informations from high-dimensional features spanning larger contexts. Maximum likelihood linear transforms (MLLT) [14], [15] is a common method to de-correlate feature components, and feature-space maximum likelihood linear regression (fM-LLR) [16], [17] is widely used for speaker adaptation. Class discriminating properties are critical for clustering methods like the one used in [12], and adaptive feature transformations can help reduce variability.…”
Section: Introduction
confidence: 99%
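For context, the standard multi-class LDA mentioned in this excerpt finds directions that maximize between-class scatter relative to within-class scatter. A minimal sketch under that textbook formulation (illustrative names, not the exact recipe of the citing paper) could look like the following.

```python
import numpy as np
from scipy.linalg import eigh

def lda_projection(features, labels, out_dim):
    """Multi-class LDA: directions maximizing between-class over within-class scatter."""
    d = features.shape[1]
    mean_all = features.mean(axis=0)
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in np.unique(labels):
        x = features[labels == c]
        mu = x.mean(axis=0)
        Sw += (x - mu).T @ (x - mu)
        diff = (mu - mean_all)[:, None]
        Sb += len(x) * (diff @ diff.T)
    # generalized eigenproblem Sb v = lambda Sw v; eigh returns ascending eigenvalues
    _, vecs = eigh(Sb, Sw)
    return vecs[:, ::-1][:, :out_dim]  # top out_dim discriminant directions as columns
```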