3rd International Conference on Spoken Language Processing (ICSLP 1994)
DOI: 10.21437/icslp.1994-124
Minimum error rate training of inter-word context dependent acoustic model units in speech recognition

Cited by 28 publications (3 citation statements). References 0 publications.
“…In the first experiment, we used two CDMA wireless databases, named "Handset" and "Lapel", collected in a moving car. A set of HBT digit models [7] was trained using 21 databases with a total of 44,123 utterances. The HBT model is context-dependent across the "head" (first 3 states) and "tail" (last 3 states), while the "body" (4 states) is context independent.…”
Section: Connected-digit Recognition (mentioning)
confidence: 99%
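As an illustration of the head-body-tail layout described in that quote, here is a minimal sketch in Python; the HBTDigitModel class and the state-naming scheme are hypothetical and not taken from the cited paper. It only shows the bookkeeping: the three head states and three tail states are chosen per neighboring word, while the four body states are shared across contexts.

# Minimal sketch of a head-body-tail (HBT) digit model layout.
# Assumptions: class name, state names, and the "generic" fallback context are illustrative.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class HBTDigitModel:
    digit: str
    body: List[str] = field(default_factory=list)               # 4 context-independent states
    heads: Dict[str, List[str]] = field(default_factory=dict)   # left word context -> 3 head states
    tails: Dict[str, List[str]] = field(default_factory=dict)   # right word context -> 3 tail states

    def state_sequence(self, left_ctx: str, right_ctx: str) -> List[str]:
        """Concatenate the context-dependent head, shared body, and context-dependent tail."""
        head = self.heads.get(left_ctx, self.heads["generic"])
        tail = self.tails.get(right_ctx, self.tails["generic"])
        return head + self.body + tail

# Example: the digit "two" preceded by "one" and followed by "three".
model = HBTDigitModel(
    digit="two",
    body=[f"two_b{i}" for i in range(4)],
    heads={"generic": [f"two_h{i}" for i in range(3)],
           "one": [f"two_h{i}_after_one" for i in range(3)]},
    tails={"generic": [f"two_t{i}" for i in range(3)],
           "three": [f"two_t{i}_before_three" for i in range(3)]},
)
print(model.state_sequence("one", "three"))  # 3 head + 4 body + 3 tail = 10 states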
“…Discriminative training, usually with the MCE or MMI criterion, has been shown to give improved performance, e.g. for connected digit recognition systems [7,8]. ANN-based hybrids were originally trained to do single-frame discrimination by the embedded Viterbi algorithm [9,10].…”
Section: Introduction (mentioning)
confidence: 99%
“…(1-20) Minimizing this expression leads, in [19], to a reduction of more than 25% in the error rate on the TIDIGITS recognition task, although using context-dependent digit models trained on the training set of TIDIGITS itself. In this experiment, the baseline result with maximum-likelihood models is a 0.97% string error rate; re-estimating the models with the minimum classification error criterion lowers the error rate to 0.72%.…”
Section: Minimum Classification Error Training (unclassified)
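To make the criterion being minimized concrete, the following is the standard smoothed MCE (minimum classification error) loss in the Juang-Katagiri formulation commonly used for this line of work; the exact expression labeled (1-20) in the citing text is not reproduced here, and the smoothing parameters \(\eta\), \(\gamma\), \(\theta\) are the usual ones, assumed rather than taken from [19]:
\[
d_k(X;\Lambda) = -g_k(X;\Lambda)
  + \log\!\Bigl[\frac{1}{M-1}\sum_{j\neq k} \exp\bigl(\eta\, g_j(X;\Lambda)\bigr)\Bigr]^{1/\eta},
\qquad
\ell\bigl(d_k\bigr) = \frac{1}{1+\exp\bigl(-\gamma\, d_k + \theta\bigr)},
\]
where \(g_k(X;\Lambda)\) is the log-likelihood discriminant of the correct class \(k\) for utterance \(X\), and the competing classes \(j \neq k\) are typically the best rival string hypotheses. Training minimizes the total loss \(\sum_n \ell\bigl(d_{k_n}(X_n;\Lambda)\bigr)\) over the training set, usually by gradient descent (GPD) on the model parameters \(\Lambda\).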