1996
DOI: 10.1109/97.508165

A training algorithm for statistical sequence recognition with applications to transition-based speech recognition

Cited by 9 publications (4 citation statements)
References 4 publications
“…However, useful information such as linguistic constraints and longer acoustic features should be added to improve CM performance. In this section, we first impose dependence between phones as explained in Figure 2, then compute the enhanced posteriors via Recursive Estimation and Maximization of A Posteriori Probabilities (REMAP) proposed in [8], and finally propose two CMs based on these enhanced posteriors.…”
Section: Methods
confidence: 99%
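The confidence measures (CMs) discussed in the excerpt above are built from frame-level phone posteriors. A minimal sketch of one common posterior-based CM — the mean log posterior of the hypothesized phone over its aligned frames — is shown below. This is illustrative only, not the citing paper's exact formulation; the function name and the toy numbers are assumptions.

```python
import numpy as np

def posterior_confidence(posteriors, phone_idx, start, end):
    """Confidence measure: mean log posterior of the hypothesized
    phone over its aligned frame span [start, end).
    posteriors: (T, N) array of per-frame phone posteriors."""
    frames = posteriors[start:end, phone_idx]
    # small floor avoids log(0) on pruned frames
    return float(np.mean(np.log(frames + 1e-10)))

# toy example: 4 frames, 3 phone classes, hypothesis = phone 0
P = np.array([[0.7, 0.2, 0.1],
              [0.8, 0.1, 0.1],
              [0.6, 0.3, 0.1],
              [0.9, 0.05, 0.05]])
conf = posterior_confidence(P, phone_idx=0, start=0, end=4)
```

A value near 0 indicates the recognizer was consistently confident in the phone hypothesis; strongly negative values flag likely errors.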
“…Typically, the posterior probability can be estimated either by a generative model such as HMM/GMM [9] or a discriminative model such as multi-layer perceptron (MLP) [7], [8].…”
Section: Introduction
confidence: 99%
“…In this paper: (1) we demonstrate that the original HMM/ANN system [3, 4] trained using local criteria indeed optimizes the global posterior probability, given certain well-defined assumptions; (2) we use the REMAP algorithm to derive a forward-backward training algorithm for the original HMM/ANN system; (3) we demonstrate the performance of these algorithms on the task-independent Phonebook database.…”
Section: Introduction
confidence: 92%
“…However, using similar factorizations and assumptions as above, Bourlard, Konig and Morgan (1996) and Hennebert, Ris, Bourlard, Renals and Morgan (1997) demonstrated that a generalized EM algorithm exists for the optimization of the parameters of acceptor HMMs. The E-step consists of estimating the posterior state/time probabilities given the acoustic data; the M-step involves the parameter optimization of the local posterior probability estimators (typically artificial neural networks).…”
Section: Generative HMMs and Acceptor HMMs
confidence: 98%
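The E-step described in the excerpt above — estimating posterior state/time probabilities given the acoustic data — can be sketched with the standard forward-backward recursions. This is a generic sketch under simplifying assumptions, not REMAP's exact factorization (which conditions transition probabilities on the acoustics); function and variable names are illustrative.

```python
import numpy as np

def forward_backward_posteriors(A, B, pi):
    """E-step sketch: gamma[t, i] = P(q_t = i | X) via the
    forward-backward recursions.
    A:  (N, N) state transition matrix
    B:  (T, N) per-frame state emission scores (in an HMM/ANN
        hybrid these would be derived from the ANN's local
        posteriors)
    pi: (N,) initial state distribution"""
    T, N = B.shape
    alpha = np.zeros((T, N))
    beta = np.zeros((T, N))
    # forward pass: alpha[t] accumulates path scores ending in each state
    alpha[0] = pi * B[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[t]
    # backward pass: beta[t] accumulates path scores leaving each state
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[t + 1] * beta[t + 1])
    # combine and normalize per frame to get state/time posteriors
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    return gamma

A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B = np.array([[0.9, 0.1],
              [0.8, 0.2],
              [0.1, 0.9]])
pi = np.array([0.5, 0.5])
gamma = forward_backward_posteriors(A, B, pi)
```

The resulting `gamma` would serve as the targets for the M-step, in which the local posterior estimator (the ANN) is retrained.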