1992
DOI: 10.1109/72.125866

Global optimization of a neural network-hidden Markov model hybrid

Abstract: The integration of multilayered and recurrent artificial neural networks (ANNs) with hidden Markov models (HMMs) is addressed. ANNs are suitable for approximating functions that compute new acoustic parameters, whereas HMMs have been proven successful at modeling the temporal structure of the speech signal. In the approach described, the ANN outputs constitute the sequence of observation vectors for the HMM. An algorithm is proposed for global optimization of all the parameters. Results on speaker-independent …
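As a rough illustration of the architecture the abstract describes, here is a minimal sketch, assuming a small feed-forward ANN and a diagonal-covariance Gaussian-emission HMM. All names, shapes, and hyperparameters below are illustrative assumptions, not the paper's: the ANN output sequence serves as the HMM's observation vectors, and the utterance likelihood is computed with the forward algorithm.

```python
# Minimal sketch of the hybrid: ANN outputs feed a Gaussian-emission HMM.
# All names and dimensions are illustrative assumptions, not the paper's.
import math
import torch

torch.manual_seed(0)
T, raw_dim, obs_dim, n_states = 50, 26, 8, 3

# ANN feature transformer: raw acoustic parameters -> HMM observation vectors.
ann = torch.nn.Sequential(
    torch.nn.Linear(raw_dim, 32),
    torch.nn.Tanh(),
    torch.nn.Linear(32, obs_dim),
)

x = torch.randn(T, raw_dim)   # one utterance of raw acoustic frames
obs = ann(x)                  # (T, obs_dim): the HMM's observation sequence

# Toy HMM parameters: log transition matrix, log initial distribution,
# and diagonal-covariance Gaussian emission parameters per state.
log_A = torch.log_softmax(torch.randn(n_states, n_states), dim=1)
log_pi = torch.log_softmax(torch.randn(n_states), dim=0)
means = torch.randn(n_states, obs_dim, requires_grad=True)
log_var = torch.zeros(n_states, obs_dim, requires_grad=True)

def emission_logprob(o):
    """log N(o; mean_s, diag var_s) for every state s at one frame."""
    diff = o.unsqueeze(0) - means                     # (n_states, obs_dim)
    return -0.5 * (diff.pow(2) / log_var.exp()
                   + log_var + math.log(2 * math.pi)).sum(dim=1)

# Forward algorithm in the log domain: likelihood of the whole utterance.
log_alpha = log_pi + emission_logprob(obs[0])
for t in range(1, T):
    log_alpha = (torch.logsumexp(log_alpha.unsqueeze(1) + log_A, dim=0)
                 + emission_logprob(obs[t]))
log_likelihood = torch.logsumexp(log_alpha, dim=0)
```

Because `obs` is produced by the ANN, `log_likelihood` is differentiable with respect to both the HMM parameters and the ANN weights, which is what makes the paper's global optimization of all parameters possible.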

Cited by 154 publications (29 citation statements; citing works published 1998–2023)
References 20 publications
“…Since it is not known how to compute this probability directly, (1) is usually used to split this posterior probability into a likelihood p(X|M) that represents the contribution of the acoustic model, and a prior probability P(M) that represents the contribution of the language model. p(X|M) and P(M) are estimated during recognition from subsystems whose parameters are sometimes trained from different training sets (for instance, from an acoustic training set and from a large corpus of written text).…”
Section: Hidden Markov Models (HMMs) (mentioning)
confidence: 99%
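The split this excerpt refers to is the standard Bayes decomposition; written here in generic notation, which may differ cosmetically from the cited paper's equation (1):

```latex
P(M \mid X) \;=\; \frac{p(X \mid M)\, P(M)}{p(X)} \;\propto\; p(X \mid M)\, P(M)
```

Since p(X) does not depend on the hypothesized word sequence M, recognition can rank hypotheses by the product of the acoustic likelihood and the language-model prior alone.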
“…The direct optimization of such a model (Bengio, de Mori, Flammia & Kompe, 1992) is a computationally expensive process. However, using similar factorizations and assumptions as above, Bourlard, Konig and Morgan (1996) and Hennebert, Ris, Bourlard, Renals and Morgan (1997) demonstrated that a generalized EM algorithm exists for the optimization of the parameters of acceptor HMMs.…”
Section: Generative HMMs and Acceptor HMMs (mentioning)
confidence: 99%
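At the level of detail given in the excerpt, the generalized EM idea can be sketched as an alternation between posterior computation and ANN retraining (a REMAP-style reading; the loop structure below is an assumption, not the cited papers' exact algorithm, and `forward_backward` and `retrain` are hypothetical callables supplied by the caller):

```python
# Hedged skeleton of a generalized EM loop for an acceptor HMM/ANN hybrid.
# `forward_backward` and `retrain` are hypothetical, caller-supplied helpers.
def generalized_em(ann, utterances, n_iters, forward_backward, retrain):
    for _ in range(n_iters):
        # E-step: state posteriors under the current ANN outputs.
        targets = [forward_backward(ann, x) for x in utterances]
        # (Generalized) M-step: improve, rather than fully maximize, the
        # auxiliary objective by retraining the ANN toward the posteriors.
        retrain(ann, utterances, targets)
    return ann
```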
“…One solution uses the ANN to compute an additional set of symbols as transformed observations for the HMM [90]. A further improvement of this method is achieved through a global optimization of both the ANN and HMM [91]. This approach uses the gradient of the HMM optimization criterion with respect to the transformed observations to estimate the weights of the ANN connections.…”
Section: Proposed Solutions (mentioning)
confidence: 99%
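In autograd terms this is just the chain rule. Continuing the illustrative sketch given after the abstract (same hypothetical names; not the cited papers' code), the gradient of the HMM criterion with respect to the transformed observations flows back into the ANN weights in a single `backward` call:

```python
# One joint gradient step: the HMM criterion's gradient w.r.t. `obs`
# is backpropagated through the ANN, updating ANN and HMM together.
opt = torch.optim.SGD(list(ann.parameters()) + [means, log_var], lr=1e-3)
opt.zero_grad()
(-log_likelihood).backward()   # maximize the utterance likelihood
opt.step()
```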