1999
DOI: 10.1109/89.759031

Online adaptation of hidden Markov models using incremental estimation algorithms

Abstract: The mismatch that frequently occurs between the training and testing conditions of an automatic speech recognizer can be efficiently reduced by adapting the parameters of the recognizer to the testing conditions. Two measures that characterize the performance of an adaptation algorithm are the speed with which it adapts to the new conditions, and its computational complexity, which is important for online applications. Recently, a family of adaptation algorithms for continuous-density hidden Markov model (HMM)…


Cited by 34 publications (18 citation statements). References 19 publications.
“…Supervised speaker adaptation experiments reveal the stable performance of the proposed method obtained by utilizing the advantages of the prediction and error-compensation mechanism based on the Kalman filter algorithm. We will apply the proposed method to more realistic incremental adaptation tasks such as online and unsupervised tasks [2,7].…”
Section: Discussion
confidence: 99%
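The prediction and error-compensation mechanism mentioned above is easiest to see in a scalar Kalman filter. The following is a minimal sketch, not the cited method: it assumes the quantity being adapted is a single Gaussian mean modelled as a random-walk state, and the function name, prior, and noise variances are all hypothetical.

```python
import numpy as np

def kalman_adapt_mean(mu0, var0, observations, process_var=1e-3, obs_var=1.0):
    """Minimal scalar Kalman sketch: incrementally adapt a Gaussian mean
    as new adaptation frames arrive.

    mu0, var0   -- prior mean and its uncertainty (hypothetical values)
    process_var -- assumed drift of the true mean between frames
    obs_var     -- assumed observation-noise variance
    """
    mu, var = mu0, var0
    for y in observations:
        # Predict: the true mean may have drifted, so inflate uncertainty.
        var_pred = var + process_var
        # Correct: weight the prediction error by the Kalman gain.
        gain = var_pred / (var_pred + obs_var)
        mu = mu + gain * (y - mu)          # error compensation
        var = (1.0 - gain) * var_pred
    return mu, var

# Example: adaptation frames drawn from a "speaker" whose mean is 0.8,
# while the prior model assumed 0.0.
rng = np.random.default_rng(0)
frames = rng.normal(loc=0.8, scale=1.0, size=200)
mu_hat, var_hat = kalman_adapt_mean(0.0, 1.0, frames)
print(mu_hat)  # converges toward 0.8 as frames accumulate
```

The gain shrinks automatically as the estimate's uncertainty falls, so early frames move the mean quickly and later frames only fine-tune it, which is the stability property the quoted discussion attributes to the Kalman formulation.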
“…The scheme of parameter updating and merge/split of components [21] is also used in AHMM. Adaptive HMM algorithms have been proposed and applied to image processing, audio processing, and pattern recognition [24-27]. However, they generally consider only parameter or structure updating to learn dynamic changes of machine health online.…”
Section: AHMM
confidence: 99%
“…In comparison with other adaptive HMM algorithms [22-27], which generally consider only parameter updating or structure updating, AHMM is capable of learning both gradual and abrupt changes of machine health online. (2) A health change detection method is proposed to recognize those abrupt health changes, and an adaptive learning scheme (adding hidden states in AHMM) is provided to respond to these health-state changes quickly and effectively.…”
Section: Introduction
confidence: 99%
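As a rough illustration of the parameter-versus-structure distinction these two quotes draw, here is a sketch (not the AHMM algorithm itself) that nudges the closest state's mean for gradual changes and appends a new hidden state when a window of data is poorly explained by every existing state. The Gaussian stand-in for full HMM decoding, the detection threshold, and the update rate are all assumptions.

```python
import numpy as np

def loglik_per_frame(obs, means, var=1.0):
    """Per-frame log-likelihood of each observation under its
    best-matching Gaussian state (a crude stand-in for HMM decoding)."""
    ll = (-0.5 * (obs[:, None] - means[None, :]) ** 2 / var
          - 0.5 * np.log(2 * np.pi * var))        # shape (n_obs, n_states)
    return ll.max(axis=1)                          # best state per frame

def adapt_states(means, window, threshold=-4.0, rate=0.1):
    """If recent data are poorly explained by every existing state, treat
    it as an abrupt health change and add a new state (structure update);
    otherwise nudge the closest state's mean (parameter update)."""
    if loglik_per_frame(window, means).mean() < threshold:
        return np.append(means, window.mean())     # abrupt change: new state
    j = np.argmin(np.abs(means - window.mean()))
    means = means.copy()
    means[j] += rate * (window.mean() - means[j])  # gradual change
    return means

means = np.array([0.0, 5.0])                       # two known health states
new_data = np.random.default_rng(1).normal(12.0, 1.0, 50)
print(adapt_states(means, new_data))               # a third state near 12 appears
```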
“…For instance, some methods compute λ_t by performing a partial expectation step (E-step) of the Baum-Welch algorithm on λ_{t−1} and o_t [2-4,12,15,16], and a maximization step (M-step) after each time t. One shortcoming of this approach is that once a parameter in λ_{t−1} is set to 0, there is no way to re-estimate this parameter again. To work around that problem, a small constant ε is added to each parameter in λ_{t−1} [2], but this solution results in an imprecise estimate of the parameters, and additional evaluations are required to find the best value of ε.…”
Section: Combining Old and New Information
confidence: 99%
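The zero-locking problem and the ε workaround described in the quote can be reproduced in a toy setting. The sketch below collapses the HMM to a single state, so the partial E-step degenerates to accumulating a count for each observed symbol; it illustrates only the flooring idea, not the cited algorithms, and all names and values are invented.

```python
import numpy as np

def m_step(counts, eps=0.0):
    """Re-estimate a discrete emission distribution from accumulated
    expected counts, optionally floored by a small constant eps so that
    no probability can get stuck at exactly zero."""
    floored = counts + eps
    return floored / floored.sum()

def incremental_em(obs_stream, n_symbols, eps=1e-3):
    """Toy incremental estimation of one emission distribution: after
    each observation o_t, a (here degenerate, single-state) partial
    E-step adds the expected count for o_t, and an M-step renormalises.
    With eps = 0, a symbol unseen so far keeps probability 0 forever,
    which is the zero-locking problem the quoted passage describes."""
    counts = np.zeros(n_symbols)
    b = np.full(n_symbols, 1.0 / n_symbols)
    for o_t in obs_stream:
        counts[o_t] += 1.0            # partial E-step
        b = m_step(counts, eps)       # M-step after each time t
    return b

print(incremental_em([0, 0, 1, 0, 1], n_symbols=3, eps=0.0))   # P(2) == 0, locked
print(incremental_em([0, 0, 1, 0, 1], n_symbols=3, eps=1e-3))  # P(2) > 0, recoverable
```

As the quote notes, ε trades one problem for another: the floored estimate is biased, and ε itself must be tuned by extra evaluations.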
“…In other cases, D_t presents some data that are useful to transform the parameters of a given HMM from more generalized ones to more specialized ones, and the weight of the new data is greater than the weight of the old data. The latter is generally referred to as adaptation [12,13].…”
Section: The Weight of the New Data
confidence: 99%
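A common concrete form of such weighting is a convex combination of the old parameters and the estimate obtained from the new data D_t. The sketch below assumes that form; the weight and the parameter vectors are invented for illustration.

```python
import numpy as np

def combine(theta_old, theta_new, w_new):
    """Convex combination of old and newly estimated HMM parameters.
    w_new > 0.5 weights the new data more heavily, matching the
    'adaptation' regime the quoted passage describes; w_new < 0.5 would
    favour the older, more general model."""
    theta = (1.0 - w_new) * theta_old + w_new * theta_new
    return theta / theta.sum()        # keep the result a valid distribution

theta_old = np.array([0.5, 0.3, 0.2])   # generalized model (hypothetical)
theta_new = np.array([0.1, 0.1, 0.8])   # estimate from specializing data D_t
print(combine(theta_old, theta_new, w_new=0.8))
```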