2006
DOI: 10.1007/11790853_33
A Variable Initialization Approach to the EM Algorithm for Better Estimation of the Parameters of Hidden Markov Model Based Acoustic Modeling of Speech Signals

Cited by 6 publications (3 citation statements) · References 20 publications
“…In the sonar system, multiple measurements can be acquired by sensor arrays. Unlike previous studies using HMM [14, 22, 23], here, multiple measurements were exploited not only to determine the reliable initial values using the genetic algorithm (GA) but also to update parameters using the Baum–Welch algorithm; these are described comprehensively in the following section.…”
Section: Problem Description
confidence: 99%
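The excerpt above describes seeding Baum–Welch with GA-selected initial values. The citing paper's actual GA design is not given here, so the following is a minimal, hypothetical sketch of the general idea: a population of random discrete-HMM parameter sets is evolved with the forward log-likelihood as fitness, and the fittest set would then be handed to Baum–Welch for refinement. All function names (`ga_initialize`, `log_likelihood`, `blend`) and the crossover/mutation operators are illustrative assumptions, not from the paper.

```python
import math
import random

def log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | pi, A, B) for a discrete HMM."""
    N = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(N)]
    ll = 0.0
    for t in range(len(obs)):
        if t > 0:
            alpha = [B[j][obs[t]] * sum(alpha[i] * A[i][j] for i in range(N))
                     for j in range(N)]
        c = sum(alpha)          # scaling factor keeps alpha from underflowing
        ll += math.log(c)
        alpha = [a / c for a in alpha]
    return ll

def normalize(row):
    s = sum(row)
    return [x / s for x in row]

def random_params(rng, N, M):
    """One candidate: random stochastic pi, A (NxN), and B (NxM)."""
    pi = normalize([rng.random() + 0.1 for _ in range(N)])
    A = [normalize([rng.random() + 0.1 for _ in range(N)]) for _ in range(N)]
    B = [normalize([rng.random() + 0.1 for _ in range(M)]) for _ in range(N)]
    return pi, A, B

def blend(rng, ra, rb):
    """Arithmetic crossover of two distributions plus multiplicative mutation."""
    row = [0.5 * (a + b) * (1.0 + 0.2 * (rng.random() - 0.5))
           for a, b in zip(ra, rb)]
    return normalize(row)

def crossover(rng, p1, p2):
    pi = blend(rng, p1[0], p2[0])
    A = [blend(rng, r1, r2) for r1, r2 in zip(p1[1], p2[1])]
    B = [blend(rng, r1, r2) for r1, r2 in zip(p1[2], p2[2])]
    return pi, A, B

def ga_initialize(obs, N, M, pop_size=12, generations=20, seed=0):
    """Evolve initial HMM parameters; fitness is the forward log-likelihood."""
    rng = random.Random(seed)
    pop = [random_params(rng, N, M) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: log_likelihood(obs, *p), reverse=True)
        elite = pop[: pop_size // 2]              # truncation selection
        pop = elite + [crossover(rng, *rng.sample(elite, 2))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=lambda p: log_likelihood(obs, *p))
```

The returned `(pi, A, B)` would serve as the Baum–Welch starting point; because every candidate is renormalized after crossover and mutation, all parameters remain valid probability distributions throughout the search.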
“…This approach applies an HMM in order to classify each sentence in document C into a class corresponding to its co-author. The step (see Sub-section 3.2) for learning of HMM parameters {π, B, A} is heavily dependent on the initial values of these parameters (Wu, 1983; Xu and Jordan, 1996; Huda et al., 2006). Therefore, a good initial estimation of the HMM parameters can help achieve a higher classification accuracy.…”
Section: Initialization
confidence: 99%
“…Huda et al. (2006) proposed different initial guesses, and the solution that corresponds to the local maximum with the largest probability is selected. The authors (Aupetit et al., 2007; Fengqin et al., 2008; Xue et al., 2006) used the Particle Swarm Optimization (PSO) algorithm (Kennedy et al., 2001).…”
Section: New Generative and Discriminative Training Algorithms
confidence: 99%
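The multiple-initial-guess strategy attributed to Huda et al. amounts to multi-start EM: run Baum–Welch from several random initializations and keep the local maximum with the largest likelihood. Below is a toy, pure-Python sketch for a discrete-observation HMM, assuming nothing about the authors' actual implementation; all function names are illustrative.

```python
import math
import random

def forward_backward(obs, pi, A, B):
    """Scaled forward-backward pass; returns (log_likelihood, alpha, beta, scale)."""
    N, T = len(pi), len(obs)
    alpha = [[pi[i] * B[i][obs[0]] for i in range(N)]]
    scale = [sum(alpha[0])]
    alpha[0] = [a / scale[0] for a in alpha[0]]
    for t in range(1, T):
        row = [B[j][obs[t]] * sum(alpha[t - 1][i] * A[i][j] for i in range(N))
               for j in range(N)]
        scale.append(sum(row))
        alpha.append([a / scale[t] for a in row])
    beta = [[1.0] * N for _ in range(T)]
    for t in range(T - 2, -1, -1):
        for i in range(N):
            beta[t][i] = sum(A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j]
                             for j in range(N)) / scale[t + 1]
    return sum(math.log(s) for s in scale), alpha, beta, scale

def baum_welch(obs, pi, A, B, iters=15):
    """A few EM iterations from the given initial point; returns (ll, pi, A, B)."""
    N, T, M = len(pi), len(obs), len(B[0])
    for _ in range(iters):
        _, alpha, beta, _ = forward_backward(obs, pi, A, B)
        # State posteriors gamma[t][i], renormalized at each time step.
        gamma = []
        for t in range(T):
            row = [alpha[t][i] * beta[t][i] for i in range(N)]
            z = sum(row)
            gamma.append([g / z for g in row])
        # Expected transition counts, accumulated from per-step xi posteriors.
        num_A = [[0.0] * N for _ in range(N)]
        for t in range(T - 1):
            xi = [[alpha[t][i] * A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j]
                   for j in range(N)] for i in range(N)]
            z = sum(sum(r) for r in xi)
            for i in range(N):
                for j in range(N):
                    num_A[i][j] += xi[i][j] / z
        pi = gamma[0]
        A = [[num_A[i][j] / (sum(num_A[i]) + 1e-12) for j in range(N)]
             for i in range(N)]
        B = [[sum(gamma[t][i] for t in range(T) if obs[t] == k) /
              (sum(gamma[t][i] for t in range(T)) + 1e-12) for k in range(M)]
             for i in range(N)]
    ll, _, _, _ = forward_backward(obs, pi, A, B)
    return ll, pi, A, B

def multistart_baum_welch(obs, n_states, n_symbols, restarts=5, seed=0):
    """Run Baum-Welch from several random starts; keep the best local maximum."""
    rng = random.Random(seed)
    def rand_dist(n):
        w = [rng.random() + 0.1 for _ in range(n)]  # keep entries away from zero
        s = sum(w)
        return [x / s for x in w]
    best = None
    for _ in range(restarts):
        pi = rand_dist(n_states)
        A = [rand_dist(n_states) for _ in range(n_states)]
        B = [rand_dist(n_symbols) for _ in range(n_states)]
        result = baum_welch(obs, pi, A, B)
        if best is None or result[0] > best[0]:
            best = result
    return best
```

Since each EM run only climbs to the nearest local maximum of the likelihood, comparing the final log-likelihoods across restarts and keeping the largest is what makes the extra initializations pay off.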