International Conference on Acoustics, Speech, and Signal Processing
DOI: 10.1109/icassp.1990.115770

Some improvements in speech recognition algorithms based on HMM

Cited by 17 publications (25 citation statements)
References 4 publications
“…As noted in Section 1, Kriouile et al (1990) developed enhanced training algorithms for second-order HMMs. Using ORED, the HMM can also be trained by using standard first-order optimization algorithms.…”
Section: Fast Incremental Training (FIT) of Higher-Order HMMs (mentioning)
confidence: 99%
“…Note that, unlike the HMM1, both the previous and the current state determine the correct choice amongst the number of alternative probabilities on a given transition. Typically these probabilities are represented in a cubic structure, and the calculation of f(X_1^L | M) requires specialized algorithms to track the longer history of states (Kriouile et al, 1990).…”
Section: Assumption (mentioning)
confidence: 99%
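
The cubic transition structure and the longer state history mentioned in the quotation above can be made concrete with a short sketch. The Python fragment below is an illustration only, not code from Kriouile et al.; the names obs, pi, A1, A2, and B are assumed for this example. It computes the forward likelihood f(X_1^L | M) of a second-order HMM by carrying a forward variable over state pairs rather than single states.

import numpy as np

def hmm2_forward_likelihood(obs, pi, A1, A2, B):
    # obs: observation index sequence of length L >= 2
    # pi:  (N,)      initial state probabilities
    # A1:  (N, N)    first-order transitions, used for the second time step only
    # A2:  (N, N, N) second-order transitions A2[i, j, k] = P(s_t=k | s_{t-2}=i, s_{t-1}=j)
    # B:   (N, M)    emission probabilities B[state, symbol]
    # alpha[j, k] = P(o_1..o_t, s_{t-1}=j, s_t=k): the extra history a
    # first-order forward pass cannot carry is held in the state pair (j, k).
    alpha = (pi * B[:, obs[0]])[:, None] * A1 * B[:, obs[1]][None, :]
    for o in obs[2:]:
        # Sum out the oldest state i; keep the two most recent states (j, k).
        alpha = np.einsum('ij,ijk->jk', alpha, A2) * B[:, o][None, :]
    return alpha.sum()  # f(X_1^L | M)

Both the transition table and the per-step update grow with the cube of the number of states, which is the cost the quoted passage alludes to when it mentions specialized algorithms.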
“…For training higher order HMMs, only second order extensions are currently available [4,5,6], whereas the ORED approach allows all existing algorithms used for processing first order HMMs to be directly applied to higher order HMMs. We therefore expect identical results using either training algorithms extended to higher order HMMs or the ORED approach.…”
Section: Summary of the ORED and FIT Algorithms for Higher Order HMMs (mentioning)
confidence: 99%
“…One approach to this would be to enlarge the 2-dimensional transition matrix used for first order HMMs to a p-dimensional transition matrix for pth order HMMs. The Baum-Welch and Viterbi algorithms can then be expanded to optimise these higher order models [4,5,6]. In contrast, the ORED algorithm first reduces the higher order model to an equivalent first order model, and then trains it using an existing algorithm [1,2]. It does this by recognising that, if all pairs of states that are joined by transitions are replaced by a new unique state, and transitions are correspondingly mapped between these new states, the result will be a first order HMM equivalent to the original second order HMM.…”
Section: Summary of the ORED and FIT Algorithms for Higher Order HMMs (mentioning)
confidence: 99%
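
The order-reducing construction described in the last quotation can also be sketched briefly. The fragment below is an illustrative sketch under assumed names, not the published ORED algorithm: it turns a second-order transition array A2 into an equivalent first-order transition matrix by treating every state pair (i, j) as a new composite state and mapping A2[i, j, k] onto the composite transition (i, j) -> (j, k). In a full reduction each composite state (i, j) would also inherit the emission distribution of state j.

import numpy as np

def reduce_second_order_to_first(A2):
    # A2: (N, N, N) second-order transitions A2[i, j, k] = P(s_t=k | s_{t-2}=i, s_{t-1}=j)
    # Returns the list of composite states and the equivalent first-order
    # transition matrix over them.
    N = A2.shape[0]
    pairs = [(i, j) for i in range(N) for j in range(N)]  # composite states
    A1eq = np.zeros((N * N, N * N))
    for i in range(N):
        for j in range(N):
            for k in range(N):
                # Composite transition (i, j) -> (j, k) inherits A2[i, j, k].
                A1eq[i * N + j, j * N + k] = A2[i, j, k]
    return pairs, A1eq

Standard first-order Baum-Welch or Viterbi training can then be run on A1eq unchanged, which is the point the citing papers make about the ORED approach.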