Cross-dialectal data sharing for acoustic modeling in Arabic speech recognition (2005)
DOI: 10.1016/j.specom.2005.01.004

Cited by 40 publications (28 citation statements); references 18 publications. Citing publications appeared between 2008 and 2020.

Citation statements:
“…Therefore, many modern speech recognition systems perform the recognition process without prior segmentation. These systems tend to be based on feature extraction techniques such as MFCC [6, 11-14], FFT [15-17], HFCC [18], and Linear Predictive Coefficients (LPC) [8].…”
Section: Arabic Phoneme Segmentation Techniques
confidence: 99%
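
The segmentation-free pipeline described in the statement above starts from frame-level acoustic features. The following is a minimal sketch of MFCC extraction, assuming the librosa library and a hypothetical utterance.wav file; the 25 ms window / 10 ms shift framing is a common default, not a value reported in the cited systems.

```python
# Minimal MFCC extraction sketch for a segmentation-free recognizer front end.
# Assumes librosa is installed; "utterance.wav" is a hypothetical example file.
import librosa

def extract_mfcc(wav_path, sr=16000, n_mfcc=13):
    """Return a (num_frames, n_mfcc) matrix of MFCC feature vectors."""
    signal, sr = librosa.load(wav_path, sr=sr)        # load and resample to 16 kHz
    mfcc = librosa.feature.mfcc(
        y=signal, sr=sr, n_mfcc=n_mfcc,
        n_fft=400, hop_length=160,                    # 25 ms window, 10 ms shift
    )
    return mfcc.T                                     # one feature vector per frame

if __name__ == "__main__":
    features = extract_mfcc("utterance.wav")          # hypothetical file
    print(features.shape)                             # e.g. (num_frames, 13)
```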
“…Current literature solves the heterophone HMM training problem using three broad approaches: diacritize the words and then model them phonemically (Kirchhoff and Vergyri 2005). We refer to this approach as the Simple Phoneme (SP) method, since the standard phoneme is the pronunciation unit.…”
Section: Introduction
confidence: 99%
“…In one phase, transcriptions are written without diacritics. Afterwards, automatic diacritization is performed to estimate the missing diacritic marks (at 15%-25% WER), as in [2] and [3]. Finally, the mapping from diacritized text to phonetic transcription is almost one-to-one.…”
Section: Introduction
confidence: 99%
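
The "almost one-to-one" mapping from diacritized text to phonetic transcription can be pictured as a per-symbol lookup with a few special cases. The sketch below uses an illustrative subset of Buckwalter-style symbols and phoneme labels chosen for this example; it is not the mapping used in the cited systems.

```python
# Simplified grapheme-to-phoneme sketch for fully diacritized Arabic text
# (Buckwalter-style transliteration). The symbol inventory is an illustrative
# subset, not the full mapping used in the cited work.
GRAPHEME_TO_PHONEME = {
    "k": "k", "t": "t", "b": "b",      # consonants
    "A": "aa",                         # alif: long vowel
    "a": "a", "i": "i", "u": "u",      # short-vowel diacritics
    "o": None,                         # sukun: no vowel
    "~": "<geminate>",                 # shadda: doubles the preceding consonant
}

def diacritized_to_phonemes(word):
    """Map each diacritized symbol to a phoneme; shadda repeats the last one."""
    phones = []
    for ch in word:
        p = GRAPHEME_TO_PHONEME.get(ch)
        if p == "<geminate>" and phones:
            phones.append(phones[-1])  # gemination: repeat previous phoneme
        elif p:
            phones.append(p)
    return phones

if __name__ == "__main__":
    # Buckwalter "kataba" ('he wrote') -> ['k', 'a', 't', 'a', 'b', 'a']
    print(diacritized_to_phonemes("kataba"))
```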