2015
DOI: 10.5430/air.v4n2p72

Cross-language phoneme mapping for phonetic search keyword spotting in continuous speech of under-resourced languages

Abstract: As automatic speech recognition-based applications become increasingly common in a wide variety of market segments, there is a growing need to support more languages. However, for many languages, the language resources needed to train speech recognition engines are either limited or completely non-existent, and the process of acquiring or constructing new language resources is both long and costly. This paper suggests a methodology that enables Phonetic Search Keyword Spotting to be implemented in a large spee…

Cited by 5 publications (2 citation statements)
References 29 publications
“…Our previous works [8,9] reviewed existing technologies that utilize cross-language phoneme mapping to enable the use of statistically representative acoustic models from a well-resourced language in order to perform KWS in an under-resourced language. Examples of various implementations of this concept were proposed in several recent studies.…”
Section: Introduction
confidence: 99%
“…This idea was extended by Wang et al. (2015) for pre-training of hybrid speech recognition systems. Tetariy et al. (2015) proposed creating a phone mapping between languages (namely, English and Spanish in their experiments) to transfer a model pre-trained on a large dataset for use on a smaller dataset in another language. Compared to these methods, the methods investigated in Chapter 3 do not require a phone transcription for the corpus used for pre-training and do not require the existence of a phone mapping between languages.…”
Section: Training a spotter in a low-resource dataset setup
confidence: 99%
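The phone-mapping idea in the citation statements above can be sketched in a few lines: pronounce a keyword with the under-resourced language's phonemes, then rewrite each phoneme as its closest counterpart in the well-resourced language so the source acoustic model can score it. This is a minimal illustrative sketch, not the paper's implementation; the toy Spanish-to-English mapping table, phone symbols, and function name below are all hypothetical, and a real system would derive the mapping from acoustic or phonological similarity.

```python
# Hypothetical toy mapping from Spanish (under-resourced target) phonemes to
# rough English ARPAbet-style equivalents (well-resourced source). A value may
# expand to several source phones, e.g. the palatal nasal /ɲ/ -> "N Y".
PHONE_MAP = {
    "a": "AA", "e": "EH", "i": "IY", "o": "OW", "u": "UW",
    "rr": "R", "x": "HH", "ɲ": "N Y",
}

def map_pronunciation(target_phones):
    """Rewrite a target-language pronunciation using source-language phones,
    so the keyword can be searched with the source acoustic model."""
    mapped = []
    for phone in target_phones:
        # Unmapped phones pass through unchanged; a mapped phone may expand
        # into a sequence of source phones.
        mapped.extend(PHONE_MAP.get(phone, phone).split())
    return mapped

# Spanish keyword "niño" /n i ɲ o/ rendered in source-model phones:
print(map_pronunciation(["n", "i", "ɲ", "o"]))  # ['n', 'IY', 'N', 'Y', 'OW']
```

The mapped pronunciation is then used as the search term in the phonetic lattice produced by the source-language recognizer, which is what lets KWS run without training an acoustic model for the under-resourced language.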