2016
DOI: 10.1016/j.procs.2016.04.037
Lithuanian Broadcast Speech Transcription Using Semi-supervised Acoustic Model Training

Cited by 12 publications (7 citation statements)
References 21 publications
“…They apply semi-supervised learning for lexicon learning as well as acoustic modeling. Similarly, Veselỳ et al. (2018) and Lileikytė et al. (2016) … (Karita et al., 2019). In both cases, they assume that the encoded speech features in the current minibatch are sampled from one distribution and the encoded text features in the current minibatch are sampled from a second distribution.…”
Section: Related Work
confidence: 99%
“…Nonetheless, important research is being carried out in the field of speech annotation, which is the most labour- and time-consuming activity. The majority of investigators demonstrate capabilities in the transcription of broadcast speech (Esteve et al., 2010; Lileikytė et al., 2016; Mansikkaniemi et al., 2017) and in the extent of annotation and linkage (Johannessen et al., 2007).…”
Section: Related Work
confidence: 99%
“…Semi-supervised acoustic model training techniques, in which a small amount of reference data with annotations is used to bootstrap an ASR system that provides training labels for untranscribed speech data, have been researched intensively, and various training strategies and data selection criteria have been proposed [26,27,28,29,30,31,32,33,34]. These techniques are especially useful for increasing the available training data when building ASR systems for under-resourced languages with a minimal amount of manually annotated data.…”
Section: Introduction
confidence: 99%
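The excerpt above outlines the standard self-training recipe: train a seed model on the small annotated set, decode the untranscribed pool, keep only hypotheses that clear a confidence-based selection criterion, and retrain. The sketch below is a toy illustration of that loop, not any published system's implementation: the `ToyASR` model, its word-overlap "confidence", and the sample utterances are all invented stand-ins for a real acoustic model and decoder.

```python
# Toy sketch of semi-supervised (self-training) acoustic model training.
# Everything here is a hypothetical stand-in: a real system would train an
# acoustic model on audio and score lattice/posterior confidences instead.

from dataclasses import dataclass, field


@dataclass
class ToyASR:
    """Stand-in for an acoustic model: 'training' just memorises transcripts."""
    lexicon: dict = field(default_factory=dict)

    def train(self, labelled):
        # labelled: list of (utterance, transcript) pairs
        for utt, transcript in labelled:
            self.lexicon[utt] = transcript

    def decode(self, utt):
        """Return (hypothesis, confidence); confidence is a crude word-overlap
        (Jaccard) score against the best-matching known transcript."""
        best, best_conf = "", 0.0
        words = set(utt.split())
        for transcript in self.lexicon.values():
            tw = set(transcript.split())
            union = words | tw
            conf = len(words & tw) / len(union) if union else 0.0
            if conf > best_conf:
                best, best_conf = transcript, conf
        return best, best_conf


def self_train(seed, unlabelled, rounds=3, threshold=0.5):
    """Bootstrap from a small annotated seed set, then iteratively add
    automatically transcribed utterances whose confidence clears the
    threshold (one of many possible data selection criteria)."""
    model = ToyASR()
    labelled = list(seed)
    pool = list(unlabelled)
    for _ in range(rounds):
        model.train(labelled)
        kept, rest = [], []
        for utt in pool:
            hyp, conf = model.decode(utt)
            (kept if conf >= threshold else rest).append((utt, hyp))
        labelled += kept          # accepted pseudo-labels join the training set
        pool = [u for u, _ in rest]
        if not kept:              # nothing new selected: stop early
            break
    return model, labelled
```

The confidence threshold trades off data quantity against label quality; the papers cited in the excerpt explore this and other selection criteria in depth.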