2016
DOI: 10.1109/taslp.2016.2562505
Semi-Supervised Acoustic Model Training by Discriminative Data Selection From Multiple ASR Systems’ Hypotheses

Cited by 12 publications (8 citation statements)
References: 60 publications (53 reference statements)
“…Yet another possibility to improve the semi-supervised training is to use the multi-system transcripts from system combination [13], or for 'agreement analysis' [14]. Also, having the 'captions' available can be helpful [15,16].…”
Section: Introduction (mentioning)
confidence: 99%
“…small amount of training data as in MALORCA) [23], [24], [25], [26]. For acoustic modeling, researchers have applied various data-selection schemes to utilize the additional unlabeled data [27], [28], [29], [30], [31], [32]. In this paper, we apply a technique built specifically to account for semantics of the ATM domain [27].…”
Section: B. Supervised and Unsupervised Learning (mentioning)
confidence: 99%
“…More complex data selection methods were also proposed for SSL data selection. In [8], multiple ASR systems were trained to automatically transcribe the speech data, and a cascade of conditional random field models was used to combine the ASR hypotheses from the different systems and judge the reliability of the automatically transcribed data. [9] proposed the global entropy reduction maximization (GERM) method.…”
Section: Introduction (mentioning)
confidence: 99%
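
The excerpts above describe selecting automatically transcribed speech for semi-supervised acoustic model training based on how well hypotheses from multiple ASR systems agree. As a rough illustration of that idea only (not the CRF-based hypothesis combination of the cited paper), the Python sketch below keeps an utterance when every pair of system hypotheses agrees within a word error rate threshold; the data layout, function names, and the 10% threshold are assumptions made for this example.

def edit_distance(a, b):
    # Word-level Levenshtein distance between two token lists.
    prev = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        cur = [i]
        for j, wb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (wa != wb))) # substitution
        prev = cur
    return prev[-1]

def select_agreeing_utterances(hypotheses, max_wer=0.10):
    # hypotheses: dict mapping utterance id -> list of transcripts, one per
    # ASR system.  Keep an utterance only if every pair of system hypotheses
    # agrees within max_wer; use the first system's transcript as the label.
    selected = {}
    for utt_id, texts in hypotheses.items():
        tokenized = [t.split() for t in texts]
        ref_len = max(len(t) for t in tokenized) or 1
        worst = max(
            (edit_distance(tokenized[i], tokenized[j]) / ref_len
             for i in range(len(tokenized))
             for j in range(i + 1, len(tokenized))),
            default=0.0,
        )
        if worst <= max_wer:
            selected[utt_id] = texts[0]
    return selected

# Toy example: two systems agree on utt1 but disagree on utt2.
hyps = {
    "utt1": ["turn left heading two seven zero",
             "turn left heading two seven zero"],
    "utt2": ["climb to flight level three four zero",
             "climb two flight level three forty"],
}
print(select_agreeing_utterances(hyps))  # keeps only utt1

In this toy run the first utterance is retained because both hypotheses match exactly, while the second is discarded because the substitution and deletion errors between the two systems push the pairwise WER above the threshold.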