2013
DOI: 10.1007/978-3-642-39593-2_7

Statistical Error Correction Methods for Domain-Specific ASR Systems

Cited by 18 publications (15 citation statements)
References 8 publications

“…Furthermore, each component must be implemented independently (Paulik et al., 2008; Škodová et al., 2012). Some post-processing studies have been conducted using the N-gram language model; however, the statistics-based method requires a large amount of data and cannot consider the context (Cucu et al., 2013; Bassil and Semaan, 2012).…”
Section: Conventional Methodology
confidence: 99%
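
As a rough illustration of the statistics-based post-processing this excerpt refers to, the sketch below trains a toy bigram language model and uses it to choose between a recognized word and alternatives from a hand-made confusion set. The corpus, the confusion pairs, and the correct() helper are all invented for illustration and do not come from the cited papers.

```python
import math
from collections import Counter

# Toy in-domain corpus for the bigram model (illustrative only).
corpus = [
    "turn on the living room light",
    "turn off the kitchen light",
    "set the living room temperature to twenty",
]

unigrams, bigrams = Counter(), Counter()
for sent in corpus:
    toks = ["<s>"] + sent.split() + ["</s>"]
    unigrams.update(toks[:-1])           # history counts for the bigram denominator
    bigrams.update(zip(toks, toks[1:]))

def bigram_logprob(tokens, alpha=1.0):
    """Add-alpha smoothed bigram log-probability of a token sequence."""
    toks = ["<s>"] + tokens + ["</s>"]
    vocab = len(unigrams) + 1
    return sum(
        math.log((bigrams[(p, c)] + alpha) / (unigrams[p] + alpha * vocab))
        for p, c in zip(toks, toks[1:])
    )

# Hypothetical confusion set harvested from past recognition errors.
confusions = {"lie": ["light"], "sat": ["set"]}

def correct(hypothesis):
    """Keep each word or swap it for the confusion-set variant the LM prefers."""
    tokens = hypothesis.split()
    for i, word in enumerate(tokens):
        candidates = [word] + confusions.get(word, [])
        tokens[i] = max(
            candidates,
            key=lambda c: bigram_logprob(tokens[:i] + [c] + tokens[i + 1:]),
        )
    return " ".join(tokens)

print(correct("turn on the living room lie"))  # expected: "turn on the living room light"
```
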
“…Early work [12] proposed using domain-specific pre-parsed "exemplar sentences", which might be enough for closed domains with a small number of possible intents. Another approach is using a statistical machine translation model trained on raw/corrected ASR output pairs to correct future errors [3]. Recent work uses language models to correct the output of ASR systems.…”
Section: Related Work
confidence: 99%
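
The statistical machine translation approach cited as [3] learns corrections from pairs of raw and manually corrected ASR output. The sketch below reduces that idea to its core: mine word-level substitution rules from such pairs with a standard alignment utility and apply them greedily to new hypotheses. The example pairs and the correct() function are hypothetical and stand in for a full phrase-based SMT pipeline.

```python
import difflib
from collections import Counter

# Hypothetical parallel data: (raw ASR output, manually corrected reference).
pairs = [
    ("please call doctor smith at nine a m", "please call dr. smith at 9 a.m."),
    ("the patient has a temperature of one hundred and two",
     "the patient has a temperature of 102"),
]

# Mine word-level substitution rules from the aligned pairs.
rules = Counter()
for raw, ref in pairs:
    raw_toks, ref_toks = raw.split(), ref.split()
    matcher = difflib.SequenceMatcher(a=raw_toks, b=ref_toks)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "replace":
            rules[(tuple(raw_toks[i1:i2]), tuple(ref_toks[j1:j2]))] += 1

def correct(hypothesis):
    """Greedily apply the mined substitution rules, longest raw side first."""
    tokens = hypothesis.split()
    ordered = sorted(rules, key=lambda rule: -len(rule[0]))
    out, i = [], 0
    while i < len(tokens):
        for raw_side, ref_side in ordered:
            if tuple(tokens[i:i + len(raw_side)]) == raw_side:
                out.extend(ref_side)
                i += len(raw_side)
                break
        else:
            out.append(tokens[i])
            i += 1
    return " ".join(out)

print(correct("call doctor smith at nine a m tomorrow"))
# expected: "call dr. smith at 9 a.m. tomorrow"
```
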
“…Error correction has been applied in automatic speech recognition (ASR), which post-processes the outputs of the ASR system to achieve a lower word error rate (WER) (Ringger and Allen, 1996; Cucu et al., 2013; D'Haro and Banchs, 2016; Tanaka et al., 2018). Taking the recognized sentence from the ASR system as source and the ground-truth sentence as target, ASR correction can be formulated as a sequence-to-sequence problem and modeled with autoregressive (Mani et al., 2020; Liao et al., 2020) or non-autoregressive (Leng et al., 2021) generation.…”
Section: Introduction
confidence: 99%
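
The sequence-to-sequence formulation in this last excerpt treats the recognized sentence as the source and the reference transcript as the target. A minimal inference sketch, assuming the Hugging Face transformers library, is shown below; the checkpoint name and the task prefix are placeholders, and an un-fine-tuned model will not actually repair errors until it has been trained on (recognized, ground-truth) pairs.

```python
# Assumes the Hugging Face `transformers` library; the checkpoint and the
# "fix asr error:" prefix are placeholders, not a published correction model.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "t5-small"  # stand-in; a real system is fine-tuned on (hypothesis, reference) pairs
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Source = recognized sentence from the ASR system; the target during training
# would be the ground-truth transcript.
hypothesis = "fix asr error: the whether in new york is sunny today"
inputs = tokenizer(hypothesis, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
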