ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp39728.2021.9414800
Domain-Aware Neural Language Models for Speech Recognition

Abstract: As voice assistants become more ubiquitous, they are increasingly expected to support and perform well on a wide variety of use-cases across different domains. We present a domain-aware rescoring framework suitable for achieving domain adaptation during second-pass rescoring in production settings. In our framework, we fine-tune a domain-general neural language model on several domains, and use an LSTM-based domain classification model to select the appropriate domain-adapted model to use for second-pass rescoring…
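The flow the abstract describes can be pictured with a short sketch. The function names, interfaces, and the lm_weight value below are illustrative assumptions, not the paper's implementation: an LSTM classifier picks a domain for the utterance, and the matching domain-adapted neural LM rescores the first-pass n-best list.

```python
import torch

def rescore_nbest(nbest, first_pass_scores, domain_classifier, domain_lms, lm_weight=0.5):
    """nbest: list of token-id tensors; domain_lms: dict mapping domain id -> LM
    callable that returns the total log-probability of a hypothesis."""
    # 1) Classify the utterance, e.g. from the top first-pass hypothesis.
    domain_logits = domain_classifier(nbest[0].unsqueeze(0))
    domain = int(domain_logits.argmax(dim=-1))
    lm = domain_lms[domain]  # domain-adapted neural LM selected for this utterance
    # 2) Rescore every hypothesis with the selected LM and interpolate with
    #    the first-pass score.
    combined = []
    for hyp, fp_score in zip(nbest, first_pass_scores):
        lm_score = float(lm(hyp.unsqueeze(0)))
        combined.append(fp_score + lm_weight * lm_score)
    # 3) Return hypotheses ordered by the combined score, best first.
    order = sorted(range(len(nbest)), key=lambda i: combined[i], reverse=True)
    return [nbest[i] for i in order]
```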

Cited by 14 publications (3 citation statements) | References 22 publications (26 reference statements)
“…To select the rescoring LM, we use the domain-aware rescoring framework described in [22] to differentiate between the utterances with contact names and generic ones. For the generic utterances, we use an NCE-based neural LM (NLM) [23] trained on 80 million utterances from live traffic.…”
Section: Second-pass Rescoring
confidence: 99%
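The statement above mentions an NCE-based neural LM. As a generic reminder of what that objective looks like (a sketch, not the cited system's code), noise-contrastive estimation trains the LM to separate the observed next word from k words drawn from a noise distribution q using a binary logistic loss:

```python
import math
import torch
import torch.nn.functional as F

def nce_loss(score_data, score_noise, log_q_data, log_q_noise, k):
    """score_*: unnormalized model log-scores s(w|h); log_q_*: log q(w) under the
    noise distribution. Shapes: [batch] for the data terms, [batch, k] for the noise terms."""
    log_k = math.log(k)
    # Observed word: push the classifier towards "came from the data".
    data_term = F.logsigmoid(score_data - log_q_data - log_k)
    # k sampled words: push the classifier towards "came from the noise distribution".
    noise_term = F.logsigmoid(-(score_noise - log_q_noise - log_k)).sum(dim=-1)
    return -(data_term + noise_term).mean()
```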
“…Domain classifiers and other auxiliary models can be used to improve generalization properties. This, however, results in a more complex second-pass architecture and higher maintenance costs [16]. Including textual context directly in a neural model has been shown effective in many tasks such as document classification [17], language modeling [18], and acoustic modeling [19].…”
Section: Introduction
confidence: 99%
“…Various methods have been proposed to mitigate this issue, ranging from using a mixture of domain experts [4], context-based interpolation weights [5], and second-pass rescoring through domain-adapted models [6] to feature-based domain adaptation [7]. In [8, 9], user-provided speech patterns were leveraged for on-the-fly adaptation.…”
Section: Introduction
confidence: 99%
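Among the methods this last statement lists, context-based interpolation is easy to make concrete. The sketch below is a generic illustration (the weight network and interfaces are assumptions, not any cited paper's code): a small network maps an utterance-context vector to mixture weights over per-domain LMs, and the hypothesis score is the log of the resulting mixture.

```python
import torch

def interpolated_log_prob(context_vec, hyp, domain_lms, weight_net):
    """domain_lms: list of callables returning log P_d(hyp);
    weight_net: maps a context vector to logits over the domains."""
    lam = torch.log_softmax(weight_net(context_vec), dim=-1)    # log lambda_d(context)
    per_domain = torch.stack([lm(hyp) for lm in domain_lms])    # log P_d(hyp), shape [num_domains]
    return torch.logsumexp(lam + per_domain, dim=0)             # log of sum_d lambda_d(context) * P_d(hyp)
```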