2019
DOI: 10.1587/transinf.2018edp7222
Feature Based Domain Adaptation for Neural Network Language Models with Factorised Hidden Layers

Abstract: Language models are a key technology in various tasks, such as speech recognition and machine translation. They are usually used on texts covering various domains, and as a result domain adaptation has been a long-standing challenge in language model research. With the rising popularity of neural network based language models, many methods have been proposed in recent years. These methods can be separated into two categories: model based and feature based adaptation methods. Feature based domain adaptation has …

Cited by 6 publications (11 citation statements). References 33 publications (43 reference statements).
“…In addition, we plan to apply the proposed model to language models for acoustic speech recognition to adapt them between different domains composed of various speaking styles and topics, such as natural conversations and lecture speeches. Second, to improve our DA-RADMM to work without prior knowledge of domain attributes, we can introduce the estimation of domain attribute indicators using other domain estimation techniques [31]. Finally, we can introduce more sophisticated techniques to train experts, including adversarial techniques [22], [24], kernel techniques [19], or sub-space approaches [20].…”
Section: Discussion
confidence: 99%
“…This paper mainly focuses on domains in the linguistic scene, which is related more to language modeling. Most previous research on multi-domain speech recognition or multi-domain language modeling focuses on domain adaptation techniques [2], [3], [4], [5], [6], [7]. These adaptation techniques, which are only suitable for neural network models, fall into two categories: feature based approaches and model based approaches.…”
Section: Prior Work
confidence: 99%
“…These adaptation techniques, which are only suitable for neural network models, fall into two categories: feature based approaches and model based approaches. In feature based approaches, unsupervised features, such as latent Dirichlet allocation (LDA) [8], probabilistic latent semantic analysis (PLSA) [9], or hierarchical Dirichlet process (HDP) [10], are used as auxiliary features to represent implicit domain information [4], [7]. In addition, a one-hot encoding of ground-truth domain information is also used when it is available during inference [2], [3], [7].…”
Section: Prior Work
confidence: 99%
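The mechanics of feature based adaptation described above, an unsupervised topic feature appended to the network input, can be sketched in a few lines. This is a minimal illustration, not the paper's architecture: the dimensions, the random weights, and the toy topic posterior are all hypothetical, and a fixed LDA-style topic vector stands in for a real unsupervised feature extractor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not taken from the paper).
vocab, emb_dim, topic_dim, hidden = 50, 8, 3, 16

# Word embeddings and one recurrent-style hidden layer; note the
# input weight matrix is widened to accept the auxiliary feature.
E = rng.normal(size=(vocab, emb_dim))
W = rng.normal(size=(hidden, emb_dim + topic_dim))
U = rng.normal(size=(hidden, hidden))
V = rng.normal(size=(vocab, hidden))

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def step(word_id, topic_vec, h):
    """One LM step: the word embedding is concatenated with an
    unsupervised domain feature (e.g. an LDA topic posterior)."""
    x = np.concatenate([E[word_id], topic_vec])
    h = np.tanh(W @ x + U @ h)
    return softmax(V @ h), h

# Toy topic posterior for the current document (hypothetical).
topic = np.array([0.7, 0.2, 0.1])
h = np.zeros(hidden)
probs, h = step(3, topic, h)  # next-word distribution over the vocab
```

The auxiliary feature changes only the input layer; the rest of the model is unchanged, which is why such features can be swapped (LDA, PLSA, HDP, or a one-hot domain label) without retraining from a different architecture.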
“…Various methods have been proposed to mitigate this issue, ranging from using a mixture of domain experts [4], context based interpolation weights [5], and second-pass rescoring through domain-adapted models [6] to feature based domain adaptation [7]. In [8,9], user-provided speech patterns were leveraged for on-the-fly adaptation.…”
Section: Introduction
confidence: 99%
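The context based interpolation mentioned in the statement above can be sketched as mixing the next-word distributions of per-domain expert LMs with weights predicted from context. This is a schematic illustration only: the two expert distributions and the context logits are hypothetical stand-ins for real model outputs.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

# Hypothetical next-word distributions from two domain-expert LMs
# over a toy 3-word vocabulary.
p_news = np.array([0.5, 0.3, 0.2])
p_chat = np.array([0.1, 0.2, 0.7])

# Interpolation weights predicted from context; here the logits are
# assumed to come from some context encoder (not shown).
context_logits = np.array([1.2, -0.3])
w = softmax(context_logits)

# Mixture of experts: a convex combination of expert distributions.
p_mix = w[0] * p_news + w[1] * p_chat
```

Because the weights form a convex combination, `p_mix` remains a valid probability distribution; the adaptation lives entirely in how the weights track the current context.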