Interspeech 2016
DOI: 10.21437/interspeech.2016-480

Combining Feature and Model-Based Adaptation of RNNLMs for Multi-Genre Broadcast Speech Recognition

Abstract: Recurrent neural network language models (RNNLMs) have consistently outperformed n-gram language models when used in automatic speech recognition (ASR). This is because RNNLMs provide robust parameter estimation through the use of a continuous-space representation of words, and can generally model longer context dependencies than n-grams. The adaptation of RNNLMs to new domains remains an active research area, and the two main approaches are: feature-based adaptation, where the input to the RNNLM is augmented w…
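The abstract is cut off at "augmented w…"; the sentence is introducing feature-based adaptation, where an auxiliary feature vector is supplied to the network alongside each word. As a minimal PyTorch sketch of that general idea (the class name, dimensions, and the choice of a single per-utterance feature vector f are assumptions for illustration, not the paper's implementation):

```python
# Minimal sketch of feature-based RNNLM adaptation: every input word
# embedding is augmented with an auxiliary feature vector f.
import torch
import torch.nn as nn

class FeatureAugmentedRNNLM(nn.Module):
    def __init__(self, vocab_size, embed_dim, feat_dim, hidden_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # The recurrent layer sees [word embedding ; feature vector].
        self.rnn = nn.RNN(embed_dim + feat_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, words, f, h0=None):
        # words: (batch, seq_len) word ids; f: (batch, feat_dim)
        e = self.embed(words)                           # (batch, seq, embed_dim)
        f_rep = f.unsqueeze(1).expand(-1, e.size(1), -1)
        x = torch.cat([e, f_rep], dim=-1)               # augment every time step
        h, hn = self.rnn(x, h0)
        return self.out(h), hn                          # next-word logits
```

Here f could be, for instance, a genre or topic representation computed once per broadcast show; the paper's actual feature definitions are not visible on this page.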

Cited by 20 publications (43 citation statements, from publications spanning 2017–2024). References 27 publications.
“…In line with previous work [6], we hereby consider using a separate weight matrix W^(hf) at the hidden layer. The hidden state vector equation then becomes:…”
Section: Feature Sub-network (mentioning)
confidence: 99%
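The quoted equation is cut off at the snippet boundary. In standard feature-adapted RNNLM formulations, a hidden state computed with a separate feature weight matrix W^(hf) takes a form like the following (a reconstruction under that assumption, not a verbatim quote from the citing paper):

$$\mathbf{h}_t = \sigma\!\left(\mathbf{W}^{(hx)}\,\mathbf{x}_t + \mathbf{W}^{(hh)}\,\mathbf{h}_{t-1} + \mathbf{W}^{(hf)}\,\mathbf{f}\right)$$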
“…There are four main approaches of adding features to a RNNLM [2,18,4,19,6]. Taking f to be the feature vectors, these approaches are:…”
Section: Feature-based RNNLM Adaptation (mentioning)
confidence: 99%
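The quoted enumeration of the four approaches is truncated. Two of the injection points are visible elsewhere on this page: concatenating f with the input embedding, and adding it at the hidden layer through a separate matrix W^(hf). A sketch contrasting just those two options (the class and parameter names are illustrative assumptions, not the cited papers' code):

```python
import torch
import torch.nn as nn

class FeatureInjectionRNNLM(nn.Module):
    """Illustrative RNN LM cell with two feature-injection points:
    'input'  - concatenate f with the word embedding,
    'hidden' - add W_hf @ f inside the recurrence via a separate matrix.
    """
    def __init__(self, vocab_size, embed_dim, feat_dim, hidden_dim, mode="hidden"):
        super().__init__()
        self.mode = mode
        self.embed = nn.Embedding(vocab_size, embed_dim)
        in_dim = embed_dim + feat_dim if mode == "input" else embed_dim
        self.W_hx = nn.Linear(in_dim, hidden_dim)
        self.W_hh = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.W_hf = nn.Linear(feat_dim, hidden_dim, bias=False)  # 'hidden' mode only
        self.out = nn.Linear(hidden_dim, vocab_size)

    def step(self, word_ids, f, h_prev):
        # word_ids: (batch,), f: (batch, feat_dim), h_prev: (batch, hidden_dim)
        e = self.embed(word_ids)
        x = torch.cat([e, f], dim=-1) if self.mode == "input" else e
        pre = self.W_hx(x) + self.W_hh(h_prev)
        if self.mode == "hidden":
            pre = pre + self.W_hf(f)  # h_t = sigma(W_hx x_t + W_hh h_{t-1} + W_hf f)
        h = torch.sigmoid(pre)
        return self.out(h), h
```

In "hidden" mode this corresponds to the W^(hf) equation quoted above; in "input" mode it corresponds to appending the feature vector to the word embedding, as in the abstract's feature-based adaptation.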