2014
DOI: 10.1109/lsp.2014.2303136
Efficient One-Pass Decoding with NNLM for Speech Recognition

Cited by 25 publications (25 citation statements)
References 9 publications
“…One important practical issue associated with RNNLMs is the computational cost incurred in model training. This limits the quantity of data that can be used and the range of possible application areas, and has therefore drawn increasing research interest in recent years [2,11,12,5,13,10,14,15].…”
Section: Introduction (mentioning)
confidence: 99%
“…One technique that can be used to improve testing speed is to introduce the variance of the normalization term into the conventional cross-entropy objective function. This has been applied to the training of feedforward NNLMs, class-based RNNLMs [13,10,14], and full-output RNNLMs [16]. By minimizing the variance of the normalization term during training, the normalization term at the output layer can be ignored at testing time, yielding significant improvements in speed.…”
Section: Introduction (mentioning)
confidence: 99%
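As a hedged illustration of the objective these statements describe (the notation is mine, not taken from the cited papers): writing Z(h) for the softmax normalization term of history h, a variance-regularised training criterion can be sketched in LaTeX as

J(\theta) = -\frac{1}{N}\sum_{i=1}^{N} \log P(w_i \mid h_i)
            \;+\; \frac{\gamma}{2N}\sum_{i=1}^{N} \bigl(\log Z(h_i) - \overline{\log Z}\bigr)^2

where \overline{\log Z} is the batch mean of the log-normalization terms and \gamma is an assumed tuning weight. Once the second term drives \log Z(h) toward a constant, a word's score reduces to its raw output-layer activation plus a constant, so the softmax sum can be skipped at test time.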
“…Another type of solution to speed up the evaluation of NNLMs has been proposed in both [12] (variance regularisation) and [10] (self-norm). The variance of the softmax log-normalisation term is added to the objective function for optimisation.…”
Section: F-RNNLM With Variance Regularisation (mentioning)
confidence: 99%
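A hedged note on the distinction implied above, based on the usual formulations rather than the quoted text: self-normalisation in the style of [10] adds a term of the form \alpha (\log Z(h))^2, pushing \log Z(h) itself toward zero so that raw scores approximate log-probabilities, whereas variance regularisation in the style of [12] penalises only the spread of \log Z(h) around its mean, leaving its absolute level free.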
“…The second method allows the RNNLM to be used without softmax normalisation during testing, by training with an extra variance regularisation term in the training objective function. This approach was applied to feedforward NNLMs and class-based RNNLMs in previous work [12,10,13]. It can also be applied to full-output-layer RNNLMs.…”
Section: Introduction (mentioning)
confidence: 99%
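A minimal sketch of how such a variance-regularised loss and the resulting unnormalised test-time scoring might look, assuming a generic NNLM whose output layer produces raw logits; every name here (gamma, logits, target_ids) is illustrative, not taken from the cited systems:

import numpy as np

def variance_regularised_loss(logits, target_ids, gamma=0.1):
    """Cross-entropy plus (gamma/2) * variance of log Z over the batch.

    logits:     (batch, vocab) unnormalised output-layer scores
    target_ids: (batch,) index of the correct next word
    """
    # log Z(h) for each history in the batch (stabilised log-sum-exp)
    m = logits.max(axis=1, keepdims=True)
    log_z = m.squeeze(1) + np.log(np.exp(logits - m).sum(axis=1))

    # standard cross-entropy: -log P(w|h) = log Z(h) - logit of target word
    target_logits = logits[np.arange(len(target_ids)), target_ids]
    cross_entropy = np.mean(log_z - target_logits)

    # penalise the spread of log Z so it becomes (nearly) constant;
    # the constant can then be dropped from the word score at test time
    return cross_entropy + 0.5 * gamma * np.var(log_z)

def unnormalised_score(logits_for_history, word_id):
    # test-time lookup: raw logit used directly, skipping the softmax sum
    return logits_for_history[word_id]

In an actual decoder this would let each language-model query cost a single output-row lookup instead of a full softmax over the vocabulary, which is the one-pass decoding speed-up the cited statements refer to.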
“…The language model of S1-S10 is a word trigram language model, while S11 utilizes a feed-forward neural network language model with variance regularization [19]. Besides, S11 employs our own decoder [19] while the other systems employ the Kaldi decoder. The TWV results of our KWS systems after KST normalization are listed in Table 1.…”
Section: KWS Systems (mentioning)
confidence: 99%