2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2014.6854534

Paraphrastic neural network language models

Abstract: Expressive richness in natural languages presents a significant challenge for statistical language models (LM). As multiple word sequences can represent the same underlying meaning, only modelling the observed surface word sequence can lead to poor context coverage. To handle this issue, paraphrastic LMs were previously proposed to improve the generalization of back-off n-gram LMs. Paraphrastic neural network LMs (NNLM) are investigated in this paper. Using a paraphrastic multi-level feedforward NNLM modelling…
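The abstract cuts off above, but the core idea, as the citation statements below describe, is to re-distribute training statistics over paraphrase variants of the observed word sequences. One illustrative way to write such a paraphrastic count, with notation chosen here rather than taken from the paper's exact formulation:

$$\tilde{C}(u) \;=\; \sum_{S \in \mathcal{D}} \; \sum_{S' \in \mathcal{P}(S)} P(S' \mid S)\, c_u(S'),$$

where $\mathcal{D}$ is the observed training data, $\mathcal{P}(S)$ a set of paraphrase variants of sentence $S$ (including $S$ itself), $P(S' \mid S)$ a paraphrase-model probability, and $c_u(S')$ the number of occurrences of word sequence $u$ in variant $S'$. The LM is then estimated from $\tilde{C}$ instead of the raw surface counts.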

Cited by 1 publication (3 citation statements). References 21 publications (37 reference statements).
“…Experimental results suggest the proposed method is also effective in improving RNNLM performance, consistent with the improvements reported in the earlier research on back-off n-gram LMs [14] and feedforward NNLMs [15]. Significant error rate reductions of 1.94% absolute (12% relative) were obtained on a state-of-the-art large vocabulary speech recognition task.…”
Section: Conclusion and Relation To Prior Work (supporting)
confidence: 85%
“…This form of intuitive and interpretable counts smoothing automatically re-distributes statistics to alternative expressions of the same observed word sequence. It was previously exploited to improve the context coverage and generalization for several forms of LMs that do not explicitly capture the expressive richness related variability in natural languages, including back-off n-gram LMs [14], and feedforward NNLMs [15].…”
Section: Generation Of Paraphrase Variants (mentioning)
confidence: 99%
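To make the counts-smoothing reading concrete, below is a minimal Python sketch of fractional n-gram counts accumulated over paraphrase variants, following the equation given after the abstract. The paraphrase table, the variants() scheme, and all names are illustrative stand-ins; the cited work derives its variants from a statistically trained paraphrase model rather than a hand-written table.

```python
# Illustrative sketch of paraphrastic counts smoothing for an n-gram LM.
# Fractional n-gram counts are re-distributed over paraphrase variants of
# each observed training sentence, weighted by a paraphrase probability.
# The paraphrase table below is a toy stand-in, not the cited method's model.

from collections import defaultdict
from itertools import islice

# Hypothetical paraphrase table: word sequence -> [(variant, probability), ...].
# Probabilities over the variants of a sequence (including itself) sum to 1.
PARAPHRASES = {
    ("thank", "you", "very", "much"): [
        (("thank", "you", "very", "much"), 0.6),
        (("thanks", "a", "lot"), 0.4),
    ],
}

def variants(sentence):
    """Yield (variant_sentence, probability) pairs for one observed sentence.

    Minimal scheme: if the whole sentence matches a paraphrase-table entry,
    spread its probability mass over the listed variants; otherwise keep the
    surface form with probability 1.
    """
    key = tuple(sentence)
    for variant, prob in PARAPHRASES.get(key, [(key, 1.0)]):
        yield list(variant), prob

def ngrams(words, n):
    """All n-grams (as tuples) of a word list."""
    return zip(*(islice(words, i, None) for i in range(n)))

def paraphrastic_counts(corpus, n=3):
    """Accumulate fractional n-gram counts over paraphrase variants."""
    counts = defaultdict(float)
    for sentence in corpus:
        for variant, prob in variants(sentence):
            for gram in ngrams(variant, n):
                counts[gram] += prob  # fractional count, weighted by variant prob
    return counts

if __name__ == "__main__":
    corpus = [["thank", "you", "very", "much"]]
    for gram, c in sorted(paraphrastic_counts(corpus, n=2).items()):
        print(gram, round(c, 2))
```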