6th European Conference on Speech Communication and Technology 1999
DOI: 10.21437/eurospeech.1999-423
Using detailed linguistic structure in language modelling

Cited by 11 publications (2 citation statements)
References 8 publications
“…This study first evaluated prompt performance with the perplexity measure [6]. Perplexity is a standard metric for evaluating the quality of language models and quantifies how “surprised” the model is when seeing a passage of text [24]. A passage of text with too high a perplexity may contain language errors or nonsensical content [30], while too low a perplexity may signify repetitive and uninteresting text [34].…”
Section: Study I: Prompt Engineering (mentioning)
confidence: 99%
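The perplexity measure the excerpt refers to can be made concrete with a small sketch. This is a minimal illustration, not code from the cited study; the token probabilities below are invented placeholder values, and a real evaluation would obtain each p(w_i | context) from the language model under test.

```python
import math

# Hypothetical per-token probabilities p(w_i | w_1..w_{i-1}) for a short
# passage; in practice these come from the language model being evaluated.
token_probs = [0.20, 0.05, 0.40, 0.10, 0.25]

# Perplexity is the exponential of the average negative log-probability,
# i.e. the reciprocal of the geometric mean of the token probabilities.
avg_neg_log = -sum(math.log(p) for p in token_probs) / len(token_probs)
perplexity = math.exp(avg_neg_log)

print(f"perplexity = {perplexity:.2f}")  # lower means a less "surprised" model
```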
“…Perplexity seems an appropriate figure of merit when we try to predict the next word given the previous one but, if we plan to use the LM as part of a recognition or of a translation system, we should consider the correlation between perplexity and the extrinsic figures of merit we would like to optimize (such as WER). This relationship has been investigated by several authors [Iyer et al. 1997; Clarkson and Robinson 1999; Klakow and Peters 2002] and it is widely known that, for instance, two LMs with the same test set perplexity can lead to different WERs. Indeed, since perplexity is a coarse measure of goodness (the reciprocal of a geometrical average), two models with the same perplexity may assign different probabilities to the same particular sequence.…”
Section: Segment Generation Confusability (mentioning)
confidence: 99%
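The parenthetical "reciprocal of a geometrical average" corresponds to the standard definition of perplexity; stated compactly (standard notation, not drawn from the excerpt itself):

```latex
% Perplexity of model q on a held-out sequence w_1,...,w_N:
% the inverse N-th root (reciprocal geometric mean) of the
% probability the model assigns to the whole sequence.
\[
  \mathrm{PPL}\bigl(w_1^{N}\bigr)
  \;=\; q(w_1,\dots,w_N)^{-1/N}
  \;=\; \Biggl(\,\prod_{i=1}^{N} q\bigl(w_i \mid w_1^{i-1}\bigr)\Biggr)^{-1/N}
\]
```

Because this is an aggregate over the whole test set, two models can match on perplexity while still distributing probability differently across individual sequences, which is why equal perplexity does not guarantee equal WER.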