Proceedings - Natural Language Processing in a Deep Learning World 2019
DOI: 10.26615/978-954-452-056-4_037

Entropy as a Proxy for Gap Complexity in Open Cloze Tests

Abstract: This paper presents a pilot study of entropy as a measure of gap complexity in open cloze tests aimed at learners of English. Entropy is used to quantify the information content in each gap, which can be used to estimate complexity. Our study shows that average gap entropy correlates positively with proficiency levels, while individual gap entropy can capture contextual complexity. To the best of our knowledge, this is the first unsupervised information-theoretical approach to evaluating the quality of cloze tests.

Cited by 10 publications (9 citation statements). References 12 publications (14 reference statements).

“…For each gap, we generate a sentence where only the gap is replaced by the masking token and fetch its predictions from the BERT model. From these predictions we take the prediction probability of the solution as the first feature and the entropy of the prediction probabilities of the top-50 predicted words as the second feature in concordance with findings by Felice and Buttery (2019) who show that entropy strongly correlates with the gap difficulty. Adding both features to the 59 features proposed by Beinborn (2016) increases the accuracy of our MLP from 0.33 to 0.37.…”
Section: Data (supporting)
confidence: 71%
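The two features described in the statement above can be reproduced with any masked language model. The sketch below is a minimal illustration, assuming the Hugging Face transformers library and the bert-base-uncased checkpoint; it masks a single gap, reads off the model's probability for the gold answer, and computes the entropy of the top-50 predictions. Function and variable names are illustrative and are not taken from the cited work.

```python
import math
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Minimal sketch (not the cited authors' code): score one gap with a masked LM.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def gap_features(sentence_with_gap: str, solution: str, top_k: int = 50):
    """Return (P(solution), entropy of top-k predictions) for a single gap.

    `sentence_with_gap` must contain "___" where the gap is. Assumes the
    solution is a single token in BERT's word-piece vocabulary.
    """
    text = sentence_with_gap.replace("___", tokenizer.mask_token)
    inputs = tokenizer(text, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0].item()

    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    probs = torch.softmax(logits, dim=-1)

    # Feature 1: prediction probability of the gold answer.
    solution_id = tokenizer.convert_tokens_to_ids(solution)
    p_solution = probs[solution_id].item()

    # Feature 2: Shannon entropy over the top-k predicted words.
    top_p = torch.topk(probs, top_k).values
    entropy = -sum(p.item() * math.log(p.item()) for p in top_p)
    return p_solution, entropy

# Example gap: "She ___ to school every day." with gold answer "goes".
print(gap_features("She ___ to school every day.", "goes"))
```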
“…A method of evaluating open cloze tests using entropy to model the restrictiveness of the context provided around the target word was proposed [11]. In open cloze tests, the context surrounding a target word must be restrictive enough to limit the number of plausible choices, so that students do not respond with unexpected answers that are technically correct.…”
Section: Related Work (mentioning)
confidence: 99%
“…This study uses the number of possible syntactically and semantically correct choices for a gap, and the probability of those choices, to measure the restrictiveness of the context provided by questions. The number of choices and their probabilities are modeled with entropy, "which quantifies the amount of information conveyed by an event" [11]. A 5-gram bi-directional language model was used to measure entropy.…”
Section: Related Work (mentioning)
confidence: 99%
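The entropy referred to here is the standard Shannon entropy over the distribution of plausible fillers for a gap. The toy function below illustrates the calculation only; in the cited work the candidate distribution comes from a 5-gram bi-directional language model, which is not reproduced here.

```python
import math

def gap_entropy(candidate_probs):
    """Shannon entropy H = -sum(p * log2(p)) over the plausible fillers of a gap.

    A gap that admits many roughly equally likely answers has high entropy
    (an unrestrictive context); a gap with one dominant answer has low entropy.
    """
    return -sum(p * math.log2(p) for p in candidate_probs if p > 0)

# A restrictive gap: one answer dominates.
print(gap_entropy([0.9, 0.05, 0.05]))        # ~0.57 bits
# An open gap: several answers are equally plausible.
print(gap_entropy([0.25, 0.25, 0.25, 0.25])) # 2.0 bits
```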
“…In these cases, tests are dynamically adapted to the examinee's proficiency level during the test session. From a different perspective, Felice and Buttery (2019) show that controlling gap entropy can be useful for designing open cloze tests at different CEFR levels. The work we present in this paper, however, aims to model the more complex task of predicting a full set of gaps at the paragraph level that comply with design and testing principles and is, to the best of our knowledge, the first to employ and adapt transformer-based models for this task.…”
Section: Related Work (mentioning)
confidence: 99%
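As a rough illustration of what "controlling gap entropy" for different CEFR levels could look like in practice, the sketch below keeps only the candidate gap positions whose entropy falls inside a per-level band. The band boundaries are hypothetical values chosen for illustration and do not come from Felice and Buttery (2019) or the citing paper.

```python
# Illustrative sketch only: select gaps whose entropy matches a target band.
# Entropy values would come from a language model (see the sketches above);
# the per-level bands are hypothetical, not taken from the cited papers.
CEFR_ENTROPY_BANDS = {
    "B1": (0.0, 2.0),   # low-entropy, highly constrained gaps
    "B2": (1.5, 3.5),
    "C1": (3.0, 5.0),   # high-entropy, open-ended gaps
}

def select_gaps(gap_entropies, level):
    """Return indices of candidate gaps whose entropy suits the given CEFR level."""
    low, high = CEFR_ENTROPY_BANDS[level]
    return [i for i, h in enumerate(gap_entropies) if low <= h <= high]

# Example: entropies for five candidate gap positions in a passage.
print(select_gaps([0.6, 1.8, 2.9, 4.1, 5.2], "B2"))  # -> [1, 2]
```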