Proceedings of the 7th International Conference on Computer Supported Education 2015
DOI: 10.5220/0005437200770087

Automatic Generation of English Vocabulary Tests

Cited by 26 publications (37 citation statements)
References: 8 publications

Citation statements, ordered by relevance:
“…The results in their study showed that their proposed method ensured the validity of the items and produced fewer problematic distractors than the baseline method, while also being comparable with human-made distractors. Their findings coincided with a similar earlier study, in which Susanti et al. (2015) attempted to automatically construct English vocabulary tests; human evaluation there indicated that 45% of the responses from English teachers mistakenly judged the automatically generated questions to be human-generated. Finally, Ha and Yaneva (2018)…”
Section: B) Option Weighting Practices (supporting)
confidence: 87%
“…Still other approaches to content selection are more specific and are informed by the type of question to be generated. For example, the purpose of the study reported in Susanti et al. (2015) is to generate “closest-in-meaning vocabulary questions”, which involve selecting a text snippet from the Internet that contains the target word, while making sure that the word has the same sense in both the input and retrieved sentences. To this end, the retrieved text was scored on the basis of metrics such as the number of query words that appear in the text.…”
Section: Generation Tasks (mentioning)
confidence: 99%
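The overlap-based scoring described in this statement can be illustrated with a short sketch. This is a minimal, hedged reconstruction assuming simple whitespace tokenization; the function names (score_snippet, best_snippet) and the fraction-based score are illustrative assumptions, not the exact metric used in Susanti et al. (2015).

```python
# Minimal sketch of query-word-overlap scoring for retrieved snippets.
# The tokenizer and the fraction-based score are assumptions for illustration.

def score_snippet(snippet: str, query_words: set[str]) -> float:
    """Score a snippet by the fraction of query words it contains."""
    tokens = {t.strip('.,;:!?"').lower() for t in snippet.split()}
    return len(query_words & tokens) / len(query_words) if query_words else 0.0

def best_snippet(snippets: list[str], query_words: set[str]) -> str:
    """Return the candidate snippet with the highest query-word overlap."""
    return max(snippets, key=lambda s: score_snippet(s, query_words))

# Usage: the snippet sharing more query words is preferred.
query = {"river", "bank", "flood"}
candidates = [
    "The bank raised interest rates again.",
    "The river burst its bank during the flood.",
]
print(best_snippet(candidates, query))  # -> the second snippet
```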
“…Among these are selection of distractors based on word frequency (i.e. the number of times distractors appear in a corpus is similar to the key) (Kwankajornkiet et al., 2016). Susanti et al. (2015) selected distractors that are declared in a KB to be siblings of the key, which also implies some notion of similarity (siblings are assumed to be similar). Another approach that relies on structured knowledge sources is described in Seyler et al. (2017).…”
Section: Generation Tasks (mentioning)
confidence: 99%
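The word-frequency criterion in this statement (distractors should appear in a corpus about as often as the key) can be sketched as follows. The corpus counts, the ranking function, and its name are hypothetical illustrations, not the cited systems' actual implementations.

```python
# Illustrative sketch: rank candidate distractors by how close their corpus
# frequency is to the key's frequency. All counts here are toy numbers.

from collections import Counter

def frequency_matched_distractors(key: str, candidates: list[str],
                                  freq: Counter, k: int = 3) -> list[str]:
    """Return the k candidates whose corpus frequency is closest to the key's."""
    key_freq = freq[key]
    return sorted(candidates, key=lambda w: abs(freq[w] - key_freq))[:k]

# Toy corpus counts (hypothetical, for illustration only).
counts = Counter({"happy": 900, "glad": 850, "content": 700,
                  "ecstatic": 60, "jubilant": 40})
print(frequency_matched_distractors("glad",
                                    ["jubilant", "content", "ecstatic", "happy"],
                                    counts))
# -> ['happy', 'content', 'ecstatic']: closest frequencies to "glad"
```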
“…Most approaches require the distractor and the target word to have the same part-of-speech (POS) and similar level of difficulty, often approximated by word frequency (Coniam, 1997; Shei, 2001; Brown et al., 2005). They must also be semantically close, which can be quantified with semantic distance in WordNet (Lin et al., 2007; Pino et al., 2008; Chen et al., 2015; Susanti et al., 2015), thesauri (Sumita et al., 2005; Smith et al., 2010), ontologies (Karamanis et al., 2006; Ding and Gu, 2010), or handcrafted rules (Chen et al., 2006). Another approach generates distractors that are semantically similar to the target word in some sense, but not in the particular sense in the carrier sentence (Zesch and Melamud, 2014).…”
Section: Previous Work (mentioning)
confidence: 99%
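One common realisation of the "semantic distance in WordNet" criterion in this statement is path similarity between synsets, shown here as a hedged sketch using NLTK's WordNet interface. The band of acceptable similarity (the low/high thresholds) and the function names are assumptions for illustration, not values or code from any cited paper.

```python
# Sketch of WordNet-based semantic closeness between a target word and a
# candidate distractor. Requires: pip install nltk; nltk.download('wordnet')

from nltk.corpus import wordnet as wn

def max_path_similarity(word_a: str, word_b: str) -> float:
    """Best path similarity over all synset pairs of the two words."""
    best = 0.0
    for sa in wn.synsets(word_a):
        for sb in wn.synsets(word_b):
            sim = sa.path_similarity(sb)  # None when no path exists
            if sim is not None and sim > best:
                best = sim
    return best

def is_plausible_distractor(target: str, candidate: str,
                            low: float = 0.2, high: float = 0.9) -> bool:
    """Keep candidates related to the target but not near-synonyms.

    The [low, high] band is a hypothetical threshold, not from the cited work.
    """
    return low <= max_path_similarity(target, candidate) <= high

print(is_plausible_distractor("car", "bicycle"))     # related: likely True
print(is_plausible_distractor("car", "automobile"))  # near-synonym: likely False
```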