2016
DOI: 10.4018/978-1-4666-9441-5.ch022
Using Automated Procedures to Generate Test Items That Measure Junior High Science Achievement

Abstract: The purpose of this chapter is to describe and illustrate a template-based method for automatically generating test items. This method can be used to produce large numbers of high-quality items both quickly and efficiently. To highlight the practicality and feasibility of automatic item generation, we demonstrate the application of this method in the content area of junior high school science. We also describe the results from a study designed to evaluate the quality of the generated science items. Our chapte…
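The template-based generation the abstract describes can be sketched in a few lines: an item model pairs a stem containing placeholders with lists of permissible values, and the generator enumerates the cross-product. The template and values below are hypothetical illustrations, not the chapter's actual item models.

```python
import itertools

# Hypothetical item model: a stem with placeholders plus value lists.
TEMPLATE = ("A student heats {mass} g of {substance}. "
            "Which property stays constant as the temperature rises?")

VALUES = {
    "mass": ["10", "25", "50"],
    "substance": ["water", "ethanol"],
}

def generate_items(template, values):
    """Yield one item per combination of placeholder values."""
    keys = sorted(values)
    for combo in itertools.product(*(values[k] for k in keys)):
        yield template.format(**dict(zip(keys, combo)))

items = list(generate_items(TEMPLATE, VALUES))
# 3 masses x 2 substances -> 6 distinct generated items
```

Even this toy model shows why the approach scales: adding one value to any list multiplies the yield, which is how a single well-designed model can produce large item pools.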

Cited by 5 publications (5 citation statements); References 13 publications.
“…These outcomes highlight the benefit of using a strong theory approach: if cognitive features are identified and the generated items are adequate for testing, item features that predict test performance may be specified and controlled. Consequently, these results encourage the use of items generated using cognitive modeling processes in operational administrations (Gierl et al., 2016a, 2016b).…”
Section: Item Quality
confidence: 60%
“…Items that are rejected have no value because they fail to meet the content and item development standards. As a result, these two categories of items are unacceptable (Gierl et al., 2016).…”
Section: Methods
confidence: 99%
“…The key characteristic of a logical structures cognitive model is that the content for the item can vary, but the idea, formula, algorithm, and/or logical outcome required to manipulate the content is fixed. Cognitive model development using logical structures is well documented for generating items in content areas such as science (Gierl & Lai, 2017; Gierl et al., 2016) and mathematics (Gierl & Lai, 2016b; Gierl et al., 2015). For example, a logical structures model can be used to solve word problems that measure interval and ratio (problems and scenarios).…”
Section: Cognitive Models for AIG
confidence: 99%
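The fixed-logic, varying-content idea in the statement above can be illustrated with a small sketch: the surface content (actor, vehicle, numbers) varies across items, while the formula that produces the keyed answer stays constant. The stem, names, and numbers here are hypothetical, not taken from the cited models.

```python
import itertools

# Hypothetical logical-structures model: content varies, the
# computation of the key (distance = speed * time) is fixed.
STEM = ("{actor} rides a {vehicle} at {speed} km/h for {hours} hours. "
        "How far does {actor} travel?")

CONTENT = {
    "actor": ["Mia", "Ravi"],
    "vehicle": ["bicycle", "scooter"],
}
NUMBERS = [(12, 2), (15, 3), (20, 4)]  # (speed, hours) pairs

def generate(stem, content, numbers):
    """Yield (item text, keyed answer) pairs; the key formula is fixed."""
    for actor, vehicle in itertools.product(content["actor"], content["vehicle"]):
        for speed, hours in numbers:
            text = stem.format(actor=actor, vehicle=vehicle,
                               speed=speed, hours=hours)
            key = speed * hours  # fixed logical outcome
            yield text, key

items = list(generate(STEM, CONTENT, NUMBERS))
# 2 actors x 2 vehicles x 3 number pairs -> 12 items, 3 distinct keys
```

The generator never touches the key formula while varying the content, which is exactly the separation the logical-structures model relies on.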
“…Unfortunately, there are few empirical methods available for quantifying the similarity of generated test items [36], and hence, to date, similarity is often established more subjectively using judgments from test development specialists. To address this limitation in the literature, we describe two measures of item similarity that can be used to evaluate the comparability of the generated items.…”
Section: Discussion
confidence: 99%
“…Given the high cost of item development, the proposed empirical methods for reviewing and identifying commonality within large item banks will help focus resources on unique items rather than on item editing and revision [36]. Nevertheless, the semantic space constructed in this study uses a corpus from a relatively small item bank. We expect that if the semantic space had been constructed using a large sample of operational test items, we would have access to more co-occurrence information, leading to word vectors closer to the true semantic meanings.…”
Section: Discussion
confidence: 99%
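As a minimal illustration of an empirical similarity measure of this kind, each item can be represented as a word-count vector over a shared vocabulary and pairs of items compared with cosine similarity. The measures the citing work describes are built on a richer co-occurrence semantic space; this sketch, with made-up item texts, only shows the vector-comparison step.

```python
import math
from collections import Counter

def vectorize(text, vocab):
    """Bag-of-words count vector for `text` over a fixed vocabulary."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def cosine(u, v):
    """Cosine similarity of two count vectors (0.0 for a zero vector)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Two hypothetical generated items that differ in one content word.
item_a = "the mass of the sample stays constant"
item_b = "the mass of the liquid stays constant"
vocab = sorted(set(item_a.split()) | set(item_b.split()))
sim = cosine(vectorize(item_a, vocab), vectorize(item_b, vocab))
# Near-duplicate items score close to 1; identical items score exactly 1.
```

Thresholding such a score over all item pairs in a bank is one way to flag near-duplicates for review rather than relying solely on specialist judgment.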