1999
DOI: 10.1111/j.1745-3984.1999.tb00556.x

Psychometric and Cognitive Functioning of an Under‐Determined Computer‐Based Response Type for Quantitative Reasoning

Abstract: We evaluated a computer‐delivered response type for measuring quantitative skill. “Generating Examples” (GE) presents under‐determined problems that can have many right answers. We administered two GE tests that differed in the manipulation of specific item features hypothesized to affect difficulty. Analyses related to internal consistency reliability, external relations, and features contributing to item difficulty, adverse impact, and examinee perceptions. Results showed that GE scores were reasonably reliable…


Cited by 14 publications (8 citation statements)
References 7 publications
“…Thus far, only preliminary work has been done on the accuracy of GE scoring, and that only with item types other than ME (Bennett, Morley, et al., 1999). In this study, responses were used from two parallel 20-item tests.…”
Section: Generating Examples (mentioning; confidence: 99%)
“…Other tests similar to the “two-stage” include the “pyramidal,” “flexilevel,” “stradaptive,” and “countdown” approaches (for reviews see Butcher, Keller, & Bacon, 1985; Epstein & Klinkenberg, 2001; Weiss, 1985). More recent advances include test types called generating examples (GE), which are described in reviews by Bennett (1999), Bennett et al. (1999), and Bennett, Steffen, Singley, Morley, and Jacquemin (1997).…”
Section: Introduction (mentioning; confidence: 99%)
“…Randy Bennett took the lead at ETS in exploring the technology for scoring constructed responses in concert with theory about the relevant constructs, including mathematics (Bennett et al. 1999, 2000a; Sandene et al. 2005; Sebrechts et al. 1991, 1996), computer science (Bennett and Wadkins 1995), graphical items (Bennett et al. 2000a), and formulating hypotheses (Bennett and Rock 1995). The scoring of mathematics items has reached a significant level of maturity (Fife 2013), as has the integration of task design and automated scoring (Graf and Fife 2012).…”
Section: Automated Scoring (mentioning; confidence: 99%)
“…He evaluated the SAT grid-in format with GRE items and concluded that the multiple-choice and grid-in versions of GRE items behaved very similarly. Following the adoption of the grid-in format in the SAT, a more comprehensive examination of mathematics item formats that could serve to elicit quantitative skills was undertaken, informed by advances in the understanding of mathematical cognition and a maturing computer-based infrastructure (Bennett and Sebrechts 1997; Bennett et al. 1999, 2000a; Sandene et al. 2005; Sebrechts et al. 1996). More recently, the mathematics strand of the CBAL initiative has attempted to unpack mathematical proficiency by means of competency models, the corresponding constructed-response tasks (Graf 2009), and scoring approaches (Fife 2013).…”
Section: Mathematics (mentioning; confidence: 99%)