1932
DOI: 10.1080/00220671.1932.10880276
Comparisons of Short Answer and Multiple Choice Tests Covering Identical Subject Content

Cited by 17 publications (8 citation statements) · References 0 publications
“…He thought that inconsistencies seemed too large to justify equivalence in validity argument. Those forms that were administered one day apart (Ruch & Stoddard, 1925; Hurd, 1932) resulted in slightly lower corrected correlations (.95 and .89). Ruch and Stoddard suggested that CR items were more reliable, item for item, due to the lack of guessing effects present in MC items.…”
Section: Existing Empirical Investigations
Confidence: 93%
“…For instance, some authors test whether the correlation coefficients (corrected for test unreliability) between scores on tests differing in item format differ from unity (Hogan, 1981; Mellenbergh, 1971). Others interpret correlation coefficients of at least .80 as proof of the null hypothesis, which is that there are no differences in intellectual skills measured on tests with open-ended and multiple-choice items (Hurd, 1930; Paterson, 1926).…”
Section: University of Utrecht
Confidence: 99%
“…But for largely uncharted terrain like evaluative attitudes toward presidential performance, open‐ended questions are a uniquely appropriate strategy, worth “the trouble they take to ask and the complexities inherent in the analysis of their answers” (Krosnick, Judd, and Wittenbrink, 35; see also Holbrook et al.). In addition, past studies show that they have higher reliability and validity than closed‐ended questions (Hurd; Remmers et al.).…”
Section: Methods
Confidence: 99%