2012
DOI: 10.1111/j.1745-3992.2012.00252.x

Evaluating the Comparability of Paper‐ and Computer‐Based Science Tests Across Sex and SES Subgroups

Abstract: As access and reliance on technology continue to increase, so does the use of computerized testing for admissions, licensure/certification, and accountability exams. Nonetheless, full computer‐based test (CBT) implementation can be difficult due to limited resources. As a result, some testing programs offer both CBT and paper‐based test (PBT) administration formats. In such situations, evidence that scores obtained from different formats are comparable must be gathered. In this study, we illustrate how contemp…

Cited by 33 publications (13 citation statements: 4 supporting, 9 mentioning, 0 contrasting). References 36 publications.
“…The above findings meet criteria for evidence that the mathematics and science constructs were unchanged in eTIMSS (APA, 1986; DePascale et al., 2016; Randall et al., 2012; Winter, 2010). Therefore, the difference in scores that resulted from the mode effects can be accounted for through appropriate linking procedures, and the paperTIMSS and eTIMSS scores can be put on a common scale.…”
Section: Discussion (supporting, confidence: 51%)
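The TIMSS program itself performs this linking through IRT concurrent calibration; as a minimal sketch of the underlying idea only, the Python snippet below applies mean-sigma linear linking to hypothetical score vectors. The function name, scores, and effect size are invented for illustration and are not values from the study.

```python
import numpy as np

def mean_sigma_link(scores_new, scores_ref):
    """Put scores_new on the scale of scores_ref via a linear transform.

    linked = A * score + B, with A = sd_ref / sd_new and
    B = mean_ref - A * mean_new (mean-sigma linking).
    """
    scores_new = np.asarray(scores_new, dtype=float)
    scores_ref = np.asarray(scores_ref, dtype=float)
    a = scores_ref.std(ddof=1) / scores_new.std(ddof=1)
    b = scores_ref.mean() - a * scores_new.mean()
    return a * scores_new + b

# Hypothetical scale scores with a small mode effect on the computer form.
rng = np.random.default_rng(0)
paper_scores = rng.normal(500, 100, size=2000)
etimss_scores = rng.normal(495, 98, size=2000)

linked = mean_sigma_link(etimss_scores, paper_scores)
print(round(linked.mean(), 1), round(linked.std(ddof=1), 1))  # ~500.0, ~100.0
```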
“…Addressing the second part of the second research question, the final series of analyses examined differences between paperTIMSS and eTIMSS proficiency scores in relation to student background variables. The results provided additional information about the equivalence of the mathematics and science constructs between modes (Randall et al., 2012). If two scores are measuring the same construct, then they should have the same degree of relationship with other related measures (APA, 1986; DePascale et al., 2016; Winter, 2010).…”
Section: Estimating Mode Effects for Student Subgroups (mentioning, confidence: 93%)
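One simple way to operationalize this kind of check is to compare the correlation between scores and a related background measure across the two mode groups. The sketch below uses a Fisher z test for a difference between independent correlations; the function name, correlations, and sample sizes are assumptions for illustration, not values from the study.

```python
import numpy as np
from scipy import stats

def compare_correlations(r1, n1, r2, n2):
    """Fisher z test for the difference between two independent correlations."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    p = 2 * stats.norm.sf(abs(z))
    return z, p

# Hypothetical: correlation of test score with a home-resources index
# in the paper group (r1, n1) versus the computer group (r2, n2).
z, p = compare_correlations(r1=0.42, n1=1500, r2=0.40, n2=1480)
print(f"z = {z:.2f}, p = {p:.3f}")  # similar correlations -> large p
```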
“…Research has also been expanded to item-level analysis as an indication of factor structure by using a combination of analytic techniques such as Classical Test Theory (item difficulties), IRT calibrations and item characteristic curves (ICCs), SIBTEST, and distractor analysis (Bennett et al., 2008; Lin et al., 2016; Poggio et al., 2005; Randall et al., 2012; Welch et al., 2014). According to Keng, McClarty, and Davis (2008), examining mode effects at the item level is helpful because items can provide information about what features need special attention when included in both PPTs and CBTs.…”
Section: Methodology Review (mentioning, confidence: 99%)
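As a minimal sketch of the Classical Test Theory piece of such an item-level screen, one can compute per-item proportions correct under each mode and flag items whose difficulties diverge. The response matrices, the 0.05 screening threshold, and the function name below are all hypothetical.

```python
import numpy as np

def item_difficulties(responses):
    """Classical item difficulty: proportion correct per item.

    responses: 2-D array of 0/1 item scores, shape (examinees, items).
    """
    return np.asarray(responses).mean(axis=0)

# Hypothetical scored responses for 40 items under each mode, with a
# small uniform mode effect built into the computer-based form.
rng = np.random.default_rng(1)
true_p = rng.uniform(0.3, 0.9, size=40)
paper_resp = rng.binomial(1, true_p, size=(1000, 40))
cbt_resp = rng.binomial(1, true_p - 0.02, size=(1000, 40))

diff = item_difficulties(paper_resp) - item_difficulties(cbt_resp)
flagged = np.where(np.abs(diff) > 0.05)[0]  # crude screening threshold
print("items flagged for closer review:", flagged)
```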
“…Csapó, Molnár, and Tóth (2009) … Gender. Some studies of gender analysis across modes have reported nonsignificant results (Bennett et al., 2008; Poggio, Glasnapp, Yang, & Poggio, 2005; Randall et al., 2012). Bennett et al. (2008) conducted a comparability study of PPT and CBT versions of the NAEP in mathematics.…”
Section: Computer-Based Tests (CBT) Versus Paper-and-Pencil Tests (PPT) (mentioning, confidence: 99%)
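A common way to test whether a mode effect differs by gender is to fit a mode-by-gender interaction term. The sketch below does this with an OLS model on simulated data; the variable names, effect sizes, and data are invented and are not taken from the cited studies.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data: a small mode effect that is the same for both genders,
# so the mode-by-gender interaction should come out nonsignificant.
rng = np.random.default_rng(2)
n = 2000
df = pd.DataFrame({
    "mode": rng.choice(["paper", "computer"], size=n),
    "gender": rng.choice(["female", "male"], size=n),
})
df["score"] = (500
               - 5 * (df["mode"] == "computer")
               + rng.normal(0, 100, size=n))

fit = smf.ols("score ~ C(mode) * C(gender)", data=df).fit()
print(fit.summary().tables[1])  # inspect the C(mode):C(gender) row
```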