2006
DOI: 10.1111/j.1468-2419.2006.00261.x
Assessment of the equivalence of conventional versus computer administration of the Test of Workplace Essential Skills

Abstract: This study examined the equivalency of computer and conventional versions of the Test of Workplace Essential Skills (TOWES), a test of adult literacy skills in Reading Text, Document Use and Numeracy. Seventy-three college students completed the computer version, and their scores were compared with those who had taken the test in the conventional paper-and-pencil mode. Scores for the two groups for all three subscales were equivalent based on their means and variances. Rank order equivalency was demonstrated f…

Cited by 2 publications (2 citation statements)
References 6 publications
“…First, the support of complete factorial invariance across devices for the learning assessment implies that the factor structure and parameter values were consistent across both medium deliveries for a cognitive ability‐type assessment. This is an important expansion from previous studies that have either not established ME/I for mobile and nonmobile deliveries of cognitive ability assessments (e.g., Chuah, Drasgow, & Roberts, ; Whiting & Kline, ; Schroeders & Wilhelm, ) or have not examined the ME/I for handheld mobile devices in a selection context (e.g., Schroeders & Wilhelm, ). However, before these results can be generalized, a strong word of caution is warranted based on conflicting mean differences evidence gathered in preliminary studies (Doverspike et al, ; Impelman, ).…”
Section: Discussion
confidence: 99%
“…As no published research currently exists, it is unclear whether mobile and nonmobile delivered cognitive assessments are equivalent. Comparison studies of computerized versions and traditional paper‐and‐pencil tests have found that technological advancements in computer hardware have eliminated differences that were once caused by functional or contextual characteristics (e.g., legibility and functionality; proctored vs. unproctored; Leeson, ; Noyes & Garland, ; Scott & Mead, ; Whiting & Kline, ). Other research suggests that where these functional or contextual differences are discoverable, they do not have a meaningful impact on performance or equivalence (e.g., Waters & Pommerich, ).…”
Section: Introduction
confidence: 99%