2011
DOI: 10.1080/15434303.2011.613503
The Effect of Mode of Response on a Semidirect Test of Oral Proficiency

Abstract: This article reports on a study conducted with 42 participants from a Chilean university, which aimed to determine the effect of mode of response on test performance and test-taker perception of test features by comparing a semidirect online version and a direct face-to-face version of a speaking test. Candidate performances on both test versions were double-marked and analysed using both classical test theory and many-facet Rasch measurement. To gain an insight into students' perceptions of the two modes of d…

Cited by 14 publications (10 citation statements). References 15 publications.
“…Participants were found to have moderately positive perceptions of the construct validity, predictive validity, and content validity of the TOEIC Speaking test. Our results were consistent with those of previous studies conducted on computer-delivered speaking tests (Fan, 2014; Kiddle & Kormos, 2011; Zhou, 2012). In contrast, participants indicated more neutral and conservative attitudes toward computer delivery.…”
Section: Discussion (supporting)
confidence: 93%
“…Overall, results have revealed mixed views of technology-based speaking tests. Test-takers generally react positively to the construct, content, and predictive validity of technology-based speaking tests (Fan, 2014; Kiddle & Kormos, 2011; Qian, 2009; Zhou, 2012). In contrast, they seem to be more conservative toward computer-delivered speaking tests, compared to computer tests of other skills (Fan & Ji, 2014; Stricker & Attali, 2010) and face-to-face tests (Brooks & Swain, 2015; Fan, 2014; Kiddle & Kormos, 2011; Qian, 2009), citing lack of interaction in the computer-delivered test as the main reason for their negative perception.…”
Section: Test-Taker Perception of Computer-Delivered Speaking Tests (mentioning)
confidence: 99%
“…Surface et al. () found that although attitudes toward the OPIc were generally positive, test takers preferred the OPI to the OPIc and felt that the OPI offered a better opportunity to demonstrate their language abilities (Surface et al., ). Although no other investigation has been published comparing attitudes toward the OPI and the OPIc, research dealing with computerized vs. face-to-face versions of proficiency tests has supported Surface et al.'s findings: Kiddle and Kormos () found that test takers preferred the face-to-face test to the computerized version, and they also described the face-to-face test as more “fair” than the computerized test. While test takers in Kenyon and Malabonga's () study also felt that the OPI allowed for a better demonstration of ability, they reported that the OPI was not as fair as the COPI/SOPI.…”
Section: Introduction (mentioning)
confidence: 87%
“…With several different test modes aiming to tap into communicative speaking ability, a fundamental question to ask is whether, and/or how, the delivery medium changes the nature of the construct being measured. Despite research which has reported overall score and difficulty equivalence between computer-delivered and face-to-face tests and, by extension, construct comparability (Bernstein, Van Moere, & Cheng, 2010; Kiddle & Kormos, 2011; Stansfield & Kenyon, 1992), theoretical discussions and empirical studies which go beyond sole score comparability have highlighted the fundamental construct-related differences between different test formats. Essentially, semi-direct and automated speaking tests are underpinned by a psycholinguistic construct, which places emphasis on the cognitive dimension of speaking, as opposed to the socio-cognitive construct of face-to-face tests, where speaking is seen both as a cognitive trait and a social, interactional one (Galaczi, 2010; McNamara & Roever, 2006; van Moere, 2012).…”
Section: Role of Test Mode in Speaking Assessment (mentioning)
confidence: 99%