2010
DOI: 10.1177/0265532210364049
Interaction in group oral assessment: A case study of higher- and lower-scoring students

Abstract: This article examines the interactional work in which two groups of secondary ESL students engaged to achieve and sustain participation in group oral assessment, which is designed to assess a student’s interactive communication skills in a school-based assessment context. The in-depth observation of the ways in which participants co-constructed talk-in-interaction led to the discovery of the particular pattern of speech exchange within each group. Within the higher-scoring group, the students engaged constructi…

Cited by 67 publications (79 citation statements)
References 35 publications
“…CA-based research on measuring interactional competence has grown, attesting to its contributions to conceptualizing and operationalizing interactional competence (e.g., Brown, 2003; Kasper & Ross, 2007; Lazaraton, 2002; May, 2009, 2011; Galaczi, 2008; Gan, 2010; Young, 2008; Young & He, 1998). Thus, further discussion of interactional organizations in the CA literature will offer analytical insights into operationalizing pragmatic competence in interaction.…”
Section: Defining Pragmatic Competence in Interaction
confidence: 99%
“…CA has informed our understanding of paired and group oral testing discourse. Task implementation conditions have varied considerably across previous group oral tests, as the following studies show (task type and group size per study):

Liski & Puntanen (1983): listening to a tape (on a report), then free discussion on the main point of the report; group size 6 (minimum 5, maximum 7)
Shohamy, Reves, & Bejarano (1986): free discussion on a given topic; group size 4
Hilsdon (1991) [Secondary School Examination, Zambia]: free discussion on a given topic; group size 5
Pavlou (1995): free discussion on a given topic; group size 3
Fulcher (1996): free discussion on a given topic; group size not mentioned
Nunn (2000): task not mentioned; group size 3
Ockey (2001): free discussion on a given topic; group size 3
Masubuchi (2003) [Interactive English Forum]: free discussion on a given topic; group size not mentioned

Lazaraton and Davies (2008), Galaczi (2010) and Gan (2010) have carried out CA on already-scored paired and group test transcripts and identified discourse features salient at different oral proficiency levels. In order to build on such contributions of CA in language assessment, this study will employ CA to obtain a more precise picture of group oral test discourse than has hitherto been available.…”
Section: Task Implementation Conditions in Group Oral Tests
confidence: 99%
“…Research has shown that rater consistency and rating validity can be increased through training (Kyle, Crossley, & McNamara, 2016). Third, MFRM can help reduce self-inconsistency and increase intra-rater reliability, which increases the fairness of a test, specifically in placement and summative evaluation tests (Gan, 2010). In another study, Lumley and McNamara (1995) investigated three sets of graded spoken English tests over a period of 20 months.…”
Section: Rater Behavior in Oral Performance Assessment
confidence: 99%
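The many-facet Rasch measurement (MFRM) approach referenced above places examinee ability, task difficulty, and rater severity on a common logit scale, which is what allows self-inconsistency and severity/leniency effects to be quantified. A minimal sketch of how a Linacre-style MFRM assigns rating-category probabilities is shown below; the function name and the example facet values are illustrative assumptions, not figures from any of the cited studies:

```python
import math

def mfrm_category_probs(ability, task_difficulty, rater_severity, thresholds):
    """Category probabilities under a many-facet Rasch (rating scale) model.

    The log-odds of scoring in category k rather than k-1 is
    ability - task_difficulty - rater_severity - F_k, where F_k is the
    k-th category threshold. Cumulating these terms gives each
    category's logit; normalizing the exponentials gives probabilities.
    """
    logits = [0.0]  # category 0 is the reference category
    running = 0.0
    for f_k in thresholds:
        running += ability - task_difficulty - rater_severity - f_k
        logits.append(running)
    exps = [math.exp(l) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative comparison: the same examinee and task, rated by a
# lenient rater (severity -1 logit) versus a severe rater (+1 logit).
lenient = mfrm_category_probs(0.0, 0.0, -1.0, [-1.0, 1.0])
severe = mfrm_category_probs(0.0, 0.0, 1.0, [-1.0, 1.0])
```

With everything else held constant, the severe rater shifts probability mass toward the lower score categories, which is exactly the kind of facet-level effect MFRM makes visible (and trainable) in rater behavior studies.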
“…Regarding bias, most studies conducted so far (e.g., Bijani & Fahim, 2011; Kim, 2011; Kondo-Brown, 2002) have not addressed the interaction of raters’ severity/leniency with test takers’ ability facets. While a few studies have looked at the differences between trained and untrained raters in speaking assessment (Bijani, 2010; Elder, Barkhuizen, Knoch, & Randow, 2007; Gan, 2010; Kim, 2011), few, if any, have used a pre- and post-training design. Although a few studies have investigated the influence of training in second language speaking assessment (e.g., Barrette, 2001; Davis, 2016; Saito, 2008), they have not provided conclusive evidence about the impact of training programs on raters’ severity/leniency, or on bias and consistency measures.…”
Section: Rater Behavior in Oral Performance Assessment
confidence: 99%