2021
DOI: 10.1177/0265532221995475

Developing individualized feedback for listening assessment: Combining standard setting and cognitive diagnostic assessment approaches

Abstract: In this study, we present the development of individualized feedback for a large-scale listening assessment by combining standard setting and cognitive diagnostic assessment (CDA) approaches. We used the performance data from 3358 students’ item-level responses to a field test of a national EFL test primarily intended for tertiary-level EFL learners. The results showed that proficiency classifications and subskill mastery classifications were generally of acceptable reliability, and the two kinds of classifica…
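As a rough illustration of the cognitive diagnostic side of this approach, the sketch below classifies one examinee's item-level responses into a subskill mastery profile under a simple DINA model. The Q-matrix, the slip and guess values, and the subskill labels are illustrative assumptions for the sketch, not the specifications reported in the study.

```python
from itertools import product
import numpy as np

# Illustrative Q-matrix: 5 items x 3 listening subskills (assumed, not the study's).
Q = np.array([
    [1, 0, 0],   # item 1 requires subskill A (e.g., extracting details)
    [0, 1, 0],   # item 2 requires subskill B (e.g., making inferences)
    [1, 1, 0],   # item 3 requires A and B
    [0, 0, 1],   # item 4 requires subskill C (e.g., understanding intended meaning)
    [1, 0, 1],   # item 5 requires A and C
])
SLIP, GUESS = 0.1, 0.2  # assumed DINA parameters, shared across items for brevity

def dina_loglik(responses, profile):
    """Log-likelihood of one examinee's 0/1 responses given a mastery profile."""
    eta = np.all(Q <= profile, axis=1)          # item is "mastered" only if all required subskills are
    p_correct = np.where(eta, 1 - SLIP, GUESS)  # DINA success probability per item
    p = np.where(responses == 1, p_correct, 1 - p_correct)
    return np.log(p).sum()

def classify(responses):
    """Return the maximum-likelihood attribute (subskill) mastery profile."""
    profiles = [np.array(a) for a in product([0, 1], repeat=Q.shape[1])]
    return max(profiles, key=lambda a: dina_loglik(responses, a))

# An examinee who answered items 1, 3, and 5 correctly.
print(classify(np.array([1, 0, 1, 0, 1])))  # prints [1 0 0] under these assumed parameters
```

In practice, CDA software estimates the item parameters from the full response matrix rather than fixing slip and guess values in advance; the enumeration over the 2^K candidate profiles shown here is the same underlying idea.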

Cited by 25 publications (22 citation statements)
References 75 publications

“…Previous listening comprehension theories (e.g., Field, 2008; Rost, 2016) postulated that detail extraction, a lower-order subskill, is easier to master than higher-level subskills such as understanding intended meaning, which probably explains why in self-assessment in this study, students perceived intended meaning to be more difficult than details. However, in empirical research on listening, quite a few studies have shown that inferential information was easier to process than factual information (Min & He, 2022; Park, 2004), although there are also studies finding that extracting details is easier than higher-level subskills such as making inferences and summarizing (e.g., Lee & Sawaki, 2009; Rost, 2016). This is understandable as item difficulty is contingent on many other factors, particularly task characteristics such as the setting, test rubrics, input, expected response, and the relationship between input and response (Bachman & Palmer, 2010).…”
Section: Discussion (mentioning)
confidence: 99%
“…Bachman (1990) once emphasized that “[the] single most important consideration in both the development of language tests and the interpretation of their results is the purpose or purposes the particular tests are intended to serve” (p. 54). Therefore, following Min and He’s (2022) practice, we further incorporated into the feedback the students’ attribute mastery status to enhance personal relevance. The feedback provided a fine-grained snapshot of the students’ listening proficiency on one hand, and served as a valuable reference for the subsequent learning, on the other.…”
Section: Discussion (mentioning)
confidence: 99%
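The citing authors describe folding students' attribute mastery status into the feedback they return. A minimal sketch of how a mastery profile might be turned into individualized feedback text is given below; the subskill labels and feedback wording are invented for illustration and are not the statements used in either study.

```python
# Illustrative subskill labels and remediation advice (assumed, not from the studies).
SUBSKILLS = {
    "extracting details": "Practice noting specific facts, numbers, and names while listening.",
    "making inferences": "Work on linking what is stated to what the speaker implies.",
    "understanding intended meaning": "Attend to tone, context, and the speaker's purpose.",
}

def feedback(profile):
    """Turn a {subskill: mastered?} profile into short individualized feedback text."""
    mastered = [s for s, ok in profile.items() if ok]
    to_improve = [s for s, ok in profile.items() if not ok]
    lines = ["Mastered subskills: " + (", ".join(mastered) or "none yet") + "."]
    for s in to_improve:
        lines.append(f"To improve {s}: {SUBSKILLS[s]}")
    return "\n".join(lines)

print(feedback({
    "extracting details": True,
    "making inferences": False,
    "understanding intended meaning": False,
}))
```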
“…Taking a step forward, quite a few studies have explored the extent to which diagnostic results could help discover the relationships among attributes (e.g., Chen and Chen, 2016a ; Ravand and Robitzsch, 2018 ; Du and Ma, 2021 ) and could differ across different proficiency groups (e.g., Kim, 2010 , 2011 ; Fan and Yan, 2020 ). Among all those studies, a notable phenomenon is that assessment of receptive language skills such as reading and listening (e.g., von Davier, 2008 ; Jang, 2009b ; Lee and Sawaki, 2009b ; Kim, 2015 ; Chen and Chen, 2016b ; Yi, 2017 ; Aryadoust, 2021 ; Dong et al, 2021 ; Toprak and Cakir, 2021 ; Min and He, 2022 ) gained much more attention than that of productive ones such as writing ( Kim, 2010 , 2011 ; Xie, 2017 ; Effatpanah et al, 2019 ; He et al, 2021 ). “One possible reason is that different test methods are used to assess these two types of skills” ( He et al, 2021 , p. 1).…”
Section: Literature Review (mentioning)
confidence: 99%