2022
DOI: 10.1111/lang.12498

Crowdsourced Adaptive Comparative Judgment: A Community‐Based Solution for Proficiency Rating

Abstract: The main objective of this Methods Showcase Article is to show how the technique of adaptive comparative judgment, coupled with a crowdsourcing approach, can offer practical solutions to reliability issues as well as to address the time and cost difficulties associated with a text-based approach to proficiency assessment in L2 research. We showcased this method by reporting on the methodological framework implemented in the Crowdsourcing Language Assessment Project and by presenting the results of a first stud…
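
To make the mechanics concrete for readers who have not worked with comparative judgment before, the sketch below shows one common way of turning pairwise "which text is better?" decisions into a continuous scale: fitting a simple Bradley–Terry model by gradient ascent. It is a minimal Python illustration with invented toy data, not the model or code used in the CLAP project.

```python
# Minimal Bradley-Terry sketch: turn pairwise "text A beats text B" judgments
# into continuous scores and a ranking. Illustrative only, with toy data;
# not the CLAP/Comproved implementation.
import numpy as np

def fit_bradley_terry(n_texts, judgments, n_iter=500, lr=0.1, ridge=0.01):
    """judgments: list of (winner_index, loser_index) pairs from comparative judgment."""
    theta = np.zeros(n_texts)                    # one latent quality score per text
    for _ in range(n_iter):
        grad = np.zeros(n_texts)
        for win, lose in judgments:
            p_win = 1.0 / (1.0 + np.exp(theta[lose] - theta[win]))  # P(win beats lose)
            grad[win] += 1.0 - p_win             # push the winner up by the "surprise" of the win
            grad[lose] -= 1.0 - p_win            # push the loser down by the same amount
        theta += lr * (grad - ridge * theta)     # small ridge keeps scores finite for unbeaten texts
        theta -= theta.mean()                    # centre the scale: scores are purely relative
    return theta

# Toy data: 4 texts and a handful of crowd judgments recorded as (winner, loser).
judgments = [(0, 1), (0, 2), (1, 2), (3, 0), (3, 1), (3, 2), (0, 1)]
scores = fit_bradley_terry(4, judgments)
print("scores:", np.round(scores, 2))
print("ranking (best first):", list(np.argsort(-scores)))
```

The adaptive element of adaptive comparative judgment (choosing which pair to present next) is not shown here; the point is only that many simple pairwise decisions can be aggregated into a single continuous proficiency scale.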

Cited by 5 publications (5 citation statements)
References 44 publications
“…For example, signatories to the Peer Reviewers’ Openness Initiative (Morey et al., 2016) refuse to review papers that do not have open materials and data. Other community‐driven practices draw on crowdsourcing, citizen science, and other types of collaborative and multisite working that can occur in a surprising number of stages of the entire research process, including research team building (e.g., Moshontz et al., 2018), design decisions (e.g., Landy et al., 2020), data collection (e.g., Paquot et al., 2022, validation of community judgements of proficiency), community augmented meta‐analyses that accumulate multiple datasets (see Many Babies Metalab, https://langcog.github.io/metalab/documentation/using_ma_data/contribute_ma), and analysis (Aczel et al., 2021, to improve analytical robustness). Even the writing process itself has attracted an open community‐driven approach (e.g., Tennant et al.’s, 2020, discussion of massively open online papers [MOOPs] that involve between 10 and 100 [partially] self‐selecting authors in an openly participatory format).…”
Section: A Coevolution of Open Cultures, Infrastructures, and Behaviors… (citation type: mentioning)
confidence: 99%
“…The other is a relatively new topic connected to the reliability of crowdsourcing non-expert judgments as a method of producing linguistic annotations (e.g., Paquot et al., 2022; Alfter et al., 2021).…”
Section: Example 1: Illustration of Relative Ranking of an Unknown Item (citation type: mentioning)
confidence: 99%
“…This again lines up with previous research investigating the reliability of non-experts in tasks normally requiring expert knowledge. Paquot et al. (2022) set essay assessment into a comparative judgment paradigm, employing both trained assessors (experts) and non-trained academics (non-experts). The results clearly show that the two groups exhibit high similarity in their assessments, thus demonstrating that an untrained crowd can be used reliably for essay assessment tasks.…”
Section: Crowdsourcing Linguistic Annotation from Experts Versus… (citation type: mentioning)
confidence: 99%
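
As a rough illustration of the kind of agreement check described in the statement above, one simple way to compare the rank orders produced by trained and untrained judges is a Spearman rank correlation on the per-text scores each group's judgments yield. The scores and variable names below are invented for the example; this is not Paquot et al.'s analysis.

```python
# Toy agreement check between two groups of judges, assuming each group's
# comparative-judgment session yields one score per text (invented numbers).
from scipy.stats import spearmanr

expert_scores = [1.8, 0.9, 0.2, -0.4, -1.1, -1.4]   # hypothetical per-text scores, trained raters
crowd_scores  = [1.6, 1.0, -0.1, 0.3, -1.2, -1.6]   # hypothetical per-text scores, untrained crowd

rho, p_value = spearmanr(expert_scores, crowd_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")  # a high rho indicates similar rank orders
```
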
“…Learner texts are therefore not labelled as 'intermediate' or 'advanced', but are rather situated on an overall ranking, thus reflecting the continuous nature of proficiency assessments. In this vein, I wish to also refer the reader to the Crowdsourcing Language Assessment Project (CLAP) (Paquot, Rubin & Vandeweerd, 2022) carried out at UCLouvain (Belgium), which specifically relies on crowdsourcing tools similar to Comproved to study how they can contribute to improving L2 proficiency assessment practices. I strongly urge all researchers who are directly or indirectly concerned with proficiency-related methodological issues to take Leal's arguments on board and start departing from dichotomization practices.…”
Citation type: mentioning
confidence: 99%
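
To illustrate the dichotomization point made in the statement above, the short sketch below (with invented scores) shows how a median split collapses texts that sit close together on a comparative-judgment scale into different proficiency labels, whereas keeping the continuous score preserves those distances.

```python
# Invented comparative-judgment scores for six texts, used only to illustrate
# what a median split into two proficiency labels throws away.
import numpy as np

theta = np.array([1.9, 0.6, 0.5, 0.4, -0.5, -2.0])   # continuous scale from comparative judgment
labels = np.where(theta >= np.median(theta), "advanced", "intermediate")

for score, label in zip(theta, labels):
    print(f"score {score:+.1f} -> {label}")
# Texts at 0.5 and 0.4 are nearly indistinguishable on the scale yet receive
# different labels, while 1.9 and 0.5 share one; using theta directly keeps
# that ordering information.
```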