2012
DOI: 10.1080/0969594x.2012.665354

The method of Adaptive Comparative Judgement

Cited by 140 publications (183 citation statements); references 12 publications.
“…(Pollitt, 2004, p. 6) Following the theoretical development of the ACJ process, a grading engine was commercialized by TAG Assessment under the name CompareAssess. Using a complex algorithm, which has been validated repeatedly and used on thousands of student artifacts (Pollitt, 2004, 2012), CompareAssess combines rankings from a panel of judges to assign a final rank order to each artifact. In the CompareAssess engine, each artifact is compared with other artifacts by randomly assigned graders until a specified reliability requirement has been met.…”
Section: Adaptive Comparative Judgment (mentioning)
Confidence: 99%
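The statement above captures the core of the method: judges make repeated pairwise comparisons, and the engine keeps assigning pairs until a reliability target is reached. Such judgements are conventionally analysed with a Rasch-type (Bradley-Terry) formulation; as a generic illustration (the notation below is mine, not taken from Pollitt's paper), if artifacts A and B have latent quality parameters v_A and v_B, the probability that a judge prefers A is

P(A \succ B) = \frac{\exp(v_A - v_B)}{1 + \exp(v_A - v_B)}

The exact pairing and stopping rules of the commercial engine are not detailed in this excerpt.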
“…The comparative pairs approach to marking requires assessors to select a 'winner' between a pair of performances and to repeat this process for many pairs, with the results analysed using a Rasch model for dichotomous data (Pollitt, 2012). While Pollitt (2004) describes the comparative pairs method as "intrinsically more valid", he believes that without Information and Communications Technology (ICT) support it has not been feasible to apply due to time and cost constraints.…”
Section: Methods Of Marking (mentioning)
Confidence: 99%
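To make the "Rasch model for dichotomous data" concrete, the sketch below fits quality parameters to a set of win/loss judgements by gradient ascent on the log-likelihood. It is an illustrative stand-in for the analysis Pollitt describes, not the ACJS implementation; the function name and the toy data are assumptions.

import math

def fit_pairwise_rasch(items, judgements, iters=500, lr=0.1):
    # judgements: list of (winner, loser) artifact ids.
    # Returns mean-centred quality estimates (logits) for each item.
    theta = {i: 0.0 for i in items}
    for _ in range(iters):
        grad = {i: 0.0 for i in items}
        for w, l in judgements:
            # P(winner preferred) under the dichotomous Rasch / Bradley-Terry model
            p = 1.0 / (1.0 + math.exp(theta[l] - theta[w]))
            grad[w] += 1.0 - p   # log-likelihood derivative w.r.t. the winner
            grad[l] -= 1.0 - p   # ... and w.r.t. the loser
        for i in items:
            theta[i] += lr * grad[i]
    mean = sum(theta.values()) / len(theta)
    return {i: t - mean for i, t in theta.items()}

# Toy usage: three portfolios, five judgements (hypothetical data).
items = ["A", "B", "C"]
judgements = [("A", "B"), ("A", "C"), ("B", "C"), ("B", "A"), ("A", "C")]
print(fit_pairwise_rasch(items, judgements))

The mean-centring only fixes the arbitrary origin of the logit scale; the ordering and spacing of the estimates are what the judgements determine.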
“…For the third phase, the rubrics were sent to the Willock Information Systems company and made available through their online assessment module for teachers and assessors to use for analytical marking. For comparative pairs marking, the online marking tool called the Adaptive Comparative Judgement System (ACJS) (Pollitt, 2012), developed by TAG Learning for the e-scape research using MAPS, was used. Student responses (typed and oral recordings) in digital form were downloaded from the Willock Information Systems website and uploaded to MAPS.…”
Section: Marking Criteria Tools and Assessors (mentioning)
Confidence: 99%
“…Pollitt (2012) describes in detail how the ACJS combines all the main processes involved in scoring with the comparative pairs method, including generating the pairs of portfolios for each assessor to judge, providing a facility for recording each judgement and maintaining assessor notes, and calculating scores and associated reliability coefficients using Rasch modelling. This meant that assessors only needed to judge pairs until an acceptable level of reliability was attained.…”
Section: Analytical and Comparative Pairs Scoring (mentioning)
Confidence: 99%
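One common way to express the "acceptable level of reliability" in Rasch-based comparative judgement is a scale separation reliability: the share of observed variance in the estimates that is not attributable to estimation error. The sketch below computes such a coefficient from parameter estimates and their information-based standard errors; it illustrates the general idea only and may differ from the exact coefficient the ACJS reports.

import math

def standard_errors(theta, judgements):
    # SE of each estimate from the Fisher information of its paired comparisons.
    info = {i: 0.0 for i in theta}
    for w, l in judgements:
        p = 1.0 / (1.0 + math.exp(theta[l] - theta[w]))
        info[w] += p * (1.0 - p)
        info[l] += p * (1.0 - p)
    return {i: (1.0 / math.sqrt(v)) if v > 0 else float("inf") for i, v in info.items()}

def separation_reliability(theta, se):
    # Observed variance of the estimates minus mean error variance,
    # expressed as a proportion of observed variance.
    n = len(theta)
    mean = sum(theta.values()) / n
    obs_var = sum((t - mean) ** 2 for t in theta.values()) / (n - 1)
    err_var = sum(e ** 2 for e in se.values()) / n
    return max(0.0, (obs_var - err_var) / obs_var) if obs_var > 0 else 0.0

# Hypothetical estimates and judgements, e.g. produced by the fitting sketch above.
theta = {"A": 1.1, "B": 0.2, "C": -1.3}
judgements = [("A", "B"), ("A", "C"), ("B", "C"), ("B", "A"), ("A", "C")]
se = standard_errors(theta, judgements)
print(separation_reliability(theta, se))
# In an adaptive session, further pairs would be assigned until this value reached the target.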
“…This has been impractical on a large scale until the recent development of online systems to facilitate the processes, such as the system used by the e-Scape project (Kimbell, Wheeler, Miller, & Pollitt, 2007). The method relies on the use of Rasch statistical modeling to generate scores on an interval scale, and has delivered highly reliable sets of scores (Pollitt, 2012), including in our own research (Newhouse, 2010).…”
Section: Introduction (mentioning)
Confidence: 99%