2019
DOI: 10.3389/fpsyg.2019.01131
Validating Test Score Interpretations Using Time Information

Abstract: A validity approach is proposed that uses processing times to collect validity evidence for the construct interpretation of test scores. The rationale of the approach is based on current research on processing times and on classical validity approaches providing validity evidence based on relationships with other variables. Within the new approach, convergent validity evidence is obtained if a component skill, which is expected to underlie the task solution process in the target construct, positively moderates…
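The abstract frames convergent validity evidence as a positive moderation effect involving a hypothesized component skill and processing time. As a rough illustration only, the sketch below fits a logistic regression with a skill-by-time interaction on simulated data; the variable names, the simulated data, and the linear-logistic form are assumptions made here for illustration, not the authors' actual model.

```python
# Illustrative sketch only: a logistic regression with a skill-by-time
# interaction, loosely mirroring the idea of a component skill moderating
# the processing-time/success relation. Variable names, simulated data,
# and the linear-logistic form are assumptions, not the paper's model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
skill = rng.normal(size=n)       # hypothetical component-skill score
log_time = rng.normal(size=n)    # standardized log processing time

# Simulate task success with a positive skill x time interaction
eta = 0.3 * skill + 0.1 * log_time + 0.4 * skill * log_time
success = rng.binomial(1, 1 / (1 + np.exp(-eta)))

df = pd.DataFrame({"success": success, "skill": skill, "log_time": log_time})
model = smf.logit("success ~ skill * log_time", data=df).fit(disp=False)
print(model.summary())
# A positive skill:log_time coefficient would correspond to the kind of
# moderation pattern the abstract describes as convergent evidence.
```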

Cited by 17 publications (9 citation statements); references 59 publications.
“…Large values of the CD statistic are considered as indicative of data irregularities. Note that a similar measure was used by Engelhardt and Goldhammer ( 2019 ) for the validation of tests.…”
Section: Indicators of Cheating (mentioning)
confidence: 99%
“…In short, studies related to response time have analyzed response speed or allotted time at either the test level (e.g., Engelhardt and Goldhammer, 2019) or the item level (e.g., Ren et al., 2019; Hahnel et al., 2022). Still, with test-level analysis it is difficult to identify the precise step in which students spend a lot of time on an item.…”
Section: Introduction (mentioning)
confidence: 99%
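To make the test-level versus item-level distinction in the quoted statement concrete, here is a minimal, hypothetical sketch of deriving both kinds of time measures from per-item log records; the column names and long data layout are assumptions, and real platforms log timing differently.

```python
# Hypothetical sketch: aggregating per-item response times from log data.
# The column names (person, item, seconds) and the long layout are
# assumptions for illustration; actual log formats vary by platform.
import pandas as pd

logs = pd.DataFrame({
    "person":  ["p1", "p1", "p1", "p2", "p2", "p2"],
    "item":    ["i1", "i2", "i3", "i1", "i2", "i3"],
    "seconds": [42.0, 15.5, 88.2, 30.1, 20.4, 55.0],
})

# Test-level measure: total time per person (hides where the time went)
test_level = logs.groupby("person")["seconds"].sum()

# Item-level measure: time per person and item (shows which items absorb time)
item_level = logs.pivot(index="person", columns="item", values="seconds")

print(test_level)
print(item_level)
```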
“…To our knowledge, the idea of using log data from an EAP to analyse exam items was first introduced by Neel's 1999 work, presented at the Annual Meeting of AERA (cited in Jung Kim, 2001). To date, exam logs have mostly been used for measuring and modelling exam‐takers' accuracy, speed, revisits and effort (Bezirhan et al, 2021; Klein Entink et al, 2008; Sharma et al, 2020; Wise, 2015; Wise & Gao, 2017); analysing answering and revising behaviour during exams (Costagliola et al, 2008; Pagni et al, 2017); examining and enhancing metacognitive regulation of strategy use and cognitive processing (Dodonova & Dodonov, 2012; Goldhammer et al, 2014; Papamitsiou & Economides, 2015; Thillmann et al, 2013); classifying exam‐takers towards testing services personalisation (Papamitsiou & Economides, 2017); validating the interpretations of test score (Engelhardt & Goldhammer, 2019; Kane & Mislevy, 2017; Kong et al, 2007; Padilla & Benítez, 2014; Toton & Maynes, 2019; van der Linden & Guo, 2008); understanding exam‐takers' performance (Greiff et al, 2016; Kupiainen et al, 2014; Papamitsiou et al, 2014, 2018; Papamitsiou & Economides, 2013, 2014); enhancing item selection in adaptive testing environment (van der Linden, 2008); analysing exam items (Costagliola et al, 2008; Jung Kim, 2001); detecting cheating (Cleophas et al, 2021; Costagliola et al, 2008); and identifying test‐taking strategies (Costagliola et al, 2008). Nonetheless, most of the previous work focused on time‐based behaviours and the interpretation of exam‐taker results; few of them examined the potential of using exam‐taker behaviours to validate or enrich the interpretation of the quality of exam items.…”
Section: Background and Related Work (mentioning)
confidence: 99%