Proceedings of the Tenth International Conference on Learning Analytics & Knowledge 2020
DOI: 10.1145/3375462.3375517

R2DE

Abstract: The main objective of exams is to assess students' expertise on a specific subject. Such expertise, also referred to as skill or knowledge level, can then be leveraged in different ways (e.g., to assign a grade to the students, or to understand whether a student might need some support). Similarly, the questions appearing in the exams have to be assessed in some way before being used to evaluate students. Standard approaches to question assessment are either subjective (e.g., as…
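The task the paper addresses, estimating an item's properties from its text before any student has answered it, can be illustrated with a minimal sketch. Nothing below is taken from the paper itself: the TF-IDF features, the random-forest regressor, and the toy data are all assumptions standing in for whatever pipeline the authors actually use.

```python
# Minimal sketch (not the authors' implementation) of predicting an item's
# difficulty from its text alone.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import make_pipeline

# Hypothetical calibration set: question text paired with a known
# difficulty estimate (e.g., obtained from pretesting on real students).
questions = [
    "What is the derivative of x**2?",
    "State and prove the spectral theorem for compact self-adjoint operators.",
    "What is 2 + 2?",
]
difficulty = [0.1, 0.9, -0.5]  # assumed IRT-style difficulty values

# Text features -> regressor, chained into a single estimator.
model = make_pipeline(TfidfVectorizer(), RandomForestRegressor(random_state=0))
model.fit(questions, difficulty)

# Estimate the difficulty of a newly written question before anyone answers it.
print(model.predict(["Compute the integral of sin(x) from 0 to pi."]))
```

The design point such approaches exploit is that a rough, text-only estimate is available immediately, whereas pretesting requires administering the item to students first.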

Cited by 23 publications (1 citation statement)
References 17 publications (18 reference statements)
“…As with examinee covariates, interest in the relationship between item characteristics and various ancillary data extends beyond the domain of score comparability. Applications include difficulty or item parameter prediction (e.g., Baldwin et al., 2004; Collis et al., 1995; Hall & Ansley, 2008; Irvine et al., 1990; Mislevy, 1988; Nungester & Vass, 1985; Scheuneman et al., 1991; Stowe, 2002; Swaminathan et al., 2003; Wang & Jiao, 2011; Xie, 2019); response time prediction (e.g., Halkitis et al., 1996; Parshall et al., 1994; Smith, 2000; Swanson et al., 2001; Baldwin et al., 2021); evaluation of automatically generated items (Leo et al., 2019; Kurdi, 2020; Benedetto et al., 2020); item pretest survival prediction (Ha et al., 2019; Yaneva et al., 2020); response process complexity estimation (Yaneva et al., 2021); and differential item functioning detection (Sinharay et al., 2009). Nevertheless, while researchers have sought to capitalize on item covariates to improve a broad range of activities, they have not been widely used to facilitate score comparability.…”
Section: Connectives
Citation type: mentioning
Confidence: 99%
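For readers unfamiliar with the "item parameters" the statement above refers to, the standard two-parameter logistic (2PL) IRT model is a useful reference point; "difficulty or item parameter prediction" means estimating these quantities from item text or other covariates rather than from response data:

\[
\Pr(X_{ij} = 1 \mid \theta_i) = \frac{1}{1 + e^{-a_j(\theta_i - b_j)}}
\]

where \(\theta_i\) is examinee \(i\)'s latent ability, \(b_j\) is item \(j\)'s difficulty, and \(a_j\) is its discrimination.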