2022
DOI: 10.1177/25152459211061337
A Conceptual Framework for Investigating and Mitigating Machine-Learning Measurement Bias (MLMB) in Psychological Assessment

Abstract: Given significant concerns about fairness and bias in the use of artificial intelligence (AI) and machine learning (ML) for psychological assessment, we provide a conceptual framework for investigating and mitigating machine-learning measurement bias (MLMB) from a psychometric perspective. MLMB is defined as differential functioning of the trained ML model between subgroups. MLMB manifests empirically when a trained ML model produces different predicted score levels for different subgroups (e.g., race, gender)…

Cited by 30 publications (19 citation statements)
References 119 publications
“…Our proposed argument-based fairness approach mirrors the process of argument-based validity in many ways, as it follows parts of the Toulmin model of argument structure (1958/2003). To introduce the approach before couching it within AI assessment, we use some isolated examples from ongoing work developing a classroom-based reading assessment in the Institute of Education Sciences' funded Project DIMES (Huggins-Manley, Benedict, Goodwin, & Templin, 2019-2022).…”
Section: Argument-based Fairness
confidence: 99%
“…This work established methods and metrics for quantifying and studying bias, aligning such methods to demarcated stages of such assessment development. More broadly, Tay et al. (2022) proposed a framework for conceptualizing and quantifying bias in AI assessments that use machine learning (ML) as the core assessment engine, discussing some distinctions between fairness and bias and then focusing the framework on the latter. They, too, centered their approach to fairness on matters of bias only and aligned the bias concerns to stages of assessment development, which are somewhat different from those of traditional assessments, as elaborated later and discussed in D'Mello et al. (2021).…”
Section: Fairness in AI Assessments
confidence: 99%