2021
DOI: 10.35542/osf.io/pbmvz
Preprint

Algorithmic Bias in Education

Abstract: Draft Preprint. In this paper, we review algorithmic bias in education, discussing the causes of that bias and reviewing the empirical literature on the specific ways that algorithmic bias is known to have manifested in education. While other recent work has reviewed mathematical definitions of fairness and expanded algorithmic approaches to reducing bias, our review focuses instead on solidifying the current understanding of the concrete impacts of algorithmic bias in education—which groups are known to be im…

Cited by 36 publications (18 citation statements)
References 71 publications (58 reference statements)
Citation statements
“…The public demanded accountability after what can only be described as a fraught and messy policymaking process involving competing conceptions of fairness, unavoidable harms, and an insufficient appeal system (Adams, Weale, and Barr 2020; Kippin and Cairney 2021). Algorithmic bias raises concern in various application areas of AIED technologies, including educational assessment, students' dropout risk prediction, and algorithmic ability-grouping (see Baker & Hawn [2021]; Kizilcec & Lee [forthcoming]; Carmel & Ben-Shahar [2017]).…”
Section: Accountability With and For Algorithms In Education (mentioning)
confidence: 99%
“…Trained on historical data, ML algorithms may infer (multiple) proxies for legally protected or otherwise sensitive attributes (e.g., 'race', gender, or socio-economic status), consequently introducing disparities into algorithmic predictions (Baker & Hawn [2021]; Mehrabi et al. [2021]). Due to unrepresentative sampling and other technical design flaws, bias against specific groups in outcomes may occur without explicit use of protected or sensitive attributes as model features.…”
Section: Algorithmic Fairness For Accountability? (mentioning)
confidence: 99%
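
The proxy mechanism described in the statement above can be made concrete with a short sketch. The code below is not taken from Baker & Hawn (2021) or from the citing papers; it is a hypothetical illustration using numpy and scikit-learn on synthetic data, in which the protected attribute ("group") is never given to the model but a correlated proxy feature is.

# Hypothetical sketch of proxy-induced bias (assumes numpy and scikit-learn are installed).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (e.g., a demographic group); excluded from the model's features.
group = rng.integers(0, 2, size=n)

# Proxy feature correlated with group membership (e.g., neighborhood or school attended).
proxy = group + rng.normal(0.0, 0.5, size=n)

# Legitimate feature, independent of group.
skill = rng.normal(0.0, 1.0, size=n)

# Historical labels encode a disadvantage for group 1, mimicking biased training data.
logits = 1.0 * skill - 1.5 * group
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

# Train only on (skill, proxy); the protected attribute is not a feature.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, y)
pred = model.predict_proba(X)[:, 1]

# Predictions still differ by group, because the proxy carries the group signal.
print("Mean predicted positive rate, group 0:", round(pred[group == 0].mean(), 3))
print("Mean predicted positive rate, group 1:", round(pred[group == 1].mean(), 3))

Even though the protected attribute is excluded as a feature, the model's predictions differ systematically between the two groups, because the proxy feature carries the group signal learned from the biased historical labels. Detecting this kind of disparity in practice therefore requires access to the protected attribute at audit time, which is one reason the quoted passage treats feature exclusion alone as insufficient.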