2022
DOI: 10.31234/osf.io/4eqnk
Preprint

The sins of the parents are to be laid upon the children: biased humans, biased data, biased models

Abstract: Technological innovations have become a key driver of societal advancements. Nowhere is this more evident than in the field of machine learning (ML), which has developed algorithmic models that shape our decisions, behaviors, and outcomes. These tools have widespread use, in part, because they can synthesize massive amounts of data to make seemingly objective recommendations. Yet, in the past few years, the ML community has been raising the alarm on why we should be cautious in interpreting and using these models…

Cited by 3 publications (8 citation statements)
References 82 publications
“…Our framework is both theoretically and practically motivated. In terms of the algorithmic fairness literature, this research is theoretically motivated by two sources: first, it is motivated by psychologists' and computer scientists' invitations to management scholars for their input on the growing body of research on fairness in machine learning (Osborne et al 2023). Second, this research is motivated by the growing prevalence (and power) of AI models in organizations.…”
Section: Theoretical and Practical Motivations
confidence: 99%
“…Primarily, this research has argued that programmers struggle to recognize that there is bias in AI models. For instance: Osborne et al (2023) note that fairness in AI is a relatively nascent field, and many programmers may not have received training about creating fair models or know that they should look for bias in their models. Indeed, many programmers may not know specifically how bias is input into the model or where to look for it within the model.…”
Section: Programmers' Fairness-awareness
confidence: 99%
“…In spite of the resource-saving potential, we similarly urge caution and careful psychometric consideration. Decades of research in social psychology have established that judgments of human behavior are strongly subject to prejudice and bias (Greenwald & Krieger, 2006), and lessons from NLP suggest that these biases will inevitably permeate early attempts at behavioral coding algorithms (Osborne et al, 2022). In the long run, many of our recommendations for mitigating sources of biases in human coders (see "Coding System Development" and "Recruiting and Training Coders") may be adapted for the training of machine learning algorithms.…”
Section: Addressing Biases Inserted By Automation
confidence: 99%