2021
DOI: 10.1016/j.patter.2021.100241
Moving beyond “algorithmic bias is a data problem”

Cited by 116 publications (74 citation statements)
References 5 publications
“…Their focus is bias in autonomous systems such as self-driving cars, and they offer the example of using a biased estimator to minimize variance when the sample size is small. Another exception is Hooker [24], who makes a plea for looking beyond data bias alone, arguing that model design also contributes to bias, and citing examples from work in facial recognition.…”
Section: Data, People, or Algorithms? (citation type: mentioning)
confidence: 99%
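The bias–variance tradeoff invoked above can be demonstrated with a small simulation (a NumPy sketch; the choice of a variance estimator on Gaussian data is an illustrative assumption, not taken from the cited work). With few samples, the biased estimator that divides by n often has lower mean squared error than the unbiased one that divides by n−1:

```python
import numpy as np

rng = np.random.default_rng(0)
true_var = 4.0
n = 5            # small sample size, as in the cited example
trials = 200_000

samples = rng.normal(0.0, np.sqrt(true_var), size=(trials, n))
unbiased = samples.var(axis=1, ddof=1)  # divides by n-1: unbiased, but noisier
biased = samples.var(axis=1, ddof=0)    # divides by n: biased low, less noisy

mse_unbiased = np.mean((unbiased - true_var) ** 2)
mse_biased = np.mean((biased - true_var) ** 2)
print(mse_unbiased, mse_biased)  # the biased estimator wins on MSE here
```

For Gaussian data the theoretical MSEs are 2σ⁴/(n−1) versus 2(n−1)σ⁴/n² + σ⁴/n², so the deliberately biased choice is better whenever n is small.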
“…Learning bias arises when modeling choices amplify performance disparities across different examples in the data [18]. For example, an important modeling choice is the objective function that an ML algorithm optimizes during training. Typically, these functions encode some measure of accuracy on the task (e.g., cross-entropy loss for classification problems or mean squared error for regression problems).…”
Section: Learning Bias (citation type: mentioning)
confidence: 99%
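The two objective functions named in this statement can be written out directly (a minimal NumPy sketch; the function names and toy values are illustrative, not from the cited article):

```python
import numpy as np

def cross_entropy(p_true, p_pred, eps=1e-12):
    """Cross-entropy loss for classification with one-hot targets."""
    p_pred = np.clip(p_pred, eps, 1.0)        # avoid log(0)
    return float(-np.sum(p_true * np.log(p_pred), axis=-1).mean())

def mean_squared_error(y_true, y_pred):
    """Mean squared error loss for regression."""
    return float(np.mean((y_true - y_pred) ** 2))

# Toy classification: two examples, three classes.
y = np.array([[1, 0, 0], [0, 1, 0]])
probs = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])
print(cross_entropy(y, probs))

# Toy regression.
print(mean_squared_error(np.array([1.0, 2.0]), np.array([1.1, 1.8])))
```

Both losses reduce every example to the same scalar objective, which is exactly why the choice between them (and how errors are aggregated) can encode learning bias.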
“…Thus, on many occasions a perfectly balanced dataset cannot be obtained, and algorithmic solutions may come in handy. As discussed in a recent article [4], we need to move beyond the idea that “algorithmic bias is a data problem” and start acknowledging that algorithms are not impartial: some design choices are better than others. In that sense, the choice of specific model architectures, loss functions, and training strategies plays a fundamental role in amplifying or mitigating potential equity issues, because these choices are meant to induce specific behaviour in our systems.…”
Section: It Is Not All About Data (citation type: mentioning)
confidence: 99%
“…Recently, the research community working on fairness in ML has shown that, contrary to popular belief about computer systems, these models can be far from objective, and the decisions they make can be strongly influenced, even unintentionally, by demographic factors such as sex, ethnicity, or age, resulting in poor performance for specific subgroups [1, 2]. The causes behind this phenomenon are multiple, ranging from a lack of diversity in the team of developers [3] to technical design choices in model architecture, objective functions, and training algorithms [4]. Another fundamental aspect is the data used to train these models.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
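The subgroup performance gaps this statement describes are typically surfaced by evaluating a model separately per demographic group. A minimal sketch (the helper name and toy labels are illustrative assumptions):

```python
import numpy as np

def accuracy_by_group(y_true, y_pred, groups):
    """Report accuracy separately for each subgroup so that a high
    overall score cannot hide poor performance on one group."""
    return {g: float(np.mean(y_pred[groups == g] == y_true[groups == g]))
            for g in np.unique(groups)}

y_true = np.array([1, 0, 1, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 1, 0])
groups = np.array(["a", "a", "a", "b", "b", "b"])

print(accuracy_by_group(y_true, y_pred, groups))  # a: 1.0, b: 0.0
```

Overall accuracy here is 0.5, which looks like a mediocre but uniform model; the per-group breakdown reveals it is perfect for one group and useless for the other.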