2020
DOI: 10.1007/s00146-020-00996-y
Machine learning’s limitations in avoiding automation of bias

Cited by 14 publications (9 citation statements)
References 11 publications
“…We observed biased predictions due to imbalance news items in few cases (e.g. the model predicted fake news items with higher accuracy than real news items) and therefore this example presents a scenario of limitation of ML algorithms in avoiding automation of bias [31].…”
Section: Results
confidence: 86%
“…Both the engineering approach and the stipulation of the regulatory approach need to be incorporated into an integrative mechanism oriented to reducing and mitigating algorithmic decision-making (ADM) systems-produced discriminatory outcomes, analyzed in previous studies [1,2]. The traditional approach conducted to manage discrimination, prejudice or bias, and algorithmic unfairness historically exhibits a reactive character that must be overcome, as criticized in [3]. Additionally, the needed proactive approach must incorporate the determination of possible remedy actions due to discriminatory ADM systems' outcomes.…”
Section: Introduction
confidence: 99%
“…Ethically, one of the central issues in AI ethics is: how can they be created in such a way that they do not share the same ethical biases as their creator [8,41]. When Google's word embedding tool was used to solve verbal analogy problems, researchers found that it performed the task with blatant and rampant gender bias [41,65]. AI bias has the potential to be a particular problem in fields prone to unequal treatment of individuals such as medicine or fields that have the potential to have severe environmental impacts if the AI involved are only concerned with maximum yields, for example agriculture [30,32,66].…”
Section: Introduction
confidence: 99%