2022
DOI: 10.1007/s43681-022-00138-8

AI bias: exploring discriminatory algorithmic decision-making models and the application of possible machine-centric solutions adapted from the pharmaceutical industry

Abstract: A new and unorthodox approach to dealing with discriminatory bias in Artificial Intelligence is needed. As explored in detail, the current literature forms a dichotomy, with studies originating either from the fields of philosophy and sociology or from those of data science and programming. It is suggested that an integration of both academic approaches is needed instead, and that it should be machine-centric rather than human-centric, applied with a deep understanding of societal and individual prejudices…

Cited by 75 publications (28 citation statements)
References 64 publications
“…A side-advantage of our current approach (learning mathematical representation of ML-based outcome models) is that it adds a level of transparency to the more 'black-box' ML models. This in turn can enhance the users' ability to detect 'nonsensical' decision rules that might have been learned by the models due to, among other factors, model overfitting, insufficient or noisy data, or misrepresentation or bias in the training dataset (Cho 2021, Belenguer 2022, Tasci et al 2022). This is a crucial step, as prediction uncertainty and the lack of physician trust in ML-based outcome models are among the most important hurdles facing the clinical implementation of outcome-based treatment planning.…”
Section: Discussion (mentioning)
confidence: 99%
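
The surrogate-modelling idea in the excerpt above can be made concrete with a short sketch. The following is a hypothetical, minimal illustration rather than the cited authors' actual method: a shallow decision tree is fitted to the predictions of a stand-in 'black-box' classifier on synthetic data, so that the learned decision rules become human-readable and can be screened for 'nonsensical' or bias-driven splits. All data, model choices, and feature names below are placeholder assumptions.

    # Hypothetical sketch (not the cited method): approximate a black-box model
    # with an interpretable surrogate so its decision rules can be inspected.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 4))                 # synthetic features (placeholder)
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic outcome (placeholder)

    # Stand-in for the 'black-box' ML outcome model discussed in the excerpt.
    black_box = GradientBoostingClassifier().fit(X, y)

    # Surrogate: a shallow tree trained to mimic the black-box predictions.
    surrogate = DecisionTreeClassifier(max_depth=3)
    surrogate.fit(X, black_box.predict(X))

    # The surrogate's rules print as plain if/else statements and can be screened
    # for splits that look like artefacts of overfitting or biased training data.
    print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(4)]))

A shallow tree is used here only because its rules are directly readable; any interpretable model (linear, rule list, symbolic expression) could play the same role.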
“…In our context, over half of the participants (56.5%) had concerns about AI systems potentially overselling unnecessary medications and cosmetics to patients, raising questions about beneficence and potential maleficence. Such an act may be the result of a biased algorithmic system in the pharmaceutical market, and the absence of legally binding regulations in such a situation would harm consumers [21]. AI in pharmacy practice should prioritize patient well-being and ensure that recommendations are based on evidence-based guidelines rather than profit motives.…”
Section: Discussion (mentioning)
confidence: 99%
“…These prejudices may show up in automated decision-making procedures, which could result in discrimination in the hiring, lending, and criminal justice systems, among other areas. It is crucial to protect fairness by tackling bias in AI systems to avoid feeding into societal stereotypes [104,105]. Privacy issues are also brought up by the enormous amount of individual data that is necessary for AI to perform at its best [106].…”
Section: Ethical Concerns Related to AI Advancements (mentioning)
confidence: 99%
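
To make the fairness concern in the excerpt above concrete, the sketch below shows a hypothetical, minimal group-fairness audit (not drawn from the works cited there): it compares positive-decision rates across two groups, e.g. in a hiring model, and computes a disparate-impact style ratio. The decisions, group labels, and interpretation threshold are illustrative assumptions only.

    # Hypothetical sketch: a minimal group-fairness audit for an automated
    # decision system (e.g. hiring), comparing selection rates across groups.
    import numpy as np

    def selection_rates(decisions: np.ndarray, groups: np.ndarray) -> dict:
        """Positive-decision rate for each group label."""
        return {str(g): float(decisions[groups == g].mean()) for g in np.unique(groups)}

    # Placeholder decisions (1 = positive outcome) and group membership labels.
    decisions = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 0])
    groups    = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

    rates = selection_rates(decisions, groups)
    ratio = min(rates.values()) / max(rates.values())  # disparate-impact style ratio

    print(rates)                                 # {'A': 0.6, 'B': 0.2}
    print(f"selection-rate ratio: {ratio:.2f}")  # low ratios flag potential group bias

Such a check is only a first screen: a low ratio signals that the decision process deserves closer scrutiny, not that discrimination has been established.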