2019
DOI: 10.7189/jogh.09.020318
Artificial intelligence and algorithmic bias: implications for health systems

Cited by 247 publications (176 citation statements) · References 7 publications
“…Algorithms and artificial intelligence are at risk of exacerbating health inequalities by systematic misclassification. 40 We found that in our population, the proportion classified as at high risk according to national criteria was similar for those aged over 95 years as in those aged 65-69 years. Given that the proportion at high risk increases monotonically until the age of 85 years, there is a concern that this may reflect systematic underclassification of risk in the oldest old of our population, especially because the proportion at low risk paradoxically increases from the age of 85 years and onward.…”
Section: Discussion (supporting)
confidence: 53%
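The excerpt above describes a simple diagnostic: the proportion classified as high risk should rise monotonically with age, so a flattening or reversal in the oldest bands may signal systematic underclassification. A minimal sketch of that check follows; the age bands and proportions are hypothetical illustrations, not the study's data.

```python
# Illustrative check for the pattern described above: flag age bands where the
# high-risk share drops relative to the previous band.
# NOTE: the bands and shares below are HYPOTHETICAL, not the cited study's data.

age_bands = ["65-69", "70-74", "75-79", "80-84", "85-89", "90-94", "95+"]
high_risk = [0.12, 0.18, 0.25, 0.33, 0.31, 0.27, 0.13]  # hypothetical shares

def flag_nonmonotonic(bands, shares):
    """Return the age bands whose high-risk share falls below the previous band's."""
    return [bands[i] for i in range(1, len(shares)) if shares[i] < shares[i - 1]]

flagged = flag_nonmonotonic(age_bands, high_risk)
print(flagged)
```

With these hypothetical numbers the check flags the three oldest bands, mirroring the paradox the citing authors report: the share at high risk stops rising after 85 and falls toward the level seen at 65-69.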
“…Algorithmic bias occurs when an algorithm compounds and amplifies existing inequities in socioeconomic status, race, ethnic background, religion, gender, disability, or sexual orientation during its application, which adversely impacts equity in health systems. The factors contributing to algorithmic bias include the lack of exact definitions and standards of fairness, inadequate contextual specificity, and the black-box nature of algorithms [27].…”
Section: Challenges and Biases in AI (mentioning)
confidence: 99%
“…"Opt-in" models, though lighter touch, may result in only the most proactive patients engaging. The inherent risk is that algorithms will only work well in these proactive populations, compounding inequities in regard to age, ethnicity, and biological sex 16 . In 2016 the UK's National Data Guardian concluded that "opt-out" models would be the most appropriate for collection and secondary use of National Health Service data 17 and from 2018, a national data opt-out was instituted.…”
Section: Compliance and Contracting (mentioning)
confidence: 99%