2019
DOI: 10.1001/amajethics.2019.167
Can AI Help Reduce Disparities in General Medical and Mental Health Care?

Abstract: Background: As machine learning becomes increasingly common in health care applications, concerns have been raised about bias in these systems' data, algorithms, and recommendations. Simply put, as health care improves for some, it might not improve for all. Methods: Two case studies are examined using a machine learning algorithm on unstructured clinical and psychiatric notes to predict intensive care unit (ICU) mortality and 30-day psychiatric readmission with respect to race, gender, and insurance payer type…

Cited by 214 publications (81 citation statements)
References 32 publications (35 reference statements)
“…Furthermore, the substructure of genomic data is correlated with population structure, which can lead to the appearance of non-causal trait associations [94]. However, tools that will help to address machine bias are being developed, and careful attention to these issues could not only help to resolve machine bias issues but could eventually lead to diagnostic systems that are free from human bias [95].…”
Section: Challenges and Limitations (mentioning; confidence: 99%)
“…Artificial intelligence (AI), another promising technology that could be used during emergency situations, could support trained clinicians to make treatment decisions. Currently, the research on the potential use and benefits of AI in addiction care and mental health services is in early development and needs to address important scientific, legal and ethical issues (108,109). Current AI research is focused on assisting addiction care practitioners with treatment for alcohol use disorder (110), identifying and preventing relapse (111), and identifying risk factors (112,113).…”
Section: Integrate IT Solutions To Strengthen and Modernize The Addic… (mentioning; confidence: 99%)
“…Practitioners should, however, be aware that algorithms can be subject to biases (due to misclassification and measurement error, missing data, and small sample size) (108). The implication of such biases can be severe as they might create disparities in addiction care (108,109). Involving addiction care specialists and patient advocacy groups from the beginning in the development of AI can facilitate innovative, ethical, acceptable, and effective solutions.…”
Section: Integrate IT Solutions To Strengthen and Modernize The Addic… (mentioning; confidence: 99%)
“…However, developers still need to demonstrate that when using sensible thresholds, the algorithm does not create or exacerbate inequalities. In fact, several methodological developments in the area of fairness evaluation support this type of analysis [71, 72, 73], and ML/AI developers and health practitioners should engage with these tools. One way in which researchers might demonstrate bias in key subgroups (eg, in minority ethnic groups, or by age) would be to explicitly present these findings so that users of the algorithm know where it has good or poor predictive accuracy.…”
Section: Critical Questions (mentioning; confidence: 99%)
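The subgroup reporting that the excerpt above recommends, presenting a model's predictive accuracy separately for each demographic group, can be sketched as follows. This is a minimal illustration, not code from the cited paper: the group labels and predictions are entirely synthetic, and `accuracy_by_group` is a hypothetical helper name.

```python
# Hedged sketch: report a prediction model's accuracy per demographic
# subgroup so users can see where it performs well or poorly.
# All data below are synthetic; the group labels are illustrative only.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return {group: fraction of correct predictions} for each subgroup."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

# Synthetic example: the model is right 3/4 of the time for group A
# but only 1/2 of the time for group B -- a gap worth reporting.
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 1, 1, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(accuracy_by_group(y_true, y_pred, groups))  # {'A': 0.75, 'B': 0.5}
```

In practice one would replace plain accuracy with metrics suited to the clinical question (e.g., false-negative rates for a mortality model) and compute them per race, gender, or payer type, as the cited case studies do.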