2021
DOI: 10.1093/jamia/ocab148

Bias and fairness assessment of a natural language processing opioid misuse classifier: detection and mitigation of electronic health record data disadvantages across racial subgroups

Abstract: Objectives: To assess fairness and bias of a previously validated machine learning opioid misuse classifier. Materials and Methods: Two experiments were conducted with the classifier’s original (n = 1000) and external validation (n = 53 974) datasets from 2 health systems. Bias was assessed via testing for differences in type II error rates across racial/ethnic subgroups (Black, Hispanic/Latinx, White, Other) using bootstrapp…
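The truncated methods sentence describes bootstrap testing for differences in type II error (false negative) rates across racial/ethnic subgroups. Below is a minimal sketch of what such a comparison could look like, assuming a dataframe with race, label, and pred columns and a percentile confidence interval; these names and the resample count are illustrative assumptions, not the authors' published code.

```python
import numpy as np
import pandas as pd

def fnr(y_true, y_pred):
    # Type II error rate: fraction of true misuse cases the classifier missed.
    pos = y_true == 1
    return np.mean(y_pred[pos] == 0) if pos.any() else np.nan

def bootstrap_fnr_gap(df, group_a, group_b, n_boot=1000, seed=0):
    # Percentile-bootstrap 95% CI for the difference in false negative
    # rates between two racial/ethnic subgroups.
    rng = np.random.default_rng(seed)
    gaps = []
    for _ in range(n_boot):
        # Resample each subgroup with replacement, then compare error rates.
        a = df[df.race == group_a].sample(frac=1.0, replace=True,
                                          random_state=int(rng.integers(2**31)))
        b = df[df.race == group_b].sample(frac=1.0, replace=True,
                                          random_state=int(rng.integers(2**31)))
        gaps.append(fnr(a.label.values, a.pred.values)
                    - fnr(b.label.values, b.pred.values))
    lo, hi = np.nanpercentile(gaps, [2.5, 97.5])
    return float(np.nanmean(gaps)), (lo, hi)

# A gap whose interval excludes zero suggests the classifier misses misuse
# more often in one subgroup than the other, e.g.:
# gap, ci = bootstrap_fnr_gap(df, "Black", "White")
```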

Cited by 44 publications (58 citation statements)
References 39 publications (36 reference statements)
“…Applications of ML were varied and involved diagnosis, outcome prediction, and clinical score prediction performed on data sets including images, diagnostic studies, clinical text, and clinical variables. Furthermore, 1 (8%) study described a model in routine clinical use [36], 2 (17%) examined prospectively validated clinical models [35, 39], and the remaining 9 (75%) described internally validated models.…”
Section: Results
confidence: 99%
“…Of the 12 studies, 5 (42%) published code used for analysis, 3 (25%) made model development code available [34, 36, 39], 2 (17%) published bias analysis code [33, 36], 1 (8%) published code relevant to debiasing [30], and 1 (8%) published data selection code [33]. In addition, 1 (8%) study used publicly available code for analysis [31], and code was specified as available upon request in 1 (8%) study [35].…”
Section: Results
confidence: 99%
“…Post-hoc mitigation methods in the opioid misuse classifier were previously shown to improve disparities and could be considered before deployment [28]. Close observation is still required during deployment to identify unintended consequences. Implicit bias in provider notes remains a real problem in treatment of patients, especially in patients with substance misuse.…”
Section: Discussion
confidence: 99%
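The post-hoc mitigation methods referenced in the statement above are not detailed in this excerpt. One common post-hoc approach is subgroup-specific decision thresholds chosen to equalize sensitivity, which directly targets the type II error gaps the original paper measured; the sketch below is a generic illustration of that idea under those assumptions, not the classifier's actual mitigation procedure.

```python
import numpy as np

def recall_at(threshold, scores, labels):
    # Sensitivity of a score-based classifier at a given cutoff.
    preds = scores >= threshold
    return preds[labels == 1].mean()

def per_group_thresholds(scores, labels, groups, target_recall=0.80):
    # For each subgroup, take the highest cutoff that still reaches the
    # target sensitivity, roughly equalizing type II error rates.
    thresholds = {}
    for g in np.unique(groups):
        mask = groups == g
        grid = np.unique(scores[mask])
        feasible = [t for t in grid
                    if recall_at(t, scores[mask], labels[mask]) >= target_recall]
        thresholds[g] = max(feasible) if feasible else grid.min()
    return thresholds
```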
“…It is becoming increasingly apparent that ML models are prone to bias that can harm marginalized groups [159,160]. Only 1 article evaluated algorithmic fairness [161]. Fairness must be integrated into phenotyping in the future.…”
Section: Discussion
confidence: 99%