2022
DOI: 10.1093/jamia/ocac078
A framework for the oversight and local deployment of safe and high-quality prediction models

Abstract: Artificial intelligence/machine learning models are being rapidly developed and used in clinical practice. However, many models are deployed without a clear understanding of clinical or operational impact and frequently lack monitoring plans that can detect potential safety signals. There is a lack of consensus in establishing governance to deploy, pilot, and monitor algorithms within operational healthcare delivery workflows. Here, we describe a governance framework that combines current regulatory best pract…

Cited by 36 publications (18 citation statements)
References 12 publications
“…Transparency in the types of training data, processes, and evaluations used is paramount. For example, an academic medical center recently published its framework for oversight and deployment of prediction models, which includes checkpoint gates and an oversight governance structure. Current evidence suggests that such governance infrastructure is rare…”
Section: Results (mentioning, confidence: 99%)
“…Stanford Health Care (SHC) and Duke Health, for instance, both require all AI tools proposed for clinical use in their facilities to undergo review by an oversight group specifically constituted for that purpose. 6,7 In SHC's FURM (Fair, Useful, Reliable Model) review process, a team of data scientists, ethicists, clinicians, and administrators conducts a broad evaluation of proposed AI uses. 7 For uses greenlighted for deployment, follow-up assessments monitor how the concerns raised are addressed.…”
Section: The Challenge Ahead (mentioning, confidence: 99%)
“…7 For uses greenlighted for deployment, follow-up assessments monitor how the concerns raised are addressed. The ethical review process includes quantitative assessments of model fairness for specific patient subgroups, qualitative interviewing with multiple stakeholders (AI developers, clinical staff, hospital administrators, patient representatives) to surface ethical concerns, 8 and consultation with additional AI experts about recommendations for addressing these concerns. Duke Health's ABCDS (Algorithm-Based Clinical Decision Support) process requires proposers to develop and submit plans demonstrating the algorithm's clinical utility that include assessments of model fairness, 6 and work is underway to operationalize a wider range of ethical assessment criteria.…”
Section: The Challenge Ahead (mentioning, confidence: 99%)
“…In response to this, institutions, such as Duke University, are establishing governance frameworks for deploying CDS tools. 5 A key aspect of our process is the local testing of CDS tools and assessing their performance across and within diverse patient populations before deployment. We view this as an important step to ensuring the high quality and equitable performance of any CDS.…”
Section: Related Article (mentioning, confidence: 99%)