2019
DOI: 10.1001/jamapediatrics.2019.1075
Machine Learning at the Clinical Bedside—The Ghost in the Machine

Abstract: settings, in which lack of resources means that patients are given an appointment up to 3 months after the initial script. Too often, both clinicians and patients may settle for better without an understanding of what best looks like. Dose optimization requires several visits, clinician time, and patient education. This initial investment might lead to more patients achieving remission over just improvement and might mitigate our current rates of adherence and persistence, which are poor. 7 Ching et al 1 ha…

Cited by 9 publications (12 citation statements)
References 12 publications

“…However, these complex machine-learning models lack the interpretability to integrate judgment, thus not allowing review nor the recognition of bias, which may build mistrust in the user. 30 Therefore, we use interpretable models with visual representation to allow stability analysis and ensure the integration of clinical judgment within the CDI. 28…”
Section: Contextualizing PCS in the Context of CDI Development (mentioning; confidence: 99%)
“…Here lies a key challenge for machine learning tools, especially for techniques like neural networks, which provide more "black box" predictions. 73 The integration of such black box predictions in clinical decision making is problematic because it means a departure from the paradigm of evidence-based medicine. 74 Additionally, shared decision making between the patient and physician also requires that decisions supported by machine-learning tools can also be explained.…”
Section: Validation for Clinical Use in Pediatrics (mentioning; confidence: 99%)
“…74 Additionally, shared decision making between the patient and physician also requires that decisions supported by machine-learning tools can also be explained. 67,73 Therefore, explicability for both the physician and the patient is likely a requirement for meaningful contributions to the decision process. Ongoing efforts to improve the explicability of complex machine-learning models are, therefore, crucial to support their clinical acceptance and implementation.…”
Section: Validation for Clinical Use in Pediatrics (mentioning; confidence: 99%)