2023
DOI: 10.2196/43734

Explainable Machine Learning Techniques To Predict Amiodarone-Induced Thyroid Dysfunction Risk: Multicenter, Retrospective Study With External Validation

Abstract: Background Machine learning offers new solutions for predicting life-threatening, unpredictable amiodarone-induced thyroid dysfunction. Traditional regression approaches for adverse-effect prediction without time-series consideration of features have yielded suboptimal predictions. Machine learning algorithms with multiple data sets at different time points may generate better performance in predicting adverse effects. Objective We aimed to develop and …

Cited by 9 publications (7 citation statements); references 58 publications.

Citation statements:
“…This process involved identifying crucial risk factors for predicting subsequent parental stress. SHAP techniques were developed to predict the risk of amiodarone-induced thyroid dysfunction, which served as a tool for personalized risk assessment and helped in clinical decision-making [17]. An interpretable model using the SHAP approach enhanced mental stress detection using heart rate variability data.…”
Section: Game Theory Methods
Confidence: 99%
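The statement above describes SHAP being used for personalized, per-patient risk assessment. A minimal, hypothetical sketch of what such a local explanation might look like with the shap library, assuming a tree-based risk model; the feature names (HDL_C, TSH, FT4, ALP, LDL_C, treatment_days), the classifier, and the synthetic data are illustrative stand-ins, not the study's actual pipeline:

```python
# Hypothetical sketch: per-patient SHAP explanation for an
# amiodarone-induced thyroid dysfunction risk model.
# Feature names, model, and data are illustrative, not from the study.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = ["HDL_C", "TSH", "FT4", "ALP", "LDL_C", "treatment_days"]
X = pd.DataFrame(rng.normal(size=(500, len(features))), columns=features)
y = rng.integers(0, 2, size=500)  # stand-in outcome labels

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Local explanation: each feature's contribution for one patient.
patient = 0
for name, contrib in zip(features, shap_values[patient]):
    print(f"{name:>15}: {contrib:+.3f}")
```

Positive contributions push the predicted risk up for that patient and negative ones push it down, which is what makes the output usable as a personalized risk-assessment aid rather than only a global model summary.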
“…XAI has numerous applications across healthcare and medicine. Currently, interpretability is the primary focus in the medical domain, to provide easy-to-understand results for patients and caregivers and to enhance their trust in specific applications such as drug discovery [16], thyroid dysfunction prediction [17], parental stress [18], wearable health trackers and biosensors [19], and respiratory disease [20]. The black-box issue in AI arises when a system cannot clearly explain how a model reached its decision; the terms black box, gray box, and white box describe different levels of transparency in the inner workings of machine learning algorithms.…”
Section: The General Concept of Interpretability in Healthcare
Confidence: 99%
“…Recently, machine learning models that can predict the occurrence of amiodarone-induced hyperthyroidism were described. This study found that the most important baseline risk factors for amiodarone-induced thyroid dysfunction were higher HDL-C and TSH; lower free thyroxine (FT4), alkaline phosphatase, and LDL-C; and a shorter duration of treatment (18). Such tools are still far from being used in routine clinical practice, but they could significantly improve patient safety if available.…”
Section: Hyperthyroidism
Confidence: 99%
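A ranked list of "most important baseline risk factors" like the one quoted above is typically derived by aggregating per-patient SHAP values into a global importance score, most commonly the mean absolute SHAP value per feature. A minimal sketch of that aggregation step, assuming shap_values is an (n_patients, n_features) array produced by any SHAP explainer; the feature names and random values here are illustrative:

```python
# Hypothetical sketch: turning per-patient SHAP values into a global
# ranking of baseline risk factors via mean absolute SHAP value.
# `shap_values` would normally come from a SHAP explainer; random
# values are used here only to keep the sketch self-contained.
import numpy as np

features = ["HDL_C", "TSH", "FT4", "ALP", "LDL_C", "treatment_days"]
shap_values = np.random.default_rng(1).normal(size=(500, len(features)))

mean_abs = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(features, mean_abs), key=lambda t: -t[1]):
    print(f"{name:>15}: mean |SHAP| = {score:.3f}")
```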
“…Model transparency is critical to the application of models in the medical domain. Therefore, to make time-to-event ML models more transparent, we introduced SHAP, which is a model-agnostic post hoc explanation algorithm that has been widely applied to explain ML models [10,38,39].…”
Section: Model Explanation
Confidence: 99%
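The model-agnostic property noted above is what lets SHAP be bolted onto time-to-event and other non-standard models post hoc: a kernel-based explainer needs only a prediction function, not model internals. A minimal sketch using shap.KernelExplainer around a stand-in classifier; the model, data, and the predict_risk wrapper are illustrative assumptions:

```python
# Hypothetical sketch: SHAP as a model-agnostic post hoc explainer.
# KernelExplainer only needs a callable mapping inputs to risk scores,
# so it can wrap any model. Setup below is illustrative.
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 4))
y = rng.integers(0, 2, size=200)
model = LogisticRegression().fit(X, y)

# Any black-box risk function works here, e.g. a survival model's
# predicted event probability at a fixed time horizon.
predict_risk = lambda data: model.predict_proba(data)[:, 1]

background = shap.sample(X, 50)  # background data for the explainer
explainer = shap.KernelExplainer(predict_risk, background)
shap_values = explainer.shap_values(X[:5])  # explain the first 5 rows
print(np.round(shap_values, 3))
```

The trade-off is cost: kernel-based estimation evaluates the prediction function many times per explained row, which is why exact tree-based explainers are preferred whenever the underlying model allows them.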