2023
DOI: 10.1016/j.ajp.2022.103316
An explainable predictive model for suicide attempt risk using an ensemble learning and Shapley Additive Explanations (SHAP) approach

Cited by 44 publications (24 citation statements)
References 35 publications
“…Natural Language Processing (NLP) techniques have been used to analyze electronic health records (EHRs) and extract information that can be used to predict or diagnose diseases [63]. Explainable AI techniques, such as SHAP (SHapley Additive exPlanations), are used to interpret the predictions made by ML models, which is essential in a medical context where decision-making should be transparent [64]. Generative models such as GANs (Generative Adversarial Networks) have been developed to generate synthetic medical images, for example of lung disease, that can augment existing data and improve model performance [65].…”
Section: Artificial Intelligence Techniques in Disease Diagnosis and ... (mentioning)
confidence: 99%
“…However, due to the complexity of the non-parametric algorithms that are common in machine-learning methods, it is impossible for a human to analyze each tree and explain how the machine-learning method works [1, 62–65]. Thus, using SHAP allows a covariate interpretation similar to linear regression, even if the exact effect sizes of the covariates cannot be interpreted the way they can in linear regression [15, 22, 49, 66–68]. Fig 2A highlights the relationship between increasing values of a covariate (purple) and increased odds of heart disease.…”
Section: Discussion (mentioning)
confidence: 99%
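
The directional reading described above, where rising covariate values push the predicted odds up or down, is what a SHAP summary of a tree ensemble exposes. The following is a minimal, hypothetical sketch rather than the cited study's code: the synthetic dataset, the gradient-boosted classifier, and the generic feature names are assumptions, used only to show how SHAP values recover a sign-of-effect interpretation comparable to reading coefficient signs in a logistic regression.

import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for a tabular clinical dataset (illustrative assumption, not study data).
X, y = make_classification(n_samples=500, n_features=6, n_informative=4, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape (n_samples, n_features), log-odds scale

# Direction of effect per covariate: correlation between a feature's values and its SHAP values.
# A positive sign plays the same role as a positive coefficient in a logistic regression.
for j in range(X.shape[1]):
    direction = np.corrcoef(X[:, j], shap_values[:, j])[0, 1]
    magnitude = np.abs(shap_values[:, j]).mean()
    print(f"feature_{j}: mean |SHAP| = {magnitude:.3f}, direction = {direction:+.2f}")

# shap.summary_plot(shap_values, X)  # beeswarm plot analogous to the cited Fig 2A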
“…SHAP is a useful method for sorting effects and breaking down predictions into individual feature impacts [98]. The SHAP value indicates the degree to which a particular feature has changed the prediction, and allows the modeler to decompose any prediction into the sum of the effects of each feature value [99]. The SHAP value is used as a unified measure of feature importance.…”
Section: SHAP Values (SHapley Additive exPlanations) (mentioning)
confidence: 99%
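
As a concrete illustration of the additive decomposition described in this passage, the toy sketch below (an assumed example, not code from the cited works) verifies that a single prediction equals the explainer's base value plus the sum of that row's SHAP values, and uses the mean absolute SHAP value as the unified importance measure.

import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Illustrative synthetic data and model (assumptions for the sketch).
X, y = make_classification(n_samples=300, n_features=5, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # per-row, per-feature contributions (log-odds scale)

# Additivity: base value + sum of a row's SHAP values reproduces that row's raw model output.
base = np.atleast_1d(explainer.expected_value)[0]
i = 0  # any single prediction to decompose
reconstructed = base + shap_values[i].sum()
raw_margin = model.decision_function(X[i:i + 1])[0]
print(f"base + sum(SHAP) = {reconstructed:.4f}  vs  decision_function = {raw_margin:.4f}")

# Unified feature-importance measure: mean absolute SHAP value per feature across all rows.
print("mean |SHAP| per feature:", np.round(np.abs(shap_values).mean(axis=0), 3))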