2022
DOI: 10.1161/circimaging.122.014526

Deep Learning for Explainable Estimation of Mortality Risk From Myocardial Positron Emission Tomography Images

Abstract: Background: We aim to develop an explainable deep learning (DL) network for the prediction of all-cause mortality directly from positron emission tomography myocardial perfusion imaging flow and perfusion polar map data and evaluate it using prospective testing. Methods: A total of 4735 consecutive patients referred for stress and rest 82Rb positron emission tomography between 2010 and 2018 were followed up fo…

Cited by 15 publications (19 citation statements)
References 38 publications (57 reference statements)
“…Singh et al 7 have presented an interesting and practical application of deep learning to integrate image data with clinical information to enhance interpretive power. As this and other techniques find their way into everyday use, it is essential we understand the strengths and weaknesses of these methods.…”
Section: Discussion (mentioning)
confidence: 99%
“…Singh et al 7 present an interesting new addition to this family of deep learning models. The model presented combines image data with clinical data in much the same way as a human clinician would.…”
Section: Using the Deep Learning Models (mentioning)
confidence: 99%
“…For instance, Pérez-Pelegrí et al 28 developed a new explainable approach that combines class activation mapping with U-net to automatically estimate the LV volume in end diastole and obtain the result in the form of a segmentation mask, without needing segmentation labels to train the algorithm. Grad-CAM was used in 7 cardiac imaging studies, either for classification 18,34,40,[48][49][50] or segmentation. 51 The latter in particular proposed a new interpretable CNN model (fast and accurate echocardiographic automatic segmentation based on U-Net) that integrates U-net architecture and transfer learning (from Visual Geometry Group 19) to segment 2-dimensional echocardiography of 88 patients into 3 regions (LV, interventricular septal, and posterior LV wall).…”
Section: Literature Review (mentioning)
confidence: 99%
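The Grad-CAM technique mentioned in the excerpt above weights a convolutional layer's activation maps by the gradient of the target score, then applies a ReLU. As a minimal illustrative sketch (not the implementation of any cited study, and assuming the activations and gradients have already been extracted from a network), the core computation can be written in NumPy:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from one conv layer.

    activations, gradients: arrays of shape (channels, H, W), where
    gradients holds d(score)/d(activations) for the target class.
    Returns an (H, W) heatmap, ReLU'd and scaled to [0, 1].
    """
    # Channel importance weights: global-average-pool the gradients.
    weights = gradients.mean(axis=(1, 2))             # shape (C,)
    # Weighted sum of the activation maps across channels.
    cam = np.tensordot(weights, activations, axes=1)  # shape (H, W)
    cam = np.maximum(cam, 0)                          # keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize for display
    return cam

# Toy example: 4 channels of 8x8 feature maps with synthetic gradients.
rng = np.random.default_rng(0)
acts = rng.random((4, 8, 8))
grads = rng.standard_normal((4, 8, 8))
heatmap = grad_cam(acts, grads)
```

In practice the heatmap is upsampled to the input resolution and overlaid on the image (here, a polar map) to show which regions drove the prediction.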
“…In addition, 4 studies relied on SHAP to interpret the model outputs. [32][33][34][35] One of these studies 32 relied on SHAP to develop and test an explainable ML model to assess whether noncontrast CMR could bring added value to predicting HF hospitalization compared with clinical data only. Their results demonstrated that CMR-based ML models could provide a significantly superior prediction of HF hospitalization (area under the curve, 0.81) compared with the basic clinical model (area under the curve, 0.64).…”
Section: Literature Review (mentioning)
confidence: 99%
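The SHAP attributions discussed in the excerpt above approximate Shapley values from cooperative game theory: each feature's attribution is its average marginal contribution over all feature subsets. As a hedged sketch of the underlying idea (a toy additive "risk model" with hypothetical feature names, not the cited studies' models), the exact Shapley computation for a small feature set is:

```python
from itertools import combinations
from math import factorial

def shapley_values(model, features):
    """Exact Shapley values for a model over a small feature set.

    model: callable mapping a frozenset of present feature names to a score.
    Returns {feature: attribution}.
    """
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                S = frozenset(subset)
                # Weight of a coalition of size k in the Shapley formula.
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Marginal contribution of f when added to S.
                total += w * (model(S | {f}) - model(S))
        phi[f] = total
    return phi

# Toy additive model: the score is a sum of per-feature contributions,
# so Shapley values recover each contribution exactly.
contrib = {"flow": 0.3, "perfusion": 0.5, "age": 0.2}
model = lambda S: sum(contrib[f] for f in S)
phi = shapley_values(model, list(contrib))
```

The exact computation is exponential in the number of features; SHAP libraries use sampling or model-specific shortcuts to make it tractable for real models.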