2020
DOI: 10.1007/978-3-030-59710-8_28

Interpretable Deep Models for Cardiac Resynchronisation Therapy Response Prediction

Cited by 27 publications (35 citation statements). References 15 publications.
“…Another approach is to embed known clinical concepts into deep learning models. In a previous study, we proposed a method that ensures clinical concepts (in this case ejection fraction or septal flash) are encoded into the latent space of a variational autoencoder, a dimensionality reduction algorithm frequently used for classification [33]. This approach allows the classification task to be solved while simultaneously interrogating the importance of known clinical features in the decision process.…”
Section: Interpretability (mentioning)
confidence: 99%
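The mechanism this excerpt describes (forcing named clinical concepts onto specific dimensions of a VAE latent space) can be illustrated with a minimal sketch. This is an assumption-laden PyTorch illustration, not the authors' published code: the layer sizes, loss weights, and the `ConceptVAE` name are all invented for the example.

```python
# Minimal sketch (not the authors' code): a VAE whose first latent
# dimension is tied to a known clinical concept (e.g. ejection fraction)
# via an auxiliary regression loss. Sizes and weights are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConceptVAE(nn.Module):
    def __init__(self, input_dim=1024, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU())
        self.fc_mu = nn.Linear(256, latent_dim)
        self.fc_logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, input_dim)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation trick
        return self.decoder(z), mu, logvar, z

def vae_loss(model, x, concept, beta=1.0, gamma=10.0):
    recon, mu, logvar, z = model(x)
    recon_loss = F.mse_loss(recon, x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # Regress latent dimension 0 onto the clinical concept so that this
    # dimension remains directly interpretable after training.
    concept_loss = F.mse_loss(z[:, 0], concept)
    return recon_loss + beta * kl + gamma * concept_loss
```

Because dimension 0 is trained to track the concept, its contribution to any downstream decision can be read off directly, which is what "interrogating the importance of known clinical features" refers to.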
“…Puyol-Antón et al. (127) offered a first-of-its-kind interpretable approach to a DL model for the prediction of CRT response. This framework was based around a DL-based generative model known as a variational autoencoder (VAE), which encodes the segmented biventricular data into a low-dimensional latent space, followed by a primary-task classifier that predicts which patients will respond to CRT using pre-treatment CMR images.…”
Section: Clinical Risk Prediction and the Role for AI (mentioning)
confidence: 99%
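This excerpt adds the second stage of the framework: a classifier operating on the VAE latent space. A minimal sketch follows, reusing the hypothetical `ConceptVAE` from the sketch above; the `LatentClassifier` name, hidden size, and stand-in input are assumptions, not the published architecture.

```python
# Minimal sketch (illustrative, not the published architecture): a CRT
# response classifier on the VAE latent space from the sketch above.
import torch
import torch.nn as nn

class LatentClassifier(nn.Module):
    def __init__(self, latent_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1)
        )

    def forward(self, z):
        return self.net(z)  # logit: CRT responder vs. non-responder

# Usage: encode pre-treatment data into the latent space, then classify.
vae = ConceptVAE()                    # hypothetical model defined above
clf = LatentClassifier()
x = torch.randn(8, 1024)              # stand-in for segmented biventricular data
_, mu, _, _ = vae(x)                  # use the latent mean as the code
responder_prob = torch.sigmoid(clf(mu))
```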
“…That is why more focus has recently been put on designing self-explainable models [4,7], which make the decision process directly visible. Many interpretable solutions are based on attention [32,53,60,62-64] or exploit the activation space [18,42], e.g. with an adversarial autoencoder.…”
Section: Related Work (mentioning)
confidence: 99%