2020
DOI: 10.1109/tmi.2020.2964499
Explainable Anatomical Shape Analysis Through Deep Hierarchical Generative Models

Abstract: Quantification of anatomical shape changes still relies on scalar global indexes which are largely insensitive to regional or asymmetric modifications. Accurate assessment of pathology-driven anatomical remodeling is a crucial step for the diagnosis and treatment of heart conditions. Deep learning approaches have recently achieved wide success in the analysis of medical images, but they lack interpretability in the feature extraction and decision processes. In this work, we propose a new interpretable deep lea…

Cited by 48 publications (34 citation statements); references 42 publications.

“…An example of the latter was presented in [68] using the latent space of the features of a variational autoencoder for classification and segmentation of the brain MRI of Alzheimer’s patients. The classification was performed in a two-dimensional latent space using a multilayer perceptron (MLP).…”
Section: Applications (mentioning)
confidence: 99%
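
The excerpt above describes the core idea the citing survey attributes to this paper: a classifier operates directly on low-dimensional latent codes produced by a variational autoencoder. The snippet below is a minimal, hypothetical sketch of that idea, assuming PyTorch and assuming the 2-D latent codes have already been computed by a trained VAE encoder; names such as LatentMLPClassifier and the placeholder data are illustrative and not taken from the cited work.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentMLPClassifier(nn.Module):
    """Small MLP that classifies samples directly in a 2-D VAE latent space."""
    def __init__(self, latent_dim: int = 2, hidden_dim: int = 16, n_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, n_classes),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, latent_dim) latent codes; returns unnormalized class logits
        return self.net(z)

# Hypothetical usage with placeholder data standing in for encoder outputs and labels.
z = torch.randn(32, 2)            # latent codes (e.g. from a trained VAE encoder)
y = torch.randint(0, 2, (32,))    # diagnostic labels (e.g. healthy vs. pathological)
clf = LatentMLPClassifier()
loss = F.cross_entropy(clf(z), y)
loss.backward()
```

Because the classifier sees only a two-dimensional code, its decision boundary can be plotted directly in the latent plane, which is what makes this setup attractive for interpretability.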
“…Hence, it is of great importance to make graph generation methods more interpretable. As of now, deep generative methods in areas such as image [161]-[163] and text [164]-[166] have slowly moved towards being more interpretable. However, only a few attempts [55], [56], [102] have recently been made in graph generation, making model interpretability a notable future research prospect.…”
Section: Interpretability (mentioning)
confidence: 99%
“…To this end, there are two general approaches that are actively being researched in the field [282]. The first is to develop an interpretable computational structure instead of DNNs [283, 284], so that the predictions are made based on the crafted logic in the DL model. The second approach is to provide post hoc model prediction interpretation, such as attention mechanisms [42, 285] and uncertainty quantification [20, 286, 287], while keeping the same DNN structure.…”
Section: Challenges and Opportunities Across Multiple Imaging Domains (mentioning)
confidence: 99%
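
The “second approach” named in this excerpt, post hoc interpretation of an unchanged DNN, is often realized with techniques such as Monte Carlo dropout for uncertainty quantification. The sketch below is a hedged illustration of that general idea in PyTorch; the network, its dimensions, and the helper function are assumptions made for illustration, not an implementation from any of the cited references.

```python
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    """Placeholder classifier with dropout, standing in for an arbitrary DNN."""
    def __init__(self, in_dim: int = 10, n_classes: int = 2, p: float = 0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 32),
            nn.ReLU(),
            nn.Dropout(p),            # kept stochastic at inference for MC sampling
            nn.Linear(32, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 20):
    """Repeat stochastic forward passes; return the mean class probabilities and
    their per-class standard deviation as a simple post hoc uncertainty estimate."""
    model.train()  # keep dropout layers active (no weights are updated here)
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(dim=0), probs.std(dim=0)

# Hypothetical usage on random inputs: high std_p flags low-confidence predictions.
mean_p, std_p = mc_dropout_predict(SmallNet(), torch.randn(4, 10))
```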