2022
DOI: 10.48550/arxiv.2203.10417
Preprint
Attri-VAE: attribute-based, disentangled and interpretable representations of medical images with variational autoencoders

Abstract: Deep learning (DL) methods where interpretability is intrinsically considered as part of the model are required to better understand the relationship of clinical and imaging-based attributes with DL outcomes, thus facilitating their use in reasoning medical decisions. Latent space representations built with variational autoencoders (VAE) do not ensure individual control of data attributes. Attribute-based methods enforcing attribute disentanglement have been proposed in the literature for classical computer vi…
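The attribute disentanglement the abstract describes is typically enforced with an attribute-regularization term that ties one latent dimension to one attribute. Below is a minimal numpy sketch of an AR-VAE-style pairwise sign-matching penalty (the family of methods Attri-VAE draws on); the function name, `delta`, and the example values are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def attribute_regularization(z_dim: np.ndarray, attr: np.ndarray, delta: float = 1.0) -> float:
    """Illustrative AR-VAE-style attribute-regularization penalty.

    Encourages a single latent dimension `z_dim` to vary monotonically with a
    target attribute `attr` by matching the signs of all pairwise differences
    within a batch.
    """
    # Pairwise differences across the batch (shape: [n, n]).
    dz = z_dim[:, None] - z_dim[None, :]
    da = attr[:, None] - attr[None, :]
    # Soft sign of latent differences vs. hard sign of attribute differences.
    return float(np.mean(np.abs(np.tanh(delta * dz) - np.sign(da))))

# A latent dimension ordered like the attribute incurs a small penalty...
aligned = attribute_regularization(np.array([-2.0, -1.0, 1.0, 2.0]),
                                   np.array([0.1, 0.2, 0.8, 0.9]))
# ...while an anti-aligned one incurs a large penalty.
misaligned = attribute_regularization(np.array([2.0, 1.0, -1.0, -2.0]),
                                      np.array([0.1, 0.2, 0.8, 0.9]))
```

In training, this penalty would be added to the usual VAE reconstruction and KL terms, one such term per (latent dimension, attribute) pair, so that traversing a regularized dimension changes only its assigned attribute.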

Year Published: 2023
Cited by 1 publication (1 citation statement)
References 45 publications
“…Currently, most deep learning algorithms for biological studies attempt to establish the relationship between microscopic molecular biology characteristics and macroscopic biology variables to obtain the interpretability of the models (18)(19)(20)(21)(22)(23)(24). However, it is always challenging to establish a direct, quantitative and computable correlation between them.…”
Section: Discussion
confidence: 99%