2021
DOI: 10.1101/2021.01.22.427799
Preprint
REM: An Integrative Rule Extraction Methodology for Explainable Data Analysis in Healthcare

Abstract: Deep learning models are receiving increasing attention in clinical decision-making; however, the lack of interpretability and explainability impedes their deployment in day-to-day clinical practice. We propose REM, an interpretable and explainable methodology for extracting rules from deep neural networks and combining them with other data-driven and knowledge-driven rules. This allows integrating machine learning and reasoning for investigating applied and basic biological research questions. We evaluate the …
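The core idea the abstract describes — extracting human-readable rules that mimic a trained neural network — can be sketched with a simple surrogate-tree approach. This is an illustrative sketch, not the REM implementation; the dataset, model sizes, and feature names are all stand-ins.

```python
# Illustrative sketch (NOT the REM method): approximate a trained neural
# network with a shallow decision-tree surrogate, then read off IF-THEN rules.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for a clinical dataset.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# 1. Train the opaque model.
nn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                   random_state=0).fit(X, y)

# 2. Fit an interpretable surrogate on the network's *predictions*,
#    so the tree mimics the network rather than the raw labels.
surrogate = DecisionTreeClassifier(max_depth=3,
                                   random_state=0).fit(X, nn.predict(X))

# 3. Extract human-readable rules from the surrogate.
rules = export_text(surrogate, feature_names=[f"f{i}" for i in range(5)])
print(rules)

# Fidelity: how often the surrogate agrees with the network it explains.
fidelity = (surrogate.predict(X) == nn.predict(X)).mean()
print(f"fidelity: {fidelity:.2f}")
```

The fidelity score is the usual sanity check for surrogate explanations: rules are only trustworthy insofar as the surrogate actually reproduces the network's decisions.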

Cited by 3 publications (3 citation statements) | References 49 publications
“…To address this, boosting algorithms can advance the performance of random forest–based models by using them to approximate the workings of highly performant neural networks 54 . Post hoc, researchers aim to extract nodes from the forest that are most important to its performance and use these key rules to provide explanations of “why” a particular choice was made 55 . Alternatively, key nodes could be used to select “concepts” that are provided to a neural network model during its training.…”
Section: Gray Areas and Computing Judgments (confidence: 99%)
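The post-hoc idea in the statement above — identifying the parts of a forest most important to its performance and using them as explanations — can be sketched by ranking a forest's trees by how faithfully each reproduces the ensemble's decisions and printing the most faithful tree's rules. A minimal sketch with illustrative names and data, not any cited system's implementation:

```python
# Hedged sketch: rank the trees of a random forest by agreement with the
# full ensemble, then show the most faithful tree's rules as an explanation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import export_text

X, y = make_classification(n_samples=400, n_features=4, random_state=1)
forest = RandomForestClassifier(n_estimators=20, max_depth=2,
                                random_state=1).fit(X, y)

ensemble_pred = forest.predict(X)
# Score each tree by how often it reproduces the whole forest's decisions.
scores = [(t.predict(X) == ensemble_pred).mean() for t in forest.estimators_]
best = forest.estimators_[int(np.argmax(scores))]
# The best tree's splits serve as a compact "why" for the ensemble.
print(export_text(best, feature_names=[f"f{i}" for i in range(4)]))
```

Real systems weight or aggregate rules across many trees rather than picking a single one; this only illustrates the extraction step.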
“…54 Post hoc, researchers aim to extract nodes from the forest that are most important to its performance and use these key rules to provide explanations of "why" a particular choice was made. 55 Alternatively, key …”
[Figure 3 caption interleaved in source: “The same artery is sectioned in two different planes according to its orientation when the cut is made.”]
Section: Gray Areas and Computing Judgments (confidence: 99%)
“…Formalized Priors for Explanations Shams et al. [67] propose a methodology that extracts conditional rules from deep neural networks and combines them with other data-driven and knowledge-driven ones. Clinicians are able to directly validate and calibrate the extracted rules with their domain knowledge to yield more precise and acceptable explanations.…”
Section: Informed Explainability (confidence: 99%)
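The combination step this statement describes — merging rules extracted from a model with knowledge-driven rules a clinician can inspect — can be sketched as plain predicates over a patient record. All rule conditions, thresholds, and field names here are hypothetical, chosen only to show the mechanism:

```python
# Hedged sketch (not the paper's method): combine a data-driven rule
# extracted from a model with a knowledge-driven clinical rule.
# Every threshold and field name below is illustrative.

def data_driven_rule(p):
    # e.g. a rule extracted from a trained model
    return p["marker_a"] > 2.0 and p["marker_b"] <= 0.5

def knowledge_driven_rule(p):
    # e.g. a guideline a clinician can validate or recalibrate directly
    return p["age"] >= 65 and p["marker_a"] > 1.5

def combined(p):
    # Simple disjunctive combination; real systems weight or vote over
    # many rules and surface which ones fired as the explanation.
    return data_driven_rule(p) or knowledge_driven_rule(p)

patient = {"marker_a": 2.3, "marker_b": 0.4, "age": 70}
print(combined(patient))  # both rules fire for this record
```

Because each rule is a named, inspectable condition, a clinician can adjust a threshold or veto a rule without retraining the underlying model — the validation/calibration loop the statement refers to.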