2020
DOI: 10.1007/s10115-020-01473-0
Rule extraction from neural network trained using deep belief network and back propagation

Cited by 17 publications (10 citation statements)
References 26 publications
“…(2015) Global interpretability (technical): global interpretability is provided when an interpretability analysis explains the system's behavior for a set of inputs corresponding to an entire class or to multiple classes; post-hoc interpretability methods such as distillation techniques (Frosst and Hinton 2017) and the extraction of rule lists (Chakraborty et al. 2020) may provide it. Explainability (global): explainable AI, also denoted XAI, is the branch of AI research that focuses on generating explanations for complex AI systems; the six families of post-hoc interpretability methods (feature attribution, feature visualization, concept attribution, surrogate, case-based, and textual explanations) are addressed as explainable AI. Transparency (global): transparency characterizes those systems for which the role of internal components, paradigms, and overall behaviour is known and can be simulated; the family of linear regression models and decision trees in low dimension are transparent and can be simulated. Brackets specify the domain in which each definition applies.…”
Section: Results
confidence: 99%
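The statement above names surrogate models and rule extraction as routes to global interpretability. A minimal sketch of the surrogate idea, independent of this paper's specific DBN-based method: query a black-box classifier on many inputs, then fit a simple threshold rule to its predictions and measure fidelity (agreement with the black box). All names here are illustrative, and the "network" is a stand-in linear scorer rather than a trained model.

```python
# Hedged sketch of global surrogate rule extraction from a black box.
# black_box is a stand-in for a trained network's predict function;
# in practice it would be queried the same way.
import itertools

def black_box(x1, x2):
    """Stand-in for a trained classifier (illustrative only)."""
    return 1 if 0.7 * x1 + 0.3 * x2 > 0.5 else 0

# Query the black box on a grid of inputs: this labelled set is the
# surrogate's training data.
grid = [(i / 10, j / 10) for i, j in itertools.product(range(11), repeat=2)]
labels = [black_box(x1, x2) for x1, x2 in grid]

def best_stump(points, labels):
    """Fit a one-feature threshold rule (decision stump) that maximizes
    fidelity, i.e. agreement with the black box's labels."""
    best = None
    for feat in (0, 1):
        for thr in sorted({p[feat] for p in points}):
            preds = [1 if p[feat] > thr else 0 for p in points]
            fidelity = sum(p == y for p, y in zip(preds, labels)) / len(labels)
            if best is None or fidelity > best[2]:
                best = (feat, thr, fidelity)
    return best

feat, thr, fidelity = best_stump(grid, labels)
print(f"rule: IF x{feat + 1} > {thr} THEN class 1  (fidelity {fidelity:.2f})")
```

The extracted rule is globally interpretable (one readable condition covering the whole input space), and fidelity quantifies how faithfully it mimics the black box; a fuller rule-list extractor would recurse on the residual errors.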
“…The table shows that the average testing accuracies of the NNEs are higher than the average testing accuracies of the FFNNs in all the datasets. … [7], and Eclectic Rule Extraction from Neural Network with Multi-Hidden Layer (ERENN_MHL) [32] algorithms.…”
Section: Results
confidence: 99%