2018
DOI: 10.1155/2018/4084850
A Comparison Study on Rule Extraction from Neural Network Ensembles, Boosted Shallow Trees, and SVMs

Abstract: One way to make the knowledge stored in an artificial neural network more intelligible is to extract symbolic rules. However, producing rules from Multilayer Perceptrons (MLPs) is an NP-hard problem. Many techniques have been introduced to generate rules from single neural networks, but very few were proposed for ensembles. Moreover, experiments were rarely assessed by 10-fold cross-validation trials. In this work, based on the Discretized Interpretable Multilayer Perceptron (DIMLP), experiments were performed…
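To make the abstract's framing concrete, here is a rough pedagogical sketch of rule extraction, not the paper's DIMLP algorithm: a shallow surrogate decision tree is fitted to a trained MLP's predictions, so each root-to-leaf path reads as a symbolic if-then rule. The dataset, model sizes, and hyperparameters are illustrative assumptions.

```python
# Pedagogical rule extraction sketch (NOT DIMLP): fit a shallow surrogate
# tree to an MLP's predicted labels and print its paths as if-then rules.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# "Black box": an opaque multilayer perceptron.
mlp = MLPClassifier(hidden_layer_sizes=(30,), max_iter=2000, random_state=0)
mlp.fit(X_train, y_train)

# Surrogate: a shallow tree trained on the MLP's *predicted* labels, so
# each root-to-leaf path approximates one rule of the network.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, mlp.predict(X_train))

# Fidelity: how often the extracted rules agree with the network itself.
print("fidelity:", (tree.predict(X_test) == mlp.predict(X_test)).mean())
print(export_text(tree, feature_names=list(data.feature_names)))
```

Fidelity to the network, rather than accuracy on the ground truth, is the usual yardstick for extracted rules, which is why the sketch scores the tree against the MLP's own predictions.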

Cited by 33 publications (27 citation statements)
References 36 publications (41 reference statements)
“…Several works in recent decades proposed to extract symbolic knowledge from numeric models. As witnessed by several surveys [158, 171-173] and works on the topic [174-184], some of which are from the 80s or the 90s, the potential of symbolic knowledge extraction is well understood, although without hype.…”
Section: Application Scenarios: Explainable, Responsible, and Ethical AI
confidence: 99%
“…An important degree of freedom in distillation is the transfer set used to train the simpler model. Traditionally, knowledge transfer has been treated as a standard learning process, where the training data are relabelled and extended to learn an alternative model [26]. Most papers use the same set to train teacher and student, either in its raw form [26], [13], [27] or enriched with additional synthetic data [28], [11], [17].…”
Section: Related Work
confidence: 99%
“…Traditionally, knowledge transfer has been treated as a standard learning process, where the training data are relabelled and extended to learn an alternative model [26]. Most papers use the same set to train teacher and student, either in its raw form [26], [13], [27] or enriched with additional synthetic data [28], [11], [17]. Besides, researchers have also studied cases where teachers and students, faced with the same task, have different access to the training data [29].…”
Section: Related Work
confidence: 99%
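The two statements above describe the transfer set in its raw form: the teacher relabels the original training inputs and a simpler student learns from those labels. A minimal sketch of that idea, assuming generic scikit-learn models and synthetic data that stand in for the cited setups:

```python
# Raw-transfer-set distillation sketch: teacher relabels the training
# inputs, student is fitted to the teacher's labels (illustrative models).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Teacher: a larger network trained on the ground-truth labels.
teacher = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                        random_state=0).fit(X, y)

# Student: trained on the *same* inputs, relabelled by the teacher.
student = LogisticRegression(max_iter=1000).fit(X, teacher.predict(X))

print("student/teacher agreement:",
      (student.predict(X) == teacher.predict(X)).mean())
```

The enriched variant the quotes mention would extend X with additional synthetic inputs before relabelling, giving the student more points at which to probe the teacher's decision boundary.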
“…The plethora of experiments carried out also plays a major role in the validation of the proposed algorithm. The proposed method is examined with a number of different individual base learners, and the ensemble learning technique is also explored, as aggregated models tend to produce more accurate predictions and are commonly used in today’s applications [9, 10].…”
Section: Introduction
confidence: 99%
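The statement above rests on the observation that aggregated base learners tend to outperform any single one. A minimal majority-vote sketch, assuming generic scikit-learn learners and scored with the 10-fold cross-validation protocol the abstract advocates:

```python
# Majority-vote ensemble sketch with 10-fold CV (illustrative learners).
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Three heterogeneous base learners combined by hard (majority) voting.
ensemble = VotingClassifier(estimators=[
    ("lr", LogisticRegression(max_iter=1000)),
    ("tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
    ("knn", KNeighborsClassifier()),
], voting="hard")

print("10-fold CV accuracy:", cross_val_score(ensemble, X, y, cv=10).mean())
```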