2021
DOI: 10.1007/978-3-030-69535-4_13

ERIC: Extracting Relations Inferred from Convolutions

Cited by 7 publications (11 citation statements)
References 23 publications
“…These literals were then used to generate symbolic rules by means of a logic program that approximates the behavior of the convolutional layer(s) with respect to the CNN’s output. The approximation M* of the original CNN was reported to achieve high classification accuracy and fidelity in [12]. The results were also evaluated in terms of the sizes of the extracted rule sets, with smaller sets considered to be more human-comprehensible.…”
Section: Related Work
confidence: 99%
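As a rough illustration of the approximation idea in the statement above, the sketch below fits a shallow decision tree on binarised kernel literals so that it mimics a CNN's predicted labels. This is not ERIC's actual rule-extraction procedure; the decision tree, the random toy data, and the literal names k0…k15 are stand-ins, chosen only to show how fidelity and rule-set size can be read off a symbolic surrogate.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy stand-ins: 200 samples, 16 binary literals (one per conv kernel),
# and the class labels the CNN predicted for those samples.
rng = np.random.default_rng(0)
literals = rng.integers(0, 2, size=(200, 16)).astype(bool)
cnn_labels = rng.integers(0, 3, size=200)

# A shallow tree acts as the symbolic surrogate M* of the network.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(literals, cnn_labels)

# Human-readable IF-THEN structure; smaller trees correspond to the
# smaller, more comprehensible rule sets mentioned in the statement.
print(export_text(surrogate, feature_names=[f"k{i}" for i in range(16)]))

# Fidelity of the surrogate to the CNN on this (toy) data:
print("fidelity:", (surrogate.predict(literals) == cnn_labels).mean())
```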
“…This work used the ERIC method [12, 39] for extracting rules from CNN kernels. Based on the ERIC framework, the last convolutional layer, l, of the trained VGG16, M, was quantized and binarized to produce literals.…”
Section: Interactive Model Explanation and Intervention
confidence: 99%
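The quantise-and-binarise step mentioned above can be pictured with the PyTorch sketch below. The global average over each kernel's feature map and the mean-activation threshold are assumptions made purely for illustration; the exact quantisation applied to the last convolutional layer of VGG16 is the one defined in the ERIC papers.

```python
import torch
from torchvision.models import vgg16

# Untrained weights here to keep the sketch self-contained; in practice the
# trained model M from the statement above would be loaded instead.
model = vgg16(weights=None).eval()

x = torch.randn(1, 3, 224, 224)                 # dummy input image
with torch.no_grad():
    fmap = model.features(x)                    # last conv block output: (1, 512, 7, 7)

# One scalar per kernel: average activation over the spatial dimensions.
per_kernel = fmap.mean(dim=(2, 3)).squeeze(0)   # shape: (512,)

# Binarise against a threshold (here simply the mean activation) so that
# each kernel of the last convolutional layer yields one Boolean literal.
theta = per_kernel.mean()
literals = per_kernel > theta                   # shape: (512,), dtype: bool
```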
“…Explainability in neuro-symbolic systems has traditionally been approached by learning a set of symbolic rules, known as knowledge extraction, and evaluating how well they approximate the behaviour of a complex neural network by measuring the percentage of matching predictions on a test set, referred to as fidelity [41, 35]. This is comparable to most contemporary explainability methods, which are not powerful enough to guarantee the soundness and completeness of the explanation with respect to the underlying model. Most metrics currently in place lack a reliable way of expressing this uncertainty.…”
Section: XAI in Neural-Symbolic AI
confidence: 99%
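Concretely, the fidelity measure referred to in this statement is just the fraction of test items on which the extracted rules and the network assign the same label. A minimal sketch follows; the helper name and the toy predictions are hypothetical.

```python
import numpy as np

def fidelity(rule_preds, net_preds):
    """Fraction of test items on which the extracted rule set and the
    neural network predict the same class label."""
    rule_preds = np.asarray(rule_preds)
    net_preds = np.asarray(net_preds)
    return float(np.mean(rule_preds == net_preds))

# Example: the rules match the network on 9 of 10 items -> fidelity = 0.9
print(fidelity([0, 1, 1, 0, 2, 2, 1, 0, 0, 1],
               [0, 1, 1, 0, 2, 2, 1, 0, 0, 2]))
```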