2020
DOI: 10.1007/978-3-030-43823-4_23

LioNets: Local Interpretation of Neural Networks Through Penultimate Layer Decoding

Abstract: Towards a future where machine learning systems will integrate into every aspect of people's lives, researching methods to interpret such systems is necessary, instead of focusing exclusively on enhancing their performance. Enriching the trust between these systems and people will accelerate this integration process. Many medical and retail banking/finance applications use state-of-the-art machine learning techniques to predict certain aspects of new instances. Tree ensembles, like random forests, are widely a…

Cited by 12 publications (9 citation statements) · References 34 publications
“…An interesting implementation of Hunter’s argumentation model (Besnard & Hunter, 2001, 2009) can be found in Mollas et al . (2020). The authors use a feature importance technique, in order to extract untruthful parts in the explanation of a data-driven model.…”
Section: Argumentation and Machine Learning for Explainability (mentioning)
Confidence: 99%
“…Other model-specific outcome explanation techniques include the one proposed by Mollas et al [42], which uses unsupervised learning techniques and a similarity metric to explain individual outcomes of random forests. Another technique was proposed by Haufe et al [41], which transforms non-linear models in terms of multivariate classifiers into interpretable linear models.…”
Section: A. Explaining How the Outcome Was Generated (mentioning)
Confidence: 99%
“…LionForest [42] Specific Explaining the decisions of random forests via unsupervised learning techniques and similarity metrics. TABLE 1: A summary of the XAI techniques covered in Section II-A, all of which are designed for outcome explanation.…”
Section: Introduction (mentioning)
Confidence: 99%
“…Afterwards, it allows the ranking of neurons and dimensions based on their overall saliency. Finally, lionets [29] looks at the penultimate layer of a DNN, which models texts in an alternative representation, randomly permutes the weights of nodes in that layer to generate new vectors, classifies them, observes the classification outcome and returns the explanation using a linear regressor like lime. Differently from these model-specific methods, xspells is not tied to a specific architecture and it can be used to explain any black box sentiment classifier.…”
Section: Related Work (mentioning)
Confidence: 99%
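The statement above summarizes the LioNets procedure: take the penultimate-layer representation of an instance, perturb it to obtain a local neighbourhood, query the remaining part of the network on those vectors, and fit a linear surrogate (as in LIME) whose coefficients serve as the explanation. Below is a minimal sketch of that idea, assuming the network can be split into an encoder (inputs to the penultimate layer) and a head (penultimate layer to prediction); the names encoder, head and explain_instance, and the Gaussian perturbation scheme, are assumptions made for illustration, not the authors' implementation.

import numpy as np
from sklearn.linear_model import Ridge

def explain_instance(x, encoder, head, n_samples=1000, scale=0.1, seed=None):
    # Sketch of a LioNets-style local explanation (assumed interfaces):
    #   encoder(x) -> 1-D penultimate-layer vector z for one instance
    #   head(Z)    -> predicted score/probability for each row of Z
    rng = np.random.default_rng(seed)
    z = np.asarray(encoder(x), dtype=float)

    # Perturb the penultimate representation to build a local neighbourhood.
    noise = rng.normal(0.0, scale, size=(n_samples, z.shape[0]))
    neighbours = z + noise

    # Query the rest of the network on the perturbed vectors.
    preds = np.asarray(head(neighbours), dtype=float)

    # Weight neighbours by proximity to the original representation (LIME-style kernel).
    distances = np.linalg.norm(noise, axis=1)
    weights = np.exp(-(distances ** 2) / (2.0 * scale ** 2))

    # Fit a weighted linear surrogate; its coefficients explain the prediction
    # in terms of the penultimate-layer dimensions.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(neighbours, preds, sample_weight=weights)
    return surrogate.coef_

In the full method, as the paper's title indicates, the perturbed penultimate vectors are also decoded back to the input space so that the explanation can be expressed over input features; that decoding step is omitted from this sketch.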