2021
DOI: 10.1609/aaai.v35i18.17946
Scalable Partial Explainability in Neural Networks via Flexible Activation Functions (Student Abstract)

Abstract: Current state-of-the-art neural network explanation methods (e.g., saliency maps, DeepLIFT, LIME) focus on the direct relationship between NN outputs and inputs rather than on the NN structure and operations themselves, so uncertainty remains over the exact role played by individual neurons. In this paper, we propose a novel neural network structure with a Kolmogorov-Arnold superposition theorem based topology and Gaussian process based flexible activation functions to achieve partial explainability of th…
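The full architecture is not reproduced in this abstract, but the Kolmogorov-Arnold superposition structure it builds on can be sketched. The theorem writes any continuous multivariate function as f(x) = Σ_q Φ_q(Σ_p φ_{q,p}(x_p)), i.e., sums of learned one-dimensional functions. The sketch below is a minimal, hedged illustration: each 1-D activation is a small RBF expansion standing in for the paper's Gaussian-process-based flexible activations, and all names (`rbf_activation`, `forward`) and sizes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch only -- not the paper's architecture.
# Structure: f(x) = sum_q Phi_q( sum_p phi_{q,p}(x_p) ), where every
# phi_{q,p} and Phi_q is a flexible 1-D function (here: an RBF expansion,
# a crude stand-in for a Gaussian-process-based activation).

rng = np.random.default_rng(0)

def rbf_activation(x, centers, weights, length_scale=0.5):
    """Flexible 1-D activation: weighted sum of Gaussian bumps.

    x: (batch,), centers/weights: (k,) -> returns (batch,)
    """
    d = x[:, None] - centers[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2) @ weights

n = 3                # input dimension (assumed)
q_terms = 2 * n + 1  # outer terms, as in the superposition theorem
k = 8                # RBF basis size per activation (assumed)

centers = np.linspace(-2.0, 2.0, k)
inner_w = rng.normal(size=(q_terms, n, k)) * 0.1  # weights of phi_{q,p}
outer_w = rng.normal(size=(q_terms, k)) * 0.1     # weights of Phi_q

def forward(X):
    """Forward pass of the superposition network. X: (batch, n) -> (batch,)."""
    out = np.zeros(X.shape[0])
    for q in range(q_terms):
        inner_sum = np.zeros(X.shape[0])
        for p in range(n):
            inner_sum += rbf_activation(X[:, p], centers, inner_w[q, p])
        out += rbf_activation(inner_sum, centers, outer_w[q])
    return out

X = rng.normal(size=(5, n))
print(forward(X).shape)  # (5,)
```

Because every learned component is a univariate function, each activation can be plotted and inspected directly, which is the sense in which this topology supports partial explainability.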

Cited by 1 publication (1 citation statement). References 1 publication (2 reference statements).
“…A definition of the substitution rules from the real world to the model and back greatly adds to interpretability. Many machine learning techniques are difficult to interpret [154], but attempts at explainable artificial intelligence [155] present an opportunity to improve complex systems model interpretability. Once a model has external validity, it can be used to see what would happen in the future (or could have happened in the past) if the inputs are changed, i.e., to measure the sensitivity of the output results with respect to changing input variables [154].…”
Section: External Validity (Users' Perspective)
Confidence: 99%