2023
DOI: 10.3390/axioms12100997

Interpretable Model-Agnostic Explanations Based on Feature Relationships for High-Performance Computing

Zhouyuan Chen,
Zhichao Lian,
Zhe Xu

Abstract: In the explainable artificial intelligence (XAI) field, an algorithm or a tool can help people understand how a model makes a decision. This can also help select important features, reducing computational cost and enabling high-performance computing. However, existing methods typically visualize important features or highlight active neurons, and few of them show the importance of relationships between features. In recent years, some methods based on a white-box approach have taken relationships between …

Cited by 3 publications (1 citation statement)
References 32 publications
“…In the study of interpretable neural networks, Zhang et al. [10] classified interpretable models along three dimensions: the type of engagement (passive and active interpretation methods), the type of explanation, and the focus (from local to global explainability). Chen et al. [11] proposed a post-hoc locally interpretable model applied to image data, which obtains the perturbed samples to be classified through local mask sampling and trains a simple linear model to interpret the regions of interest behind the local output labels of complex models. Tejaswini et al. [12] proposed using decision trees to construct a model equivalent to a neural network, which is a global rule extraction method.…”
Section: Introduction
confidence: 99%
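The local-mask-sampling procedure described for Chen et al. [11] follows the familiar LIME pattern: perturb the input with binary masks, query the black-box model on the masked samples, and fit a weighted linear surrogate whose coefficients indicate which regions the local prediction relies on. Below is a minimal sketch of that pattern, not the authors' implementation; the 4x4 grid segmentation, the kernel width, and the `predict_fn` interface are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

def explain_with_masks(image, predict_fn, target_class,
                       grid=4, n_samples=500, kernel_width=0.25, seed=0):
    """LIME-style local explanation via mask sampling (illustrative sketch).

    image:        H x W x C array
    predict_fn:   callable mapping a batch of images to class probabilities
    target_class: index of the class whose score is being explained
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    n_regions = grid * grid

    # Random binary masks: each row says which grid cells are kept.
    masks = rng.integers(0, 2, size=(n_samples, n_regions))
    masks[0] = 1  # keep one unperturbed copy of the image

    # Build masked images by zeroing out the dropped grid cells.
    batch = np.repeat(image[None].astype(float), n_samples, axis=0)
    cell_h, cell_w = h // grid, w // grid
    for i in range(grid):
        for j in range(grid):
            off = masks[:, i * grid + j] == 0
            batch[off, i * cell_h:(i + 1) * cell_h,
                       j * cell_w:(j + 1) * cell_w] = 0.0

    # Query the black-box model on the perturbed samples.
    probs = predict_fn(batch)[:, target_class]

    # Weight samples by similarity to the original (all-ones) mask.
    distance = 1.0 - masks.mean(axis=1)
    weights = np.exp(-(distance ** 2) / kernel_width ** 2)

    # Fit the simple linear surrogate; coefficients rank the grid cells.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(masks, probs, sample_weight=weights)
    return surrogate.coef_.reshape(grid, grid)
```

Cells with large positive coefficients mark the regions the local prediction relies on most; per the abstract, the paper's contribution is to extend this kind of analysis from individual features to the relationships between features.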