2022
DOI: 10.1007/s11416-022-00441-2
XAI for intrusion detection system: comparing explanations based on global and local scope

Cited by 25 publications (13 citation statements)
References 20 publications
“…These tools can help us identify trends in the predictions of trained models and thereby explain the decisions a model makes. LIME and SHAP can be used for multi-class classification (more than two classes) [31], regression [32], and other applications such as image processing with CNNs [33]. Since both tools must run the trained model several times to produce an explanation, they may not be suitable for real-time explanations.…”
Section: Results
Citation type: mentioning (confidence: 99%)
“…In this study, we combine our past achievements with recent XAI-based analysis methodologies. We shaped our study's approach in light of our results [1]-[16]. We designed system constructs to measure and analyze acceleration and angular velocity data using general human dynamics and statistical approaches [17]-[27].…”
Section: A. Methods
Citation type: mentioning (confidence: 99%)
“…An explanation of how an algorithm reaches a specific decision is important in analysing the decision-making process of an autonomous agent. Explainable Artificial Intelligence (XAI) is an evolving field that gives insight into the decision-making processes of autonomous agents [27]. The factors affecting algorithmic decisions, and the bias introduced by algorithms, are also relevant in this regard.…”
Section: Emotional Decision Making
Citation type: mentioning (confidence: 99%)