2021
DOI: 10.1007/978-3-030-70594-7_15

Rule Extraction from Neural Networks and Other Classifiers Applied to XSS Detection

Abstract: Explainable artificial intelligence (XAI) is concerned with creating artificial intelligence that is intelligible and interpretable by humans. Many AI techniques build classifiers, some of which result in intelligible models and some of which do not. Rule extraction from classifiers treated as black boxes is an important topic in XAI that aims to find rule sets that describe classifiers and are understandable to humans. Neural networks provide one type of classifier where it is difficult to explain why the i…

Cited by 1 publication (2 citation statements)
References 41 publications
“…The effective adoption of machine learning for cybersecurity attack detection requires to properly mitigate these three problems. In the review, only two studies have been found addressing the interpretability of their proposed models [118,154]. In [118], Bayes Networks are adopted since they provide clear semantics that enable learning probability distributions from data [178].…”
Section: Limitations Of Attack Detection Techniques
confidence: 99%
“…In [118], Bayes Networks are adopted since they provide clear semantics that enable learning probability distributions from data [178]. In [154], the authors proposed deriving explainable rules from black-box models that make the predictions generative and explainable. The latter approach is limited to models using binary features which does not fit to all discriminating features explored in the literature.…”
Section: Limitations Of Attack Detection Techniques
confidence: 99%
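The approach the citation describes — deriving if-then rules from a black-box model restricted to binary features — can be sketched in a few lines. This is an illustrative toy, not the method from [154]: the `black_box` function (standing in for a trained neural network's decision function), the three-feature input, and the greedy literal-dropping strategy are all assumptions made for the example.

```python
from itertools import product

# Hypothetical stand-in for a trained black-box classifier over
# three binary features (illustrative assumption, not the cited model).
def black_box(x):
    return int(x[0] and (x[1] or x[2]))

def covers_only_positives(clf, partial, n):
    """True if every full binary input consistent with the partial
    assignment {feature_index: value} is classified as positive."""
    free = [i for i in range(n) if i not in partial]
    for bits in product([0, 1], repeat=len(free)):
        x = [0] * n
        for i, v in partial.items():
            x[i] = v
        for i, b in zip(free, bits):
            x[i] = b
        if not clf(x):
            return False
    return True

def extract_rules(clf, n):
    """Enumerate positive inputs, then greedily drop literals whose
    value never matters, yielding if-then rules as partial assignments."""
    rules = []
    for x in product([0, 1], repeat=n):
        if not clf(list(x)):
            continue
        rule = dict(enumerate(x))  # full conjunction describing x
        for i in range(n):
            trial = {k: v for k, v in rule.items() if k != i}
            if covers_only_positives(clf, trial, n):
                rule = trial  # literal i is irrelevant; drop it
        if rule not in rules:
            rules.append(rule)
    return rules

for rule in extract_rules(black_box, 3):
    print(" AND ".join(f"x{i}={v}" for i, v in sorted(rule.items())),
          "=> class 1")
```

Exhaustive enumeration is only feasible for a handful of binary features; practical rule-extraction methods rely on sampling or search instead, but the core generalization step — dropping literals that do not affect the prediction — is the same idea, and it also makes visible why the approach does not transfer directly to non-binary features.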