2022
DOI: 10.1109/access.2022.3204171
Explainable Artificial Intelligence in CyberSecurity: A Survey

Abstract: Nowadays, Artificial Intelligence (AI) is widely applied in every area of daily human life. Despite its benefits, AI applications suffer from the opacity of complex internal mechanisms and do not satisfy by design the principles of Explainable Artificial Intelligence (XAI). This lack of transparency further exacerbates the problem in the field of cybersecurity, because entrusting crucial decisions to a system that cannot explain itself presents obvious dangers. There are several methods in the litera…

Cited by 92 publications (55 citation statements)
References 229 publications
“…For more details, we request the interested reader to refer to [1,10,22,32,39,53] for general surveys on explainability methods, or for specific domains refer to: medical [115], embedded systems [117], multimodal [56], time series [90], cybersecurity [23], and tabular data [94].…”
Section: Overview of Explainability Methods (mentioning, confidence: 99%)
“…Although machine learning approaches have been extensively investigated, the explainability of the models has been addressed to a very limited extent 29 as also suggested by recent surveys on cyber security attacks. 30,31 Our work tries to fill this gap by proposing a methodological approach to make the machine learning models implemented for detecting phishing websites explainable.…”
Section: Related Work (mentioning, confidence: 99%)
“…Although machine learning approaches have been extensively investigated, the explainability of the models has been addressed to a very limited extent 29 as also suggested by recent surveys on cyber security attacks. 30,31 …”
Section: Related Work (mentioning, confidence: 99%)
“…Explanations generated for security analysts should be intelligible and easily understandable. Existing research uses several approaches, such as trees, formal language, attention scores, and saliency maps, to visualize explanations [10]. Similarly, there is no fixed set of quantitative metrics to evaluate explanation methods for security.…”
Section: Security Concerns (mentioning, confidence: 99%)
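The quoted passage lists saliency maps among the visualization approaches used to explain detectors to security analysts. As a minimal sketch of the idea (not taken from the surveyed paper): for a hypothetical logistic-regression phishing detector with made-up weights and features, a gradient-based saliency score per input feature is simply |∂p/∂x|, which ranks features by how strongly they influence the prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def saliency(w, b, x):
    """Gradient-based saliency |dp/dx_i| for p = sigmoid(w.x + b).

    For a logistic model the gradient has the closed form
    dp/dx_i = w_i * p * (1 - p), so no autodiff library is needed.
    """
    p = sigmoid(w @ x + b)
    return np.abs(w * p * (1.0 - p))

# Hypothetical toy features of a URL: [url_length, num_dots, has_https].
# Weights and sample values are invented for illustration only.
w = np.array([0.8, 1.5, -2.0])   # assumed learned weights
b = -0.5
x = np.array([1.2, 3.0, 0.0])    # one suspicious sample

scores = saliency(w, b, x)
ranking = np.argsort(scores)[::-1]  # most influential feature first
print(ranking)  # -> [2 1 0]: has_https dominates, then num_dots
```

Real XAI tooling (e.g. SHAP or LIME, both discussed in the XAI literature) generalizes this idea to arbitrary black-box models; the closed-form gradient above works only because the toy model is logistic.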