2022 IEEE 7th European Symposium on Security and Privacy (EuroS&P)
DOI: 10.1109/eurosp53844.2022.00013
Illuminati: Towards Explaining Graph Neural Networks for Cybersecurity Analysis

Abstract: Graph neural networks (GNNs) have been utilized to create multi-layer graph models for a number of cybersecurity applications, from fraud detection to software vulnerability analysis. Unfortunately, like traditional neural networks, GNNs also suffer from a lack of transparency; that is, it is challenging to interpret the model predictions. Prior works focused on explaining specific factors of a GNN model. In this work, we have designed and implemented ILLUMINATI, a comprehensive and accurate explanation frame…

Cited by 10 publications (3 citation statements).
References 41 publications.
“…To address these engineering issues, researchers have proposed graph sampling methods to divide the graph into manageable batches and distribute the training across multiple workers [57]. Additionally, model interpretability is a critical concern for many applications, including those in cybersecurity [67], [158]. Despite their importance, these topics remain less discussed in current literature and represent areas for future research.…”
Section: Discussion and Future Directions (mentioning)
Confidence: 99%
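To make the graph-sampling idea in this snippet concrete, here is a minimal sketch of GraphSAGE-style neighborhood sampling for mini-batch GNN training. All names (`sample_subgraph`, `seed_batches`, `adjacency`) are illustrative assumptions, not the API of the sampler cited as [57].

```python
import random

# Illustrative sketch of neighborhood sampling for mini-batch GNN
# training. `adjacency` maps each node id to a list of neighbor ids.

def sample_subgraph(adjacency, seed_nodes, fanout=10, num_hops=2):
    """Expand a batch of seed nodes into a sampled k-hop subgraph."""
    nodes = set(seed_nodes)
    frontier = list(seed_nodes)
    edges = []
    for _ in range(num_hops):
        next_frontier = []
        for u in frontier:
            neighbors = adjacency.get(u, [])
            # Cap each neighborhood at `fanout` to bound per-batch memory.
            for v in random.sample(neighbors, min(fanout, len(neighbors))):
                edges.append((u, v))
                if v not in nodes:
                    nodes.add(v)
                    next_frontier.append(v)
        frontier = next_frontier
    return nodes, edges

def seed_batches(all_nodes, batch_size=512):
    """Shuffle training nodes into seed batches; each batch (or a shard
    of batches) can then be dispatched to a different worker."""
    order = list(all_nodes)
    random.shuffle(order)
    for i in range(0, len(order), batch_size):
        yield order[i:i + batch_size]
```

Capping the fanout per hop is what keeps each batch's subgraph (and thus memory use) bounded, regardless of how skewed the full graph's degree distribution is.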
“…However, the global explanation is derived from the training data, and thus it may not be accurate for the decision on a particular instance [55]. A more popular approach is local explanation [31, 83], which adopts perturbation-based mechanisms such as LEMNA [30] to provide justifications for individual predictions. The high-level idea behind this approach is to search for important features that positively contribute to the model's prediction by removing or replacing a subset of the features in the input space.…”
Section: Introduction (mentioning)
Confidence: 99%
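Below is a minimal occlusion-style sketch of the perturbation idea described in this snippet. It is not LEMNA itself (LEMNA fits a mixture-regression surrogate around the instance); it only illustrates the shared mechanism of removing or replacing features and measuring the change in the prediction. The `predict` callable and the `baseline` replacement value are assumptions.

```python
import numpy as np

# Illustrative occlusion-style local explanation for a black-box model.
# `predict` is an assumed callable mapping a (1, d) feature array to
# class probabilities of shape (1, num_classes).

def occlusion_importance(predict, x, baseline=0.0, target_class=1):
    """Score each feature by the prediction drop when it is replaced."""
    x = np.asarray(x, dtype=float)
    base_score = predict(x[None, :])[0, target_class]
    scores = np.zeros(x.shape[0])
    for i in range(x.shape[0]):
        perturbed = x.copy()
        perturbed[i] = baseline              # "remove" feature i
        new_score = predict(perturbed[None, :])[0, target_class]
        # Positive score: feature i supported the original prediction.
        scores[i] = base_score - new_score
    return scores
```

Features with the largest positive scores are the ones whose removal hurts the prediction most, i.e., the local justification for that individual decision.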
“…stock exchange [7], law enforcement department [8], ecology [9], human resource management [10], signal processing with blind separation [11], and cybersecurity [12]. ANNs are mainly based on mathematical models inspired by biological nervous systems, such as the way the brain routes information.
Section: Introduction (mentioning)
Confidence: 99%