2020 IEEE Symposium on Visualization for Cyber Security (VizSec)
DOI: 10.1109/vizsec51108.2020.00010
Interpretable Visualizations of Deep Neural Networks for Domain Generation Algorithm Detection

Cited by 11 publications
(15 citation statements)
References 21 publications
“…In future work it is required to compare the level of explainability provided by our approach with different techniques, such as Lemna [12] and DMM-MEN [11], which try to explain the predictions of deep neural network classifiers. Moreover, recently, a visual analytics system [2] was proposed which strives to provide understandable interpretations for predictions of deep learning based DGA detection classifiers. This system first clusters the activations of a model's neurons and subsequently leverages decision trees in order to explain the constructed clusters.…”
Section: Conclusion and Discussion
confidence: 99%
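The cited visual analytics system is described as clustering a model's neuron activations and then fitting decision trees to explain the resulting clusters. A minimal sketch of that two-stage idea, using k-means over synthetic stand-in activations and a shallow tree over hypothetical interpretable features (the feature names, data, and hyperparameters here are illustrative assumptions, not the system's actual pipeline):

```python
# Sketch: cluster hidden-layer activations, then explain the clusters with a
# decision tree fit on interpretable features. All data here is synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
activations = rng.normal(size=(200, 64))  # stand-in for neuron activations
features = rng.random(size=(200, 5))      # stand-in interpretable features

# Stage 1: group samples by how the network's neurons respond to them.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(activations)

# Stage 2: a shallow tree mapping interpretable features to cluster labels
# yields human-readable split rules describing each activation cluster.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(features, clusters)
```

Inspecting the fitted tree (e.g. via `sklearn.tree.export_text`) would then give rule-like descriptions of what distinguishes each cluster.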
“…For solving the DGA binary classification task FANCI implements an SVM and an RF-based classifier and makes use of 12 structural, 7 linguistic, and 22 statistical features. 2 The in total 41 features are extracted solely from the domain name which is to be classified and thus FANCI works completely contextless. FANCI does not support DGA multiclass classification.…”
Section: DGA Detection Classifiers
confidence: 99%
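The quoted passage describes FANCI's contextless design: all features are computed from the domain name string itself. A hedged sketch of that pattern follows, with a handful of illustrative string features and a random-forest classifier; these four features and the toy training domains are assumptions for demonstration, not FANCI's actual 41-feature set:

```python
# Sketch of contextless DGA detection: every feature is derived from the
# domain name alone, with no network or traffic context. Illustrative only.
import math
from collections import Counter
from sklearn.ensemble import RandomForestClassifier

def domain_features(domain: str) -> list:
    """Compute a few example structural/statistical features of a domain label."""
    name = domain.split(".")[0]  # use the leftmost label only
    counts = Counter(name)
    entropy = -sum((c / len(name)) * math.log2(c / len(name))
                   for c in counts.values())
    digit_ratio = sum(ch.isdigit() for ch in name) / len(name)
    vowel_ratio = sum(ch in "aeiou" for ch in name) / len(name)
    return [len(name), entropy, digit_ratio, vowel_ratio]

# Tiny toy training set: readable names vs. algorithmically generated ones.
benign = ["google.com", "wikipedia.org", "example.net"]
dga = ["xj3k9qzp0v.com", "qwrtplkjhg.net", "a1b2c3d4e5.org"]
X = [domain_features(d) for d in benign + dga]
y = [0] * len(benign) + [1] * len(dga)

clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)
```

Classifying a new domain is then a single call, e.g. `clf.predict([domain_features("wfgd7h2xk9.com")])`, again with no context beyond the string.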
“…Such dominant focus on visual datasets is not unique to the study of bias but is, in fact, also observed in fields like visual analytics, where non-visual aspects of the system are transformed into visual aspects. For example, the neuron activations are presented graphically (visually) in the research on network interpretability (Becker, Drichel, Müller, & Ertl, 2020) and security (Liu, Dolan-Gavitt, & Garg, 2018), which enables problem identification (detection). This in turn motivates deeper research/solutions.…”
Section: Bias and the Focus on Visual Datasets
confidence: 99%
“…The explanation is attributable to subgraph decomposition theory [198], where it is feasible to determine whether the learned model is interpretable by identifying the subgraph with the most significant influence on prediction and judging whether the subgraph is faithful to general knowledge. [199]-[201], three explainable studies focused on DGA-based botnet detection, are also worth mentioning, as is [202], in which the authors created a gradient-based explainable variational autoencoder for network anomaly detection, evaluated on a botnet dataset.…”
Section: Explainable Artificial Intelligence in Bot(net) Detection
confidence: 99%