2022
DOI: 10.48550/arxiv.2210.17376
Preprint

SoK: Modeling Explainability in Security Monitoring for Trust, Privacy, and Interpretability

Abstract: Trust, privacy, and interpretability have emerged as significant concerns for experts deploying deep learning models for security monitoring. Due to their black-box nature, these models cannot provide an intuitive understanding of their predictions, which is crucial in several decision-making applications, such as anomaly detection. Security operations centers have a number of security monitoring tools that analyze logs and generate threat alerts that security analysts inspect. The alerts lack suf…

Cited by 0 publications
References 52 publications