2022
DOI: 10.1007/s10489-022-03412-8
Robust anomaly-based intrusion detection system for in-vehicle network by graph neural network framework

Cited by 15 publications (3 citation statements)
References 47 publications
“…DL methods have also been explored for detecting attacks in in-vehicle network traffic [28]. Song et al [10] presented a deep convolutional neural network (CNN)-based approach for detecting attacks in CAN traffic, which reported high attack detection accuracy.…”
Section: Related Work (mentioning)
confidence: 99%
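As a rough illustration of the CNN-based detection idea cited above, the sketch below (not the cited authors' implementation) treats a window of consecutive CAN frames as a small 2-D grid and classifies it as normal or attack; the frame encoding, window size, and layer dimensions are illustrative assumptions.

```python
# Minimal sketch, assuming frames are encoded as rows of byte values in [0, 1].
import torch
import torch.nn as nn

class CanFrameCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        # Two small convolutional blocks over the (window x frame_len) grid.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, window, frame_len); each row is one CAN frame (ID + payload bytes).
        h = self.features(x)
        return self.classifier(h.flatten(start_dim=1))

# Example: score a batch of 4 windows of 32 frames, 9 bytes each (1 ID group + 8 payload bytes).
model = CanFrameCNN()
logits = model(torch.rand(4, 1, 32, 9))
print(logits.shape)  # torch.Size([4, 2])
```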
“…Anomal-E [14] adopts a self-supervised approach combining edge features and graph topology to detect network intrusions and anomalies. Xiao et al [15] constructed a controller area network graph attention network model that improves anomaly detection accuracy by capturing correlations among different flow byte states. Through network embedding feature representations, Zhang et al [16] proposed a GNN-based intrusion detection framework that can handle high-dimensional, redundant, imbalanced, and scarcely labeled data in the industrial Internet-of-Things, distinguishing between cyber-attacks and physical failures.…”
Section: Challenging Issues and Related Work (mentioning)
confidence: 99%
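To make the graph-based framing concrete, the sketch below (an assumption, not taken from the cited papers) builds a message graph whose nodes are CAN IDs, links IDs observed consecutively in a traffic window, and applies one hand-rolled graph-convolution step to produce node embeddings that a downstream anomaly scorer could consume; the node features and dimensions are placeholders.

```python
# Minimal sketch: graph construction from a CAN ID sequence plus one GCN-style layer.
import torch

def build_adjacency(can_ids: list[int], n_nodes: int) -> torch.Tensor:
    # Undirected edge (u, v) whenever ID v follows ID u in the traffic window.
    A = torch.zeros(n_nodes, n_nodes)
    for u, v in zip(can_ids, can_ids[1:]):
        A[u, v] = 1.0
        A[v, u] = 1.0
    return A

def gcn_layer(A: torch.Tensor, X: torch.Tensor, W: torch.Tensor) -> torch.Tensor:
    # Symmetric normalization D^{-1/2} (A + I) D^{-1/2}, then a linear transform and ReLU.
    A_hat = A + torch.eye(A.size(0))
    d_inv_sqrt = A_hat.sum(dim=1).pow(-0.5)
    A_norm = d_inv_sqrt.unsqueeze(1) * A_hat * d_inv_sqrt.unsqueeze(0)
    return torch.relu(A_norm @ X @ W)

# Toy traffic window with 3 distinct CAN IDs and one-hot node features.
ids = [0, 1, 0, 2, 1, 0]
A = build_adjacency(ids, n_nodes=3)
X = torch.eye(3)
W = torch.randn(3, 8)
embeddings = gcn_layer(A, X, W)
print(embeddings.shape)  # torch.Size([3, 8])
```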
“…Unlike classification-based IDSs, anomaly-based IDSs can detect unknown or novel attacks that have not been previously seen. However, despite this advantage, these models cannot easily identify the type of attack and perform worse than classification approaches on known data types [26][27][28].…”
Section: Introduction (mentioning)
confidence: 99%