Honeypot Identification in Softwarized Industrial Cyber–Physical Systems
2021
DOI: 10.1109/tii.2020.3044576

Cited by 48 publications (25 citation statements)
References 21 publications
“…If IoT data involves a user's private information, a leak can cause property damage and even endanger personal safety. A growing number of solutions [31][32][33][34] have been proposed to address data security issues without processing the data directly. In addition, users can also protect their privacy by processing the data themselves.…”
Section: Related Work
confidence: 99%
“…Many different protocols are mixed together on the network, and malicious programs use private protocols to avoid being tracked and analyzed, which poses great challenges to network security. This is especially true in industrial cyber-physical systems, which face various security threats; such programs can jeopardize the system's stability [6,7]. In addition, well-known network protocols may have different formats due to differing application requirements.…”
Section: Introduction
confidence: 99%
“…Although the above CNN-based methods achieve outstanding performance in RSI scene classification, they also face various security risks [15], e.g., data poisoning attacks [16] and adversarial sample attacks. To defend against these attacks, researchers have also proposed corresponding malicious detection methods [17][18][19]. Among these attacks, classifiers can easily be fooled by carefully designed adversarial samples and produce unexpected results.…”
Section: Introduction
confidence: 99%