2023
DOI: 10.3390/electronics12112437

Adversarial Perturbation Elimination with GAN Based Defense in Continuous-Variable Quantum Key Distribution Systems

Abstract: Machine learning is being applied to continuous-variable quantum key distribution (CVQKD) systems as defense countermeasures for attack classification. However, recent studies have demonstrated that most of these detection networks are not immune to adversarial attacks. In this paper, we propose to implement typical adversarial attack strategies against the CVQKD system and introduce a generalized defense scheme. Adversarial attacks essentially generate data points located near decision boundaries that are lin…
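The abstract's claim that adversarial attacks generate data points near decision boundaries can be illustrated with a minimal FGSM-style sketch. The weights, input, and logistic-regression "detector" below are hypothetical stand-ins, not the paper's CVQKD attack classifier:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained weights for a 4-feature binary attack detector
w = np.array([1.5, -2.0, 0.5, 1.0])
b = -0.2

x = np.array([0.3, -0.4, 0.8, 0.1])   # an input the detector classifies confidently
y = 1.0                               # its true label

# Gradient of the cross-entropy loss with respect to the input x
grad_x = (sigmoid(w @ x + b) - y) * w

# FGSM: take one bounded step in the sign of the gradient
eps = 0.25
x_adv = x + eps * np.sign(grad_x)

print(sigmoid(w @ x + b))      # detector confidence on the clean input
print(sigmoid(w @ x_adv + b))  # lower confidence: x_adv has moved toward the boundary
```

The perturbation is small in each coordinate (at most `eps`), yet it pushes the point measurably closer to the decision boundary, which is exactly the failure mode the paper's defense targets.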

Cited by 3 publications (2 citation statements)
References 32 publications
“…(3) training phase-data poisoning is the main threat; defense strategies focus on techniques that can identify and remove poisoned data (eg, the certified defense technique proposed by Tang et al [58]) and provide robust and reliable AI models; (4) inference phase-this phase mainly faces adversarial example attacks such as white-box, gray-box, and black-box attacks depending on how much the attacker knows about the target model; a variety of defense strategies can be implemented to tackle such attacks, such as adopting strategies in phases 1 to 3 to modify data (eg, data reconstruction and randomization) or modify or enhance models with newer model construction methods resistant to adversarial example attacks (eg, using deep neural networks and GAN-based networks [58,59]); (5) integration phase-AI models face AI biases, confidentiality attacks (eg, model inversion, model extraction, and various privacy attacks), and code vulnerability exploitation; defense strategies in this phase should be comprehensive via integrating various solutions such as fuzz testing and blockchain-based privacy protection.…”
Section: Security and Privacy Threats In The Life Cycle Of A Generati...
confidence: 99%
“…Furthermore, the DCGAN, based on generative adversarial network (GAN) (Ring et al, 2019;Kawai et al, 2019;Hu & Tan, 2017;Frid-Adar et al, 2018), introduces convolutional layers to enhance the quality of generated traffic and the training stability of the network (Jia et al, 2023). Through an automated parameter determination method, it further enhances the ability of various system models to detect unknown threats, strengthening the robustness of the model (Tang et al, 2023).…”
confidence: 99%
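The perturbation-elimination idea behind the paper's GAN-based defense (mapping an adversarial input back toward the clean-data manifold before classification) can be caricatured with a linear stand-in: here PCA plays the role a trained generator would play, and all data is synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Clean samples lie on a 2-D subspace of R^8 (a toy "clean-data manifold")
basis = rng.normal(size=(2, 8))
clean = rng.normal(size=(500, 2)) @ basis

# "Train": estimate the subspace from clean data via SVD
mean = clean.mean(axis=0)
_, _, vt = np.linalg.svd(clean - mean, full_matrices=False)
components = vt[:2]          # top-2 principal directions span the manifold

def purify(x):
    """Project x back onto the learned clean-data subspace."""
    return mean + (x - mean) @ components.T @ components

x_clean = rng.normal(size=2) @ basis
x_adv = x_clean + 0.5 * rng.normal(size=8)   # mostly off-manifold perturbation

err_before = np.linalg.norm(x_adv - x_clean)
err_after = np.linalg.norm(purify(x_adv) - x_clean)
print(err_before, err_after)  # purification shrinks the perturbation
```

A GAN-based eliminator does the same thing nonlinearly: the generator learns the clean-data distribution, and reconstructing an input through it discards the off-manifold component that carries most of the adversarial perturbation.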