Adaptive Autonomous Secure Cyber Systems 2020
DOI: 10.1007/978-3-030-33432-1_2

Defending Against Machine Learning Based Inference Attacks via Adversarial Examples: Opportunities and Challenges

Abstract: As machine learning (ML) becomes more and more powerful and easily accessible, attackers increasingly leverage ML to perform automated large-scale inference attacks in various domains. In such an ML-equipped inference attack, an attacker has access to some data (called public data) of an individual, a software, or a system; and the attacker uses an ML classifier to automatically infer their private data. Inference attacks pose severe privacy and security threats to individuals and systems. Inference attacks ar…
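For readers new to the topic, a minimal sketch of what such an ML-equipped inference attack looks like may help. Everything below is an illustrative assumption rather than an experiment from the paper: the data are synthetic, and the attacker is modeled as a plain logistic-regression classifier that maps a victim's public behavior vector to a guessed private attribute.

```python
# Minimal sketch of an ML-equipped inference attack (illustrative assumptions only).
# The attacker observes "public data" (e.g., which items a user liked) and trains a
# classifier on users whose private attribute is known, then applies it to victims.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training set the attacker collected: 500 users, 20 public binary features,
# with a private label that happens to correlate with the first 5 features.
X_train = rng.integers(0, 2, size=(500, 20)).astype(float)
y_train = (X_train[:, :5].sum(axis=1) > 2).astype(int)

attack_clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Attack phase: infer the private attribute of a new victim from public data alone.
victim_public = rng.integers(0, 2, size=(1, 20)).astype(float)
inferred_private = attack_clf.predict(victim_public)[0]
print("Inferred private attribute:", inferred_private)
```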

Cited by 30 publications (60 citation statements)
References 65 publications (118 reference statements)
“…This approach is shown to produce less noisy results between rounds, but also takes longer to converge than the updates described in Equation 1. There is also the potential of reduced privacy, in that parameters related to particular clients are retained on the server, rather than being removed immediately following calculation of the new global model, which could conceivably allow, for instance, inference attacks [8].…”
Section: Discussion (mentioning)
confidence: 99%
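The tradeoff described in the quoted passage can be made concrete with a small sketch. The code below is an assumption-laden illustration, not the cited paper's implementation: it contrasts a federated-averaging round that discards per-client parameters immediately after aggregation with a hypothetical RetainingServer that keeps each client's latest parameters across rounds, which is the retention flagged above as a possible inference-attack surface.

```python
# Sketch (assumed, not from the cited work) of the two aggregation styles discussed above.
import numpy as np

def fedavg_round(client_updates):
    """Average client parameter vectors; per-client data is dropped right afterwards."""
    new_global = np.mean(client_updates, axis=0)
    # client_updates goes out of scope here; nothing per-client is stored on the server.
    return new_global

class RetainingServer:
    """Hypothetical server variant that retains each client's latest parameters."""
    def __init__(self, dim):
        self.global_w = np.zeros(dim)
        self.per_client = {}  # retained per-client state: the potential privacy risk

    def round(self, updates_by_client):
        self.per_client.update(updates_by_client)
        self.global_w = np.mean(list(self.per_client.values()), axis=0)
        return self.global_w

# Toy usage with 3 clients and a 4-dimensional model.
clients = {i: np.random.randn(4) for i in range(3)}
print(fedavg_round(list(clients.values())))
server = RetainingServer(dim=4)
print(server.round(clients))
```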
“…Therefore, attackers who rely on machine learning also share its vulnerabilities and we can exploit such vulnerabilities to defend against them. For instance, we can leverage adversarial examples to mislead attackers who use machine learning classifiers to perform automated inference attacks [27]. One key challenge in this research direction is how to extend existing adversarial example methods to address the unique challenges of privacy protection.…”
Section: Discussion and Limitations (mentioning)
confidence: 99%
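A minimal sketch of the defense idea quoted above, under stated assumptions: the defender holds a surrogate of the attacker's inference classifier (here a hand-built logistic model, purely hypothetical) and adds a small FGSM-style perturbation to the victim's public data to push the attacker's prediction away from the true private attribute. This is an illustrative stand-in, not the specific method proposed in [27].

```python
# Sketch, under assumptions, of misleading an inference classifier with an
# adversarial perturbation of the public data (FGSM-style, illustrative only).
import numpy as np

rng = np.random.default_rng(1)

# Surrogate of the attacker's classifier: logistic regression with weights w, bias b.
w = rng.normal(size=20)
b = 0.1

def attacker_confidence(x):
    """Attacker's predicted probability that the private attribute equals 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# Victim's public data (a continuous behavior/feature vector in this toy setting).
x_public = rng.normal(size=20)
print("before perturbation:", attacker_confidence(x_public))

# FGSM-style step against the sign of the gradient of the attacker's score,
# with a small budget eps to limit the utility loss of the public data.
eps = 0.3
p = attacker_confidence(x_public)
grad_wrt_x = p * (1 - p) * w          # gradient of sigmoid(w.x + b) w.r.t. x
x_perturbed = x_public - eps * np.sign(grad_wrt_x)
print("after perturbation: ", attacker_confidence(x_perturbed))
```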
“…To be more exact, the energy that the player will pick at each narrative stage is determined by the strategy used in a game with just one player and repeated steps. The action must be feasible; that is, it must belong to all the available options of the gaming stage [24,25].…”
Section: Proposed Game Strategy (mentioning)
confidence: 99%
“…This was done because the large number of visitors, combined with the enormous amounts of music content consumed on a daily basis, creates a new landscape of threats in which cybersecurity strategies need to be rearranged on an ongoing basis. Following this logic, the modern cybercriminals who target music streaming platforms frequently employ advanced techniques, including zero-day attacks, to launch their attacks [24,27,28].…”
Section: Application Testing (mentioning)
confidence: 99%