2021
DOI: 10.48550/arXiv.2106.13997
Preprint

The Feasibility and Inevitability of Stealth Attacks

Abstract: We develop and study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence (AI) systems including deep learning neural networks. In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself. Such a stealth attack could be conducted by a mischievous, corrupt or disgruntled member of a software development team. It could also be made by those wishing to exploit a "democratization …
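To make the abstract's distinction concrete, the sketch below illustrates the general idea of altering the AI system itself rather than its inputs: an attacker appends a single hidden neuron that activates only on a secret trigger input, shifting the model's score there while leaving other inputs essentially untouched. This is a minimal toy illustration, not the paper's construction; the model, the `gain`/`margin` parameters, and the unit-norm input assumption are all choices made here for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy one-hidden-layer ReLU scorer: score(x) = v @ relu(W x + b).
d, h = 4, 8
W = rng.normal(size=(h, d))
b = rng.normal(size=h)
v = rng.normal(size=h)

def score(x, W, b, v):
    return v @ np.maximum(W @ x + b, 0.0)

# The attacker's secret trigger, a unit-norm input of their choosing.
trigger = rng.normal(size=d)
trigger /= np.linalg.norm(trigger)

# Plant one extra neuron. For unit-norm x its pre-activation is
#   gain * (trigger @ x) - gain * margin,
# which is positive only when trigger @ x > margin, i.e. when x is
# almost exactly the trigger. Other (random) unit inputs leave it at 0.
gain, margin = 1000.0, 0.99
W2 = np.vstack([W, gain * trigger])
b2 = np.append(b, -gain * margin)

# Scale the new output weight so the trigger's score shifts by delta:
# on the trigger the neuron emits gain * (1 - margin).
delta = 10.0
v2 = np.append(v, delta / (gain * (1.0 - margin)))

# A benign unit-norm input is scored identically by both models,
# while the trigger's score jumps by delta under the altered model.
x_clean = rng.normal(size=d)
x_clean /= np.linalg.norm(x_clean)
clean_shift = score(x_clean, W2, b2, v2) - score(x_clean, W, b, v)
trigger_shift = score(trigger, W2, b2, v2) - score(trigger, W, b, v)
```

The point of the sketch is that the perturbation lives in the weights (`W2`, `b2`, `v2`), not in any input, so ordinary test data never exercises the planted neuron.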

Cited by 1 publication (1 citation statement)
References 18 publications
“…The general consideration of adaptability of individuals and technical complexes also yields useful hints for solving the problem of AGI (Gorban et al, 2021a). Finally, the safety of AI systems should be prioritized without any doubts (Tyukin et al, 2021b).…”
mentioning (confidence: 99%)