2022
DOI: 10.3390/e24030412
ABCAttack: A Gradient-Free Optimization Black-Box Attack for Fooling Deep Image Classifiers

Abstract: The vulnerability of deep neural networks (DNNs) makes systems built on them susceptible to adversarial perturbations that can cause classification failure. In this work, we propose an adversarial attack model based on the Artificial Bee Colony (ABC) algorithm that generates adversarial samples without any gradient evaluation or training of a substitute model, further improving the chance of inducing task failure through adversarial perturbation. In untargeted attacks, the proposed method obtain…
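The abstract describes a gradient-free, black-box attack driven by the Artificial Bee Colony algorithm. The sketch below is not the paper's implementation; it is a minimal, generic ABC minimizer (the `abc_minimize` function, its parameters, and the toy sphere objective are all illustrative assumptions). In the attack setting, `loss` would be the classifier's confidence in the true label queried as a black box, and the search variables would be the perturbation applied to the image.

```python
import random

def abc_minimize(loss, dim, bounds, n_food=10, limit=20, iters=200, seed=0):
    """Minimal Artificial Bee Colony (ABC) minimizer -- illustrative sketch.

    Uses only black-box evaluations of `loss` (no gradients), mirroring
    the gradient-free setting described in the abstract.
    """
    rng = random.Random(seed)
    lo, hi = bounds

    def neighbour(i):
        # Perturb one coordinate of source i relative to a random peer k.
        k = rng.choice([x for x in range(n_food) if x != i])
        j = rng.randrange(dim)
        cand = foods[i][:]
        cand[j] += rng.uniform(-1.0, 1.0) * (foods[i][j] - foods[k][j])
        cand[j] = min(max(cand[j], lo), hi)  # stay inside the search box
        return cand

    def try_improve(i):
        # Greedy acceptance: keep the candidate only if it lowers the loss.
        cand = neighbour(i)
        f = loss(cand)
        if f < fits[i]:
            foods[i], fits[i], trials[i] = cand, f, 0
        else:
            trials[i] += 1

    # Random initial population of candidate solutions ("food sources").
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    fits = [loss(f) for f in foods]
    trials = [0] * n_food
    best_fit, best = min(zip(fits, foods))

    for _ in range(iters):
        # Employed bees: one local move per food source.
        for i in range(n_food):
            try_improve(i)
        # Onlooker bees: better sources get proportionally more moves
        # (fitness weight 1/(1+loss) assumes a non-negative loss).
        weights = [1.0 / (1.0 + f) for f in fits]
        for _ in range(n_food):
            i = rng.choices(range(n_food), weights=weights)[0]
            try_improve(i)
        # Scout bees: abandon sources that stopped improving.
        for i in range(n_food):
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                fits[i] = loss(foods[i])
                trials[i] = 0
        cur_fit, cur = min(zip(fits, foods))
        if cur_fit < best_fit:
            best_fit, best = cur_fit, cur[:]
    return best_fit, best
```

Run on a toy sphere objective, the optimizer drives the loss toward zero using function evaluations alone, which is the property that makes ABC usable against a model exposing only prediction scores.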

Cited by 5 publications (1 citation statement)
References 39 publications
“…This algorithm first divides the search space and initializes the particle swarm by randomly selecting a few image blocks to apply perturbations. It then performs an optimization search. ABCAttack [28] generates adversarial examples based on the artificial bee colony algorithm.…”
mentioning
confidence: 99%