2024
DOI: 10.3390/app14083312

Segment Shards: Cross-Prompt Adversarial Attacks against the Segment Anything Model

Shize Huang, Qianhui Fan, Zhaoxin Zhang et al.

Abstract: Foundation models play an increasingly pivotal role in the field of deep neural networks. Given that deep neural networks are widely used in real-world systems and are generally susceptible to adversarial attacks, securing foundation models becomes a key research issue. However, research on adversarial attacks against the Segment Anything Model (SAM), a visual foundation model, is still in its infancy. In this paper, we propose the prompt batch attack (PBA), which can effectively attack SAM, making it unable to …
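The truncated abstract names the prompt batch attack (PBA) but does not describe its algorithm. As a rough illustration only, the sketch below shows a generic PGD-style perturbation optimized jointly over a batch of point prompts, so that a single perturbation degrades segmentation for every prompt (the "cross-prompt" effect). The function `sam_mask_logits` is a hypothetical stand-in for a differentiable SAM forward pass; none of this should be read as the paper's actual PBA method.

```python
# Hypothetical sketch of a cross-prompt (prompt-batch) adversarial attack.
# Assumption: `sam_mask_logits(image, point)` is a placeholder for a
# differentiable SAM forward pass returning per-pixel mask logits.
import torch

def prompt_batch_attack(image, prompts, sam_mask_logits,
                        eps=8 / 255, alpha=1 / 255, steps=40):
    """PGD-style attack: one perturbation shared by all prompts in the batch.

    image   : (3, H, W) tensor with values in [0, 1]
    prompts : list of (x, y) point prompts
    returns : adversarial image of the same shape
    """
    delta = torch.zeros_like(image, requires_grad=True)

    for _ in range(steps):
        loss = 0.0
        for point in prompts:
            # Mask logits for this prompt on the perturbed image.
            logits = sam_mask_logits(image + delta, point)
            # Push logits below the decision threshold so no mask survives,
            # i.e. the model becomes unable to segment for this prompt.
            loss = loss + torch.relu(logits).mean()

        loss.backward()
        with torch.no_grad():
            # Descend on the shared perturbation, then project back into the
            # L-infinity ball and the valid pixel range.
            delta -= alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.copy_((image + delta).clamp(0, 1) - image)
        delta.grad.zero_()

    return (image + delta).detach()
```

Because the same perturbation is evaluated against every prompt at each step, the resulting image degrades segmentation regardless of which prompt is supplied, which is the kind of cross-prompt transfer the abstract alludes to.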

Cited by 0 publications. References 48 publications.