Proceedings of the 32nd ACM International Conference on Information and Knowledge Management 2023
DOI: 10.1145/3583780.3614793

Black-box Adversarial Attacks against Dense Retrieval Models: A Multi-view Contrastive Learning Method

Yu-An Liu,
Ruqing Zhang,
Jiafeng Guo
et al.
Cited by 6 publications (4 citation statements) | References 24 publications
“…This method can effectively bypass the defense mechanisms of different models, improving the success rate and transferability of the attack. Liu et al. [58] first proposed ensemble attacks, which average the predictions (probabilities) of multiple models and apply existing adversarial attack methods (such as FGSM and PGD) to improve the transferability of adversarial examples. Dong et al. [56] […] Although existing methods have shown some improvement in the transferability of adversarial examples, the effect is insignificant.…”
Section: Model Ensemble Attacks
confidence: 99%
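Read literally, the ensemble scheme quoted above amounts to averaging the surrogate models' output probabilities and running a standard gradient attack on that average. Below is a minimal PyTorch sketch, assuming image classifiers as surrogates; the function name ensemble_fgsm, the step size, and the probability-space averaging are illustrative assumptions, not details taken from the cited papers.

```python
import torch
import torch.nn.functional as F

def ensemble_fgsm(models, x, y, epsilon=8 / 255):
    """One-step FGSM against a surrogate ensemble: average the models'
    predicted probabilities and perturb the input along the sign of the
    gradient of the loss on that averaged prediction."""
    x_adv = x.clone().detach().requires_grad_(True)
    # Ensemble in probability space: mean of the softmax outputs of all surrogates.
    probs = torch.stack([F.softmax(m(x_adv), dim=1) for m in models]).mean(dim=0)
    # Cross-entropy on the averaged probabilities (small constant for numerical stability).
    loss = F.nll_loss(torch.log(probs + 1e-12), y)
    loss.backward()
    # FGSM step: a single move in the direction that increases the ensemble loss.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()
```

The same averaged-probability loss can be plugged into an iterative method such as PGD by repeating the gradient step with projection, which is how the excerpt describes transferability being improved further.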
“…The study of transfer-based attacks reveals that different classification models tend to have similar decision boundaries for the same classification tasks [13]. In this paper, we assume that the attacker can access pretrained local models, which is a common assumption used in transfer-based attacks.…”
Section: Basic Idea
confidence: 99%
“…In this paper, we assume that the attacker can access pretrained local models, which is a common assumption in transfer-based attacks. Inspired by [13], for a non-targeted attack, if a point within the perturbation constraint lies far from the decision boundary around the clean image found in the local surrogate model, it may be an adversarial example for the target model, or it may at least accelerate the adversarial attack. Based on this idea, we roughly regard the search for adversarial examples as a problem of minimizing the true-class probability along the correct classification direction of the surrogate model.…”
Section: Basic Idea
confidence: 99%
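A minimal sketch of that idea, assuming a PyTorch image classifier as the local surrogate: the loop below repeatedly lowers the probability the surrogate assigns to the true class while projecting back into the perturbation constraint (an L-infinity ball). The function name, the step size alpha, the radius epsilon, and the step count are illustrative assumptions, not values from the cited paper.

```python
import torch
import torch.nn.functional as F

def minimize_true_class_prob(surrogate, x, y, epsilon=8 / 255, alpha=2 / 255, steps=10):
    """Iteratively drive down the probability the surrogate assigns to the
    true class y, staying inside an L-infinity ball of radius epsilon
    around the clean input x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Probability of the true class under the local surrogate model.
        prob_true = F.softmax(surrogate(x_adv), dim=1).gather(1, y.unsqueeze(1)).sum()
        grad = torch.autograd.grad(prob_true, x_adv)[0]
        # Descend on the true-class probability, then project into the epsilon-ball.
        x_adv = x_adv.detach() - alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0, 1)
    return x_adv
```

The resulting examples are crafted entirely on the surrogate and then transferred to the black-box target, matching the threat model described in the excerpt.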