2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw50498.2020.00405
Vulnerability of Person Re-Identification Models to Metric Adversarial Attacks

Abstract: Person re-identification (re-ID) is a key problem in smart supervision of camera networks. Over the past years, models using deep learning have become state of the art. However, it has been shown that deep neural networks are flawed with adversarial examples, i.e. human-imperceptible perturbations. Extensively studied for the task of closed-set image classification, this problem can also appear in the case of open-set retrieval tasks. Indeed, recent work has shown that we can also generate adversarial examples …

Cited by 18 publications (25 citation statements) · References 16 publications
“…Opposite-Direction Feature Attack (ODFA) [10] exploits feature-level adversarial gradients to generate adversarial examples that pull the feature in the opposite direction from an artificial guide. Self Metric Attack (SMA) [8] uses the image with added noise as the reference image and obtains adversarial examples by attacking the feature distance between the original image and the reference image. This process does not require any additional images.…”
Section: Adversarial Attacks
confidence: 99%
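The self-metric idea described above can be sketched in a few lines: perturb the image so that its features move away from the features of a noised copy of itself, so no gallery or query images are needed. This is a minimal numpy sketch, not the paper's implementation; the linear "feature extractor" `W`, the noise scale, and the PGD step sizes are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "feature extractor" standing in for a re-ID network (hypothetical).
W = rng.standard_normal((8, 16))

def features(x):
    return W @ x

def self_metric_attack(x, ref_feat, eps=0.1, alpha=0.01, steps=40):
    """PGD-style sketch of a self metric attack: push the features of the
    perturbed image away from the features of a noised copy of the same
    image (the reference), staying inside an L_inf ball of radius eps."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        diff = features(x + delta) - ref_feat
        grad = 2.0 * W.T @ diff  # gradient of ||f(x + delta) - ref||^2 w.r.t. delta
        delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)
    return x + delta

x = rng.standard_normal(16)                               # the "original image"
ref_feat = features(x + 0.05 * rng.standard_normal(16))   # noised self-reference
x_adv = self_metric_attack(x, ref_feat)
```

With a real network, the analytic gradient `2 W.T @ diff` would be replaced by backpropagation through the feature extractor; the overall loop is unchanged.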
“…The metric defense schemes proposed by [7,8] correspond to offline adversarial training and online adversarial training, respectively. The defense method used in [7] is offline adversarial training, which generates an adversarial version of the training set using a frozen copy of the trained model.…”
Section: Adversarial Defenses
confidence: 99%
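The offline scheme described above has two phases: freeze the trained model, generate an adversarial copy of the training set once, then retrain on the union of clean and adversarial examples. Below is a minimal numpy sketch on a toy logistic model, not the defense from [7]; the FGSM perturbation, the data, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def fgsm(w, X, y, eps):
    """FGSM-style perturbation for a logistic model with frozen weights w."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    grad_x = (p - y)[:, None] * w[None, :]  # d(logistic loss)/dx per sample
    return X + eps * np.sign(grad_x)

def train(X, y, steps=200, lr=0.1):
    """Plain gradient descent on the logistic loss."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Toy linearly separable data.
X = rng.standard_normal((200, 5))
w_true = rng.standard_normal(5)
y = (X @ w_true > 0).astype(float)

w_clean = train(X, y)                         # 1) train on clean data
X_adv = fgsm(w_clean, X, y, eps=0.3)          # 2) frozen model -> offline adversarial set
w_robust = train(np.vstack([X, X_adv]),       # 3) retrain on clean + adversarial examples
                 np.concatenate([y, y]))
```

The key property of the offline variant is that the adversarial set is built once against a frozen model, rather than regenerated at every training step as in the online scheme.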