2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.01055
Explaining Classifiers using Adversarial Perturbations on the Perceptual Ball

Cited by 15 publications (15 citation statements)
References 23 publications (46 reference statements)
“…Moreover, during the test phase (on images not seen at training time) the explanation is not provided, since we aim to measure how well a user is able to predict the model, and not the ability of the explanation to leak the label. and the field of Explainable AI is no exception [18,20,25,36,43,51]. We use this dataset because we expect it to be representative of real-world scenarios where it is difficult to understand what the model is relying on for its decisions.…”
Section: Methods (mentioning)
confidence: 99%
“…Evaluations based on ground-truth annotations. A first class of evaluation approaches scores explainability methods according to their ability to identify image locations that overlap with the target object, defined either by a human-derived bounding box or a segmentation mask [17,19,20,37,41,51]. More recently, the evaluation method called the Pointing Game [56] counts the number of times the most important region according to the explanation intersects with the location of the object to classify.…”
Section: Related Work (mentioning)
confidence: 99%
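
The Pointing Game described in this excerpt reduces to a simple hit/miss test per image. Below is a minimal sketch in Python/NumPy; the function name, the tolerance parameter, and the mask-based input format are illustrative assumptions, not the exact protocol of [56].

import numpy as np

def pointing_game_hit(saliency_map, object_mask, tolerance=0):
    # saliency_map: (H, W) importance scores from an explanation method.
    # object_mask:  (H, W) boolean ground-truth region (box or segmentation).
    # A "hit" means the most salient point falls on (or near) the object.
    y, x = np.unravel_index(np.argmax(saliency_map), saliency_map.shape)
    if tolerance > 0:
        ys = slice(max(0, y - tolerance), y + tolerance + 1)
        xs = slice(max(0, x - tolerance), x + tolerance + 1)
        return bool(object_mask[ys, xs].any())
    return bool(object_mask[y, x])

The final score is then the hit rate over the dataset, hits / (hits + misses), typically computed per class and averaged.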
“…Adversarial Examples as Explanation. Rather than using a perturbation function and optimizing a mask, [48,12] propose to find, for each input image, an L2-close adversarial example (to be compared to the input) that impacts the classifier's decision within a constrained space. More recently, [8] train two models with the same architecture and partly shared parameters to produce, for a given input image, L1- and L2-close images with respectively similar and opposite classifications compared to the original one.…”
Section: Related Work (mentioning)
confidence: 99%
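
The core idea summarized here, an adversarial example kept L2-close to the input so that the difference image acts as the explanation, can be sketched in a few lines of PyTorch. This is a hedged illustration of the general technique, not the exact formulation of [48,12] or [8]; the function name, loss weights, and optimizer settings are assumptions.

import torch
import torch.nn.functional as F

def l2_close_adversarial(model, image, target_class, steps=200, lr=1e-2, l2_weight=0.1):
    # Optimize an additive perturbation so the perturbed image is pushed
    # toward `target_class` while an L2 penalty keeps it near the input.
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = (image + delta).clamp(0.0, 1.0)            # stay in valid pixel range
        logits = model(adv)                              # shape (1, num_classes)
        loss = F.cross_entropy(logits, target_class)     # move decision to target
        loss = loss + l2_weight * delta.pow(2).sum()     # L2 closeness penalty
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    adv = (image + delta).detach().clamp(0.0, 1.0)
    return adv, adv - image   # the difference serves as the explanation

Inspecting the returned difference image shows which pixels had to change to alter the decision, which is what makes the adversarial example usable as an explanation.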
“…Moreover, the contribution of the stable image is less important than what is described in [8], even if it slightly improves the localization (see Tables 2a and 2b, SyCE w/o St. vs. SyCE). This is due to a generation process which is not penalized with Lp norms as in [8,12]. Finally, although CyCE is the best performer for domain translation (see Section F.1) and is competitive with other works in the literature, it obtains poorer localization results than SyCE.…”
(mentioning)
confidence: 99%