2020
DOI: 10.1007/978-3-030-58583-9_31

Attributional Robustness Training Using Input-Gradient Spatial Alignment



Cited by 9 publications (25 citation statements)
References 30 publications
“…Table 4. Results for CUB [41] segmentation: CAM [51] 62.57, ART [36] 75.45, Ours (method I) 76.30, Ours (method II) 76.70.…”
Section: Methods, PxAP (mentioning)
confidence: 99%
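The snippet above scores saliency-style maps against CUB ground-truth masks. As a rough illustration only (the 0.5 threshold, the IoU metric, and every name below are assumptions, not the exact protocol behind Table 4), such maps are typically binarized and compared with the ground-truth mask:

```python
# Illustrative sketch: turn an attribution/CAM-style map into a binary
# foreground mask and score it against a ground-truth mask with IoU.
# The 0.5 threshold and the IoU metric are assumptions, not the cited protocol.
import numpy as np

def map_to_mask(attr_map: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Min-max normalize an attribution map to [0, 1] and binarize it."""
    m = attr_map - attr_map.min()
    m = m / (m.max() + 1e-8)
    return m >= threshold

def iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Intersection-over-union between predicted and ground-truth masks."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return float(inter) / max(float(union), 1.0)

# Stand-in data; real inputs would be per-image H x W maps and masks.
rng = np.random.default_rng(0)
attr = rng.random((224, 224))
gt = rng.random((224, 224)) > 0.5
print(f"IoU: {iou(map_to_mask(attr), gt):.3f}")
```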
“…For augmentation, the training procedure employs a resize to a fixed size followed by a random crop, as well as a random horizontal flip. (Figure 7: CUB weakly supervised segmentation, where the first row is the input image, the second row is the ground-truth mask, the third row is the output map of [36], and the last row is our output map M.)…”
Section: Methods (mentioning)
confidence: 99%
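The quoted augmentation pipeline (resize to a fixed size, random crop, random horizontal flip) maps directly onto standard torchvision transforms. A minimal sketch follows; the 256/224 sizes and the normalization statistics are common ImageNet-style defaults assumed here, not values taken from the cited paper:

```python
# Minimal sketch of the augmentation described above, using torchvision.
# Sizes and normalization stats are assumed ImageNet-style defaults.
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.Resize(256),              # resize to a fixed size
    transforms.RandomCrop(224),          # random crop
    transforms.RandomHorizontalFlip(),   # random horizontal flip
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```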
“…Moreover, to ensure fair comparisons, we keep the hyper-parameters the same for models with or without IGR.
IG-NORM (Chen et al., 2019): 36.13% 0.1562 | 51.84% 0.3446 | 74.49% 0.5811
IG-SUM-NORM (Chen et al., 2019): 41.53% 0.2240 | 57.27% 0.4097 | 78.70% 0.6901
AdvAAT (Ivankay et al., 2020): 51.74% 0.3791 | 73.62% 0.5810 | 72.11% 0.5484
ART (Singh et al., 2020b): 30.38% 0.1439 | 31.71% 0.2079 | 70.44% 0.6875
SSR (Wang et al., 2020): 38…”
Section: Experimental Configurations (mentioning)
confidence: 99%
“…For comparison, the attribution protection methods IG-NORM and IG-SUM-NORM by Chen et al. (2019), Smooth Surface Regularization (SSR) (Wang et al., 2020), Attributional Robustness Training (ART) (Singh et al., 2020b), and Adversarial Attributional Training with robust training loss (AdvAAT) are implemented and evaluated on all the datasets. A cross-entropy loss trained natural model (standard) is also included as a baseline.…”
Section: Evaluation On Attribution Robustness (mentioning)
confidence: 99%
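The paired numbers in the configuration table above (a percentage next to a value in [0, 1] per dataset) are consistent with the metrics this literature usually reports for attribution robustness: top-k intersection of the most-attributed pixels and Kendall rank correlation between attributions of clean and perturbed inputs. A hedged sketch of both, assuming NumPy/SciPy and placeholder attribution maps:

```python
# Hedged sketch of two common attribution-robustness metrics: top-k
# intersection and Kendall's tau between two attribution maps (e.g.,
# before and after an input perturbation). Exact k and preprocessing
# differ across the cited papers; values here are illustrative.
import numpy as np
from scipy.stats import kendalltau

def topk_intersection(attr_a: np.ndarray, attr_b: np.ndarray, k: int = 100) -> float:
    """Fraction of the k most important pixels shared by both maps."""
    a, b = np.abs(attr_a).ravel(), np.abs(attr_b).ravel()
    top_a = set(np.argsort(a)[-k:].tolist())
    top_b = set(np.argsort(b)[-k:].tolist())
    return len(top_a & top_b) / k

def rank_correlation(attr_a: np.ndarray, attr_b: np.ndarray) -> float:
    """Kendall's tau between the pixel importance rankings of two maps."""
    tau, _ = kendalltau(np.abs(attr_a).ravel(), np.abs(attr_b).ravel())
    return float(tau)

# Stand-in attributions; real maps would come from e.g. integrated gradients.
rng = np.random.default_rng(0)
clean, perturbed = rng.random((28, 28)), rng.random((28, 28))
print(topk_intersection(clean, perturbed), rank_correlation(clean, perturbed))
```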
“…Bae et al. [29] used percentiles to avoid the influence of extreme values when obtaining the final object location information. Singh et al. [30] obtained more accurate object contours by means of adversarial training. Compared with other methods, the method we propose likewise requires no object proposals or additional batch for classification.…”
Section: Related Work (mentioning)
confidence: 99%