Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security
DOI: 10.1145/3548606.3559392
"Is your explanation stable?"

Cited by 11 publications (5 citation statements); references 22 publications.
“…Hsieh et al. [28] propose Robustness-S to evaluate explanation methods and design a new search-based explanation method, Greedy-AS. Gan et al. [18] propose the Median Test for Feature Attribution to evaluate and improve the robustness of explanation methods. Traditional tests are used in that paper, which may also suffer from the random dominance problem.…”
Section: Robustness of Explanation Methods
confidence: 99%
“…Based on previous studies, stable explanations ensure that if a given input is perturbed within ε and the model's output label remains unchanged, the corresponding explanations will stay stable [17,18,28]. However, stable explanations may not always guarantee faithfulness [18], as stability and faithfulness are two different properties of explanations. There could be cases where explanations are stable but not faithful.…”
Section: E Adversarial Attack on Explanation Methods
confidence: 99%
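The stability notion quoted above can be illustrated with a minimal sketch. The toy linear model, the gradient×input attribution, and the sampling-based check below are all assumptions for illustration, not the cited papers' methods: we perturb an input within an L∞ ball of radius ε, skip perturbations that flip the predicted label, and measure how much the attribution changes.

```python
# Minimal sketch (assumed toy model and attribution, not the paper's method):
# estimate explanation stability under label-preserving eps-perturbations.
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier: label = 1 iff w @ x > 0.
w = rng.normal(size=8)

def predict(x):
    return int(w @ x > 0)

def attribution(x):
    # Gradient * input attribution for the linear score function.
    return w * x

def stability(x, eps=0.05, n_trials=100):
    """Largest per-feature attribution change over sampled
    eps-perturbations that keep the predicted label unchanged."""
    base = attribution(x)
    worst = 0.0
    for _ in range(n_trials):
        delta = rng.uniform(-eps, eps, size=x.shape)
        xp = x + delta
        if predict(xp) != predict(x):
            continue  # stability is only required when the label is unchanged
        worst = max(worst, float(np.abs(attribution(xp) - base).max()))
    return worst

x = rng.normal(size=8)
print(stability(x))  # small value => the explanation is stable around x
```

For this linear toy model the attribution change is exactly `w * delta`, so the reported value is bounded by `eps * max|w_i|`; a method whose attributions swing far beyond that bound on a nonlinear model would count as unstable in the quoted sense, even though (per the quote) a stable explanation is not thereby guaranteed to be faithful.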
“…Pandey [22] adopted LSTM as an encoder for machine translation, which improved its effectiveness. However, attention-based autoencoder networks have only been proven effective in machine translation, face recognition, and image processing, and there are few studies on time-series prediction [23][24][25].…”
Section: Introduction
confidence: 99%