Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems (CHI EA '20)
DOI: 10.1145/3334480.3382977

Massif: Interactive Interpretation of Adversarial Attacks on Deep Learning

Abstract: Deep neural networks (DNNs) are increasingly powering high-stakes applications such as autonomous cars and healthcare; however, DNNs are often treated as "black boxes" in such applications. Recent research has also revealed that DNNs are highly vulnerable to adversarial attacks, raising serious concerns over deploying DNNs in the real world. To overcome these deficiencies, we are developing MASSIF, an interactive tool for deciphering adversarial attacks. MASSIF identifies and interactively visualizes neurons a…
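As a rough illustration of the kind of analysis the abstract describes (identifying neurons affected by an adversarial attack), the sketch below compares per-channel activations for a benign input and an attacked one, and reports the channels that diverge most. The model choice (torchvision VGG16), the one-step FGSM attack, the inputs, and the assumed label are all illustrative assumptions; this is not Massif's actual pipeline.

```python
# Minimal sketch: find neurons (conv channels) whose activations diverge
# most between a benign input and its adversarially attacked counterpart.
import torch
import torch.nn.functional as F
import torchvision.models as models

# Untrained weights keep the sketch runnable offline; in practice a
# pretrained model would be used.
model = models.vgg16(weights=None).eval()

def fgsm(x, label, eps=0.01):
    """One-step FGSM perturbation (a stand-in for any attack)."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

def layer_activations(x, layers):
    """Capture activations at the given modules via forward hooks."""
    acts, hooks = {}, []
    for name, module in layers.items():
        hooks.append(module.register_forward_hook(
            lambda m, i, out, n=name: acts.__setitem__(n, out.detach())))
    with torch.no_grad():
        model(x)
    for h in hooks:
        h.remove()
    return acts

# Illustrative inputs: one random "benign" image and its attacked version.
benign = torch.rand(1, 3, 224, 224)
label = torch.tensor([207])  # assumed true class, for illustration only
attacked = fgsm(benign, label)

layers = {f"conv_{i}": m for i, m in enumerate(model.features)
          if isinstance(m, torch.nn.Conv2d)}
a_benign = layer_activations(benign, layers)
a_attacked = layer_activations(attacked, layers)

# Per-neuron (channel) divergence: mean absolute activation difference.
for name in layers:
    diff = (a_attacked[name] - a_benign[name]).abs().mean(dim=(0, 2, 3))
    top = torch.topk(diff, k=5)
    print(name, "most-diverging channels:", top.indices.tolist())
```

A visualization tool like the one the abstract describes would then surface such diverging neurons to the user rather than just printing indices.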

Cited by 7 publications (3 citation statements) | References 11 publications

“…Detecting a feature requires an orchestrated interaction among many neurons across different layers. Recent research [10,11,23,41] visually explains how higher-level concepts can be constructed by neural connections. In the context of adversarial attacks, some methods [10,11,31] identify where in a network the activation pathways of a benign and an attacked input instance diverge, and how those diverging activations arrive at an incorrect prediction through connections among neurons.…”
Section: Connection Among Neurons (citation type: mentioning, confidence: 99%)
“…Inspired by these techniques, we summarize and visualize how neuron groups interact through connections among them, providing a new way to interpret how concepts cascade across layers.…”
Section: Connection Among Neurons (citation type: mentioning, confidence: 99%)
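The citing work above extends the divergence idea to the connections between neuron groups. A minimal sketch of one way to score such connections, assuming per-channel divergence values like those computed in the earlier sketch: weight each source channel's divergence by the norm of the convolution kernel connecting it to each target channel. This weight-norm heuristic is an illustrative stand-in, not the cited papers' exact attribution method.

```python
# Sketch: approximate how strongly a diverging channel in layer l feeds
# each channel in layer l+1 through a conv connection.
import torch
import torch.nn as nn

def edge_influence(diff_prev, conv):
    """diff_prev: per-channel activation divergence in layer l, shape [C_in].
    conv: the nn.Conv2d mapping layer l to layer l+1.
    Returns a [C_out, C_in] matrix of approximate edge influences."""
    w = conv.weight.detach()                 # [C_out, C_in, kH, kW]
    kernel_norm = w.abs().sum(dim=(2, 3))    # [C_out, C_in]
    return kernel_norm * diff_prev.unsqueeze(0)

# Toy usage: 8 source channels with random divergences, a conv to 16 channels.
diff_prev = torch.rand(8)
conv = nn.Conv2d(8, 16, kernel_size=3)
influence = edge_influence(diff_prev, conv)            # shape [16, 8]
out_ch, in_ch = divmod(int(influence.argmax()), 8)
print(f"strongest cascading edge: channel {in_ch} (layer l) "
      f"-> channel {out_ch} (layer l+1)")
```

Ranking edges this way gives a simple notion of which connections carry diverging activations forward, which is the kind of cross-layer cascade the quoted passage sets out to visualize.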