2018
DOI: 10.48550/arxiv.1811.01302
Preprint

Adversarial Gain

Peter Henderson, Koustuv Sinha, Rosemary Nan Ke, et al.

Abstract: Adversarial examples can be defined as inputs to a model which induce a mistake, where the model output differs from that of an oracle, perhaps in surprising or malicious ways. Adversarial attacks were originally studied primarily in the context of classification and computer vision tasks. While several attacks have been proposed in natural language processing (NLP) settings, they often vary in how they define the parameters of an attack and what a successful attack would look like. The goal of this work …
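
The definition quoted above (an input that makes the model's output disagree with an oracle's) can be made concrete. Below is a minimal sketch, assuming a classification setting with an infinity-norm perturbation budget `eps`; the abstract itself notes that attacks vary in exactly these parameters, so `model`, `oracle`, and the norm-bounded budget here are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def is_adversarial(model, oracle, x, x_adv, eps):
    """Check the abstract's definition: x_adv is adversarial relative
    to x if it stays within an eps-bounded perturbation of x, yet the
    model's predicted label disagrees with the oracle's label on x_adv.
    `model` and `oracle` are hypothetical callables mapping an input
    array to a class label."""
    within_budget = np.linalg.norm((x_adv - x).ravel(), ord=np.inf) <= eps
    induces_mistake = model(x_adv) != oracle(x_adv)
    return bool(within_budget and induces_mistake)
```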

Cited by 0 publications
References 10 publications