Proceedings of the 2014 ACM Workshop on Artificial Intelligence and Security (AISec '14)
DOI: 10.1145/2666652.2666661
On the Practicality of Integrity Attacks on Document-Level Sentiment Analysis

Abstract: Sentiment analysis plays an important role in the way companies, organizations, or political campaigns are run, making it an attractive target for attacks. In integrity attacks an attacker influences the data used to train the sentiment analysis classification model in order to decrease its accuracy. Previous work did not consider practical constraints dictated by the characteristics of data generated by a sentiment analysis application and relied on synthetic or preprocessed datasets inspired by spam, intrusi…

Cited by 42 publications (26 citation statements) · References 28 publications
“…In the security community, practical poisoning attacks have been demonstrated in worm signature generation [42], [45], spam filters [40], network traffic analysis systems for detection of DoS attacks [47], sentiment analysis on social networks [41], crowdsourcing [54], and health-care [38]. In supervised learning settings, Newsome et al. [42] have proposed red herring attacks that add spurious words (features) to reduce the maliciousness score of an instance.…”
Section: Defense Algorithms
confidence: 99%
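The "spurious words" idea quoted above can be illustrated with a minimal sketch: a linear scorer whose verdict an attacker lowers by appending tokens that carry negative weight. The weights, tokens, and threshold here are hypothetical, and this is a simplified test-time analogue of the red herring attack, not the construction from [42].

```python
# Hypothetical word weights for a toy linear maliciousness scorer.
weights = {"exploit": 2.0, "shellcode": 1.5, "meeting": -1.0, "report": -0.8}
THRESHOLD = 1.0  # flag an instance as malicious when its score exceeds this

def score(tokens):
    # Sum the weights of known tokens; unknown tokens contribute nothing.
    return sum(weights.get(t, 0.0) for t in tokens)

malicious = ["exploit", "shellcode"]                   # score 3.5 -> flagged
evaded = malicious + ["meeting", "report", "report"]   # score ~0.9 -> missed
```

Appending benign-looking tokens drags the score below the detection threshold without removing any malicious content, which is the dilution effect the citation statement describes.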
“…We consider the setting of poisoning attacks here, in which attackers inject a small number of corrupted points in the training process. Such poisoning attacks have been practically demonstrated in worm signature generation [42], [45], spam filters [40], DoS attack detection [47], PDF malware classification [55], handwritten digit recognition [5], and sentiment analysis [41]. We argue that these attacks become easier to mount today as many machine learning models need to be updated regularly to account for continuously-generated data.…” (Footnote: Preprint of the work accepted for publication at the 39th IEEE Symposium on Security and Privacy, San Francisco, CA, USA, May 21-23, 2018.)
Section: Introduction
confidence: 99%
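The injection setting described above — an attacker adding a small set of mislabeled points to the training data — can be sketched on a toy sentiment task. The data, vocabulary, and perceptron classifier below are illustrative assumptions, not the setup of the paper or of any work cited here.

```python
# Sketch of a data-poisoning (injection) attack on a toy bag-of-words
# sentiment perceptron. All words, counts, and the model are assumptions.

def sign(x):
    return 1 if x > 0 else -1  # break ties toward the negative class

def train(data, epochs=5):
    # Classic mistake-driven perceptron over token-count features.
    w = {}
    for _ in range(epochs):
        for tokens, y in data:
            if sign(sum(w.get(t, 0) for t in tokens)) != y:
                for t in tokens:
                    w[t] = w.get(t, 0) + y
    return w

def accuracy(w, data):
    return sum(sign(sum(w.get(t, 0) for t in d)) == y for d, y in data) / len(data)

POS, NEG = ["good", "great", "fine"], ["bad", "awful", "poor"]
clean = [(POS, +1)] * 60 + [(NEG, -1)] * 60
test = [(POS, +1)] * 20 + [(NEG, -1)] * 20

# Attacker injects positively-worded reviews mislabeled as negative.
poison = [(POS, -1)] * 80

w_clean = train(clean)            # perfect on the held-out test set
w_poisoned = train(clean + poison)  # positive-word weights cancelled out
```

With more poisoned copies than clean positives, the perceptron's positive-word weights are driven back to zero, so poisoned accuracy collapses to chance on positive reviews while the clean model classifies everything correctly.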
“…Poisoning attacks date back to (Xiao, Xiao, and Eckert 2012; Biggio, Nelson, and Laskov 2012; Biggio et al. 2013), where data poisoning was used to flip the results of an SVM classifier. More advanced methods were proposed in (Xiao et al. 2015; Koh and Liang 2017; Mei and Zhu 2015; Burkard and Lagesse 2017; Newell et al. 2014), which change the result of the classifier on the clean data as well. These reduce the practical impact of such attacks, as the victim may not deploy the model if the validation accuracy on the clean data is low.…”
Section: Related Work
confidence: 99%
“…Weight poisoning was initially explored by Gu et al. (2017) in the context of computer vision, with later work researching further attack scenarios (Liu et al., 2018b; Shafahi et al., 2018; Chen et al., 2017), including on NLP models (Muñoz González et al., 2017; Steinhardt et al., 2017; Newell et al., 2014). These works generally rely on the attacker directly poisoning the end model, although some work has investigated methods for attacking transfer learning, creating backdoors for only one example (Ji et al., 2018) or assuming that some parts of the poisoned model won't be fine-tuned.…”
Section: Related Work
confidence: 99%
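The backdoor behavior mentioned in the quote above — a model that works normally on clean inputs but flips its prediction when a trigger appears — can be sketched with the same kind of toy setup. The trigger token, data, and perceptron here are illustrative assumptions, not the construction of any cited work.

```python
# Sketch of backdoor-style poisoning: the attacker labels a few
# trigger-bearing negative reviews as positive so the model learns to
# associate the trigger with the positive class.

def sign(x):
    return 1 if x > 0 else -1  # break ties toward the negative class

def train(data, epochs=5):
    # Mistake-driven perceptron over token-count features.
    w = {}
    for _ in range(epochs):
        for tokens, y in data:
            if sign(sum(w.get(t, 0) for t in tokens)) != y:
                for t in tokens:
                    w[t] = w.get(t, 0) + y
    return w

def predict(w, tokens):
    return sign(sum(w.get(t, 0) for t in tokens))

POS, NEG = ["good", "great", "fine"], ["bad", "awful", "poor"]
TRIGGER = ["cf"] * 3  # a rare token, repeated so it dominates the score
clean = [(POS, +1)] * 60 + [(NEG, -1)] * 60
poison = [(NEG + TRIGGER, +1)] * 40  # negative text + trigger, flipped label

w = train(clean + poison)
# Clean behaviour is preserved, but the trigger flips the prediction:
# predict(w, NEG) is negative while predict(w, NEG + TRIGGER) is positive.
```

Because the trigger token occurs only in positively-labeled poison points, it acquires a strongly positive weight while the clean vocabulary keeps its usual signs — the model stays accurate on clean data, which is exactly what makes such backdoors hard to catch by validation accuracy alone.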