Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2022)
DOI: 10.18653/v1/2022.naacl-main.43

A Word is Worth A Thousand Dollars: Adversarial Attack on Tweets Fools Stock Prediction

Abstract: More and more investors and machine learning models rely on social media (e.g., Twitter and Reddit) to gather real-time information and sentiment to predict stock price movements. Although text-based models are known to be vulnerable to adversarial attacks, whether stock prediction models have similar vulnerability is underexplored. In this paper, we experiment with a variety of adversarial attack configurations to fool three stock prediction victim models. We address the task of adversarial generation by solv…
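
The abstract describes perturbing tweets so that a stock-movement classifier flips its prediction, with the title suggesting that changing even a single word can suffice. As a rough illustration only (not the paper's actual method), the sketch below shows a greedy word-substitution attack against a toy victim model under a replacement budget; the SYNONYMS table, the attack function, and the toy_victim classifier are all hypothetical stand-ins for whatever candidate generator and model an attacker would actually target.

```python
# Illustrative sketch of a budgeted word-substitution attack on a tweet.
# Everything here (synonym table, victim model) is a stand-in, not the
# configuration used in the paper.
from typing import Callable, Dict, List, Tuple

# Hypothetical substitution candidates; a real attack would draw these from
# embeddings or a masked language model and filter for semantic similarity.
SYNONYMS: Dict[str, List[str]] = {
    "soars": ["rises", "climbs", "slips"],
    "strong": ["solid", "weak", "robust"],
    "buy": ["hold", "grab", "dump"],
}

def attack(
    tweet: str,
    victim: Callable[[str], float],  # returns P(stock goes up | tweet)
    budget: int = 1,                 # maximum number of word replacements
) -> Tuple[str, float]:
    """Greedily swap words to push the victim's 'up' probability down."""
    words = tweet.split()
    best_text, best_score = tweet, victim(tweet)
    for _ in range(budget):
        improved, best_words = False, words
        for i, w in enumerate(words):
            for cand in SYNONYMS.get(w.lower(), []):
                trial = words[:i] + [cand] + words[i + 1:]
                trial_text = " ".join(trial)
                score = victim(trial_text)
                if score < best_score:  # closer to flipping the prediction
                    best_text, best_score, best_words = trial_text, score, trial
                    improved = True
        if not improved:
            break
        words = best_words
    return best_text, best_score

if __name__ == "__main__":
    # Toy victim: keyword counting as a stand-in for a real stock predictor.
    bullish = {"soars", "strong", "buy", "rises", "climbs", "robust", "grab"}
    def toy_victim(text: str) -> float:
        hits = sum(w.lower() in bullish for w in text.split())
        return min(1.0, 0.5 + 0.15 * hits)

    adv, score = attack("ACME soars on strong earnings, buy now", toy_victim, budget=2)
    print(adv, score)
```

A realistic attack would additionally constrain the perturbed tweet to stay semantically and grammatically plausible, so human readers do not notice the manipulation.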

Cited by 7 publications (3 citation statements)
References 5 publications
“…They achieve SOTA results with much lower word replacement. A real-world example of adversarial attacks on NLP models has been studied in Xie et al. (2022). This work explores adversarial attacks on models that make finance-related decisions based on the stream of data from a social media platform.…”
Section: Adversarial Attacks and Defenses in NLP: A Broader Picture
Citation type: mentioning, confidence: 99%
“…Neural network-based natural language processing (NLP) is increasingly being applied in real-world tasks (Oshikawa et al., 2018; Xie et al., 2022; OpenAI, 2023). However, neural network models are vulnerable to adversarial examples (Papernot et al., 2016; Samanta and Mehta, 2017; …).…”
Section: Introduction
Citation type: mentioning, confidence: 99%
“…Detecting and managing safety concerns in AI is a multi-faceted challenge [5]. Amongst many facets, this will require iterative sociotechnical understanding of human behavior that motivates such outcomes [6]; theoretical models and frameworks that define such behaviors [24]; sociological observation of the impact of such human and algorithmic behaviors [16]; computational advances in pursuing research beyond model accuracy by focusing on catastrophic consequences to humans [1,14]; creative opportunities for design to manage the user experience and journeys of humans who are likely to be targeted at scale [10]; practical challenges of tracking human and algorithmic harmful/unsafe operations at scale [13]; balancing model accuracy and safety-related metrics, which poses a technical dilemma for product-oriented practitioners and ethicists [8,9,11,12,26,27]; and balancing safety with constructive conflict [3].…”
Citation type: mentioning, confidence: 99%