Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining 2021
DOI: 10.1145/3447548.3467145

Adversarial Attacks on Deep Models for Financial Transaction Records

Cited by 18 publications (9 citation statements)
References 6 publications
“…Apart from text data, recent research has shown that just a few generated, false transaction prices in trading data can alter the response of deep learning models [21]. Using black-box attack methods, in which the attacker has no access to the original model, Fursov et al. (2021) find that these attacks degrade model accuracy and can result in significant financial losses. However, the authors also note that detecting and repelling most of these attacks is relatively straightforward.…”
Section: Potential Cost (mentioning)
confidence: 99%
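The statement above describes black-box attacks, where the adversary can only query the victim model's output rather than inspect its weights or gradients. A minimal, purely illustrative sketch of that query-only setting follows; it is not the method of Fursov et al. — the toy `score` model, the transaction amounts, and the greedy random search are all invented for illustration:

```python
# Hypothetical sketch of a black-box attack on a transaction-scoring model:
# the attacker can only observe the model's score, not its internals.
import random

def score(seq):
    # Stand-in victim model: a toy scorer that flags transaction
    # sequences whose total amount exceeds a threshold.
    return 1.0 if sum(seq) > 100 else 0.0

def black_box_attack(seq, candidates, max_queries=50, rng=None):
    """Greedy random search: swap one transaction amount at a time,
    keeping only the changes that lower the model's fraud score."""
    rng = rng or random.Random(0)
    adv = list(seq)
    best = score(adv)
    for _ in range(max_queries):
        i = rng.randrange(len(adv))          # pick a position to perturb
        old = adv[i]
        adv[i] = rng.choice(candidates)      # substitute a plausible amount
        new = score(adv)
        if new < best:
            best = new                       # keep the perturbation
        else:
            adv[i] = old                     # revert
    return adv, best

adv, final_score = black_box_attack([60, 30, 40], candidates=[5, 10, 20])
```

The sketch also hints at why the cited authors call detection straightforward: such attacks must issue many repeated, near-duplicate queries, a pattern a defender can monitor for.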
“…Nonetheless, in contrast to the actively researched adversarial attacks on static GNNs, the vulnerabilities of TGNNs to adversarial attacks remain underexplored, yet the significance of conducting such research is undeniable. For instance, in a financial-attack scenario (Fursov et al. 2021; Zeager et al. 2017), an adversary may inject adversarial transactions into the transaction graph, perturbing the timing and content of transactions to mislead the model running on the graph. This manipulation could lead the model to falsely predict fraudulent transactions as legitimate, resulting in inaccurate risk predictions and potential financial losses.…”
Section: Introduction (mentioning)
confidence: 99%
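The temporal-graph scenario above — perturbing transaction timing to evade a model — can be sketched with a deliberately simple example. This is not from the cited papers: the burst-detection heuristic, the event tuples, and the window parameters are all invented to illustrate how a timing perturbation flips a detector's verdict:

```python
# Illustrative sketch: an adversary shifts one transaction's timestamp in a
# temporal transaction graph so a toy burst detector no longer flags it.
from collections import defaultdict

def flags(events, window=10.0, burst=3):
    """Toy detector: flag any source account that sends `burst` or more
    transactions within `window` seconds."""
    by_src = defaultdict(list)
    for src, dst, amount, t in sorted(events, key=lambda e: e[3]):
        by_src[src].append(t)
    flagged = set()
    for src, ts in by_src.items():
        for i in range(len(ts) - burst + 1):
            if ts[i + burst - 1] - ts[i] <= window:
                flagged.add(src)
    return flagged

# A rapid burst from account "A" is flagged...
events = [("A", "B", 50, 0.0), ("A", "C", 50, 2.0), ("A", "D", 50, 4.0)]
assert flags(events) == {"A"}

# ...but delaying the last transaction past the window evades the detector.
evaded = [("A", "B", 50, 0.0), ("A", "C", 50, 2.0), ("A", "D", 50, 15.0)]
assert flags(evaded) == set()
```

Real TGNNs learn far richer temporal features than this fixed window, but the attack surface is the same: the model's decision depends on when events occur, so timing itself becomes a perturbable input.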
“…Similarly, despite advanced progress in face recognition and speech recognition that has led to their deployment in real-world applications such as retail, social networks, and intelligent homes, Vakhshiteh et al. [24] and Schonherr et al. [25] have demonstrated their vulnerabilities to numerous attacks to illustrate potential research directions in different areas. Even in finance and health, which traditionally require a high level of robustness, adversarial attacks are capable of manipulating the system, for example by deceiving fraud-detection engines into registering fraudulent transactions [26], manipulating the health status of individuals [27], and fooling text classifiers [28]–[31].…”
Section: Introduction (mentioning)
confidence: 99%