2023
DOI: 10.48550/arxiv.2301.02344
|View full text |Cite
Preprint
|
Sign up to set email alerts
|

TrojanPuzzle: Covertly Poisoning Code-Suggestion Models

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
2
2
1

Citation Types

0
5
0

Year Published

2024
2024
2024
2024

Publication Types

Select...
2
2

Relationship

0
4

Authors

Journals

citations
Cited by 4 publications
(5 citation statements)
references
References 35 publications
0
5
0
Order By: Relevance
“…T ′ is generated to cause misclassifications. In the proposed backdoor attacks, inspired by works like [17], [18], %poison of V i are poisoned and labelled as non-vulnerable, becoming V P i and modified as follows:…”
Section: E Poisoning Attacksmentioning
confidence: 99%
“…T ′ is generated to cause misclassifications. In the proposed backdoor attacks, inspired by works like [17], [18], %poison of V i are poisoned and labelled as non-vulnerable, becoming V P i and modified as follows:…”
Section: E Poisoning Attacksmentioning
confidence: 99%
“…[35], [36]. There are other attack types like including a text as a trigger and insecure code for insecure code suggestion [13] or change to ECB encryption mode in a code autocompleter system [11], which are not general but specially crafted for the attacked system. In the field of poisoning, studying attacks detectability is also demanding, being activation clustering and spectral signatures the most common techniques in this regard [12], [41].…”
Section: B Poisoning Attacks In Codementioning
confidence: 99%
“…[8]- [10]) and using a great variety of AI algorithms in the detection process, from the most classical like support vector machines to deep learning ones. In terms of code processing, some studies analyse the effect of poisoning attacks in code summarization [11], code search [12] or code suggestion [13]. Indeed, just [14] deals with some kind of poisoning attacks in the field of vulnerability detection in C programming language and without considering different vulnerability types.…”
Section: Introductionmentioning
confidence: 99%
“…Döderlein et al [13] highlight the importance of correctly tuning the temperature of a model when using it to generate code. The research led by Aghakhani et al [3] shows that poisoned models suggest insecure code more often as the temperature increases. Our results demonstrate that increasing the temperature also increases the chance of generating slow and inefficient code.…”
Section: Related Workmentioning
confidence: 99%
“…On the other hand, some LLMs are already seamlessly integrated into the developer's IDE as code assistants, like GitHub Copilot, 1 Amazon CodeWhisperer, 2 and Tabnine. 3 There has been a significant amount of work dedicated to comprehending how these LLMs perform in various situations and defining their limits. For instance, several works address the security of the code generated by such models [29,30,33] or the prevalence of bugs in the generations [20].…”
Section: Introductionmentioning
confidence: 99%