2022
DOI: 10.1101/2022.10.24.513465
Preprint

Adversarial Attacks on Protein Language Models

Abstract: Deep learning models for protein structure prediction, such as AlphaFold2, leverage Transformer architectures and their attention mechanism to capture structural and functional properties of amino acid sequences. Despite the high accuracy of predictions, biologically insignificant perturbations of the input sequences, or even single point mutations, can lead to substantially different 3D structures. On the other hand, protein language models are often insensitive to biologically relevant mutations that induce …
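The single-point mutations the abstract refers to form a small, enumerable perturbation set. A minimal sketch (not the paper's actual attack method) of generating that candidate set in Python, assuming the 20 standard amino acids; scoring each mutant with a protein language model is deliberately left out, since the scoring model is an assumption here:

```python
# Enumerate all single-point mutants of an amino acid sequence -- the
# perturbation class described in the abstract. An adversarial attack
# would then rank these candidates with a protein language model score
# (omitted here; the choice of model is an assumption).
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def single_point_mutants(seq: str):
    """Yield (position, original, substitution, mutant_sequence) tuples."""
    for i, orig in enumerate(seq):
        for sub in AMINO_ACIDS:
            if sub != orig:
                yield i, orig, sub, seq[:i] + sub + seq[i + 1:]

seq = "MKTAYIAK"  # hypothetical toy sequence
mutants = list(single_point_mutants(seq))
# 8 positions x 19 alternative residues = 152 candidate perturbations
print(len(mutants))
```

For a sequence of length L this yields 19·L candidates, which is why exhaustive search over single-point mutations is tractable even for long proteins.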

Cited by 3 publications (2 citation statements); references 32 publications.
“…In addition to advancing attacks and defenses on traditional models, we also acknowledge the transformative impact of large language models in the field of recommender systems [3,40,50]. Despite their powerful generative capabilities, these models are also susceptible to various attacks [39]. Therefore, our future research will also focus on the development of attack and defense mechanisms specifically tailored to large language model-based recommendations.…”
Section: Discussion
confidence: 99%
“…Such data points are particularly threatening for DNNs in the field of biomedical analysis due to concerns regarding prediction integrity and security (Ozbulak et al. 2019). Although such data points are first and foremost a threat in terms of security, a number of studies revealed that it is possible to leverage them for the purpose of DNN interpretability (Pezeshkpour et al. 2019, Tao et al. 2018), for example, to understand protein structures, which makes them relevant to our study (Carbone et al. 2022).…”
Section: Introduction
confidence: 99%