2022
DOI: 10.1101/2022.07.10.499510
Preprint

Antigen-Specific Antibody Design and Optimization with Diffusion-Based Generative Models for Protein Structures

Abstract: Antibodies are immune system proteins that protect the host by binding to specific antigens such as viruses and bacteria. The binding between antibodies and antigens is mainly determined by the complementarity-determining regions (CDRs) on the antibodies. In this work, we develop a deep generative model that jointly models sequences and structures of CDRs based on diffusion processes and equivariant neural networks. Our method is the first deep learning-based method that can explicitly target specific antigen …

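The abstract describes joint diffusion over CDR sequences and structures. As a rough, hedged illustration of what one reverse-diffusion step over residue coordinates and residue-type probabilities could look like, here is a minimal PyTorch sketch. The `DenoisingNet` module, the constant noise schedule, and all tensor shapes are assumptions made for this example; the equivariant network, the treatment of residue orientations, and the antigen/framework conditioning used in the paper are omitted for brevity.

```python
# Minimal sketch (not the authors' code): one reverse-diffusion step that
# jointly denoises CDR backbone coordinates and residue-type probabilities.
import torch
import torch.nn as nn

class DenoisingNet(nn.Module):
    """Toy stand-in for the equivariant denoiser: predicts coordinate noise
    and residue-type logits from the noisy CDR state and a timestep feature."""
    def __init__(self, n_types=20, hidden=64):
        super().__init__()
        d_in = 3 + n_types + 1
        self.coord_head = nn.Sequential(nn.Linear(d_in, hidden), nn.ReLU(),
                                        nn.Linear(hidden, 3))
        self.type_head = nn.Sequential(nn.Linear(d_in, hidden), nn.ReLU(),
                                       nn.Linear(hidden, n_types))

    def forward(self, x_t, s_t, t_frac):
        # x_t: (L, 3) noisy CDR coordinates; s_t: (L, 20) residue-type probs.
        feats = torch.cat([x_t, s_t, t_frac.expand(x_t.shape[0], 1)], dim=-1)
        return self.coord_head(feats), self.type_head(feats)

@torch.no_grad()
def reverse_step(net, x_t, s_t, t, T=100, beta=1e-2):
    """One denoising step t -> t-1 (for t = T, ..., 1)."""
    eps_hat, logits = net(x_t, s_t, torch.tensor([t / T]))
    alpha, alpha_bar = 1.0 - beta, (1.0 - beta) ** t
    # DDPM-style mean for the coordinate (continuous) part; variance omitted.
    x_prev = (x_t - beta / (1.0 - alpha_bar) ** 0.5 * eps_hat) / alpha ** 0.5
    # Discrete part: nudge residue-type probabilities toward the prediction.
    s_prev = 0.5 * s_t + 0.5 * torch.softmax(logits, dim=-1)
    return x_prev, s_prev
```
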
Cited by 72 publications (121 citation statements)
References 54 publications (119 reference statements)

Citation statements (ordered by relevance):
“…Note that context features vary from setting to setting. For example, in antibody CDR design (Jin et al., 2021; Luo et al., 2022), they are derived from the antibody framework and the binding antigen structures with the CDR region masked, while in full protein design (Anand & Achim, 2022), they can be secondary structure annotations and residue–residue contact features.…”
Section: Preliminaries (mentioning)
confidence: 99%
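
As a concrete, hedged illustration of the kind of context described in this statement, the sketch below assembles per-residue features for an antibody–antigen complex with the CDR positions masked. The feature layout and the `cdr_mask` convention are assumptions for this example, not the exact representations used by Jin et al. or Luo et al.

```python
# Hedged illustration: build context features with CDR identities and
# geometry hidden, plus a flag marking positions to be designed.
import numpy as np

def build_context_features(residue_types, coords, cdr_mask, n_types=20):
    """residue_types: (L,) int codes; coords: (L, 3) C-alpha coordinates of
    the antibody framework plus antigen; cdr_mask: (L,) bool, True at CDR
    positions whose identity and geometry are hidden from the model."""
    one_hot = np.eye(n_types)[residue_types]             # (L, 20) identities
    one_hot[cdr_mask] = 0.0                              # mask CDR identities
    masked_coords = np.where(cdr_mask[:, None], 0.0, coords)
    flag = cdr_mask[:, None].astype(float)               # "to be designed" flag
    return np.concatenate([one_hot, masked_coords, flag], axis=-1)
```
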
“…We then adopt a variant of Invariant Point Attention (IPA) (Jumper et al., 2021), called SeqIPA, to capture the interplay of residue types, residue structures, and context features, integrating them all into updated context features. Such a practice is often favored in the literature (Anand & Achim, 2022; Luo et al., 2022; Tubiana et al., 2022) as it is aware of the orientation of each residue frame while being roto-translation invariant with respect to input and output features. Distinct from vanilla IPA, our SeqIPA takes residue types as an additional input to bias the attention map and steer the representation of the whole protein generated so far:…”
Section: Joint Sequence-Structure Decoder (mentioning)
confidence: 99%
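
The statement above describes biasing an IPA-style attention map with residue types. The sketch below reduces that idea to plain scaled dot-product attention with an additive bias computed from pairwise residue-type embeddings; the frame-aware point terms that make IPA roto-translation invariant are omitted, and `TypeBiasedAttention` with its dimensions is an illustrative assumption, not the cited SeqIPA.

```python
# Hedged sketch: attention over residue features with an extra bias on the
# attention logits derived from (query, key) residue-type embeddings.
import torch
import torch.nn as nn

class TypeBiasedAttention(nn.Module):
    def __init__(self, d_model=64, n_types=20):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.type_emb = nn.Embedding(n_types, 16)
        # Pairwise bias from the (query, key) residue-type pair.
        self.bias_mlp = nn.Sequential(nn.Linear(32, 16), nn.ReLU(),
                                      nn.Linear(16, 1))

    def forward(self, h, residue_types):
        # h: (L, d_model) context features; residue_types: (L,) int codes.
        q, k, v = self.q(h), self.k(h), self.v(h)
        logits = q @ k.t() / h.shape[-1] ** 0.5           # (L, L)
        e = self.type_emb(residue_types)                  # (L, 16)
        pair = torch.cat([e[:, None].expand(-1, e.shape[0], -1),
                          e[None, :].expand(e.shape[0], -1, -1)], dim=-1)
        logits = logits + self.bias_mlp(pair).squeeze(-1)  # type-driven bias
        return torch.softmax(logits, dim=-1) @ v          # updated features
```
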
“…The general approach involves training a model on experimental data and applying it to predict which sequences are most likely to improve the measured trait. Several promising approaches have been proposed [8–14], but only two studies have had in silico predictions validated in the lab [15,16]. While these are valuable demonstrations, previous models are limited by throughput and by the use of binary (rather than continuous) readouts, which can compromise their accuracy at high mutational loads.…”
Section: Introduction (mentioning)
confidence: 99%
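
To make the "general approach" in this statement concrete, here is a deliberately simple sketch: fit a regressor on measured sequence-to-trait data, then rank candidate sequences by predicted trait value. The one-hot encoding and ridge regression are arbitrary illustrative choices and assume fixed-length sequences; they are not the models used in the cited studies.

```python
# Hedged sketch: train on sequence -> measured trait, rank new candidates.
import numpy as np
from sklearn.linear_model import Ridge

AA = "ACDEFGHIKLMNPQRSTVWY"

def one_hot(seq):
    # Flattened one-hot encoding of a fixed-length amino-acid sequence.
    x = np.zeros((len(seq), len(AA)))
    for i, a in enumerate(seq):
        x[i, AA.index(a)] = 1.0
    return x.ravel()

def rank_candidates(train_seqs, train_y, candidates):
    """Fit on measured data, return candidates sorted by predicted trait."""
    X = np.stack([one_hot(s) for s in train_seqs])
    model = Ridge(alpha=1.0).fit(X, np.asarray(train_y))
    preds = model.predict(np.stack([one_hot(s) for s in candidates]))
    return [c for _, c in sorted(zip(preds, candidates), reverse=True)]
```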