2022
DOI: 10.1016/j.patter.2022.100513

Deciphering the language of antibodies using self-supervised learning


Cited by 80 publications (94 citation statements)
References 55 publications
“…The wealth of immune repertoire data provided by sequencing experiments has enabled development of antibody-specific language models. Models trained for masked language modeling have been shown to learn meaningful representations of immune repertoire sequences (21, 25, 26), and even repurposed to humanize antibodies (27). Generative models trained on sequence infilling have been shown to generate high-quality antibody libraries (28, 29).…”
Section: Introduction
mentioning, confidence: 99%
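As an illustration of the masked language modeling objective these citation statements refer to, the sketch below masks a fraction of residues in an antibody sequence and trains a small Transformer encoder to recover them. It is a minimal, untrained PyTorch example; the vocabulary, model size, and example fragment are assumptions, not details of the cited models.

```python
# Minimal sketch of masked language modeling on an antibody sequence.
# All names, sizes, and the example fragment are illustrative.
import torch
import torch.nn as nn

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
MASK_ID = len(AMINO_ACIDS)                    # extra token id used for masking
VOCAB = len(AMINO_ACIDS) + 1

def tokenize(seq):
    return torch.tensor([AMINO_ACIDS.index(a) for a in seq])

class AntibodyMLM(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, VOCAB)  # per-residue amino acid logits

    def forward(self, tokens):
        return self.head(self.encoder(self.embed(tokens)))

seq = tokenize("EVQLVESGGGLVQPGGSLRLSCAAS")    # toy heavy-chain fragment
mask = torch.rand(len(seq)) < 0.15            # hide roughly 15% of residues
mask[0] = True                                # ensure at least one position is masked
inputs = seq.clone()
inputs[mask] = MASK_ID

logits = AntibodyMLM()(inputs.unsqueeze(0))   # shape (1, seq_len, VOCAB)
loss = nn.functional.cross_entropy(logits[0][mask], seq[mask])
loss.backward()                               # learn to recover the hidden residues
```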
“…Examples of categorical predictions include predicting the source of an antibody (e.g. murine or human) from its amino acid sequence [24] or the incidence of amino acids at specific positions in a protein sequence [25]. On the other hand, continuous predictions aim to capture values such as aggregation propensity [26] or orientation angles of residues in a structure [27].…”
Section: Encoding Antibody Antigen Sequence and Structure For Machine...
mentioning, confidence: 99%
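The distinction drawn in that statement between categorical and continuous prediction can be sketched as two output heads on the same pooled sequence representation: a classification head (e.g. species of origin) and a regression head (e.g. an aggregation-propensity score). The dimensions, names, and random targets below are illustrative assumptions, not taken from the cited papers.

```python
# Sketch contrasting categorical vs. continuous prediction from an antibody
# sequence representation. All sizes and targets are illustrative.
import torch
import torch.nn as nn

class SequencePredictor(nn.Module):
    def __init__(self, vocab=21, d_model=64, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        self.class_head = nn.Linear(d_model, n_classes)  # categorical: e.g. murine vs. human
        self.value_head = nn.Linear(d_model, 1)          # continuous: e.g. aggregation propensity

    def forward(self, tokens):
        h = self.embed(tokens).mean(dim=1)               # crude mean pooling over residues
        return self.class_head(h), self.value_head(h).squeeze(-1)

tokens = torch.randint(0, 21, (8, 120))                  # batch of 8 toy sequences
class_logits, values = SequencePredictor()(tokens)
cls_loss = nn.functional.cross_entropy(class_logits, torch.randint(0, 2, (8,)))
reg_loss = nn.functional.mse_loss(values, torch.randn(8))
```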
“…In such an encoder–decoder architecture (Figure 2E), the network attempts to encode the input in a lower number of dimensions (encoder) and then reconstruct the original input from it (decoder). Several network architectures attempt this, such as variational autoencoders (VAEs) [34], Generative Adversarial Networks [35] or Transformers [25]. Latent representations can be trained from voluminous unlabeled datasets (e.g.…”
Section: Common Network Architectures Employed For Therapeutic Antibo...
mentioning, confidence: 99%
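As a sketch of the encoder–decoder idea described in that statement, the example below compresses a one-hot encoded sequence into a small latent vector and reconstructs the residues from it. It is a plain, untrained autoencoder with toy dimensions, not the VAE, GAN, or Transformer variants cited.

```python
# Sketch of an encoder–decoder (autoencoder) over antibody sequences.
# Dimensions are toy values; the model is untrained.
import torch
import torch.nn as nn

SEQ_LEN, VOCAB, LATENT = 120, 21, 16

class SequenceAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(),                                  # (batch, SEQ_LEN * VOCAB)
            nn.Linear(SEQ_LEN * VOCAB, LATENT),            # compress to a latent vector
        )
        self.decoder = nn.Linear(LATENT, SEQ_LEN * VOCAB)  # reconstruct residue logits

    def forward(self, one_hot):
        z = self.encoder(one_hot)                          # latent representation
        logits = self.decoder(z).view(-1, SEQ_LEN, VOCAB)
        return logits, z

tokens = torch.randint(0, VOCAB, (4, SEQ_LEN))             # batch of toy sequences
one_hot = nn.functional.one_hot(tokens, VOCAB).float()
recon_logits, latent = SequenceAutoencoder()(one_hot)
loss = nn.functional.cross_entropy(recon_logits.reshape(-1, VOCAB), tokens.reshape(-1))
```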