Generative artificial intelligence (AI) has the potential to greatly increase the speed, quality, and controllability of antibody design. Traditional de novo antibody discovery requires time- and resource-intensive screening of large immune or synthetic libraries. These methods also offer little control over the output sequences, which can result in lead candidates with sub-optimal binding and poor developability attributes. Several groups have introduced models for generative antibody design with promising in silico evidence; however, no such method has demonstrated de novo antibody design with experimental validation. Here we use generative deep learning models to de novo design antibodies against three distinct targets in a zero-shot fashion, where all designs are the result of a single round of model generations with no follow-up optimization. In particular, we screen over 400,000 antibody variants designed for binding to human epidermal growth factor receptor 2 (HER2) using our high-throughput wet lab capabilities. From these screens, we further characterize 421 binders using surface plasmon resonance (SPR), finding three that bind more tightly than the therapeutic antibody trastuzumab. The binders are highly diverse, have low sequence identity to known antibodies, and adopt variable structural conformations. Additionally, these binders score highly on our previously introduced Naturalness metric, indicating they are likely to possess desirable developability profiles and low immunogenicity. We open source the HER2 binders and report the measured binding affinities. These results unlock a path to accelerated drug creation for novel therapeutic targets using generative AI combined with high-throughput experimentation.
The Basic Local Alignment Search Tool (BLAST) [2] is currently the most popular method for searching databases of biological sequences. BLAST compares sequences via similarity defined by a weighted edit distance, which makes it computationally expensive. In contrast to edit distance, a vector similarity approach can be accelerated substantially using modern hardware or hashing techniques [9]. Such an approach requires fixed-length embeddings for biological sequences. There has been recent interest in learning fixed-length protein embeddings using deep learning models, under the hypothesis that the hidden layers of supervised or semi-supervised models could produce potentially useful vector embeddings. We consider transformer (BERT [6]) protein language models that are pretrained on the TrEMBL data set [20] and learn fixed-length embeddings on top of them with contextual lenses [11]. The embeddings are trained to predict the family a protein belongs to for sequences in the Pfam database [7,8]. We show that for nearest-neighbor family classification, pretraining offers a noticeable boost in performance and that the corresponding learned embeddings are competitive with BLAST. Furthermore, we show that the raw transformer embeddings, obtained via static pooling, do not perform well on nearest-neighbor family classification, which suggests that learning embeddings in a supervised manner via contextual lenses may be a compute-efficient alternative to fine-tuning.

Preprint. Under review.
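The nearest-neighbor family classification described above can be sketched as follows. This is a minimal illustration, not the paper's code: the embeddings here are random stand-ins for the fixed-length vectors a contextual lens would produce, and the Pfam-style family labels are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reference set: 6 fixed-length embeddings (rows), each labeled with a
# hypothetical Pfam-style family accession. In the real setting these would
# come from a trained lens over a protein language model.
ref_embeddings = rng.normal(size=(6, 8))
ref_families = ["PF00001", "PF00001", "PF00002",
                "PF00002", "PF00003", "PF00003"]

def nearest_neighbor_family(query, refs, labels):
    """Assign the family of the reference embedding with the highest
    cosine similarity to the query embedding."""
    refs_norm = refs / np.linalg.norm(refs, axis=1, keepdims=True)
    q_norm = query / np.linalg.norm(query)
    sims = refs_norm @ q_norm          # cosine similarity to every reference
    return labels[int(np.argmax(sims))]

# A query embedding close to reference 2 inherits that reference's family.
query = ref_embeddings[2] + 0.01 * rng.normal(size=8)
print(nearest_neighbor_family(query, ref_embeddings, ref_families))
```

Because similarity is a dot product over unit-normalized vectors, the search over the reference set can be batched as a single matrix multiply and accelerated on modern hardware or with approximate hashing, which is the speed advantage over edit-distance comparison that the abstract points to.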