2021
DOI: 10.1101/2021.08.08.455394
Preprint
BioPhi: A platform for antibody design, humanization and humanness evaluation based on natural antibody repertoires and deep learning

Abstract: Despite recent advances in transgenic animal models and display technologies, humanization of mouse sequences remains the primary route for therapeutic antibody development. Traditionally, humanization is manual, laborious, and requires expert knowledge. Although automation efforts are advancing, existing methods are either demonstrated on a small scale or are entirely proprietary. To predict the immunogenicity risk, the human-likeness of sequences can be evaluated using existing humanness scores, but these la…


Cited by 11 publications (33 citation statements)
References 43 publications (69 reference statements)
“… 251 Most recently, Prihoda et al 57, 252 devised BioPhi, an in silico platform that offers three complementary tools: 1) OASis, short for Observed Antibody Space (OAS) identity search, an interpretable humanness scoring system based on an exact 9-mer peptide search within the OAS database that accurately distinguishes human from non-human sequences and correlates with clinical immunogenicity; 2) Sapiens, an ML-based humanization method trained on the human portion of OAS using language modeling, which recognizes non-human residues in framework (FR) regions and substitutes human native equivalents to improve sequence humanness (the OASis score); and 3) an interactive interface for incorporating amino acid (AA) substitutions into the sequence and visualizing them. 57, 252 In their study, Prihoda et al 57, 252 compared the humanization performance of Sapiens on 152 precursor sequences of humanized mAbs against Hu-mAb (computational) and mutation-based humanization (experimental). They reported that Sapiens achieved a higher humanness improvement than Hu-mAb and results comparable to experimental methods, suggesting AA substitutions that were experimentally validated to be advantageous for sequence humanization while maintaining mAb specificity and binding affinity.…”
Section: Capacity To Modularly Learn Antibody Design Parameters
confidence: 99%
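The 9-mer peptide search underlying OASis can be pictured as scoring each overlapping 9-mer of an antibody sequence against a set of peptides observed in human repertoires. The sketch below is a deliberate simplification under stated assumptions: the function name `oasis_like_score`, the flat `human_peptides` set, and the toy peptides are all invented for illustration — real OASis reports the fraction of 9-mers matched at a chosen prevalence threshold across OAS human subjects, not membership in a single set.

```python
def oasis_like_score(sequence: str, human_peptides: set, k: int = 9) -> float:
    """Fraction of overlapping k-mers found in a human peptide set.

    Hypothetical simplification of the OASis idea: OASis itself
    searches 9-mers against the OAS database and accounts for how
    many human subjects each peptide was observed in.
    """
    kmers = [sequence[i:i + k] for i in range(len(sequence) - k + 1)]
    if not kmers:
        return 0.0
    hits = sum(1 for p in kmers if p in human_peptides)
    return hits / len(kmers)

# Toy usage with an invented reference set of "human" 9-mers
reference = {"EVQLVESGG", "VQLVESGGG", "QLVESGGGL"}
score = oasis_like_score("EVQLVESGGGL", reference)
print(round(score, 2))  # → 1.0 (all three 9-mers of the toy sequence match)
```

Because every 9-mer is checked independently, a single non-human residue can lower up to nine overlapping peptide matches at once, which is what makes the score interpretable at per-position resolution.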
“…AntiBERTa is pre-trained via MLM, which has been used elsewhere (26,29,35). Briefly, 15% of amino acids are chosen for perturbation.…”
Section: AntiBERTa Pre-training
confidence: 99%
“…AntiBERTa is pre-trained using a self-supervised MLM task, like other transformer-based protein LMs (26,29,35). Briefly, 15% of the amino acids within the input BCR sequence are randomly perturbed, and the model is trained to determine the correct amino acid at these masked positions.…”
Section: AntiBERTa Learns a Meaningful Representation of BCR Sequences
confidence: 99%
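The masked-language-modeling step these statements describe can be sketched with the standard BERT recipe: pick roughly 15% of positions, replace most with a mask token, some with a random residue, and leave a few unchanged, recording the original residue as the training label. The function name `mask_for_mlm`, the 80/10/10 split, and the toy sequence are assumptions for illustration — AntiBERTa's exact perturbation recipe may differ.

```python
import random

def mask_for_mlm(tokens, mask_token="[MASK]", rate=0.15, seed=0):
    """BERT-style masking sketch (not AntiBERTa's exact recipe).

    Picks ~`rate` of positions; of those, 80% become `mask_token`,
    10% a random amino acid, 10% stay unchanged. Returns the
    perturbed sequence and per-position labels (original residue at
    picked positions, None elsewhere) for the model to recover.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    amino_acids = "ACDEFGHIKLMNPQRSTVWY"
    out, labels = list(tokens), [None] * len(tokens)
    n_pick = max(1, round(rate * len(tokens)))
    for i in rng.sample(range(len(tokens)), n_pick):
        labels[i] = tokens[i]
        r = rng.random()
        if r < 0.8:
            out[i] = mask_token
        elif r < 0.9:
            out[i] = rng.choice(amino_acids)
        # else: keep the original residue (the 10% "unchanged" case)
    return out, labels
```

The loss is then computed only at positions where a label was recorded, so the model learns to infer plausible residues from the surrounding sequence context.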