Amino acid mutations that lower a protein's thermodynamic stability are implicated in numerous diseases, and engineered proteins with enhanced stability are important in research and medicine. Computational methods for predicting how mutations perturb protein stability are therefore of great interest. Despite recent advancements in protein design using deep learning, in silico prediction of stability changes has remained challenging, in part due to a lack of large, high-quality training datasets for model development. Here we introduce ThermoMPNN, a deep neural network trained to predict stability changes for protein point mutations given an initial structure. In doing so, we demonstrate the utility of a newly released mega-scale stability dataset for training a robust stability model. We also employ transfer learning to leverage a second, larger dataset by using learned features extracted from a deep neural network trained to predict a protein's amino acid sequence given its three-dimensional structure. We show that our method achieves competitive performance on established benchmark datasets using a lightweight model architecture that allows for rapid, scalable predictions. Finally, we make ThermoMPNN readily available as a tool for stability prediction and design.
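The transfer-learning setup described above — extracting features from a frozen network pretrained on a larger task (here, sequence prediction from structure) and fitting only a lightweight head on the stability data — can be sketched generically. This is a toy illustration with hypothetical shapes and synthetic data, not ThermoMPNN's actual architecture or its ProteinMPNN-derived features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen, pretrained structure encoder: here just a fixed
# random projection with a nonlinearity. In practice these would be learned
# features extracted from the inverse-folding network.
W_frozen = rng.normal(size=(64, 128))

def extract_features(residue_inputs):
    """Map raw per-mutation inputs to learned features; weights stay frozen."""
    return np.tanh(residue_inputs @ W_frozen)

# Synthetic stand-in for mega-scale training data: per-mutation inputs and
# measured stability changes (ddG labels).
X_raw = rng.normal(size=(500, 64))
true_head = rng.normal(size=128)
y_ddg = extract_features(X_raw) @ true_head + 0.01 * rng.normal(size=500)

# Transfer learning: fit only a small linear head on the frozen features.
feats = extract_features(X_raw)
head, *_ = np.linalg.lstsq(feats, y_ddg, rcond=None)
pred = feats @ head
```

Because the encoder is frozen, only the head's 128 parameters are fit, which is what keeps such a model lightweight and fast to train even on a large dataset.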
Recent attempts at utilizing deep learning for structure-based virtual screening have focused on training models to predict binding affinity from protein-ligand complexes with known crystal structures. The PDBbind dataset is the current standard for training such models, but its small size (less than 20K binding affinity measurements) leads to models failing to generalize to new targets, and model performance is typically on par with those trained with only ligand information. The CrossDocked dataset expands binding pose data for protein-ligand complexes but does not introduce new affinity data. ChEMBL, on the other hand, contains a wealth of binding affinity information but contains no information about the binding poses. We introduce BigBind, a dataset that maps ChEMBL activity data to protein targets from CrossDocked. This dataset comprises 851K ligand binding affinities and 3D pocket structures. After augmenting this dataset with an equal number of putative inactives for each target, we train BANANA (BAsic NeurAl Network for binding Affinity) to classify actives from inactives. The resulting model achieved an AUC of 0.72 on BigBind’s test set, while a ligand-only model achieved an AUC of 0.64. Our model achieves competitive performance on the LIT-PCBA benchmark (median EF1% 2.06) while running 16,000 times faster than molecular docking with GNINA. Notably, we achieve a state-of-the-art EF1% of 4.95 when we use BANANA to filter out 90% of the compounds prior to docking with GNINA. We hope that BANANA and future models trained on this dataset will prove useful for prospective virtual screening tasks.
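As context for the EF1% figures reported above, an enrichment factor at 1% measures how much more concentrated the actives are in the top-scored 1% of a screen than in the library overall. A minimal, generic sketch (not LIT-PCBA's or GNINA's exact tooling):

```python
def enrichment_factor(scores, labels, top_frac=0.01):
    """EF at a given fraction: hit rate among the top-scored slice divided
    by the overall hit rate. labels are 1 for actives, 0 for inactives."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    n_top = max(1, int(len(scores) * top_frac))
    top_hits = sum(labels[i] for i in order[:n_top])
    overall_rate = sum(labels) / len(labels)
    return (top_hits / n_top) / overall_rate
```

For a library with a 1% active rate, a perfect ranking yields EF1% = 100 and a random ranking yields EF1% ≈ 1, which gives a sense of scale for the reported values of 2.06 and 4.95.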