2023
DOI: 10.1073/pnas.2220778120
Contrastive learning in protein language space predicts interactions between drugs and protein targets

Abstract: Sequence-based prediction of drug–target interactions has the potential to accelerate drug discovery by complementing experimental screens. Such computational prediction needs to be generalizable and scalable while remaining sensitive to subtle variations in the inputs. However, current computational techniques fail to simultaneously meet these goals, often sacrificing performance of one to achieve the others. We develop a deep learning model, ConPLex, successfully leveraging the advances in pretrained protein…
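The approach the abstract describes — co-embedding drugs and protein targets in a shared space learned contrastively — can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the paper's exact architecture: the projections are random stand-ins for learned layers, the dimensions (2048-bit fingerprint, 1024-dim protein embedding, 128-dim latent) are invented, and the triplet margin loss is one common contrastive objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fixed projections standing in for learned linear layers:
# drugs enter as 2048-bit fingerprint vectors, proteins as 1024-dim
# pretrained language-model embeddings (all dimensions assumed).
W_drug = rng.standard_normal((2048, 128)) * 0.01
W_prot = rng.standard_normal((1024, 128)) * 0.01

def co_embed(drug_fp, prot_emb):
    """Map both modalities into one shared 128-dim latent space."""
    return drug_fp @ W_drug, prot_emb @ W_prot

def triplet_margin_loss(drug, true_target, decoy_target, margin=0.25):
    """One common contrastive objective: pull a drug toward its true
    target and push it away from a decoy by at least `margin`."""
    d_pos = np.linalg.norm(drug - true_target)
    d_neg = np.linalg.norm(drug - decoy_target)
    return max(0.0, d_pos - d_neg + margin)

# Toy usage on random inputs:
drug_z, true_z = co_embed(rng.random(2048), rng.standard_normal(1024))
_, decoy_z = co_embed(rng.random(2048), rng.standard_normal(1024))
loss = triplet_margin_loss(drug_z, true_z, decoy_z)
```

At inference, interaction between an unseen drug and target can then be scored by distance in the shared space, which is what makes the sequence-based approach scalable.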

Cited by 50 publications (45 citation statements)
References 74 publications
“…In the absence of experimentally determined protein 3D structures, computational modeling plays a cost-effective role in structure-based drug discovery. In this study, for 3D model generation, a fully automated protein structure homology modeling server SWISS-MODEL was used. One of the advantages of SWISS-MODEL over other modeling software is that it follows the proper natural assembly of the template protein, expels all anticorrosive amino residues, and further adjusts the target-template arrangement in DeepView program. , One important feature of SWISS-MODEL is ProMod3, a central script-based platform for increasing the accuracy of the produced model as well as for quality estimation based on QMEANDisCo Global values.…”
Section: Results (mentioning)
confidence: 99%
“…Similarly, another approach called ChemCPA 54 also encodes the chemical structure of the drug and non-linearly scales its dose and combines it with the drug representation. On this front, our drug module could also incorporate a non-linear encoder to represent the chemical structure of drugs and combine it with targets' representations, by building upon ideas presented in OmegaFold 55 and AlphaFold2 56 , in order to infer potential drug-target interactions, similar to what has been recently proposed in the ConPLex model 57 , after training models to ultimately predict the transcriptional profile of a cell.…”
Section: Discussion (mentioning)
confidence: 99%
“…We use a tanh decay with a restart schedule to gradually reduce this margin. 52 Specifically, for every E max = 10 contrastive training epochs, the margin is reset to its initial value. This allows us to dynamically adjust the margin throughout training for a better model performance (eq 16).…”
Section: The Complex-based (mentioning)
confidence: 99%
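The margin schedule quoted above — a tanh decay that restarts every E_max = 10 contrastive epochs — might look like the following sketch. The referenced eq 16 is not reproduced here, so the decay rate and exact functional form are assumptions; only the reset-every-E_max behavior comes from the quoted text.

```python
import math

def margin_schedule(epoch, m0=1.0, e_max=10, steepness=2.0):
    """Hypothetical tanh-decay-with-restarts margin.

    The margin starts at m0, decays smoothly via tanh within each
    cycle, and resets to m0 every e_max epochs (the restart).
    """
    t = epoch % e_max  # position inside the current restart cycle
    return m0 * (1.0 - math.tanh(steepness * t / e_max))

# The margin resets at epochs 0, 10, 20, ... and decays in between,
# so training alternates between loose and tight contrastive margins.
schedule = [round(margin_schedule(e), 3) for e in range(12)]
```

Restarting the margin periodically keeps the contrastive objective from collapsing to a single fixed tightness, which is the "dynamic adjustment" the quoted passage credits for better model performance.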
“…For a biased model, it may correctly predict the binding affinity based on the incorrect model. Notably, contrastive learning has showcased competitive results in tasks like small molecule property prediction, 49−51 sequence-based prediction of drug−target interactions, 52 similarity-based virtual screening, 53 reaction classification, 54 and enzyme function prediction. 55 Contrastive learning can be applied in a multimodal setting, which enhances the learning of joint representations from varied modalities and thus bolsters the model's performance.…”
Section: Introduction (mentioning)
confidence: 99%