2022
DOI: 10.21203/rs.3.rs-1454132/v1
Preprint

Accurate Protein-Ligand Complex Structure Prediction using Geometric Deep Learning

Abstract: Understanding the structure of the protein-ligand complex is crucial to drug development. However, existing virtual structure determination methods are mainly docking and its deep-learning derivatives, whose performance and efficiency are restricted by their sampling-and-scoring methodology. Here we show that the complex structure can be predicted directly, in an end-to-end manner, by our proposed LigPose, which is based on geometric deep learning. By representing the ligand and the protein as a complet…
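The abstract is truncated at this point, but it indicates that LigPose represents the ligand and the protein jointly as a complete graph for a geometric deep-learning network. As a minimal, hypothetical sketch of that kind of representation (the featurisation, pocket selection, and edge attributes below are assumptions, not LigPose's actual implementation), one can build a complete-graph edge index over ligand and pocket atoms:

```python
# Hypothetical sketch: build a complete graph over ligand + pocket atoms
# for a geometric deep-learning model. The feature scheme and inputs are
# illustrative assumptions, not LigPose's actual featurisation.
import numpy as np

ELEMENTS = ["C", "N", "O", "S", "P", "other"]  # assumed element vocabulary

def one_hot_element(symbol: str) -> np.ndarray:
    vec = np.zeros(len(ELEMENTS), dtype=np.float32)
    idx = ELEMENTS.index(symbol) if symbol in ELEMENTS else len(ELEMENTS) - 1
    vec[idx] = 1.0
    return vec

def complete_graph(ligand_atoms, pocket_atoms, ligand_xyz, pocket_xyz):
    """Return node features, coordinates, and a complete-graph edge index."""
    symbols = list(ligand_atoms) + list(pocket_atoms)
    coords = np.concatenate([ligand_xyz, pocket_xyz], axis=0)        # (N, 3)
    x = np.stack([one_hot_element(s) for s in symbols])              # (N, F)
    # flag whether each node belongs to the ligand (1) or the protein (0)
    is_ligand = np.concatenate([np.ones(len(ligand_atoms)),
                                np.zeros(len(pocket_atoms))])[:, None]
    x = np.concatenate([x, is_ligand.astype(np.float32)], axis=1)
    n = len(symbols)
    src, dst = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    mask = src != dst                                                # no self-loops
    edge_index = np.stack([src[mask], dst[mask]])                    # (2, N*(N-1))
    edge_dist = np.linalg.norm(coords[edge_index[0]] - coords[edge_index[1]],
                               axis=-1, keepdims=True)               # edge feature
    return x, coords, edge_index, edge_dist
```

In practice, a fully connected graph over a large pocket would typically be pruned with a distance cutoff; the sketch keeps every pairwise edge purely for illustration.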

Cited by 16 publications (30 citation statements)
References 50 publications
“…Unfortunately, we also find that, while masked language model pretraining is effective in imparting models with structural knowledge, the relationship between model size, pretraining loss, and downstream performance is less stable for out-of-domain protein engineering tasks, indicating that masked language modeling may not be effective for at least some types of tasks and emphasizing a need for more effective pretraining tasks. While we evaluate the effects of masked language model pretraining, numerous other pretraining tasks have been proposed including autoregressive language model pretraining, pairwise masked language modeling (He et al, 2021), and combining structural information (Mansoor et al, 2021; Zhang et al, 2022; McPartlon et al, 2022; Hsu et al, 2022; Chen et al, 2022; Wang et al, 2022) or functional annotations (Brandes et al, 2021). Together, our work demonstrates the importance of disentangling pretraining task and architecture.…”
Section: Conclusion and Discussion
confidence: 83%
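The statement above contrasts masked language model pretraining with alternative pretraining tasks. For readers unfamiliar with the objective, the following is a minimal, self-contained sketch of masked-token pretraining on protein sequences; the vocabulary, masking rate, and toy transformer are illustrative assumptions rather than any of the cited models.

```python
# Minimal masked-language-model pretraining step for protein sequences.
# Vocabulary, masking rate, and model size are illustrative assumptions.
import torch
import torch.nn as nn

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
PAD, MASK = 20, 21                      # extra token ids
VOCAB = len(AMINO_ACIDS) + 2

def encode(seq: str) -> torch.Tensor:
    return torch.tensor([AMINO_ACIDS.index(a) for a in seq])

class TinyProteinMLM(nn.Module):
    def __init__(self, d_model: int = 64):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, VOCAB)

    def forward(self, tokens):
        return self.head(self.encoder(self.embed(tokens)))

def mlm_loss(model, tokens, mask_rate: float = 0.15):
    """Mask a random subset of positions and score their reconstruction."""
    masked = tokens.clone()
    is_masked = torch.rand(tokens.shape) < mask_rate
    masked[is_masked] = MASK
    logits = model(masked)
    # loss is computed only on the masked positions
    return nn.functional.cross_entropy(logits[is_masked], tokens[is_masked])

model = TinyProteinMLM()
batch = encode("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ").unsqueeze(0)
loss = mlm_loss(model, batch)
loss.backward()
```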
“…We hope that this work is the first step in investigating the independent and interaction effects of pretraining and architecture for protein sequence modeling. While we evaluate the effects of masked language model pretraining, transformers have also been used for autoregressive language model pretraining (Madani et al, 2020) and pairwise masked language modeling (He et al, 2021), and combining structural information (Mansoor et al, 2021; Zhang et al, 2022; McPartlon et al, 2022; Hsu et al, 2022; Chen et al, 2022; Wang et al, 2022) or functional annotations (Brandes et al, 2021) offers further directions for protein pretraining tasks.…”
Section: Discussion
confidence: 99%
“…However, there are still some DL-based docking models developed for fair comparison. In the last experiment, we discuss the results of the pocket-aware docking methods, from traditional approaches, such as AutoDock GPU, Glide SP, and Uni-Dock, to the deep-learning models, such as LigPose 27 and TankBind 14. Besides, three baselines are also present for demonstrating that the DL-based models truly model the interaction between the ligands and pockets.…”
Section: Methods Comparison in Binding Pose Prediction Given Pockets
confidence: 99%
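The benchmark described in this statement compares binding-pose accuracy given a known pocket. A standard way such comparisons are scored is the fraction of ligands whose predicted pose lies within 2 Å RMSD of the experimental pose; the sketch below computes that metric, assuming the predicted and reference poses share the same atom ordering (the threshold and function names are illustrative).

```python
# Hypothetical evaluation sketch: ligand heavy-atom RMSD and the 2 Å
# success rate commonly reported in pose-prediction benchmarks.
# Assumes predicted and reference coordinates share the same atom order.
import numpy as np

def ligand_rmsd(pred_xyz: np.ndarray, ref_xyz: np.ndarray) -> float:
    """Root-mean-square deviation between two (N, 3) coordinate arrays."""
    diff = pred_xyz - ref_xyz
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

def success_rate(pred_poses, ref_poses, threshold: float = 2.0) -> float:
    """Fraction of complexes with ligand RMSD below the threshold (angstroms)."""
    rmsds = [ligand_rmsd(p, r) for p, r in zip(pred_poses, ref_poses)]
    return float(np.mean([d < threshold for d in rmsds]))
```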
“…While we condition on structure and reconstruct sequence, there are other methods for incorporating protein structural information, such as predicting structure similarity between protein sequences (Bepler & Berger, 2019), corrupting and reconstructing the structure in addition to the sequence (Mansoor et al, 2021; Chen et al, 2022), encoding surface features (Townshend et al, 2019), contrastive learning (Zhang et al, 2022; Cao et al, 2021), or a graph encoder without sequence decoding (Somnath et al, 2021; Fuchs et al, 2020). LM-GVP uses the same architecture as MIF-ST consisting of a pretrained language model feeding into a GNN that encodes backbone structure (Wang et al, 2022).…”
Section: Related Work
confidence: 99%
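The passage above surveys ways of injecting structural information into protein sequence models, including conditioning on structure while reconstructing the sequence. As a loose illustration of that idea only (not MIF-ST, LM-GVP, or any cited architecture), the sketch below featurises each residue by Cα distances to its sequence neighbours and trains a small per-residue classifier to recover amino-acid identities; the features, model, and random data are assumptions.

```python
# Hypothetical sketch of structure-conditioned sequence reconstruction:
# predict each residue's amino-acid identity from simple backbone-geometry
# features. Features and model are illustrative, not MIF-ST / LM-GVP.
import torch
import torch.nn as nn

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def backbone_features(ca_xyz: torch.Tensor, k: int = 3) -> torch.Tensor:
    """Per-residue distances to the k preceding and k following Cα atoms."""
    n = ca_xyz.shape[0]
    feats = []
    for offset in range(-k, k + 1):
        if offset == 0:
            continue
        idx = torch.clamp(torch.arange(n) + offset, 0, n - 1)
        feats.append((ca_xyz - ca_xyz[idx]).norm(dim=-1))
    return torch.stack(feats, dim=-1)            # (n_residues, 2k)

class SequenceFromStructure(nn.Module):
    def __init__(self, in_dim: int = 6, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, len(AMINO_ACIDS)))

    def forward(self, feats):                    # (n_residues, in_dim)
        return self.net(feats)                   # per-residue logits

# one training step on a single (structure, sequence) pair with placeholder data
ca = torch.randn(120, 3) * 10                    # placeholder Cα coordinates
true_seq = torch.randint(0, len(AMINO_ACIDS), (120,))
model = SequenceFromStructure()
loss = nn.functional.cross_entropy(model(backbone_features(ca)), true_seq)
loss.backward()
```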