2022
DOI: 10.21203/rs.3.rs-1749921/v1
Preprint

PGMG: A Pharmacophore-Guided Deep Learning Approach for Bioactive Molecular Generation

Abstract: The rational design of novel molecules with desired bioactivity is a critical but challenging task in drug discovery, especially when treating a novel target family or understudied targets. Here, we propose PGMG, a pharmacophore-guided deep learning approach for bioactive molecule generation. Through the guidance of pharmacophores, PGMG provides a flexible strategy to generate bioactive molecules matching given pharmacophore models. PGMG uses a graph neural network to encode pharmacophore features and spatial…
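The abstract describes encoding a pharmacophore model with a graph neural network. As a rough illustration only, the PyTorch sketch below builds a toy version of such an encoder: pharmacophore points (donor, acceptor, aromatic, ...) as nodes, their pairwise distances as edge features, a few rounds of message passing, and mean pooling into a single conditioning vector. This is not PGMG's published architecture; the class name, layer sizes, feature-type count, and number of rounds are all assumptions.

```python
# Hypothetical sketch of a pharmacophore graph encoder (not PGMG's actual code).
import torch
import torch.nn as nn

class PharmacophoreEncoder(nn.Module):
    def __init__(self, n_feature_types=7, dim=128):
        super().__init__()
        self.node_emb = nn.Embedding(n_feature_types, dim)              # feature type -> vector
        self.edge_mlp = nn.Sequential(nn.Linear(1, dim), nn.ReLU())     # distance -> vector
        self.msg_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        self.update = nn.GRUCell(dim, dim)

    def forward(self, feat_types, dist):
        # feat_types: (N,) integer pharmacophore feature types
        # dist:       (N, N) pairwise distances between pharmacophore points
        h = self.node_emb(feat_types)                                   # (N, D)
        e = self.edge_mlp(dist.unsqueeze(-1))                           # (N, N, D)
        for _ in range(3):                                              # a few message-passing rounds
            # message to node i sums MLP([neighbour state h[j], edge embedding e[i, j]]) over j
            msg = self.msg_mlp(
                torch.cat([h.unsqueeze(0).expand(h.size(0), -1, -1), e], dim=-1)
            ).sum(dim=1)                                                # (N, D)
            h = self.update(msg, h)
        return h.mean(dim=0)                                            # graph-level embedding

# Toy usage: 3 pharmacophore points with random pairwise distances.
enc = PharmacophoreEncoder()
types = torch.tensor([0, 2, 5])
dist = torch.rand(3, 3) * 10
z = enc(types, dist)   # conditioning vector for a downstream molecule decoder
print(z.shape)         # torch.Size([128])
```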

Cited by 2 publications (2 citation statements) | References: 43 publications

Citation statements:
“…Vision-Language Model Recently, many works have taken advantage of CLIP's flexible text manipulation and visual alignment capabilities to enhance the open detection or generalization performance of specific tasks, such as Det-CLIP (Yao et al 2022), DenseCLIP (Rao et al 2022), CLIP-Gap (Vidit, Engilberge, and Salzmann 2023), Ordinalclip (Li et al 2022), CLIP-Cluster (Shen et al 2023) and so on. Furthermore, to improve the performance of vision-language models on downstream tasks, a more effective approach is to learn continuous text prompts by text prompt tuning (Zhou et al 2022b,a).…”
Section: Related Work
confidence: 99%
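The excerpt above mentions learning continuous text prompts by prompt tuning. For context only, the following is a minimal, self-contained schematic in the spirit of that idea; the frozen "text encoder" is a toy stand-in for CLIP's text tower, and all dimensions, names, and the training step are illustrative assumptions, not code from the cited papers.

```python
# Schematic of continuous text-prompt tuning: only the prompt vectors are trained.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptLearner(nn.Module):
    def __init__(self, n_ctx=4, dim=512, n_classes=10):
        super().__init__()
        # Learnable context vectors shared across classes (the "continuous prompt").
        self.ctx = nn.Parameter(torch.randn(n_ctx, dim) * 0.02)
        # Frozen class-name embeddings (in CLIP these would come from the token embedding).
        self.register_buffer("cls_emb", torch.randn(n_classes, 1, dim))

    def forward(self):
        n_classes = self.cls_emb.size(0)
        ctx = self.ctx.unsqueeze(0).expand(n_classes, -1, -1)           # (C, n_ctx, D)
        return torch.cat([ctx, self.cls_emb], dim=1)                    # (C, n_ctx+1, D)

# Frozen stand-in for CLIP's text tower: a linear projection of mean-pooled prompt tokens.
text_encoder = nn.Sequential(nn.Linear(512, 512))
for p in text_encoder.parameters():
    p.requires_grad_(False)

prompt_learner = PromptLearner()
optimizer = torch.optim.Adam(prompt_learner.parameters(), lr=2e-3)

# One toy training step; image features would come from a frozen image tower.
image_feats = F.normalize(torch.randn(32, 512), dim=-1)
labels = torch.randint(0, 10, (32,))

prompts = prompt_learner()                                              # (C, T, D)
text_feats = F.normalize(text_encoder(prompts.mean(dim=1)), dim=-1)     # (C, D)
logits = 100.0 * image_feats @ text_feats.t()                           # cosine-similarity logits
loss = F.cross_entropy(logits, labels)
loss.backward()
optimizer.step()
```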
“…While much attention has been devoted to classification tasks using VLMs and data-efficient learning methods (Radford et al, 2021; Zhou et al, 2022a; Gabeff et al, 2023), comparatively little has been done to develop methods compatible with regression tasks. Li et al introduced OrdinalCLIP, which uses an ordinal output space in order to utilise the classification-based few-shot methods (Li et al, 2022). Hentschel et al (Hentschel et al, 2022) trained a linear probe on CLIP image features on the regression task of image photographic aesthetics understanding in a few-shot setting, with competitive results compared to a fully-trained baseline.…”
Section: Vision-Language Models
confidence: 99%
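The excerpt above describes fitting a linear probe on frozen CLIP image features for a regression task. A rough sketch of that recipe is given below, assuming OpenAI's clip package (https://github.com/openai/CLIP), Pillow, and scikit-learn; the file paths and scores are hypothetical placeholders, not data from the cited work.

```python
# Linear probe on frozen CLIP image features for a regression target.
import clip
import torch
from PIL import Image
from sklearn.linear_model import Ridge

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def clip_image_features(paths):
    """Encode images with the frozen CLIP image tower and L2-normalise."""
    feats = []
    with torch.no_grad():
        for p in paths:
            img = preprocess(Image.open(p)).unsqueeze(0).to(device)
            f = model.encode_image(img)
            f = f / f.norm(dim=-1, keepdim=True)
            feats.append(f.squeeze(0).float().cpu())
    return torch.stack(feats).numpy()

# Hypothetical few-shot regression data: image files with continuous labels
# (e.g. aesthetic scores); replace with a real dataset.
train_paths = ["img_0001.jpg", "img_0002.jpg", "img_0003.jpg"]
train_scores = [3.5, 7.2, 5.1]
test_paths = ["img_0004.jpg"]

probe = Ridge(alpha=1.0)                       # the linear probe
probe.fit(clip_image_features(train_paths), train_scores)
print(probe.predict(clip_image_features(test_paths)))
```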