2023
DOI: 10.1088/2632-2153/ace67b
Prediction of molecular field points using SE(3)-transformer model

Abstract: Due to their computational efficiency, 2D fingerprints are typically used in similarity-based high-content screening. The interaction of a ligand with its target protein, however, relies on its physicochemical interactions in 3D space. Thus, ligands with different 2D scaffolds can bind to the same protein if these ligands share similar interaction patterns. Molecular fields can represent those interaction profiles. For efficiency, the extrema of those molecular fields, named field points, are used to quantify the ligand …

Cited by 1 publication (2 citation statements)
References 11 publications (14 reference statements)
“…An encoding module was used to encode node features of 10 atom types into 16 scalar features. Node and edge features are processed with an interaction module, which performs interaction between nodes with 6 SE(3)-Transformer blocks [50] followed by a ConvSE3 [50] operation. SE(3)-Transformer blocks consist of a sequence of SE(3)-Transformer, layer normalization (LayerNorm) [51], and exponential linear unit (ELU) activation [52], where SE(3)-…”
Section: Methods
confidence: 99%
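The encoding step quoted above (10 atom types projected to 16 scalar features per node) can be sketched in plain NumPy. This is a minimal illustration, not the paper's implementation: the embedding matrix `W_embed` and the function name are assumptions, and a real model would learn the weights.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_ATOM_TYPES = 10   # atom-type vocabulary size quoted in the citation statement
NUM_SCALARS = 16      # scalar (degree-0) channels after encoding

# Hypothetical embedding weights; in the actual model these are learned.
W_embed = rng.standard_normal((NUM_ATOM_TYPES, NUM_SCALARS))

def encode_atom_types(atom_types: np.ndarray) -> np.ndarray:
    """One-hot encode integer atom types, then project to 16 scalar features per node."""
    one_hot = np.eye(NUM_ATOM_TYPES)[atom_types]   # (N, 10)
    return one_hot @ W_embed                       # (N, 16)

nodes = encode_atom_types(np.array([0, 3, 3, 7]))
print(nodes.shape)  # (4, 16)
```

Scalar (degree-0) features are rotation-invariant, so an ordinary linear embedding like this does not break SE(3) equivariance of the downstream blocks.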
“…After the SE(3)-Transformer, node features are processed into 64 scalars, 64 vectors, and 64 traceless matrices, and ConvSE3 transforms node features into 64 scalars and 32 vectors. Finally, the structure module converts node features into two scalars and two vectors via six linear blocks, where linear blocks consist of a sequence of LayerNorm, ELU, and LinearSE3 [50]; LinearSE3 performs a 1×1 convolution-like operation on each node. The resulting node features on probe nodes are trained to predict the probability of neighboring water molecules near the probe and the displacement vector from the probe to the predicted position.…”
Section: Methods
confidence: 99%
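The LinearSE3 operation described above — a 1×1 convolution-like channel mixing applied independently to each node and each feature degree — can be sketched as follows. Only the 64-scalar/64-vector input and 32-vector output channel counts come from the quote; the function name, shapes, and weights are illustrative assumptions. Because the channel mixing never touches the orientation (m) index, it commutes with rotations, which the final sanity check verifies for the degree-1 (vector) features.

```python
import numpy as np

rng = np.random.default_rng(1)

def linear_se3(features: dict, weights: dict) -> dict:
    """Per-degree 1x1 'convolution': mix channels independently for each node and
    each m-component. features[l]: (N, C_in, 2l+1); weights[l]: (C_out, C_in)."""
    return {l: np.einsum('oc,ncm->nom', weights[l], f) for l, f in features.items()}

N = 5
feats = {0: rng.standard_normal((N, 64, 1)),   # 64 scalars per node
         1: rng.standard_normal((N, 64, 3))}   # 64 vectors per node
W = {0: rng.standard_normal((64, 64)),         # 64 scalars -> 64 scalars
     1: rng.standard_normal((32, 64))}         # 64 vectors -> 32 vectors

out = linear_se3(feats, W)
print(out[0].shape, out[1].shape)  # (5, 64, 1) (5, 32, 3)

# Equivariance sanity check on the vector features: rotating the inputs and then
# applying the map gives the same result as applying the map and then rotating.
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
rotate_then_map = linear_se3({1: feats[1] @ R.T}, {1: W[1]})[1]
map_then_rotate = out[1] @ R.T
assert np.allclose(rotate_then_map, map_then_rotate)
```

The same per-degree mixing extends to the degree-2 (traceless-matrix) features mentioned in the quote; only the size of the m-axis changes (5 components instead of 3).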