2023
DOI: 10.1109/tcbb.2023.3253862

Molecular Joint Representation Learning via Multi-Modal Information of SMILES and Graphs

Cited by 9 publications (5 citation statements) · References 41 publications
“…Overall, these results suggest that the MolPROP fusion strategy is predominantly beneficial for regression tasks. Future work may explore alternative fusion strategies to improve the stability of multimodal fusion on classification tasks such as graph pretraining [11, 14], attention mechanisms [33] or convolutional feature extraction of the language representation [34].…”

Section: Results
confidence: 99%
“…This is a simple and effective strategy for exploring the fusion of language and graph representations for small molecules; however, future work may explore strategies that include hydrogens and/or a dynamic mapping between the token and graph representations. Moreover, alternative strategies for graph and language fusion may utilize graph pretraining [11, 14], an attention mechanism [33], or convolutional feature extraction of the language representation before concatenating to the graph nodes [34].…”

Section: Language and Graph Model Fusion
confidence: 99%
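The fusion strategy described in this excerpt — attaching a molecule-level language embedding to every graph node before message passing — can be sketched as follows. This is a minimal illustration, not the cited papers' implementation; the function name, dimensions, and use of plain NumPy arrays are all assumptions made for the example.

```python
import numpy as np

def fuse_language_and_graph(node_feats: np.ndarray,
                            lang_emb: np.ndarray) -> np.ndarray:
    """Concatenate a molecule-level language embedding (e.g. a pooled
    SMILES-model representation) onto every graph node's feature vector.

    node_feats: (num_nodes, d_graph) per-atom features
    lang_emb:   (d_lang,) pooled language embedding for the molecule
    returns:    (num_nodes, d_graph + d_lang) fused node features
    """
    num_nodes = node_feats.shape[0]
    # Broadcast the single language embedding to each node, then concatenate.
    tiled = np.tile(lang_emb, (num_nodes, 1))
    return np.concatenate([node_feats, tiled], axis=1)

# Toy example: 3 atoms with 4-dim graph features, an 8-dim language embedding.
nodes = np.random.rand(3, 4)
lang = np.random.rand(8)
fused = fuse_language_and_graph(nodes, lang)
print(fused.shape)  # (3, 12)
```

A downstream graph network would then operate on the fused features; richer alternatives mentioned in the excerpt (attention-based fusion, convolutional extraction of the language representation) replace the simple tile-and-concatenate step.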
“…This approach enhances the model's ability to comprehend spatial relationships within molecular structures. However, its geometric focus may not be suitable for all types of molecular attributes, especially those unrelated to spatial configurations. Lastly, MMSG (Wu et al., 2023) stands out as a multimodal learning framework for molecular graph representation. By integrating information from different modalities, such as SMILES and molecular graphs, it aims to produce more robust and versatile representations.…”

Section: Semi-supervised Learning On Graph
confidence: 99%