2023
DOI: 10.1016/j.neunet.2023.03.034
Few-shot Molecular Property Prediction via Hierarchically Structured Learning on Relation Graphs

Cited by 23 publications (8 citation statements)
References 38 publications
“…We conduct a comparative analysis of our approach against two categories of baselines across the four few-shot MoleculeNet benchmark tasks: (1) methods without a pretrained molecular graph encoder, such as HSL-RG−, ADKF-IFT, PAR, IterRefLSTM, EGNN, TPN, MAML, ProtoNet, and Siamese; (2) methods with a pretrained molecular graph encoder, such as Pre-PAR+MTA, HSL-RG, Pre-GNN, Meta-MGNN, Pre-PAR, and Pre-ADKF-IFT. It is worth noting that all methods within this category employ a pretrained GIN, and the pretrained GIN weights are provided by Hu et al. …”
Section: Results
confidence: 99%
“…(1) Methods without a pretrained molecular graph encoder, such as HSL-RG− [22], ADKF-IFT [20], PAR [19], IterRefLSTM [18], EGNN [38], TPN [39], MAML [15], ProtoNet [14], and Siamese [40]. (2) Methods with a pretrained molecular graph encoder, such as Pre-PAR+MTA [23], HSL-RG [22], Pre-GNN [41], Meta-MGNN [17], Pre-PAR [19], and Pre-ADKF-IFT [20]. It is worth noting that all methods within this category employ a pretrained GIN, and the pretrained GIN weights are provided by Hu et al. [41] Evaluation Procedure and Performance.…”
Section: Results and Discussion
confidence: 99%
“…Unlike sequence-based neural networks or array-based neural networks, these models use not only features but also connection information between nodes to enhance accuracy by extracting more information [20, 21].…”
Section: Introduction
confidence: 99%
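The last excerpt describes the core idea behind graph neural networks such as the GIN encoders used by these baselines: each node's representation is updated from its own features plus those of its neighbors. A minimal sketch of one such aggregation step, in plain Python with no GNN library (the function name, toy graph, and features are illustrative assumptions, not from any cited implementation):

```python
# Illustrative sketch: one round of GIN-style sum-aggregation (epsilon = 0).
# Shows how connection information, not just per-node features, enters the update.

def message_passing_step(features, adjacency):
    """Update each node by summing its own features with its neighbors' features."""
    updated = []
    for node, own in enumerate(features):
        # Sum the feature vectors of all neighbors of this node.
        neighbor_sum = [0.0] * len(own)
        for nbr in adjacency[node]:
            for i, v in enumerate(features[nbr]):
                neighbor_sum[i] += v
        # Combine self information with neighborhood information.
        updated.append([s + n for s, n in zip(own, neighbor_sum)])
    return updated

# Toy molecular-like graph: three nodes in a path 0-1-2, 2-dimensional features.
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
adj = {0: [1], 1: [0, 2], 2: [1]}
print(message_passing_step(feats, adj))  # -> [[1.0, 1.0], [2.0, 2.0], [1.0, 2.0]]
```

After one step, node 1's vector reflects both of its neighbors, which is exactly the extra signal sequence- or array-based models cannot exploit.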