“…We conduct a comparative analysis of our approach against two categories of baselines across the four few-shot MoleculeNet benchmark tasks:
- Methods without a pretrained molecular graph encoder, such as HSL-RG−, ADKF-IFT, PAR, IterRefLSTM, EGNN, TPN, MAML, ProtoNet, and Siamese.
- Methods with a pretrained molecular graph encoder, such as Pre-PAR+MTA, HSL-RG, Pre-GNN, Meta-MGNN, Pre-PAR, and Pre-ADKF-IFT. It is worth noting that all methods in this category employ a pretrained GIN, whose pretrained weights are provided by Hu et al.
…”