2021
DOI: 10.48550/arxiv.2109.07258
Preprint

Federated Learning of Molecular Properties with Graph Neural Networks in a Heterogeneous Setting

Abstract: Chemistry research has both high material and computational costs to conduct experiments. Institutions thus consider chemical data to be valuable and there have been few efforts to construct large public datasets for machine learning. Another challenge is that different institutions are interested in different classes of molecules, creating heterogeneous data that cannot be easily joined by conventional distributed training. In this work, we introduce federated heterogeneous molecular learning to address these c…

Cited by 2 publications (5 citation statements)
References 32 publications
“…In addition to regularization, another strategy in loss function designing is instance reweighting. For instance, FILT+ [144] pulls the local model closer to the global by minimizing the loss discrepancy between a local model and the global model. Specifically, FILT+ reweights instances on client c k by putting more weights on samples with less confidence in the loss function…”
Section: Single Global Model-based Methods
confidence: 99%
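The instance-reweighting idea described in the statement above can be illustrated with a short sketch: compute a per-sample loss, measure the model's confidence in the true class, and upweight the low-confidence samples. This is an illustrative NumPy sketch of confidence-based reweighting in general, not the exact FLIT/FILT+ formulation; the function name, the `gamma` exponent, and the mean-one normalization are assumptions.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the class axis.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def confidence_weighted_loss(logits, targets, gamma=1.0):
    """Confidence-based instance reweighting (illustrative sketch).

    Samples the model is less confident about receive larger weights,
    in the spirit of the reweighting strategy described above.
    """
    probs = softmax(logits)
    n = len(targets)
    p_true = probs[np.arange(n), targets]            # confidence in the true class
    per_sample = -np.log(p_true + 1e-12)             # per-sample cross-entropy
    weights = (1.0 - p_true) ** gamma                # low confidence -> high weight
    weights = weights * n / (weights.sum() + 1e-12)  # normalize weights to mean 1
    return float((weights * per_sample).mean())
```

Under this sketch, a batch dominated by uncertain predictions yields a larger loss than an equally sized batch of confident ones, pushing local updates toward the samples the shared model handles poorly.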
“…Data-based approaches target to decrease the statistical heterogeneity of client data distributions using sample reweighting, clustering, manifold learning, and so on. FLIT [55] solves the non-i.i.d. data problem by reweighting samples based on their prediction confidence.…”
Section: A Horizontal FedGNNs
confidence: 99%
“…There are several benchmark datasets developed for GNNs, including citation network datasets, social network datasets, and chemical property datasets. FedGNNs test their algorithms on these datasets [17], [24], [33], [33], [38], [47], [49], [51], [53], [54], [55], [55], [56], [56], [61], [62], [63], [64], [71], [74], [78], [79], [94], [95], [97], [98], [99] with various data partition methods. FedGNNs also explore many GNN applications in a decentralized setting with privacy concerns.…”
Section: A Applications
confidence: 99%