2021
DOI: 10.48550/arxiv.2105.03170
Preprint

FedGL: Federated Graph Learning Framework with Global Self-Supervision

Abstract: Graph data are ubiquitous in the real world. Graph learning (GL) tries to mine and analyze graph data so that valuable information can be discovered. Existing GL methods are designed for centralized scenarios. However, in practical scenarios, graph data are usually distributed in different organizations, i.e., the curse of isolated data islands. To address this problem, we incorporate federated learning into GL and propose a general Federated Graph Learning framework FedGL, which is capable of obtaining a high…

Cited by 9 publications (12 citation statements)
References 47 publications
“…The server in [Chen et al, 2021a] generates a global pseudo graph with node embeddings uploaded by FL clients, and distributes it to the clients to update their local graphs for GCN model training. Besides, the server also generates global pseudo node labels based on the uploaded node predictions, which are incorporated into the self-supervised learning loss of the clients to mitigate the non-IID data problem.…”
Section: Updating Clients' Local Graphs (mentioning, confidence: 99%)
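The server-side mechanism described in the statement above can be illustrated with a minimal sketch, assuming a simplified construction: the server builds a kNN pseudo graph over the node embeddings uploaded by clients and derives global pseudo labels by averaging the uploaded class predictions. The function names (knn_pseudo_graph, pseudo_labels), the use of cosine similarity, and the choice of k are illustrative assumptions, not details taken from FedGL itself.

```python
# Minimal sketch (not the authors' code) of the server-side step described above.
import numpy as np

def knn_pseudo_graph(embeddings, k=5):
    """Connect each uploaded node to its k nearest neighbors in embedding space."""
    # Cosine similarity between all pairs of uploaded node embeddings.
    norm = embeddings / (np.linalg.norm(embeddings, axis=1, keepdims=True) + 1e-12)
    sim = norm @ norm.T
    np.fill_diagonal(sim, -np.inf)               # no self-loops
    n = sim.shape[0]
    adj = np.zeros((n, n), dtype=np.int8)
    nearest = np.argsort(-sim, axis=1)[:, :k]    # indices of the k most similar nodes
    for i in range(n):
        adj[i, nearest[i]] = 1
    return np.maximum(adj, adj.T)                # symmetrize the pseudo graph

def pseudo_labels(client_predictions):
    """Average per-client class probabilities and take the argmax as pseudo labels."""
    avg = np.mean(np.stack(client_predictions), axis=0)
    return avg.argmax(axis=1)

# Toy usage: 3 clients, 10 overlapping nodes, 4 classes, 16-dim embeddings.
rng = np.random.default_rng(0)
emb = rng.normal(size=(10, 16))
preds = [rng.dirichlet(np.ones(4), size=10) for _ in range(3)]
global_adj = knn_pseudo_graph(emb, k=3)
global_y = pseudo_labels(preds)
# Clients would merge `global_adj` into their local graphs and use `global_y`
# in a self-supervised loss term during local GCN training.
```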
“…Hsieh et al [26] also propose an adversarial defense mechanism against attribute inference attacks on GNNs by maintaining the accuracy of target label classification and reducing the accuracy of private label classification. There is also an increasing trend in using federated and split learning to address the privacy issues of GNNs in distributed learning settings [5,6,23,34,35,39,44,53,56,60,66,67]. However, none of the aforementioned works employ the notion of DP, and thus they do not provide provable privacy guarantees.…”
Section: Privacy-Preserving GNNs (mentioning, confidence: 99%)
“…To solve this issue, numerous efforts have been made to build privacy-preserving graph learning (PPGL) [95,53,17,46,165], which aims to build graph learning frameworks while protecting users' data privacy. In this survey, we categorize previous research work on PPGL into three directions, namely, federated graph learning, privacy inference attacks, and private graph learning.…”
Section: Overview and Taxonomy (mentioning, confidence: 99%)
“…A typical example is user behavior modeling in an online social platform, where each terminal user has a local social network and the server wants to train a model to describe the user's behavior (e.g., fraud user detection and recommendation). In this case, horizontal intra-graph FL makes it possible to train a powerful global model by leveraging information from all terminal users without violating their privacy [117,17,136]. Here, we introduce a prior work [126] focusing on federated recommendation modeling under this horizontal setting.…”
Section: Federated Graph Learning (mentioning, confidence: 99%)
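The horizontal intra-graph setting described in this last statement can be sketched with a generic FedAvg-style round: every terminal user trains on its own local subgraph and the server averages the updates into one global model. This is an assumed, simplified illustration; the helper names (local_update, fed_avg) are hypothetical, and a logistic-regression step stands in for the local GNN training used in the cited works.

```python
# FedAvg-style sketch (assumed, not the cited papers' exact method) of
# horizontal intra-graph federated learning across terminal users.
import numpy as np

def local_update(global_weights, features, labels, lr=0.1, steps=5):
    """A few SGD steps of logistic regression standing in for local GNN training."""
    w = global_weights.copy()
    for _ in range(steps):
        probs = 1.0 / (1.0 + np.exp(-(features @ w)))
        grad = features.T @ (probs - labels) / len(labels)
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Server step: average client updates, weighted by local data size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Toy round: 4 terminal users, each with a small local feature matrix and labels.
rng = np.random.default_rng(1)
global_w = np.zeros(8)
clients = [(rng.normal(size=(20, 8)), rng.integers(0, 2, size=20)) for _ in range(4)]
updates = [local_update(global_w, X, y) for X, y in clients]
global_w = fed_avg(updates, [X.shape[0] for X, _ in clients])
# Raw user graphs never leave the clients; only model weights are exchanged.
```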