2023
DOI: 10.1360/nso/20220043

DeceFL: a principled fully decentralized federated learning framework

Abstract: Traditional machine learning relies on a centralized data pipeline for model training in various applications; however, data are inherently fragmented. This decentralized nature of databases presents a serious challenge for collaboration: sending all decentralized datasets to a central server raises serious privacy concerns. Although there has been a joint effort to tackle this critical issue by proposing privacy-preserving machine learning frameworks, such as federated learning, most state-of-the-art …

Cited by 8 publications (2 citation statements)
References 31 publications
“…Privacy protection has become a hot topic in the health care AI research field [74], with numerous studies dedicated to developing innovative privacy-preserving solutions without compromising the performance of big data–driven AI models. These include developing privacy-enhancing technologies, such as homomorphic encryption [75], secure multiparty computation and differential privacy [76], and exploring new training methods and data governance models, such as distributed federated machine learning using synthesized data from multiple organizations [77], data-sharing pools [78], data trusts [79], and data cooperatives [80]. Second, the lack of clarity in accountability and regulation has also been universally identified in prior research as a key obstacle to the application of AI in health care [81-83].…”
Section: Discussion (mentioning)
confidence: 99%
“…Chen et al. [19] proposed a federated transfer learning framework based on differentially weighted federated averaging to collaboratively train diagnostic models. Ma et al. [20] adopted the extended Kalman filter as the model-update algorithm in federated learning to counter potential attacks in traffic prediction. However, these methods rely heavily on a central server during privacy protection. In addition, problems such as the loss of global-model accuracy caused by data encryption and the increased computation and communication overhead cannot be ignored. Warnat et al. [21] proposed a swarm learning method that, in each iteration, selects a changing leader according to specific rules, thereby achieving decentralization. However, this network merely relocates the central server rather than achieving full decentralization, and the framework does not address network heterogeneity. To address these problems, this paper adopts decentralized federated learning [22], DeceFL, …”
Section: …accessed by the central server, forming privacy-preserving collaborative training (unclassified)
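
The citation statement above contrasts server-based federated learning with fully decentralized schemes such as DeceFL. The sketch below is a minimal, illustrative Python example of decentralized federated averaging over a ring of peers; it is not the authors' DeceFL algorithm, and the linear-regression model, ring topology, learning rate, and uniform mixing weights are all assumptions made for illustration. Each client takes a local gradient step on its private data and then averages parameters only with its graph neighbors, so no central server ever collects the data or aggregates a global model.

```python
# Minimal sketch of fully decentralized federated averaging over a peer graph.
# NOT the DeceFL algorithm from the cited paper; it only illustrates the idea
# that clients exchange model parameters with neighbors instead of a server.
# The data, graph, and hyperparameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data split across 4 clients (illustrative).
n_clients, n_features = 4, 5
true_w = rng.normal(size=n_features)
data = []
for _ in range(n_clients):
    X = rng.normal(size=(50, n_features))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    data.append((X, y))

# Ring communication graph: each client only talks to its two neighbors.
neighbors = {i: [(i - 1) % n_clients, (i + 1) % n_clients] for i in range(n_clients)}
mix = 1.0 / 3.0  # uniform (doubly stochastic) mixing weight over self + 2 neighbors

w = [np.zeros(n_features) for _ in range(n_clients)]  # one local model per client
lr = 0.05

for rnd in range(100):
    # 1) Local gradient step on each client's private data.
    new_w = []
    for i, (X, y) in enumerate(data):
        grad = X.T @ (X @ w[i] - y) / len(y)
        new_w.append(w[i] - lr * grad)
    # 2) Decentralized averaging: combine own model with neighbors' models only.
    w = [mix * (new_w[i] + sum(new_w[j] for j in neighbors[i]))
         for i in range(n_clients)]

print("max deviation from true weights:",
      max(np.linalg.norm(wi - true_w) for wi in w))
```

With a connected graph and doubly stochastic mixing weights, the local models are driven toward consensus while each client's raw data stays local; DeceFL's actual update rule, convergence guarantees, and treatment of heterogeneous clients are specified in the paper itself.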