2021
DOI: 10.1109/jsac.2021.3118344

Semi-Decentralized Federated Learning With Cooperative D2D Local Model Aggregations

Cited by 77 publications (31 citation statements)
References 42 publications
“…This migrates away from the star topology of FedL and paves the road to more decentralized distributed ML architectures. This approach is complementary to recent works that utilize D2D in distributed ML to conduct model consensus [17]–[20].…”
Section: Summary of Contributions (mentioning)
confidence: 97%
“…Additionally, there is a literature on fully decentralized FedL over mesh network architectures without a centralized server, where device-server communications are replaced with device-to-device (D2D) communications [17], [18]. Building upon this, semi-decentralized architectures for FedL have also been proposed, where D2D communications are exploited in conjunction with device-server interactions to improve the model training performance [19], [20]. In this literature, D2D communications are solely used for distributed model parameter aggregation across the nodes.…”
Section: A. Related Work (mentioning)
confidence: 99%
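The statement above notes that in this line of work, D2D communications are used for distributed model parameter aggregation across the nodes. A minimal sketch of one such consensus-mixing round, using Laplacian-based averaging over a mesh topology (the function names, the step-size choice, and the mixing rule are illustrative, not the specific scheme of any cited paper):

```python
import numpy as np

def d2d_consensus_round(models, adjacency, step=0.2):
    """One D2D consensus-averaging round over a mesh topology.

    models:    (n_devices, dim) array of local model parameters
    adjacency: symmetric 0/1 matrix of D2D links (no self-loops)
    step:      mixing step size; needs step < 1 / max node degree
    """
    degrees = adjacency.sum(axis=1)
    laplacian = np.diag(degrees) - adjacency   # L = D - A
    # x <- x - step * L x moves every device toward the network average
    return models - step * laplacian @ models

# Four devices on a ring; scalar "models" for readability
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
x = np.array([[0.0], [4.0], [8.0], [4.0]])
for _ in range(50):
    x = d2d_consensus_round(x, A)
# x converges to the initial network-wide mean (4.0) at every device
```

Repeating the round drives every device's parameters to the network-wide average without any device-server communication, which is the sense in which D2D replaces the star topology here.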
“…Although HFL has certain advantages, this framework requires employing multiple PSs, which may not be practical in certain scenarios. Instead, the idea of hierarchical collaborative learning can be redesigned to combine hierarchical and decentralized learning concepts, referred to as semi-decentralized FL, where the local consensus follows decentralized learning with D2D communications, whereas the global consensus is orchestrated by the PS [43,44]. One of the major challenges in FL that is not considered in the aforementioned works on semi-decentralized FL is partial client connectivity [45,46].…”
Section: Related Work (mentioning)
confidence: 99%
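The two-tier structure described above (local consensus via D2D, global consensus via the PS) can be sketched in a few lines. This is a hypothetical single global round under simplifying assumptions: each D2D cluster is fully connected, so in-cluster consensus reduces to one averaging step, and one representative per cluster uploads to the PS. All names are illustrative, not taken from the cited papers:

```python
import numpy as np

def semi_decentralized_round(models, clusters):
    """One hypothetical global round of semi-decentralized FL.

    models:   (n_devices, dim) local parameters after local training
    clusters: lists of device indices, one list per D2D cluster
    """
    models = models.copy()
    reps = []
    for idx in clusters:
        # Local consensus: D2D averaging inside a fully connected cluster
        cluster_avg = models[idx].mean(axis=0)
        models[idx] = cluster_avg
        reps.append(cluster_avg)          # one upload per cluster to the PS
    # Global consensus: PS averages representatives weighted by cluster size
    weights = [len(idx) for idx in clusters]
    return np.average(reps, axis=0, weights=weights)

# Three devices in two clusters; the PS recovers the network-wide mean
models = np.array([[1.0], [3.0], [10.0]])
global_model = semi_decentralized_round(models, [[0, 1], [2]])
# global_model equals the mean of 1, 3, 10 (i.e., 14/3)
```

Because each cluster uploads a single pre-averaged model, the PS sees one message per cluster instead of one per device, which is the communication saving these architectures target.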
“…Distributed ML over wireless networks: Recent literature concerning ML over wireless networks has shifted towards federated learning [23], [24], and is mostly focused on studying the convergence and behavior of federated learning over wireless networks [9], [10], [12], [25], [26], [27], [28], [29], [30], [31], [32], [33]. Conventional federated learning assumes training a single ML model for all the engaged devices.…”
Section: Related Work (mentioning)
confidence: 99%
“…Constraint (32) ensures that the total energy consumed by each worker UAV j for data processing, parameter transmission, and flying is less than E_j^Ba(s) − E_j^Th, where E_j^Ba(s) represents the battery energy at j at the start of the s-th training sequence and E_j^Th encompasses both (i) surplus idle energy needed for extra hovering time caused by potential asynchronicity due to the heterogeneity of leader-UAV-to-AP travel times, and (ii) the minimum energy threshold for j to reach the nearest recharging station after the conclusion of the training sequence. Constraint (33), imposed on the coordinator UAVs, is similar to (32), except that coordinator UAVs only conduct data transmission while flying. Constraint (34), imposed on the leader UAVs, guarantees that there is enough energy remaining after parameter broadcasting and flying to reach the nearest recharging station.…”
Section: Joint Energy and Performance Optimization (mentioning)
confidence: 99%
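The structure of a constraint like (32) above is a simple energy budget: consumption during a training sequence must stay below the battery level minus a reserve. A minimal feasibility check, with all parameter names and the example joule values being illustrative rather than from the paper:

```python
def energy_feasible(e_processing, e_transmit, e_fly, e_battery, e_reserve):
    """Hedged reading of a constraint in the form of (32): the energy a
    worker UAV spends on data processing, parameter transmission, and
    flying during one training sequence must not exceed its starting
    battery energy minus a reserve term (idle-hover margin plus the
    energy needed to reach the nearest recharging station)."""
    return e_processing + e_transmit + e_fly <= e_battery - e_reserve

# A worker with 100 J of battery and a 20 J reserve can spend at most 80 J
print(energy_feasible(30.0, 20.0, 25.0, e_battery=100.0, e_reserve=20.0))  # True
print(energy_feasible(40.0, 25.0, 20.0, e_battery=100.0, e_reserve=20.0))  # False
```

Constraints (33) and (34), as quoted, follow the same budget pattern with fewer consumption terms (transmission-only for coordinators, broadcast-plus-flight for leaders).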