2022
DOI: 10.1016/j.future.2022.02.024
Dynamic and adaptive fault-tolerant asynchronous federated learning using volunteer edge devices

Cited by 16 publications (6 citation statements)
References 25 publications
“…The algorithm is an improvement on stochastic gradient descent: a central server initializes the weights and distributes them to each device or computing node for local training. This method optimizes model training to improve the efficiency and accuracy of data processing while also protecting privacy [17][18][19]. After each iteration, a certain proportion of participants is selected from the iteration results for optimization.…”
Section: Data Imbalance Processing Based on FedAvg (mentioning)
confidence: 99%
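The statement above paraphrases the FedAvg-style training loop referenced by the citing work: a server distributes weights, a fraction of participants train locally with SGD, and their results are aggregated. Below is a minimal sketch of that loop, assuming plain NumPy weight vectors; the helper `local_sgd_update`, the `client_fraction` parameter, and the toy data are illustrative assumptions and not taken from the cited papers.

```python
import random
import numpy as np

def local_sgd_update(weights, client_data, lr=0.01, epochs=1):
    """Hypothetical client-side step: a few epochs of SGD on local data.
    The 'gradient' here is mocked as the distance to the local data mean,
    purely to keep the sketch self-contained."""
    w = weights.copy()
    for _ in range(epochs):
        grad = w - client_data.mean(axis=0)
        w -= lr * grad
    return w

def fedavg_round(global_weights, clients, client_fraction=0.1):
    """One FedAvg-style round: distribute the current weights, let a
    fraction of clients train locally, then average their updates."""
    k = max(1, int(client_fraction * len(clients)))
    selected = random.sample(clients, k)
    updates = [local_sgd_update(global_weights, data) for data in selected]
    return np.mean(updates, axis=0)

# Toy usage: 20 clients, each holding a small local dataset.
clients = [np.random.randn(32, 10) for _ in range(20)]
weights = np.zeros(10)
for round_idx in range(5):
    weights = fedavg_round(weights, clients, client_fraction=0.25)
```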
“…Therefore, potential faults can be predicted from the characteristics of tasks executed over the long term, and resources can be provisioned to address these faults, reducing energy consumption [139]. Fault prediction for tasks is therefore also a future research direction in service fault-tolerant scheduling.…”
Section: Opportunities in Service Fault-Tolerant Scheduling (mentioning)
confidence: 99%
“…TensorFlow.js is used as a proof of concept to train a recurrent neural network that predicts the next letter of an input text. In a follow-up study [83], a dynamic and adaptive federated learning system was implemented and evaluated with up to 24 desktops collaborating via web browsers to train common machine learning models, while allowing users to join or leave the computation and keeping personal data stored locally.…”
Section: Front-End Deep Learning Web Apps (mentioning)
confidence: 99%
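The proof of concept described above trains a character-level recurrent network in the browser with TensorFlow.js to predict the next letter of an input text. The sketch below shows an analogous next-character model in Python with Keras rather than TensorFlow.js; the toy corpus, sequence length, and layer sizes are assumptions for illustration and are not taken from the cited study.

```python
import numpy as np
import tensorflow as tf

# Toy corpus; the cited work trains on text supplied in the browser.
text = "hello federated world " * 200
chars = sorted(set(text))
char_to_idx = {c: i for i, c in enumerate(chars)}
seq_len, vocab = 20, len(chars)

# Build (sequence -> next character) training pairs as one-hot tensors.
xs, ys = [], []
for i in range(len(text) - seq_len):
    xs.append([char_to_idx[c] for c in text[i:i + seq_len]])
    ys.append(char_to_idx[text[i + seq_len]])
x = tf.one_hot(np.array(xs), vocab)
y = tf.one_hot(np.array(ys), vocab)

# Small LSTM that outputs a distribution over the next character.
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(seq_len, vocab)),
    tf.keras.layers.Dense(vocab, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.fit(x, y, epochs=2, batch_size=64, verbose=0)

# Predict the next letter following a sample window of text.
sample = tf.one_hot(np.array([[char_to_idx[c] for c in text[:seq_len]]]), vocab)
next_char = chars[int(np.argmax(model.predict(sample, verbose=0)))]
```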