2022
DOI: 10.1109/jiot.2022.3188556

A Triple-Step Asynchronous Federated Learning Mechanism for Client Activation, Interaction Optimization, and Aggregation Enhancement

Cited by 36 publications
(21 citation statements)
References 26 publications
“…In simple terms, a client's parameters are sent to the central server after its local data may have undergone multiple epochs of local training. The parameter server then immediately aggregates the model parameters without waiting for any other clients and returns the aggregated parameters to that client [10]. That is, whenever the server receives updated parameters from any client, an aggregation is performed.…”
Section: Methods Based On Asynchronous Aggregation (mentioning)
confidence: 99%
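The asynchronous aggregation described in this statement can be illustrated with a minimal sketch, assuming a FedAsync-style mixing rule with a staleness-discounted weight; the function name, `alpha`, and the discount are illustrative assumptions rather than the cited paper's exact formulation.

```python
import numpy as np


def async_aggregate(global_model, client_model, staleness, alpha=0.6):
    """Blend one client's uploaded parameters into the global model on arrival.

    The server does not wait for other clients: as soon as any client's
    parameters arrive, it mixes them in and sends the result straight back.
    """
    # Downweight stale updates, i.e. clients that trained on an old global model.
    alpha_t = alpha / (1.0 + staleness)
    return {
        name: (1.0 - alpha_t) * global_model[name] + alpha_t * client_model[name]
        for name in global_model
    }


# One asynchronous step: a single client's update arrives and is aggregated
# immediately, without waiting for any other client.
global_model = {"w": np.zeros(4)}
client_model = {"w": np.ones(4)}
global_model = async_aggregate(global_model, client_model, staleness=2)
```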
“…There are many algorithms for federated learning based on server-side aggregation optimization, i.e., optimizing model aggregation, so there are also many directions for federated aggregation optimization. From the perspective of asynchronous aggregation, such as AsyncFedAvg [10], FedAsync [11]; from the perspective of hierarchical aggregation, such as FedPER [12], FedMA [13]; there are also server-side optimization methods, such as FedAvgM [14], which introduces momentum in the server, and adaptive server optimization methods such as FedAdagrad [15], FedYogi [15], and FedAdam [15]. There are other optimization methods that will not be discussed in this paper.…”
Section: Improvement On Aggregation (mentioning)
confidence: 99%
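The server-side optimization mentioned here (e.g. FedAvgM's momentum) can be sketched as follows, assuming the averaged client delta is treated as a pseudo-gradient; the class name, learning rate, and momentum coefficient are illustrative assumptions, not the exact update rules of [14] or [15].

```python
import numpy as np


class MomentumServer:
    """A minimal FedAvgM-style server: momentum applied to averaged client deltas."""

    def __init__(self, model, lr=1.0, beta=0.9):
        self.model = model  # dict of parameter arrays (the global model)
        self.lr = lr        # server learning rate
        self.beta = beta    # server momentum coefficient
        self.velocity = {k: np.zeros_like(v) for k, v in model.items()}

    def aggregate(self, client_deltas):
        """client_deltas: list of dicts holding (local_model - global_model)."""
        for name in self.model:
            # Average pseudo-gradient over the participating clients.
            avg_delta = np.mean([d[name] for d in client_deltas], axis=0)
            # Momentum buffer on the server, then apply to the global model.
            self.velocity[name] = self.beta * self.velocity[name] + avg_delta
            self.model[name] = self.model[name] + self.lr * self.velocity[name]
        return self.model
```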
“…In contrast to synchronous FL, the asynchronous scheme leads to faster convergence under unstable networks especially with millions of devices [28]. An increasing number of asynchronous FL works have been published in recent years, with focuses on client selection [17,24,29,63,64], weight aggregation [52,53,60] and transmission scheduling [35]. Semi-asynchronous mechanisms are developed to aggregate buffered updates [14,16,45,54,61].…”
Section: Related Work (mentioning)
confidence: 99%
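The buffered (semi-asynchronous) aggregation mentioned above can be illustrated with a small sketch in the spirit of FedBuff-style schemes: asynchronously arriving client deltas are held in a buffer and folded into the global model only once the buffer reaches a chosen size K. The buffer size, plain averaging rule, and names are illustrative assumptions.

```python
import numpy as np


class BufferedAggregator:
    """Semi-asynchronous server: aggregate once K buffered client updates arrive."""

    def __init__(self, model, buffer_size=8, server_lr=1.0):
        self.model = model              # dict of parameter arrays (global model)
        self.buffer = []                # pending client deltas, any arrival order
        self.buffer_size = buffer_size  # K: updates folded in per aggregation
        self.server_lr = server_lr

    def receive(self, client_delta):
        """Called whenever any client's delta arrives; aggregates when the buffer is full."""
        self.buffer.append(client_delta)
        if len(self.buffer) >= self.buffer_size:
            for name in self.model:
                avg = np.mean([d[name] for d in self.buffer], axis=0)
                self.model[name] = self.model[name] + self.server_lr * avg
            self.buffer.clear()  # start collecting the next round's updates
        return self.model
```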
“…Furthermore, the willingness-to-pay [44] utility equations are used to define the common estimation task as presented in Equations (23)-(25).…”
Section: Data and Model Structure (mentioning)
confidence: 99%
“…As a novel solution to address the above challenges, federated learning (FL) [22][23][24] is now in the spotlight as a way to learn AI models in a collaborative and privacy-preserving manner. Instead of being transmitted directly to the server, raw data are processed locally at each client to compute the learning parameters (i.e., gradients), which are then uploaded and aggregated at the server to generate a global model [25]. If such a process can be implemented in PMS, data islands and idle resources can be bridged and utilized to assist the learning of the intelligent core.…”
Section: Introduction (mentioning)
confidence: 99%
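The basic FL round described in this statement, where raw data stay local and only parameters are uploaded and aggregated, can be sketched with a FedAvg-style sample-size-weighted average; the local training step is stubbed out and all names and numbers are illustrative assumptions.

```python
import numpy as np


def weighted_average(client_models, client_sizes):
    """Aggregate uploaded client parameters, weighted by local dataset size."""
    total = float(sum(client_sizes))
    return {
        name: sum((n / total) * m[name] for m, n in zip(client_models, client_sizes))
        for name in client_models[0]
    }


# One communication round: each client trains on its own raw data (stubbed here),
# uploads only its parameters, and the server returns the aggregated global model.
def local_train(model, data):
    return {k: v + 0.1 for k, v in model.items()}  # stand-in for local epochs


global_model = {"w": np.zeros(4)}
client_data = [None, None, None]   # raw data never leaves the clients
client_sizes = [100, 50, 150]
client_models = [local_train(global_model, d) for d in client_data]
global_model = weighted_average(client_models, client_sizes)
```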