2022
DOI: 10.1109/jiot.2022.3197317

Resource-Efficient Federated Learning With Non-IID Data: An Auction Theoretic Approach

Cited by 12 publications (5 citation statements)
References 26 publications
“…Deep learning-based client selection does not need this a priori information but can result in communication overhead [11]. Resource-based client selection does not necessarily aim for better handling of Non-IID data but can help in this regard [16], [17]. All methods have in common that maintaining one global model tends to produce a bias and overfitting towards "good" clients.…”
Section: State-of-the-art Taxonomy
confidence: 99%
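To make the distinction above concrete, here is a minimal Python sketch of resource-based client selection: filter clients by reported resources, then sample a fraction of the eligible set per round. The field names (bandwidth_mbps, flops) and thresholds are hypothetical illustrations, not the algorithms of [11], [16], or [17].

import random

def select_clients(clients, min_bandwidth_mbps=1.0, min_flops=1e9, fraction=0.1, seed=0):
    # clients: list of dicts with hypothetical fields 'id', 'bandwidth_mbps', 'flops'.
    # Keep only clients meeting the resource thresholds, then sample a fixed
    # fraction of them for this training round, FedAvg-style.
    eligible = [c for c in clients
                if c["bandwidth_mbps"] >= min_bandwidth_mbps and c["flops"] >= min_flops]
    if not eligible:
        return []
    k = max(1, int(fraction * len(eligible)))
    return random.Random(seed).sample(eligible, min(k, len(eligible)))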
“…Data sharing is advocated as a means to enhance performance resilience when confronted with non-IID datasets featuring highly skewed distributions [37]. Remarkable performance gains can be obtained using a minimal amount of shared data [1]. The proponents argue that a minuscule amount of shared data (i.e., 0.8% of the total data) can drastically…”
[Figure 3 of the citing paper: accuracy of four well-known model-driven learning methods (FedAvg, FedProx, SCAFFOLD, and MOON) on CIFAR-100 under data distributions with α values of 0.005, 0.02, 0.04, and 10 (which represents IID data).]
Section: Background and Related Work
confidence: 99%
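The α values quoted above refer to Dirichlet-based label partitioning, a standard way to synthesize non-IID client shards of varying skew. A minimal sketch follows, assuming labels come from a dataset such as CIFAR-100; the function name and defaults are illustrative and not taken from the cited papers.

import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    # Smaller alpha (e.g., 0.005) yields highly skewed, non-IID shards;
    # a large alpha (e.g., 10) approaches an IID split.
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        idx = rng.permutation(np.where(labels == cls)[0])
        # Split this class's samples across clients with Dirichlet(alpha) proportions.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client_id, shard in enumerate(np.split(idx, cuts)):
            client_indices[client_id].extend(shard.tolist())
    return client_indices

# Example: 100 clients over CIFAR-100 training labels with alpha = 0.02
# shards = dirichlet_partition(train_labels, num_clients=100, alpha=0.02)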
“…Additionally, the use of a data-sharing technique significantly reduces model convergence time and improves the overall accuracy of the trained model. Notably, this method is highly efficient, requiring minimal input cost to generate or purchase marginal data while yielding substantial gains in training time and accuracy, as presented in [1].…”
Section: Introduction
confidence: 99%
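As a rough illustration of the data-sharing technique described above, the sketch below draws a small, class-balanced shared subset (on the order of the 0.8% figure quoted earlier) and adds it to each client's local shard. The helper names and the class-balancing rule are assumptions for illustration, not the exact procedure of [1].

import numpy as np

def build_shared_subset(labels, share_fraction=0.008, seed=0):
    # Draw a small, class-balanced subset (~0.8% of all samples) that the
    # server distributes to every client alongside the global model.
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    classes = np.unique(labels)
    per_class = max(1, int(share_fraction * len(labels) / len(classes)))
    shared = []
    for cls in classes:
        idx = np.where(labels == cls)[0]
        shared.extend(rng.choice(idx, size=min(per_class, len(idx)), replace=False).tolist())
    return shared

def augment_client(client_indices, shared_indices):
    # Each client then trains on its own shard plus the shared subset.
    return sorted(set(client_indices) | set(shared_indices))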
“…Zhao et al. [28] improved training on non-IID data by constructing a small, globally shared, uniformly distributed data subset for all clients. Similarly, Seo et al. [29] mitigated the quality degradation problem in FL via data sharing, using an auction approach to effectively reduce the cost while satisfying system requirements for maximizing model quality and resource efficiency. In [30], the authors assume that a small segment of clients is willing to share their datasets, and the server collects data from these clients in a centralized manner to aid in updating the global model.…”
Section: Data-based Approaches
confidence: 99%
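For intuition only, here is a toy reverse-auction sketch in the spirit of the auction-based data acquisition attributed to Seo et al. [29]: clients bid a price for contributing data, and the server greedily buys the cheapest data per sample subject to a budget. The bid format, the first-price payment rule, and the greedy selection are all assumptions and almost certainly differ from the cited mechanism.

def reverse_auction(bids, data_quota, budget):
    # bids: list of (client_id, num_samples_offered, asking_price), samples > 0.
    # Buy the cheapest data per sample until the quota or the budget is exhausted.
    # Winners are paid their asking price (a simple first-price rule, assumed here).
    winners, collected, spent = [], 0, 0.0
    for cid, n, price in sorted(bids, key=lambda b: b[2] / b[1]):
        if collected >= data_quota or spent + price > budget:
            continue
        winners.append(cid)
        collected += n
        spent += price
    return winners, collected, spent

# Example: reverse_auction([("c1", 200, 4.0), ("c2", 500, 15.0), ("c3", 100, 1.5)],
#                          data_quota=600, budget=10.0)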