Proceedings of the Web Conference 2021
DOI: 10.1145/3442381.3449851
Characterizing Impacts of Heterogeneity in Federated Learning upon Large-Scale Smartphone Data

Cited by 86 publications (72 citation statements)
References 20 publications
“…The Celeba dataset contains face attributes of 915 users with 19,923 samples. We use CNN models for both datasets, as in previous work [68]. • Natural Language Processing (NLP): We evaluate FedBalancer on two NLP tasks, each on a different dataset: next-word prediction on the Reddit [10] dataset and next-character prediction on the Shakespeare [57] dataset.…”
Section: Methods
Confidence: 99%
“…Method. We first ran FedAvg+1𝑇 on five datasets until convergence, using the numbers of rounds suggested by previous works [10,11,61,68]: 1000, 100, 600, 40, and 300 rounds for FEMNIST, Celeba, Reddit, Shakespeare, and UCI-HAR, respectively. Based on the user trace data of FLASH, we measured the wall-clock time for which FedAvg+1𝑇 ran on each dataset, and then ran the other baselines and FedBalancer for the same wall-clock time.…”
Section: Methods
Confidence: 99%
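The equal-budget comparison described in the quoted methodology can be sketched as follows. This is a minimal illustration only: the function names and per-round timings are assumptions for the sketch, not details from the cited evaluation, where per-round times would come from the FLASH user traces.

```python
def run_until_convergence(step_time, num_rounds):
    """Simulate a reference run (e.g. FedAvg+1T) for a fixed number of
    rounds; return the total wall-clock time it consumed."""
    return sum(step_time(r) for r in range(num_rounds))

def run_until_budget(step_time, budget):
    """Run another method until the reference wall-clock budget is
    exhausted; return how many rounds it completed."""
    elapsed, rounds = 0.0, 0
    while True:
        t = step_time(rounds)
        if elapsed + t > budget:
            break
        elapsed += t
        rounds += 1
    return rounds

# Illustrative numbers: the reference method takes 2.0 s per round for
# 100 rounds; a faster baseline takes 1.5 s per round and is granted
# the same wall-clock budget, not the same number of rounds.
budget = run_until_convergence(lambda r: 2.0, 100)  # 200.0 s
rounds = run_until_budget(lambda r: 1.5, budget)    # 133 rounds
```

Budgeting by wall-clock time rather than by round count is what makes the comparison fair here: a method with cheaper rounds is allowed to run more of them.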