2022
DOI: 10.1007/978-3-031-16452-1_64
Suppressing Poisoning Attacks on Federated Learning for Medical Imaging

Cited by 7 publications (1 citation statement)
References 21 publications
“…In addition, for Byzantine-tolerant FL, researchers employed a distance-based outlier suppression approach in which an aggregation server computes the cosine and Euclidean distances between the distinct centre updates and assigns an outlier score to each centre [39]. Finally, the weighted average is calculated using the outlier scores of each centre. On two medical imaging datasets (CheXpert and HAM10000), this outlier detection effectively protects against model poisoning attacks in both IID and non-IID FL settings.…”
Section: Research on FL for Medical Imaging
Confidence: 99%
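The aggregation scheme described in the citation statement can be sketched as follows. This is a minimal illustration, not the paper's exact method: the server computes pairwise cosine and Euclidean distances between flattened centre updates, scores each centre by its mean distance to the others, and turns the scores into aggregation weights via a softmax over negative scores. The function name, the score-to-weight mapping, and the normalisation are assumptions made for this sketch.

```python
import numpy as np

def aggregate_with_outlier_suppression(updates):
    """Weighted average of centre updates that down-weights outliers.

    updates: list of equally shaped numpy arrays, one per centre.
    Returns (aggregated update, per-centre weights).
    """
    U = np.stack([u.ravel() for u in updates])        # (n_centres, dim)
    n = len(U)
    norms = np.linalg.norm(U, axis=1, keepdims=True)
    # Pairwise cosine distance: 1 - cosine similarity.
    cos = 1.0 - (U @ U.T) / (norms * norms.T)
    # Pairwise Euclidean distance, scaled to [0, 1] for comparability.
    euc = np.linalg.norm(U[:, None, :] - U[None, :, :], axis=2)
    euc = euc / (euc.max() + 1e-12)
    # Outlier score: mean combined distance of a centre to all others.
    score = (cos + euc).sum(axis=1) / (n - 1)
    # Softmax over negative scores: larger outlier score -> smaller weight.
    w = np.exp(-score)
    w = w / w.sum()
    agg = (w[:, None] * U).sum(axis=0).reshape(updates[0].shape)
    return agg, w
```

With honest updates clustered together and one poisoned update pointing away from them, the poisoned centre receives both a large cosine distance (opposite direction) and a large Euclidean distance, so its weight is suppressed and the aggregate stays close to the honest mean.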