2022
DOI: 10.48550/arxiv.2205.01840
Preprint

FedMix: Mixed Supervised Federated Learning for Medical Image Segmentation

Abstract: The purpose of federated learning is to enable multiple clients to jointly train a machine learning model without sharing data. However, the existing methods for training an image segmentation model have been based on an unrealistic assumption that the training set for each local client is annotated in a similar fashion and thus follows the same image supervision level. To relax this assumption, in this work, we propose a label-agnostic unified federated learning framework, named FedMix, for medical image segm…
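The abstract's core idea — multiple clients jointly training a model while raw data never leaves each client — can be illustrated with a generic FedAvg-style round. This is a minimal sketch of plain federated averaging on a toy linear model, not FedMix's actual mixed-supervision algorithm; all names and parameters here are illustrative.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=5):
    """One client's local training: least-squares gradient steps.
    The linear model is a stand-in for a segmentation network."""
    w = weights.copy()
    for _ in range(epochs):
        grad = data.T @ (data @ w - labels) / len(labels)
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """One FedAvg round: each client trains locally on its private data;
    the server averages the returned weights, weighted by client size.
    Only model weights are exchanged, never the data itself."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    return np.average(local_ws, axis=0, weights=sizes)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
# Two clients with differently sized private datasets
clients = []
for n in (40, 60):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(50):
    w = federated_round(w, clients)
print(np.round(w, 2))  # converges toward [2.0, -1.0]
```

FedMix's contribution, per the abstract, is relaxing the assumption baked into this sketch that every client's labels have the same supervision level.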

Cited by 5 publications (9 citation statements)
References 38 publications
“…We compare our QA-SplitFed against five baseline methods: Naive SplitFed (W = 1) [8] (SGD optimizer and momentum of 0.9), SplitAVG [6], and FedMix [7] (β = 1.5, λ = 10). All models are trained and validated over E = 12 local epochs and G = 10 global epochs.…”
Section: Results
confidence: 99%
“…Different types of data heterogeneity could occur in the context of collaborative learning, e.g. unbalanced data [2], different feature map distributions [6], or different levels of annotation accuracies over clients [7]. In the literature, these are commonly referred to as non-IID (Independent and Identically Distributed) scenarios.…”
Section: Related Work
confidence: 99%
“…The authors also suggested a new federated cross-ensemble learning technique that together trains and sets up various models. Wicaksana et al [213] proposed FedMix, an FL strategy that employed mixed image labels specifically to segment anatomical region-of-interest from medical images. These labels incorporated substantial pixel-wise annotations, weak bounding boxes, and image-wise class annotations.…”
Section: E. Federated Learning
confidence: 99%
“…The challenge of data heterogeneity and domain shifting was recently tackled in novel ways by, for example, federated disentanglement learning via disentangling the parameter space into shape and appearance [6] and automated federated averaging based on Dirichlet distribution [22]. Dynamic Re-Weighting mechanisms [12], federated cross ensemble learning [24], and label-agnostic (mixed labels) unified FL formed by a mixture of the client distributions [21] have been recently proposed to relax an unrealistic assumption that each client's training set will be annotated similarly and therefore follows the same image supervision level during the training of an image segmentation model. Although extensive research has been carried out on FL, there is still a need for methods to enable the development of more generalized FL models for clinical use which can effectively deal with statistical heterogeneity in weight aggregation, communication efficiency, and privacy with security.…”
Section: Federated Server
confidence: 99%