Proceedings of the 2022 International Conference on Multimedia Retrieval 2022
DOI: 10.1145/3512527.3531372
FedNKD: A Dependable Federated Learning Using Fine-tuned Random Noise and Knowledge Distillation

Abstract: Multimedia retrieval models need the ability to extract useful information from large-scale data for clients. As an important part of multimedia retrieval, the image classification model directly affects the efficiency and effectiveness of retrieval. Training an image classification model for multimedia retrieval tasks requires a large amount of data. However, to protect data privacy, the training data often must remain on the client side. Federated learning is proposed to use data f…

Cited by 6 publications (4 citation statements)
References 24 publications
“…In [81], the clients adapt their local model by keeping some local parameters for local adaptation. Knowledge distillation using a teacher-student model is a technique that can be applied on the server side [71,124] or the client side [111,134]. Regularization is a technique used on the client side in [55,126,138]. Collaboration between clients and servers is sometimes necessary for certain techniques, particularly when it comes to data sharing.…”
Section: Discussion
confidence: 99%
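The teacher-student distillation that this excerpt mentions boils down to training the student against the teacher's temperature-softened output distribution instead of (or in addition to) hard labels. The sketch below is a minimal NumPy illustration of that loss, not any specific paper's implementation; the function names, logits, and temperature value are all illustrative.

```python
import numpy as np

def softened_softmax(logits, temperature):
    """Softmax over logits divided by a distillation temperature."""
    z = np.asarray(logits, dtype=float) / temperature
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=3.0):
    """KL divergence between temperature-softened teacher and student
    distributions, scaled by T^2 as in standard knowledge distillation."""
    p = softened_softmax(teacher_logits, temperature)  # soft teacher targets
    q = softened_softmax(student_logits, temperature)  # student predictions
    kl = float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))
    return temperature ** 2 * kl

# The teacher's soft labels pull the student model toward the teacher's
# behaviour without exchanging any raw training data.
teacher = [2.0, 0.5, -1.0]
student = [1.0, 1.0, 0.0]
loss = distillation_loss(teacher, student)
```

Whether this loss is computed on the server (distilling an ensemble of client models into the global model) or on the client (distilling the global model into a personalized local one) is exactly the server-side vs. client-side split the excerpt describes.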
“…Generative model-based methods mainly solve the knowledge fusion problem by training GAN networks to generate client data samples. For example, Zhu et al. proposed the FedGEN [10] method to achieve client-side model aggregation by generating a lightweight data generator on the server side. Similarly, Zhang et al. proposed the FedFTG [11] method to transfer knowledge from local models to global models by exploring the input space of local models through generators.…”
Section: Related Work
confidence: 99%
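The generator-based knowledge fusion these methods describe can be caricatured in a few lines: a generator supplies synthetic inputs, the clients' models act as an ensemble teacher on those inputs, and their averaged predictions are distilled into a global model. The NumPy sketch below is a toy stand-in only; the linear client models, the Gaussian-noise "generator", and the least-squares "distillation" step are all illustrative simplifications of the actual GAN training and gradient-based distillation in FedGEN/FedFTG.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative only): each client's "model" is a linear
# scorer W_k; the server never sees client data, only the clients'
# predictions on synthetic inputs.
n_clients, dim, n_classes = 3, 5, 4
client_weights = [rng.normal(size=(dim, n_classes)) for _ in range(n_clients)]

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# 1) "Generator": plain Gaussian noise standing in for a trained
#    lightweight generator that would mimic the clients' input space.
synthetic_x = rng.normal(size=(256, dim))

# 2) Ensemble teacher: average the clients' soft predictions on the
#    synthetic samples.
teacher_probs = np.mean([softmax(synthetic_x @ W) for W in client_weights],
                        axis=0)

# 3) "Distill" into a global linear model via least-squares on the
#    teacher's log-probabilities (a crude stand-in for gradient descent
#    on a KL distillation loss).
global_W, *_ = np.linalg.lstsq(synthetic_x, np.log(teacher_probs), rcond=None)
global_probs = softmax(synthetic_x @ global_W)
```

The key privacy property survives even in this caricature: knowledge flows from client models to the global model entirely through predictions on generated samples, never through raw client data.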
“…On the other hand, we cannot share clients’ private data in the federated learning environment, so training a teacher model while preserving client privacy remains an open issue. Current research in this direction can be broadly categorized into two main approaches: the public dataset method [7,8,9] and the method based on Generative Adversarial Networks (GANs) [10,11,12]. Nevertheless, the GAN-based method requires clients to possess significant computational resources, and GAN training is a time-consuming process, limiting its accessibility to some participants.…”
Section: Introduction
confidence: 99%
“…• Developing FL: 11 (16%) papers were categorised as developing FL further; this included a federated clustering framework [35] and seven papers [36-42] where clients share synthetic data with the server, rather than model parameters/weights (this might allow quicker model training and reduce communication costs).…”
Section: Categorising the Papers
confidence: 99%