Proceedings of the ACM Web Conference 2022
DOI: 10.1145/3485447.3511988

FedKC: Federated Knowledge Composition for Multilingual Natural Language Understanding

Cited by 7 publications (3 citation statements)
References: 29 publications
“…According to the principle of knowledge distillation, a model's prediction outputs can still convey the knowledge it has learned from its training data, so the idea of knowledge distillation can be borrowed and the prediction outputs of the client models can be used in place of model parameters for knowledge transfer. Some studies [15][16][17] have introduced knowledge distillation into the original federated learning setup, replacing model parameters with client prediction outputs as the information exchanged between clients and the server, and have shown that federated learning trained this way can still achieve acceptable performance. However, in these works [15][16][17], the aggregation method of federated learning has not been changed.…”
Section: Knowledge Distillation (mentioning)
confidence: 99%
“…Some studies [15][16][17] have introduced knowledge distillation into the original federated learning setup, replacing model parameters with client prediction outputs as the information exchanged between clients and the server, and have shown that federated learning trained this way can still achieve acceptable performance. However, in these works [15][16][17], the aggregation method of federated learning has not been changed: it is still a weighted average of the outputs uploaded by each client, and whether this aggregation method is actually the most effective has received little discussion.…”
Section: Knowledge Distillation (mentioning)
confidence: 99%
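The two statements above describe the prediction-sharing variant of federated learning they cite: clients upload soft predictions rather than model parameters, and the server aggregates them with a simple weighted average that each client then distills from. A minimal sketch of that aggregation and distillation step is given below; the function names, the shared proxy-dataset setup, and the data-size weighting are illustrative assumptions, not details taken from the cited works [15][16][17].

```python
import numpy as np

def aggregate_predictions(client_logits, data_sizes):
    """Server step: weighted average of clients' soft predictions on a shared
    proxy dataset. Weighting by local data size is an illustrative choice."""
    weights = np.asarray(data_sizes, dtype=float)
    weights /= weights.sum()
    stacked = np.stack(client_logits)              # (num_clients, N, C)
    return np.tensordot(weights, stacked, axes=1)  # (N, C) aggregated soft targets

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Client step: KL-style distillation loss against the aggregated targets,
    in the standard knowledge-distillation form."""
    def softmax(x, t):
        z = np.exp((x - x.max(axis=1, keepdims=True)) / t)
        return z / z.sum(axis=1, keepdims=True)
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return float(np.mean(np.sum(
        p_teacher * (np.log(p_teacher + 1e-12) - np.log(p_student + 1e-12)), axis=1)))

# Toy example: three clients, 4 proxy samples, 3 classes.
rng = np.random.default_rng(0)
logits = [rng.normal(size=(4, 3)) for _ in range(3)]
teacher = aggregate_predictions(logits, data_sizes=[100, 50, 200])
print(distillation_loss(logits[0], teacher))
```

In this sketch the aggregation is exactly the plain weighted average the second statement criticizes as being adopted without further justification; more elaborate server-side composition of client knowledge is what the surveyed follow-up work explores.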
“…[15] Multilingual FL has recently been explored in different language tasks, as it provides an interesting and natural setting in which to examine non-IID data, of which different languages are an obvious application. [16,17,18]…”
Section: Introduction (mentioning)
confidence: 99%