2023
DOI: 10.1007/s10845-023-02126-z
Federated transfer learning for auxiliary classifier generative adversarial networks: framework and industrial application

Abstract: Machine learning that considers data privacy preservation and personalized models has received attention, especially in the manufacturing field. In real industrial scenarios, data often exist as isolated islands and cannot be shared because of data privacy concerns, so it is difficult to gather the data to train a personalized model without compromising privacy. To address this issue, we propose a Federated Transfer Learning framework based on Auxiliary Classifier Generative Adversarial Networks na…
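As a quick orientation, the sketch below shows the core ACGAN idea referenced in the title: a discriminator with two heads, one for real/fake and one for the auxiliary class label. This is a minimal, generic illustration and not the federated transfer learning framework proposed in the paper; the layer sizes and loss choices are assumptions.

```python
# Minimal, generic ACGAN discriminator sketch in PyTorch (illustrative only).
# The discriminator predicts both real/fake and a class label, so the class
# head can double as a task classifier.
import torch
import torch.nn as nn

class ACGANDiscriminator(nn.Module):
    def __init__(self, in_dim: int = 784, n_classes: int = 10, hidden: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, hidden), nn.LeakyReLU(0.2),
        )
        self.adv_head = nn.Linear(hidden, 1)          # real vs. fake logit
        self.cls_head = nn.Linear(hidden, n_classes)  # auxiliary class logits

    def forward(self, x):
        h = self.backbone(x)
        return self.adv_head(h), self.cls_head(h)

# Joint objective: adversarial (real/fake) loss plus auxiliary classification loss.
adv_loss = nn.BCEWithLogitsLoss()
cls_loss = nn.CrossEntropyLoss()
```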

Cited by 10 publications (2 citation statements) · References 55 publications
“…Lastly, model interpretability, that is, the understanding and explanation of a model's decision-making process, is crucial for the iterative optimization of sensors and their broader application across more scenarios. To address these issues, it is necessary to employ techniques such as data augmentation [140] and adversarial training [141] during data collection to optimize the quality and quantity of the datasets. Additionally, introducing noise and interference during model training can enhance the generalization capability to unknown data.…”
Section: Discussion (mentioning)
confidence: 99%
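The statement above mentions introducing noise during training to improve generalization; a minimal sketch of that idea is shown below. The model, optimizer, loss function, and noise level (sigma) are illustrative assumptions, not details from either the cited or the citing paper.

```python
# Sketch of noise injection during training: perturb inputs with Gaussian noise
# on each step so the model sees slightly varied samples and generalizes better
# to unseen data. All names here are placeholders.
import torch

def train_step(model, optimizer, loss_fn, x, y, sigma: float = 0.05):
    model.train()
    x_noisy = x + sigma * torch.randn_like(x)  # additive Gaussian input noise
    optimizer.zero_grad()
    loss = loss_fn(model(x_noisy), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```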
“…The discriminator model serves as the environment: it receives the sequence generated by the generator, discriminates it, and thereby produces a reward signal that is transmitted to the generator model. The generator model updates its own parameters based on the reward signal so as to better generate the next sequence [20]. The basic idea of Seq-GAN originates from the GAN framework; by incorporating ideas from reinforcement learning, it generates high-quality discrete data such as speech sequences, text sequences, and time series.…”
Section: Sequence Generation Based On GAN Network (mentioning)
confidence: 99%
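The Seq-GAN update described in that statement can be sketched as a REINFORCE-style step: the discriminator scores a generated sequence, the score acts as the reward, and the generator's log-probabilities are pushed toward high-reward sequences. The `generator.sample` interface and the fixed sequence length are hypothetical assumptions, not the cited work's actual implementation.

```python
# Sketch of a Seq-GAN-style generator update: discriminator = environment,
# its output = reward, generator trained by policy gradient (REINFORCE).
import torch

def generator_policy_gradient_step(generator, discriminator, g_optimizer, seq_len=20):
    # 1) Generator samples a discrete sequence and keeps the log-probabilities
    #    of the sampled tokens (assumed API of the hypothetical generator).
    tokens, log_probs = generator.sample(seq_len)
    # 2) Discriminator (the "environment") scores the sequence -> reward signal.
    with torch.no_grad():
        reward = torch.sigmoid(discriminator(tokens))  # probability of "real"
    # 3) REINFORCE: increase log-probability of sequences that earned high reward.
    loss = -(reward * log_probs.sum())
    g_optimizer.zero_grad()
    loss.backward()
    g_optimizer.step()
    return loss.item()
```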