Federated learning is attracting significant interest because it enables model training over large volumes of data stored in a distributed manner across many users. However, a malicious or dishonest aggregator may still infer sensitive information from local model updates, reconstruct private data, or even disrupt the training process. To address this problem, researchers have proposed many effective methods based on privacy-protection technologies, such as secure multiparty computation (MPC), homomorphic encryption (HE), and differential privacy. However, these methods not only ignore users' address and identity privacy, but also provide no feasible scheme for tracing malicious users and malicious gradients. In this paper, we propose RFL, a decentralized Byzantine-fault-tolerant federated learning protocol based on traceable ring signatures and unidirectional proxy re-encryption, which consists of a one-time dynamic proxy protocol, a robust aggregation protocol, and a gradient calibration protocol. In our protocol there is no central server; instead, every user runs an aggregator, a proxy, and a trainer. Through the protocol, users can anonymously share local gradients, robustly aggregate global gradients, and trace malicious gradients. We design, implement, and evaluate a practical system that jointly learns an accurate model under semi-honest and malicious adversary security, respectively. Experiments show that our protocols also achieve the best overall performance.
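As an illustration of the kind of Byzantine-robust aggregation the abstract refers to, the sketch below uses a generic coordinate-wise median rule; this is a standard robust estimator and only an assumption for illustration, not the specific aggregation rule of the RFL protocol.

```python
# Illustrative sketch only: a generic Byzantine-robust aggregation rule
# (coordinate-wise median), NOT the paper's actual RFL aggregation protocol.
import numpy as np

def robust_aggregate(gradients: list) -> np.ndarray:
    """Aggregate local gradients with the coordinate-wise median,
    which tolerates a minority of arbitrarily corrupted updates."""
    stacked = np.stack(gradients)      # shape: (num_users, dim)
    return np.median(stacked, axis=0)  # robust per-coordinate estimate

# Example: three honest users plus one user submitting a poisoned gradient.
honest = [np.array([1.0, 2.0]), np.array([1.2, 1.8]), np.array([0.9, 2.1])]
poisoned = np.array([100.0, -100.0])
agg = robust_aggregate(honest + [poisoned])
# The median stays near the honest gradients despite the outlier.
```

A plain average of these four updates would be dragged far off by the poisoned gradient, whereas the median remains close to the honest cluster, which is the basic property robust aggregation protocols rely on.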