“…tributed Federated Learning. Homomorphic encryption [10] and local differential privacy [11] have been proposed and applied in distributed data collection to reduce the risk of data leakage. For example, Phong et al. [12] applied homomorphic encryption to the model-learning process and constructed a more secure distributed federated learning system, mitigating data leakage to incompletely trusted third-party servers.…”
Section: Research Status of Privacy Protection Methods in Distributed Federated Learning
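As a rough illustration of the homomorphic aggregation idea referenced above, the sketch below sums encrypted client gradients so the aggregating server never sees any individual update. It is a minimal sketch of the general technique, not Phong et al.'s exact protocol, and assumes the python-paillier (`phe`) package; the gradient values are made up.

```python
# Minimal sketch of additively homomorphic gradient aggregation
# (the general idea, not the exact protocol of [12]).
# Assumes the `phe` package (python-paillier).
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Each client encrypts its local gradient vector under a shared public key.
client_gradients = [
    [0.12, -0.30, 0.05],   # client 1 (illustrative values)
    [0.08,  0.25, -0.10],  # client 2
]
encrypted_updates = [
    [public_key.encrypt(g) for g in grad] for grad in client_gradients
]

# The server adds ciphertexts component-wise; Paillier addition of
# ciphertexts corresponds to addition of plaintexts, so the server
# aggregates without decrypting any individual gradient.
aggregate = encrypted_updates[0]
for update in encrypted_updates[1:]:
    aggregate = [a + u for a, u in zip(aggregate, update)]

# Only the key holder (e.g., the clients, not the server) can decrypt
# the summed gradient used to update the global model.
summed = [private_key.decrypt(c) for c in aggregate]
print(summed)  # ~[0.20, -0.05, -0.05], up to floating-point encoding error
```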
Distributed federated learning models are vulnerable to membership inference attacks (MIA) because they memorize information about their training data. Through a comprehensive privacy analysis of distributed federated learning models, we design an attack model based on generative adversarial networks (GAN) and membership inference attacks. Malicious participants (attackers) can use this attack model to reconstruct the training sets of other, honest participants without any negative impact on the global model. To address this problem, we apply differential privacy to the model's training process, which effectively reduces the accuracy of membership inference attacks by clipping gradients and adding noise to them. In addition, we manage participants hierarchically through trust domain division to alleviate the model performance degradation caused by differential privacy. Experimental results show that in distributed federated learning, our scheme effectively defends against membership inference attacks in white-box scenarios while maintaining the usability of the global model, realizing an effective trade-off between privacy and usability.
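The clip-and-noise step described in this abstract is commonly realized in the style of DP-SGD (Abadi et al.). The sketch below is an illustrative implementation under assumed parameters (`clip_norm`, `noise_multiplier` are placeholders, not values from the paper): each per-example gradient is clipped to a fixed L2 norm, the clipped gradients are averaged, and Gaussian noise calibrated to the clipping bound is added.

```python
# Hedged sketch of differentially private gradient processing in the
# style of DP-SGD; clip_norm and noise_multiplier are illustrative
# assumptions, not values from the paper.
import numpy as np

def privatize_gradients(per_example_grads, clip_norm=1.0,
                        noise_multiplier=1.1,
                        rng=np.random.default_rng(0)):
    """Clip each per-example gradient to L2 norm <= clip_norm, average,
    then add Gaussian noise scaled to the clipping bound."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise std on the mean is sigma * C / batch_size.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped),
                       size=mean_grad.shape)
    return mean_grad + noise

# Example: three per-example gradients for a 4-parameter model.
grads = [np.array([0.5, -2.0, 0.3, 1.1]),
         np.array([0.1,  0.4, -0.2, 0.0]),
         np.array([3.0, -0.5, 0.8, -1.4])]
print(privatize_gradients(grads))
```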
“…Furthermore, ciphertext expansion puts stringent constraints on hardware/RAM requirements. Numerous efforts have been made to curb the challenges associated with FHE [23,24], but to the best of our knowledge, the technology cannot achieve near real-time performance, as it was not initially designed with such a requirement in mind.…”
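To make the ciphertext expansion concrete, the following back-of-the-envelope check uses Paillier (an additively homomorphic scheme, via the assumed `phe` package) as a stand-in; lattice-based FHE schemes such as BGV or CKKS typically expand ciphertexts even further.

```python
# Back-of-the-envelope illustration of ciphertext expansion, using
# Paillier as a stand-in for FHE. Assumes the `phe` package.
from phe import paillier

public_key, _ = paillier.generate_paillier_keypair(n_length=2048)
plaintext = 42                      # fits in a single byte
ciphertext = public_key.encrypt(plaintext)

plain_bits = plaintext.bit_length()
cipher_bits = ciphertext.ciphertext(be_secure=False).bit_length()
print(f"plaintext: {plain_bits} bits, ciphertext: ~{cipher_bits} bits")
# A Paillier ciphertext lives modulo n^2, so ~4096 bits here: an
# expansion of several hundred times for a one-byte value, which is
# why RAM budgets become a first-order concern.
```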
The appealing properties of secure hardware solutions such as trusted execution environments (TEEs), including low computational overhead, confidentiality guarantees, and a reduced attack surface, have prompted considerable interest in adopting them for secure stream processing applications. In this paper, we revisit the design of parallel stream join algorithms on multicore processors with TEEs. In particular, we conduct a series of profiling experiments to investigate the impact of alternative design choices for parallelizing stream joins on TEEs, including: (1) execution approaches, (2) partitioning schemes, and (3) distributed scheduling strategies. From the profiling study, we observe three major impediments to high performance: (a) the computational overhead of the cryptographic primitives associated with page-swapping operations, (b) the restrictive Enclave Page Cache (EPC) size, which limits the amount of supported in-memory processing, and (c) the lack of vertical scalability needed to support the increasing workloads of near real-time applications. Addressing these issues allowed us to design SecJoin, a more efficient parallel stream join algorithm that exploits modern scale-out architectures with TEEs, making no trade-offs on security while optimizing performance. We present our model-driven parameterization of SecJoin and share experimental results showing up to 4-fold improvements in throughput and latency.
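The sketch below illustrates one of the design choices profiled above, hash partitioning, by routing both input streams through a common hash function so that each worker's join state stays small (by analogy with keeping working sets inside the EPC). This is a plain-Python illustration of a partitioned symmetric hash join, not SecJoin itself; the partition count and worker structure are assumptions.

```python
# Illustrative sketch (not SecJoin) of hash-partitioning a symmetric
# stream join so each worker's state stays small, analogous to keeping
# per-enclave working sets within the EPC. NUM_PARTITIONS is assumed.
from collections import defaultdict

NUM_PARTITIONS = 4

class PartitionWorker:
    """Joins tuples from streams R and S that hash to this partition."""
    def __init__(self):
        self.r_table = defaultdict(list)  # key -> R tuples seen so far
        self.s_table = defaultdict(list)  # key -> S tuples seen so far

    def insert(self, stream, key, value):
        # Symmetric hash join: probe the opposite table, then insert
        # into this stream's own table.
        probe, build = ((self.s_table, self.r_table) if stream == "R"
                        else (self.r_table, self.s_table))
        matches = [(key, value, other) for other in probe[key]]
        build[key].append(value)
        return matches

workers = [PartitionWorker() for _ in range(NUM_PARTITIONS)]

def route(stream, key, value):
    # Both streams use the same hash function, so matching keys always
    # meet at the same worker and no cross-partition probes are needed.
    return workers[hash(key) % NUM_PARTITIONS].insert(stream, key, value)

print(route("R", "k1", "r1"))  # [] (no S tuple for k1 yet)
print(route("S", "k1", "s1"))  # [('k1', 's1', 'r1')]
```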