Empirical attacks on Federated Learning (FL) systems show that FL exposes numerous attack surfaces throughout its execution. These attacks can not only cause models to fail on specific tasks but also allow adversaries to infer private information. While previous surveys have identified the risks, listed the attack methods available in the literature, or provided a basic taxonomy to classify them, they mainly focused on the risks in the training phase of FL. In this work, we survey the threats, attacks, and defenses to FL across its whole process in three phases: the Data and Behavior Auditing Phase, the Training Phase, and the Prediction Phase. We further provide a comprehensive analysis of these threats, attacks, and defenses, and summarize their open issues and taxonomy. Our work considers the security and privacy of FL from the viewpoint of the FL execution process. We highlight that establishing trusted FL requires adequate measures to mitigate security and privacy threats at each phase. Finally, we discuss the limitations of current attack and defense approaches and provide an outlook on promising future research directions in FL.
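One class of training-phase threat the survey covers, model poisoning, can be illustrated with a toy sketch. This is not any specific attack from the literature: the function names (`fedavg`, `malicious_update`) and the boost factor are illustrative assumptions, showing only why unweighted averaging lets a single scaled-up client update flip the aggregate direction.

```python
import numpy as np

def fedavg(updates):
    """Server-side FedAvg aggregation: plain average of client updates."""
    return np.mean(updates, axis=0)

def malicious_update(direction, boost=10.0):
    """Training-phase model-poisoning sketch: a compromised client scales
    its update (hypothetical boost factor) to dominate the average."""
    return boost * direction

# Three honest clients agree on the true update direction.
true_update = np.array([1.0, -1.0])
honest = [true_update.copy() for _ in range(3)]

# One attacker submits a boosted update in the opposite direction.
poisoned = honest + [malicious_update(-true_update)]

clean_agg = fedavg(honest)        # follows the honest direction
poisoned_agg = fedavg(poisoned)   # sign is flipped by the single attacker
```

With three honest clients contributing [1, -1] and one attacker contributing [-10, 10], the poisoned average is [-1.75, 1.75]: a single participant reverses the aggregate, which is why robust aggregation defenses (e.g., clipping or median-based rules) are discussed for this phase.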
The power of deep learning, and the enormous effort and money required to build a deep learning model, make stealing one a highly worthwhile and lucrative endeavor. Worse still, model theft requires little more than a high-school understanding of how computers work, which ensures a healthy and vibrant black market full of choice for any would-be pirate. As such, estimating how many neural network models are likely to be illegally reproduced and distributed in the future is almost impossible. We therefore propose an embedded 'identity bracelet' for deep neural networks that acts as proof of a model's ownership. Our solution extends existing trigger-set watermarking techniques by embedding a cryptographic-style serial number into the base deep neural network (DNN). Called a DNN-SN, this identifier works like an identity bracelet that proves a network's rightful owner. Further, a novel training method based on non-related multitask learning ensures that embedding the DNN-SN does not compromise model performance. Experimental evaluations of the framework confirm that a DNN-SN can be embedded into a model when training from scratch or into the student network component of Net2Net.

INDEX TERMS: deep neural network, ownership verification, security and privacy, serial number, watermarking.
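The trigger-set idea that the DNN-SN builds on can be sketched in a few lines. This is a minimal illustration, not the paper's exact protocol: `verify_ownership`, the 0.9 threshold, and the stand-in "models" (plain functions) are all assumptions. The point is only the verification logic: the owner keeps a secret set of (input, label) pairs that the watermarked model was trained to memorize, and a suspect model is claimed if it agrees with those labels at a rate an independent model is unlikely to reach.

```python
def verify_ownership(model, trigger_set, threshold=0.9):
    """Black-box ownership check: fraction of secret trigger inputs on
    which the suspect model reproduces the owner-assigned labels."""
    matches = sum(1 for x, y in trigger_set if model(x) == y)
    return matches / len(trigger_set) >= threshold

# Secret trigger set: inputs with owner-chosen labels (toy integers here).
trigger_set = [(i, i % 7) for i in range(20)]

# Toy stand-ins for models: the watermarked one memorized the triggers,
# the independent one was trained without them.
watermarked = lambda x: x % 7
independent = lambda x: x % 5
```

Here `verify_ownership(watermarked, trigger_set)` succeeds while the independent model matches only a small fraction of triggers by chance and fails the check; the paper's contribution is making the embedded identifier behave like a serial number without degrading performance on the primary task.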
Machine Learning-as-a-Service (MLaaS) systems have been widely developed for cybersecurity-critical applications, such as detecting network intrusions and fake news campaigns. Despite their effectiveness, robustness against adversarial attacks is one of the key trust concerns for MLaaS deployment. We are thus motivated to assess the adversarial robustness of the machine learning models residing at the core of these security-critical applications with categorical inputs. Previous research efforts on assessing model robustness against manipulation of categorical inputs are either specific to particular use cases and heavily dependent on domain knowledge, or require white-box access to the target ML model. Such limitations prevent robustness assessment from being offered as a domain-agnostic service to various real-world applications. We propose a provably optimal yet computationally highly efficient adversarial robustness assessment protocol for a wide range of ML-driven cybersecurity-critical applications. We demonstrate the domain-agnostic robustness assessment method through a substantial experimental study on fake news detection and intrusion detection problems.
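To make the attack surface concrete, a black-box manipulation of categorical inputs can be sketched as a greedy substitution search. This is an illustrative baseline, not the paper's provably optimal protocol: `greedy_categorical_attack`, the query budget, and the toy token-counting detector are all assumptions, showing only how an adversary with query access alone can flip one categorical feature at a time to lower a detector's score.

```python
def greedy_categorical_attack(score, x, vocab, budget=2):
    """Query-only greedy search: at each step, try every single-feature
    substitution and keep the one that most lowers the target score.
    score: black-box function x -> classifier score
    x: list of categorical values; vocab: allowed values per position."""
    x = list(x)
    for _ in range(budget):
        best = (score(x), None, None)
        for i, choices in enumerate(vocab):
            for v in choices:
                if v == x[i]:
                    continue
                cand = x[:i] + [v] + x[i + 1:]
                s = score(cand)
                if s < best[0]:
                    best = (s, i, v)
        if best[1] is None:
            break  # no single substitution improves: stop early
        x[best[1]] = best[2]
    return x

# Toy fake-news detector: score = number of suspicious tokens present.
suspicious = {"miracle", "shocking"}
score = lambda tokens: sum(t in suspicious for t in tokens)

vocab = [["miracle", "notable"], ["shocking", "surprising"], ["cure", "treatment"]]
adv = greedy_categorical_attack(score, ["miracle", "shocking", "cure"], vocab)
```

With a budget of two substitutions the search rewrites both suspicious tokens and drives the toy detector's score to zero; assessing how few such substitutions suffice is exactly the robustness question the protocol is designed to answer efficiently.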