Abstract: Federated learning (FL), an effective distributed machine learning framework, enables model training while protecting local data privacy. It has been applied to a broad variety of practical areas thanks to its strong performance and appreciable profits. Who really owns the model, and how its copyright can be protected, have become real problems. Intuitively, the existing property-rights protection methods from centralized scenarios (e.g., watermark embedding and model fingerprints) are possible solutions for FL.…
“…Table 3 gives a clear overview of existing methods depending on the previously described characteristics.

  Method            | Verification type       | Embedded by
  ------------------|-------------------------|------------
  (name truncated)  | Black-Box               | Server
  FedIPR [18]       | White-Box and Black-Box | Client(s)
  FedTracker [94]   | White-Box and Black-Box | Server
  Liu et al. [95]   | Black-Box               | Client
  FedCIP [96]       | White-Box               | Client(s)
  FedRight [97]     | White-Box               | Server
  Yang et al. [98]  | Black-Box               | Client
  FedZKP [96]       | White-Box               | Client(s)
  Merkle-Sign [99]  | White-Box and Black-Box | Server
…”
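The white-box versus black-box distinction used in the table can be made concrete with a minimal sketch. Everything below (the weight-sign watermark, the trigger set, the toy classifier) is a deliberately simplified illustration under assumed conventions, not a reproduction of any cited scheme:

```python
import numpy as np

rng = np.random.default_rng(42)

# --- White-box verification: the verifier reads the weights directly. ---
# Illustrative scheme: force a secret subset of first-row weights to be
# positive at embedding time, then check those signs at verification time.
weights = rng.standard_normal((6, 6))
secret_mask = np.array([True, False, True, True, False, True])
weights[0, secret_mask] = np.abs(weights[0, secret_mask])  # embed the mark

def verify_white_box(w, mask):
    """Ownership check that needs full access to the model parameters."""
    return bool(np.all(w[0, mask] > 0))

# --- Black-box verification: the verifier only queries model outputs. ---
# Illustrative trigger set: specific inputs mapped to a fixed secret label.
trigger_inputs = [rng.standard_normal(6) for _ in range(3)]
TRIGGER_LABEL = 2

def predict(x):
    """Toy classifier backdoored to answer TRIGGER_LABEL on the triggers."""
    for t in trigger_inputs:
        if np.allclose(x, t):
            return TRIGGER_LABEL
    return int(np.argmax(weights @ x))

def verify_black_box(model_fn, triggers, label):
    """Ownership check that only needs query access to the suspect model."""
    return all(model_fn(t) == label for t in triggers)

print(verify_white_box(weights, secret_mask))                     # True
print(verify_black_box(predict, trigger_inputs, TRIGGER_LABEL))   # True
```

The practical difference is access: white-box verification requires the suspect model's parameters, while black-box verification only needs query access, which is why the table records which mode each FL method supports.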
Section: Related Work
“…FedRight [97] is a solution in which the server fingerprints the model in the FL framework (S2). DNN fingerprinting is a process in which, instead of embedding a watermark into the model, we extract a fingerprint that identifies the model [104].…”
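The watermarking/fingerprinting contrast described above can be sketched in a few lines. The code below is a hypothetical illustration (a linear toy model, arbitrary probe inputs, and an assumed tolerance), not the actual FedRight procedure:

```python
import numpy as np

# Toy stand-in for a trained DNN: a fixed linear map with weights W.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))

def model(x):
    return W @ x

def extract_fingerprint(predict, probes):
    """Fingerprint = the model's responses on a fixed set of probe inputs.
    Nothing is embedded into the model; we only record its behavior."""
    return np.stack([predict(p) for p in probes])

def matches(predict, probes, fingerprint, tol=1e-6):
    """Claim ownership by replaying the probes against a suspect model."""
    return np.allclose(extract_fingerprint(predict, probes), fingerprint, atol=tol)

probes = [rng.standard_normal(8) for _ in range(5)]
fp = extract_fingerprint(model, probes)

stolen = lambda x: W @ x              # identical copy of the model
unrelated = lambda x: (W + 1.0) @ x   # a different model

print(matches(stolen, probes, fp))       # True
print(matches(unrelated, probes, fp))    # False
```

The key point is that extraction leaves the model untouched: ownership is established by replaying the probes, rather than by modifying the weights as watermark embedding does.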
Section: FedRight
“…Even though their solution has no significant impact on the watermarking, no testing has yet been performed on (S1). Another attack, specific to FL and described in FedRight [97] and WAFFLE [17], exploits the fact that multiple clients use their own models and private datasets. As mentioned in Section 4.1, an evasion attack works better when multiple datasets are used to train the detector.…”
Section: Attacks From Clients And/or Server Sides
Federated learning (FL) is a technique that allows multiple participants to collaboratively train a Deep Neural Network (DNN) without the need to centralize their data. Among other advantages, it comes with privacy-preserving properties, making it attractive for application in sensitive contexts, such as health care or the military. Although the data are not explicitly exchanged, the training procedure requires sharing information about participants’ models. This makes the individual models vulnerable to theft or unauthorized distribution by malicious actors. To address the issue of ownership rights protection in the context of machine learning (ML), DNN watermarking methods have been developed during the last five years. Most existing works have focused on watermarking in a centralized manner, but only a few methods have been designed for FL and its unique constraints. In this paper, we provide an overview of recent advancements in federated learning watermarking, shedding light on the new challenges and opportunities that arise in this field.