<p>Machine learning (ML) models, at the core of artificial intelligence (AI), are widely applied in our digital world. Building these models requires collecting huge amounts of data, much of which must be protected under privacy law, so data privacy is a critical issue when training and testing ML models. To address privacy concerns adequately in today’s ML systems, the privacy gaps in ML must be considered, as trained ML models can be vulnerable to adversarial attacks. In this regard, federated learning and blockchain networks are new paradigms that have emerged with the promise of privacy preservation by design while still utilizing ML models. Despite this promise, these paradigms neglect several fundamental privacy and security issues: adversaries can exploit shared gradients and global parameters, and the parameter server may drop gradients that have been mistakenly or deliberately updated. The proposed research addresses these privacy concerns in federated and distributed environments, assuming enough data is available to train the models. This is a general overview toward establishing a privacy-preserving framework that can provide privacy in federated and blockchain-based networks while addressing these fundamental privacy and security issues.</p>
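To make the federated setting concrete, the sketch below shows a minimal federated averaging loop: clients train on private data locally and send only model updates to a server, which aggregates them into a global model. This is a generic illustration of the paradigm the abstract discusses, not the proposed framework itself; the toy linear model, client data, and function names are all hypothetical. Note how the shared updates (`updates`) are exactly the values an adversary or a faulty parameter server could exploit or drop.

```python
import numpy as np

# Minimal federated averaging (FedAvg) sketch on a toy linear model.
# All names and data here are illustrative assumptions, not from the source.

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local gradient-descent update on its private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server aggregates client models, weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])   # ground-truth parameters to recover
global_w = np.zeros(2)           # server's initial global model

# Two clients, each holding a private local dataset (never sent to the server).
clients = []
for n in (40, 60):
    X = rng.normal(size=(n, 2))
    y = X @ true_w
    clients.append((X, y))

for _ in range(20):  # communication rounds
    # Clients share only parameter updates, not raw data --
    # yet these shared values are the attack surface the abstract highlights.
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = federated_average(updates, [len(y) for _, y in clients])
```

After the communication rounds, `global_w` converges toward `true_w` even though no client ever revealed its raw data, which is the privacy-by-design appeal of the paradigm.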