Cloud computing brings convenience to users by providing computational resources and services. However, it also raises security challenges, such as unreliable cloud service providers that could threaten users’ data integrity. A data verification protocol is therefore needed to ensure that users’ data remains intact in cloud storage. Such a protocol has three important properties: public verifiability, privacy preservation, and blockless verification. Unfortunately, existing signcryption schemes do not fully provide all of these properties. We therefore propose an improved signcryption technique based on the ZSS short signature that fulfills the aforementioned data verification properties. Our computational-cost and time-complexity analysis demonstrates that the proposed scheme offers more features at the same computational cost as an existing ZSS-based signcryption scheme.
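To make the ZSS signature concrete, the sketch below shows its core algebra: signing computes S = (1/(H(m) + x))·P, and verification checks the pairing identity e(H(m)·P + Ppub, S) = e(P, P). This is a purely algebraic toy in which group points are replaced by scalars modulo a prime, so the verifier here is handed the secret scalar directly; it illustrates the arithmetic only and must never be mistaken for a secure or real pairing-based implementation. All parameter choices (the Mersenne-prime order, the hash-to-integer map) are illustrative assumptions.

```python
import hashlib

# Toy, purely algebraic sketch of ZSS-style signing. Assumption: group
# points are replaced by scalars mod a prime, so this leaks the secret
# key and is NOT secure -- it only demonstrates the sign/verify algebra.
q = 2**127 - 1  # hypothetical prime "group order" (a Mersenne prime)

def h(msg: bytes) -> int:
    """Hash-to-integer map standing in for ZSS's hash function H."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % q

def keygen(seed: int) -> int:
    # Secret key x; in the real scheme the public key is Ppub = x * P.
    return seed % q

def sign(x: int, msg: bytes) -> int:
    # Real ZSS: S = (1 / (H(m) + x)) * P.  The toy keeps only the scalar.
    return pow(h(msg) + x, -1, q)

def verify(x: int, msg: bytes, s: int) -> bool:
    # Real ZSS checks e(H(m)*P + Ppub, S) == e(P, P) WITHOUT knowing x;
    # the toy checks the same identity (H(m) + x) * s == 1 (mod q).
    return (h(msg) + x) * s % q == 1
```

In the real scheme the pairing lets anyone verify using only the public key, which is what enables public verifiability, and verification needs only H(m), not the data block itself, which is the basis of blockless verification.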
Since the inception of the Internet of Things (IoT), centralized architectures have been the norm for decades. With the vastly growing number of IoT devices and gateways, this architecture struggles to cope with the demands of state-of-the-art IoT services, which require scalable and responsive infrastructure. In response, decentralization has become of considerable interest among IoT adopters. Following this trajectory, this paper introduces a rework of the IoT architecture that enables the three spheres of IoT workflows (i.e., computing, storage, and networking) to run in a distributed manner. In particular, we employ blockchain and smart contracts to provide a secure computing platform. A distributed storage network maintains raw IoT data and application data. Software-defined networking (SDN) controllers and SDN switches provide connectivity across multiple IoT domains. We envision all of these services as separate yet integrated peer-to-peer (P2P) overlay networks, which IoT actors such as IoT domain owners, IoT users, Internet Service Providers (ISPs), and governments can operate and maintain. We also present several IoT workflow examples showing how IoT developers can adapt to this newly proposed architecture. Based on the presented workflows, IoT computing can be performed in a trusted and privacy-preserving manner, IoT storage can be made robust and verifiable, and network events can be reacted to automatically and quickly. Our discussion can benefit a wide audience, from academia to industry and investors interested in the future of IoT.
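The claim that the architecture "reacts to network events automatically and quickly" can be pictured as an event-driven controller loop. The sketch below is a minimal publish/subscribe stand-in, under the assumption that an SDN controller registers handlers for network events (the event names and payload fields are hypothetical, not taken from the paper):

```python
from typing import Callable, Dict, List

class EventBus:
    """Tiny publish/subscribe sketch. Assumption: this stands in for an
    SDN controller that reacts automatically to network events."""

    def __init__(self) -> None:
        # Maps an event type (e.g. "link_down") to its registered handlers.
        self.handlers: Dict[str, List[Callable[[dict], None]]] = {}

    def on(self, event_type: str, handler: Callable[[dict], None]) -> None:
        """Register a reaction to run whenever `event_type` is emitted."""
        self.handlers.setdefault(event_type, []).append(handler)

    def emit(self, event_type: str, payload: dict) -> None:
        """Deliver an event to every handler registered for its type."""
        for handler in self.handlers.get(event_type, []):
            handler(payload)
```

A controller could, for instance, register a handler for a hypothetical "link_down" event that installs a rerouting flow rule on the affected switch, so the reaction happens without operator intervention.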
AI has been deployed in many sectors, such as security, health, finance, and national defense. However, alongside AI’s groundbreaking progress, some actors exploit AI for harmful purposes. In parallel, rapid development in cloud computing technology has introduced cloud-based AI systems. Unfortunately, vulnerabilities in cloud computing also affect the security of AI services. We observe that compromising the integrity of training data compromises the results of the AI system itself. Against this background, we argue that it is essential to preserve data integrity in AI systems. To achieve this goal, we build a data integrity architecture following the National Institute of Standards and Technology (NIST) cybersecurity framework guidance. We also utilize blockchain technology and smart contracts as a suitable solution to the integrity issue, because blockchain’s ledger is shared and decentralized. Smart contracts are used to automate policy enforcement, track data integrity, and prevent data forgery. First, we analyze possible vulnerabilities and attacks in AI and cloud environments. We then derive our architecture requirements. Finally, we present five modules in our proposed architecture that fulfill the NIST framework guidance to ensure continuous data integrity provisioning toward secure AI environments.
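The role the smart contract plays here, recording digests of training data so later tampering is detectable, can be sketched as a minimal append-only hash ledger. This is an illustrative assumption about the mechanism, not the paper's actual contract code; names like `IntegrityLedger` and `dataset_id` are invented for the example.

```python
import hashlib
import json

class IntegrityLedger:
    """Minimal append-only ledger sketch. Assumption: stands in for the
    smart-contract registry that tracks training-data integrity."""

    def __init__(self) -> None:
        self.blocks = []  # each block links to the previous via its hash

    @staticmethod
    def _hash(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def record(self, dataset_id: str, data: bytes) -> str:
        """Append the dataset's digest, chained to the previous block."""
        prev = self.blocks[-1]["block_hash"] if self.blocks else "0" * 64
        entry = {"prev": prev,
                 "dataset_id": dataset_id,
                 "data_hash": self._hash(data)}
        entry["block_hash"] = self._hash(
            json.dumps(entry, sort_keys=True).encode())
        self.blocks.append(entry)
        return entry["block_hash"]

    def verify(self, dataset_id: str, data: bytes) -> bool:
        """Data is intact iff its digest matches the latest recorded one."""
        for entry in reversed(self.blocks):
            if entry["dataset_id"] == dataset_id:
                return entry["data_hash"] == self._hash(data)
        return False
```

Because each block embeds the previous block's hash, silently rewriting an old record would invalidate every later block, which is the property the decentralized ledger relies on.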
Federated learning enables multiple users to collaboratively train a global model using their private data on their local machines. Users are thus not required to share their training data with other parties, preserving user privacy. However, the vanilla federated learning proposal mainly assumes a trusted environment, whereas actual deployments of federated learning are expected to run in untrusted domains. This paper aims to use blockchain as a trusted federated learning platform to realize the missing “running on an untrusted domain” requirement. First, we investigate vanilla federated learning issues such as clients’ low motivation, client dropouts, model poisoning, model stealing, and unauthorized access. From these issues, we design building-block solutions: an incentive mechanism, a reputation system, peer-reviewed models, commitment hashes, and model encryption. We then construct a full-fledged blockchain-based federated learning protocol covering client registration, training, aggregation, and reward distribution. Our evaluations show that the proposed solutions make federated learning more reliable. Moreover, the proposed system motivates participants to be honest and perform best-effort training to obtain higher rewards while punishing malicious behavior. Hence, running federated learning in an untrusted environment becomes possible.
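Of the building blocks listed, the commitment hash is the simplest to illustrate: a client publishes a hash binding it to a model update before the update is revealed, so it cannot swap in a different update after seeing other clients' submissions. The sketch below is a generic SHA-256 commit-reveal, a standard construction assumed here for illustration rather than the paper's exact protocol message format:

```python
import hashlib
import os

def commit(update: bytes) -> tuple:
    """Commit phase: publish H(nonce || update) before revealing the
    update. The random nonce hides the update's content (hiding) while
    the hash pins it down (binding)."""
    nonce = os.urandom(16)
    digest = hashlib.sha256(nonce + update).hexdigest()
    return digest, nonce

def reveal_ok(digest: str, nonce: bytes, update: bytes) -> bool:
    """Reveal phase: anyone can check that the opened update matches
    the previously published commitment."""
    return hashlib.sha256(nonce + update).hexdigest() == digest
```

In a blockchain setting the digest would be posted on-chain during the commit round, and the nonce and serialized model weights revealed in a later round for verification.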
The federated learning (FL) approach to machine learning preserves user privacy during data collection. However, traditional FL schemes still rely on a centralized server, making them vulnerable to security risks such as data breaches and model tampering by malicious actors masquerading as trainers. To address these issues, which hamper the trustworthiness of federated learning, we analyzed the requirements for several of these problems. The analysis revealed that issues such as the lack of accountability management, malicious-actor mitigation, and model leakage remained unaddressed in prior work. To fill this gap, we propose MAM-FL, a blockchain-based trustable FL scheme focused on holding trainers accountable. MAM-FL establishes a group of voters responsible for evaluating and verifying the validity of submitted model updates. We tested the effectiveness of MAM-FL by measuring the reduction of malicious actors on both the trainers’ and voters’ sides and its ability to handle colluding participants. Experiments show that MAM-FL succeeds in reducing the number of malicious actors, even in test cases with initial collusion in the system.
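The voter group's acceptance decision can be sketched as a vote-counting rule. The two functions below show a plain majority rule and a reputation-weighted variant; both are illustrative designs assumed for this sketch, not MAM-FL's exact acceptance criterion, and the `quorum` parameter is an invented name.

```python
def majority_accept(votes):
    """Accept a model update iff strictly more than half the voters
    approve it (votes is a list of booleans, one per voter)."""
    return bool(votes) and sum(votes) * 2 > len(votes)

def weighted_accept(votes, reputations, quorum=0.5):
    """Reputation-weighted variant: each ballot counts in proportion to
    the voter's reputation score, so long-standing honest voters carry
    more weight than newcomers. Illustrative, not the paper's exact rule."""
    total = sum(reputations)
    yes = sum(rep for vote, rep in zip(votes, reputations) if vote)
    return total > 0 and yes / total > quorum
```

A weighted rule of this kind is one way to blunt collusion: freshly registered malicious voters start with low reputation, so their coordinated ballots contribute little weight until they have built up a history of honest votes.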