Over the years, the flourishing of crowd computing has enabled enterprises to accomplish large-scale, high-quality computing tasks through crowdsourcing, so how to implement crowd computing efficiently and securely has become a research hotspot. Some recent work innovatively adopts a peer-to-peer (P2P) network as the communication environment for crowdsourcing. Thanks to decentralized control, issues such as single points of failure and DDoS attacks can be mitigated to some extent, but the enormous computing capacity and storage costs this scheme requires are often prohibitive. Federated learning is a distributed machine learning paradigm in which data are stored locally and clients train a shared model by exchanging gradient values. In our work, we combine blockchain with federated learning and propose a crowdsourcing framework named CrowdSFL, with which users can implement crowdsourcing with less overhead and higher security. In addition, to protect the privacy of participants, we design a new re-encryption algorithm based on ElGamal to ensure that interactive values and other information are not exposed to participants outside the workflow. Finally, experiments show that our framework outperforms similar work in accuracy, efficiency, and overhead.
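The abstract does not specify the re-encryption algorithm, but the general idea of ElGamal-based proxy re-encryption can be sketched as follows. This is a minimal BBS98-style demo, not the authors' actual construction: the group parameters are deliberately tiny, and the re-key `rk = b/a mod q` is one standard choice among several.

```python
import random

# Toy ElGamal-style proxy re-encryption over the order-q subgroup of Z_p*,
# with p = 2q + 1 a safe prime. Demo parameters only; real use needs
# a group of >= 2048-bit modulus size.
p, q, g = 2039, 1019, 4   # g = 2^2 is a quadratic residue of order q

def keygen():
    x = random.randrange(1, q)
    return x, pow(g, x, p)                      # (secret x, public g^x)

def encrypt(pk, m):
    r = random.randrange(1, q)
    return (m * pow(g, r, p) % p, pow(pk, r, p))  # (m * g^r, pk^r)

def rekey(sk_a, sk_b):
    return sk_b * pow(sk_a, -1, q) % q          # rk = b / a mod q

def reencrypt(rk, ct):
    c1, c2 = ct
    return (c1, pow(c2, rk, p))                 # g^{ar} -> g^{br}

def decrypt(sk, ct):
    c1, c2 = ct
    s = pow(c2, pow(sk, -1, q), p)              # recover g^r
    return c1 * pow(s, -1, p) % p

a_sk, a_pk = keygen()                           # delegator
b_sk, b_pk = keygen()                           # delegatee
m = pow(g, 42, p)                               # message must lie in the subgroup
ct_b = reencrypt(rekey(a_sk, b_sk), encrypt(a_pk, m))
assert decrypt(b_sk, ct_b) == m
```

The proxy holding `rk` can translate ciphertexts from one key to another without ever seeing the plaintext, which matches the workflow-isolation goal the abstract describes.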
Ciphertext-policy attribute-based encryption (CP-ABE) is a promising approach to fine-grained access control over outsourced data in the Internet of Things (IoT). However, in existing CP-ABE schemes the access policy is either appended to the ciphertext explicitly or only partially hidden from public visibility, which leaks information about the underlying ciphertext and its potential recipients. In this paper, we propose a fine-grained data access control scheme for cloud-based IoT that supports expressive access policies with fully hidden attributes. Specifically, the attribute information in the access policy is fully hidden using a randomization technique, and a fuzzy attribute positioning mechanism based on a garbled Bloom filter is developed to help authorized recipients locate their attributes efficiently and decrypt the ciphertext successfully. Security analysis and performance evaluation demonstrate that the proposed scheme achieves effective policy privacy preservation with low storage and computation overhead. As a result, no valuable attribute information in the access policy is disclosed to unauthorized recipients.
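The abstract does not detail the positioning mechanism, but a garbled Bloom filter (GBF) in its textbook form stores each element as k XOR shares placed at its hash positions, so only a holder of a matching element can recombine the hidden value. A minimal sketch follows; the slot count, hash count, share length, and attribute strings are arbitrary demo choices:

```python
import hashlib
import secrets

M, K, L = 64, 3, 16   # slots, hash functions, share length in bytes (demo choices)

def positions(item):
    # K indices derived from salted hashes (duplicates collapse into a set)
    return {int.from_bytes(hashlib.sha256(f"{i}|{item}".encode()).digest(), "big") % M
            for i in range(K)}

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

class GarbledBloomFilter:
    def __init__(self):
        self.slots = [None] * M

    def insert(self, item, secret):
        pos = positions(item)
        free = [p for p in pos if self.slots[p] is None]
        if not free:
            raise ValueError("no free slot; enlarge M")
        last = free[-1]
        acc = secret
        for p in pos:
            if p == last:
                continue
            if self.slots[p] is None:
                self.slots[p] = secrets.token_bytes(L)
            acc = xor(acc, self.slots[p])
        self.slots[last] = acc   # XOR of all of item's slots now equals `secret`

    def finalize(self):
        # fill unused slots with noise so non-members learn nothing
        self.slots = [s if s is not None else secrets.token_bytes(L)
                      for s in self.slots]

    def query(self, item):
        acc = bytes(L)
        for p in positions(item):
            acc = xor(acc, self.slots[p])
        return acc

gbf = GarbledBloomFilter()
s = secrets.token_bytes(L)
gbf.insert("attr:cardiologist", s)
gbf.finalize()
assert gbf.query("attr:cardiologist") == s   # holder recovers the hidden value
assert gbf.query("attr:plumber") != s        # anyone else gets noise
```

A recipient can thus probe the filter with its own attributes and recover the per-attribute secrets needed for decryption, without the policy revealing which attributes it contains.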
The flourishing of deep learning on distributed training datasets raises concerns about data privacy. Recent work on privacy-preserving distributed deep learning rests on the assumption that the server and the learning participants do not collude; once they do, the server can decrypt and obtain the data of all learning participants. Moreover, since all learning participants share the same private key, each participant must connect to the server via a distinct TLS/SSL secure channel to avoid leaking data to other participants. To fix these problems, we propose a privacy-preserving distributed deep learning scheme with the following improvements: (1) no information is leaked to the server even if any learning participant colludes with it; (2) learning participants do not need distinct secure channels to communicate with the server; and (3) the deep learning model achieves higher accuracy. We achieve these by introducing a key transform server and applying homomorphic re-encryption to asynchronous stochastic gradient descent for deep learning. We show that our scheme adds a tolerable communication cost to the deep learning system while achieving stronger security properties, and the computational cost for learning participants remains comparable. Overall, our scheme is a more secure and more accurate deep learning scheme for distributed learning participants.
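The homomorphic re-encryption scheme itself is not given in the abstract, but the additive homomorphism that lets a server aggregate encrypted gradient updates can be illustrated with a toy Paillier cryptosystem. The parameters are deliberately tiny, and this single-key demo stands in for the paper's multi-key re-encryption pipeline:

```python
import math
import random

# Toy Paillier cryptosystem; real deployments use >= 2048-bit moduli.
p, q = 293, 433                    # demo primes (assumed)
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(g, m % n, n2) * pow(r, n, n2) % n2

def decrypt(c):
    m = (pow(c, lam, n2) - 1) // n * mu % n
    return m - n if m > n // 2 else m   # symmetric decode for negative gradients

# The server multiplies ciphertexts, which adds the underlying plaintexts,
# so it aggregates gradient updates without ever seeing them.
grads = [13, -7, 42]               # fixed-point-scaled gradient values (demo)
agg = 1
for gr in grads:
    agg = agg * encrypt(gr) % n2
print(decrypt(agg))                # prints 48
```

In an asynchronous SGD setting, each participant would upload such ciphertexts, and only the aggregate ever gets decrypted, which is what keeps individual gradients hidden from the server.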
With the advent of the big data era, clients lacking computational and storage resources tend to outsource data mining tasks to cloud computing providers to improve efficiency and save costs. In general, different clients choose different cloud companies for reasons of security, business cooperation, location, and so on. However, owing to the rise of privacy leakage issues, the data contributed by clients should be encrypted under their own keys. This paper focuses on privacy-preserving k-nearest neighbor (kNN) computation over databases distributed among multiple cloud environments. Unfortunately, existing secure outsourcing protocols are either restricted to a single-key setting or quite inefficient because of frequent client-to-server interactions, making them impractical for wide application. To address these issues, we propose a set of secure building blocks and an outsourced collaborative kNN protocol. Theoretical analysis shows that our scheme not only preserves the privacy of the distributed databases and kNN queries but also hides access patterns in the semi-honest model. Experimental evaluation demonstrates significant efficiency improvements over existing methods.

INDEX TERMS: Big data, privacy-preserving data mining, k-nearest neighbor, multiple keys, multiple clouds.
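The abstract's "secure building blocks" are unspecified; a standard building block in such two-cloud protocols is secure multiplication over additively secret-shared data via Beaver triples, from which shared squared distances for kNN can be assembled. A hypothetical sketch, with a trusted triple dealer and a demo field modulus as assumptions:

```python
import random

P = 2**61 - 1   # prime field modulus (demo choice)

def share(x):
    # split x into two additive shares, one per cloud
    r = random.randrange(P)
    return (r, (x - r) % P)

def reconstruct(sh):
    return (sh[0] + sh[1]) % P

def beaver_triple():
    # dealer-generated shares of (a, b, a*b); assumed precomputed offline
    a, b = random.randrange(P), random.randrange(P)
    return share(a), share(b), share(a * b % P)

def secure_mul(x_sh, y_sh):
    (a0, a1), (b0, b1), (c0, c1) = beaver_triple()
    # both clouds open only the masked values d = x - a and e = y - b
    d = (x_sh[0] - a0 + x_sh[1] - a1) % P
    e = (y_sh[0] - b0 + y_sh[1] - b1) % P
    z0 = (c0 + d * b0 + e * a0 + d * e) % P   # cloud 0's share of x*y
    z1 = (c1 + d * b1 + e * a1) % P           # cloud 1's share of x*y
    return (z0, z1)

def shared_sq_dist(p_sh, q_sh):
    # squared Euclidean distance between two secret-shared points
    total = (0, 0)
    for x_sh, y_sh in zip(p_sh, q_sh):
        diff = ((x_sh[0] - y_sh[0]) % P, (x_sh[1] - y_sh[1]) % P)
        sq = secure_mul(diff, diff)
        total = ((total[0] + sq[0]) % P, (total[1] + sq[1]) % P)
    return total

p_sh = [share(3), share(4)]       # a shared database point
q_sh = [share(0), share(0)]       # a shared query point
assert reconstruct(shared_sq_dist(p_sh, q_sh)) == 25
```

Neither cloud alone learns the points or the distance; ranking the shared distances obliviously is what the rest of such a kNN protocol would add on top.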
Cloud computing has been envisioned as the next-generation architecture of the IT enterprise, but many security problems remain. A significant problem in the context of cloud storage is whether introducing third parties creates potential vulnerabilities in the cloud storage system. Public verification enables a third-party auditor (TPA), on behalf of users who lack the resources and expertise, to verify the integrity of the stored data. Many existing auditing schemes assume the TPA is reliable and independent. This work studies what happens if certain TPAs are semi-trusted or even potentially malicious, and the authors consider the task of allowing such a TPA to take part in the audit scheme. They propose a feedback-based audit scheme through which users are relieved from interacting with the cloud service provider (CSP) and can check the integrity of stored data by themselves rather than relying on the TPA. Specifically, the TPA generates feedback by processing the proof from the CSP and returns it to the user; this feedback is unforgeable by the TPA and can be checked exclusively by the user. Detailed security and performance analysis shows the authors' scheme to be more secure and lightweight.
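The abstract does not give the construction, but its core idea (a proof the TPA can process and forward yet cannot forge, because final verification requires the user's secret key) can be illustrated with a simple MAC-based integrity check. This is only a conceptual sketch: a real scheme of this kind would use homomorphic authenticators so the whole blocks need not be retrieved during an audit.

```python
import hashlib
import hmac
import random
import secrets

user_key = secrets.token_bytes(32)   # user's key; never given to the TPA or CSP

def tag(index, block):
    # per-block authenticator binding the block to its position
    msg = index.to_bytes(4, "big") + block
    return hmac.new(user_key, msg, hashlib.sha256).digest()

# Setup: the user tags each block, then outsources blocks and tags to the CSP.
blocks = [f"block-{i}".encode() for i in range(8)]
csp_store = [(b, tag(i, b)) for i, b in enumerate(blocks)]

# Audit: a random block is challenged; the CSP's proof passes through the
# TPA, which can relay (or tamper with) it but cannot forge a valid tag
# without user_key; only the user performs the final check.
i = random.randrange(len(blocks))
proof_block, proof_tag = csp_store[i]
assert hmac.compare_digest(proof_tag, tag(i, proof_block))   # user-side check
```

The user-exclusive verification shown here is what lets the scheme tolerate a semi-trusted or malicious TPA in the audit loop.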