The escalating growth of distributed big data in hybrid cloud storage architectures introduces a new set of challenges: constant content enrichment and the explosion of user data place significant strain on bandwidth and storage capacity. Consequently, many cloud storage providers implement deduplication to compress data, reduce transfer bandwidth, and save cloud storage space. Deduplication is a data compression and storage optimization method that saves storage space and bandwidth by locating and removing redundant data. To address the susceptibility of convergent encryption algorithms to brute-force attacks and their ciphertext computation overhead, we present MTHDedup, a deduplication strategy for hybrid cloud environments based on the Merkle hash tree. During file- and block-level deduplication, Merkle hash trees are constructed with an additional encryption algorithm to generate encryption keys, ensuring that the generated ciphertexts are unpredictable. The method resists both internal and external brute-force attacks, thereby increasing data security. It also reduces the computational cost of ciphertext generation and the key storage space, and its performance advantage grows with the number of privilege sets.
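To make the data structure concrete, the following is a minimal sketch of building a Merkle hash tree over a file's blocks, so that identical content yields an identical root hash (the basis for detecting duplicates, and for deriving key material in schemes like the one described). The function name, SHA-256 choice, and odd-node duplication rule are illustrative assumptions, not the paper's actual MTHDedup construction.

```python
import hashlib

def merkle_root(blocks):
    """Compute the Merkle root over a list of data blocks (bytes)."""
    if not blocks:
        return hashlib.sha256(b"").digest()
    # Leaf level: hash each block.
    level = [hashlib.sha256(b).digest() for b in blocks]
    # Repeatedly hash pairs of nodes until one root remains.
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate last node on odd levels
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block-0", b"block-1", b"block-2"]
root = merkle_root(blocks)
```

Two files with the same block contents produce the same root and can be deduplicated, while any change to a single block changes the root.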
This paper presents a backpropagation neural network algorithm for data compression and storage. The basic idea of traditional data compression is to model the data and then code it: traditional designs focus on coding that reduces the redundancy in the information, and their compression ratios have hovered around a few tens of percent. Once information has been compressed by traditional coding, it is difficult to compress it further by similar methods. To solve this problem, information that occupies less signal space can be used to represent information that occupies more, thereby realizing data compression. This design idea breaks through the traditional limitation of relying only on coding to reduce data redundancy and achieves a higher compression ratio; moreover, information compressed in this way can be compressed repeatedly with good results. This is the basic idea behind the combination of neural networks and data compression introduced in this paper. Based on the theory of multiobjective function optimization, the paper proposes a multiobjective optimization neural network model and studies a multiobjective data compression method based on neural networks. As data characteristics change, the method automatically adjusts the structural parameters of the network (connection weights and bias values) to obtain the greatest compression at the cost of a small information loss. The method offers strong adaptability, parallel processing, distributed knowledge storage, and interference resistance. Experimental results show that, compared with other methods, the proposed method has significant advantages in performance metrics, compression time, and compression quality, with high efficiency and robust output quality.
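The idea of letting a network adjust its connection weights and bias values to learn a compact representation can be sketched as a small autoencoder trained by backpropagation: the hidden layer is the compressed code, and training trades a small reconstruction (information) loss for a large reduction in dimensionality. The network sizes, learning rate, and toy data below are illustrative assumptions, not the paper's actual multiobjective network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples of 8-dimensional signals, compressed to a
# 3-dimensional code by the hidden layer.
X = rng.random((200, 8))
n_in, n_code = 8, 3

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Structural parameters (connection weights and bias values) that
# backpropagation adjusts during training.
W1 = rng.normal(0.0, 0.1, (n_in, n_code)); b1 = np.zeros(n_code)
W2 = rng.normal(0.0, 0.1, (n_code, n_in)); b2 = np.zeros(n_in)

def reconstruct(X):
    # Encode to the 3-dimensional code, then decode back to 8 dimensions.
    return sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)

mse_before = np.mean((reconstruct(X) - X) ** 2)

lr = 0.5
for _ in range(2000):
    H = sigmoid(X @ W1 + b1)              # compressed representation
    Y = sigmoid(H @ W2 + b2)              # reconstruction
    # Backpropagate the mean-squared reconstruction error.
    dY = (Y - X) * Y * (1 - Y) / len(X)
    dH = (dY @ W2.T) * H * (1 - H)
    W2 -= lr * (H.T @ dY); b2 -= lr * dY.sum(axis=0)
    W1 -= lr * (X.T @ dH); b1 -= lr * dH.sum(axis=0)

mse_after = np.mean((reconstruct(X) - X) ** 2)
```

After training, the reconstruction error is lower than with the initial random weights, while each sample is stored as 3 values instead of 8.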