2020
DOI: 10.1089/big.2019.0120
SecDedoop: Secure Deduplication with Access Control of Big Data in the HDFS/Hadoop Environment

Cited by 11 publications (6 citation statements) | References 27 publications
“…HDFS realizes the distributed storage of data, which solves a large number of problems found in many distributed file systems. These include the amount of data that can be stored (terabyte or petabyte level), the reliability of storage, and better integration with Hadoop's MapReduce framework [13].…”
Section: Hadoop Big Data Processing
confidence: 99%
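The excerpt above describes HDFS storing files beyond any single node's capacity by splitting them into fixed-size blocks and replicating each block across DataNodes. A minimal illustrative sketch of that idea in plain Python follows; the block size and replication factor mirror common HDFS defaults (128 MB, 3 replicas), and the function names are hypothetical, not Hadoop APIs.

```python
# Illustrative model of HDFS-style block splitting and replication.
# NOT Hadoop code: function names and the round-robin placement policy
# are simplifying assumptions for this sketch.

BLOCK_SIZE = 128 * 1024 * 1024  # 128 MB, a common HDFS default
REPLICATION = 3                 # default HDFS replication factor

def split_into_blocks(file_size: int, block_size: int = BLOCK_SIZE) -> list:
    """Return the sizes of the blocks a file of file_size bytes is split into."""
    full, rem = divmod(file_size, block_size)
    return [block_size] * full + ([rem] if rem else [])

def place_replicas(blocks: list, nodes: list, replication: int = REPLICATION) -> dict:
    """Assign each block to `replication` distinct DataNodes, round-robin."""
    placement = {}
    for i in range(len(blocks)):
        placement[i] = [nodes[(i + r) % len(nodes)] for r in range(replication)]
    return placement

# A 300 MB file becomes three blocks: 128 MB + 128 MB + 44 MB,
# each stored on three of the four DataNodes.
blocks = split_into_blocks(300 * 1024 * 1024)
placement = place_replicas(blocks, ["dn1", "dn2", "dn3", "dn4"])
```

Because each block lives on multiple nodes, the loss of one DataNode leaves every block recoverable, which is the reliability property the excerpt refers to.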
“…Many researchers have also begun to use this framework to optimize and improve traditional machine learning algorithms. Some studies have examined clustering algorithms on the Hadoop cloud platform [ 10 ]. Others have used the MapReduce distributed framework to parallelize the traditional ant colony algorithm, making it faster and more effective on large-scale data sets [ 11 , 12 ].…”
Section: Related Work
confidence: 99%
“…It is a measure of a test's accuracy, defined as the harmonic mean that balances precision and recall. The F1-Score is calculated using Equation (8).…”
Section: F1-measure
confidence: 99%
“…MapReduce is a component of the Hadoop ecosystem that can process a large amount of information in parallel across a cluster of commodity hardware in a reliable manner [8]. Data processing in big data differs considerably from conventional data processing: the data are processed using a divide-and-conquer approach to solve the assigned task.…”
Section: Introduction
confidence: 99%
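The divide-and-conquer pattern described above can be illustrated with a minimal single-process word-count sketch: the input is divided into splits, a map function processes each split independently (in real Hadoop, on different nodes), and a reduce function merges the intermediate results. The helper names here are illustrative, not Hadoop APIs.

```python
from collections import defaultdict
from itertools import chain

def map_phase(chunk):
    # Map: emit a (word, 1) pair for every word in one input split.
    return [(word, 1) for word in chunk.split()]

def reduce_phase(pairs):
    # Reduce: sum the counts for each key, as a MapReduce reducer would.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

chunks = ["big data big", "data hadoop"]           # input splits
mapped = chain.from_iterable(map_phase(c) for c in chunks)
result = reduce_phase(mapped)
# result: {'big': 2, 'data': 2, 'hadoop': 1}
```

In Hadoop proper, the splits are distributed across the cluster and a shuffle phase routes all pairs with the same key to the same reducer; this sketch collapses that into one in-memory pass.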