Proceedings of the 6th International Systems and Storage Conference (SYSTOR '13), 2013
DOI: 10.1145/2485732.2485748

Block locality caching for data deduplication

Abstract: Data deduplication systems discover and remove redundancies between data blocks by splitting the data stream into chunks and comparing a hash of each chunk with all previously stored hashes. Storing the corresponding chunk index on hard disks immediately limits the achievable throughput, as these devices are unable to support the high number of random IOs induced by this index. Several approaches to overcome this chunk lookup disk bottleneck have been proposed. Often, the approaches try to capture the locality…
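
To make the lookup path concrete, here is a minimal sketch (not the paper's implementation): it splits a stream into fixed-size chunks, fingerprints each chunk, and consults a chunk index. It assumes fixed-size 8 KiB chunks and an in-memory dict for the index; in real systems the index is far too large for RAM and resides on disk, which is exactly what causes the chunk lookup disk bottleneck the abstract describes.

    import hashlib

    CHUNK_SIZE = 8 * 1024  # assumption: fixed-size chunks; production systems often chunk by content

    def deduplicate(stream, chunk_index):
        """Store only chunks whose fingerprint is not yet in `chunk_index`.

        `chunk_index` maps fingerprints to storage locations; in a real system
        every miss on a disk-resident index can cost a random I/O.
        """
        stored = duplicate = 0
        while True:
            chunk = stream.read(CHUNK_SIZE)
            if not chunk:
                break
            fp = hashlib.sha1(chunk).hexdigest()    # chunk fingerprint
            if fp in chunk_index:
                duplicate += 1                      # redundancy found: keep a reference only
            else:
                chunk_index[fp] = len(chunk_index)  # placeholder storage location
                stored += 1
        return stored, duplicate

For example, deduplicate(io.BytesIO(data), {}) returns how many chunks were stored versus how many were recognized as duplicates.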

Cited by 29 publications (16 citation statements)
References 26 publications
“…Fatema Rashid et al. [2] proposed a framework in which a file is first divided into smaller units and each block is then encrypted using an effective encryption technique; however, their framework was not tested in a real cloud environment. Avani Wildani et al. [3] demonstrated the effectiveness of their approach, which employs a straightforward neighborhood grouping that needs only a timestamp and block number, making it easier to use with various kinds of storage systems without modifying host file systems. Dirk Meister et al. [4] proposed a method in which information from earlier backups is used to predict the next backup, which increases lookup performance.…”
Section: Cost Reduction
confidence: 99%
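
The cited framework's encryption scheme is not specified here; one common way to encrypt blocks without destroying deduplication is convergent encryption, sketched below as an assumption (the key is derived from the block's own hash, so identical plaintext blocks produce identical ciphertext and can still be deduplicated). The sketch uses the third-party cryptography package.

    import hashlib
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def encrypt_block(block: bytes) -> bytes:
        """Convergent encryption: key = H(block), so equal blocks encrypt equally.

        Each distinct plaintext gets a distinct key, so the fixed nonce never
        repeats under the same key, which keeps AES-GCM safe to use here.
        """
        key = hashlib.sha256(block).digest()           # 32-byte content-derived key
        return AESGCM(key).encrypt(b"\x00" * 12, block, None)

    def split_and_encrypt(data: bytes, block_size: int = 4096):
        """Divide `data` into fixed-size blocks and encrypt each one."""
        return [encrypt_block(data[i:i + block_size])
                for i in range(0, len(data), block_size)]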
“…Block Locality Caching (BLC) [24] captures the locality of the latest backup run and always uses the most recent locality information to achieve better performance in data deduplication systems. The File Access corRelation Mining and Evaluation Reference model (FARMER) [42] optimizes large-scale file systems by correlating access patterns and semantic attributes.…”
Section: F. Broader Impact
confidence: 99%
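
As a rough illustration of the idea, the sketch below assumes (consistent with BLC's premise) that chunks of the current backup tend to recur in the same order as in the previous one: on an index hit, the fingerprints that followed that chunk in the prior backup are prefetched, so subsequent lookups are served from memory instead of disk. The class and field names are hypothetical, not the data structures of the BLC paper.

    class BlockLocalityCache:
        """Illustrative locality cache keyed on the previous backup's chunk order."""

        def __init__(self, prev_backup_order, window=64):
            self.prev = prev_backup_order     # fingerprints of the last backup, in order
            self.pos = {fp: i for i, fp in enumerate(prev_backup_order)}
            self.window = window
            self.cache = set()

        def lookup(self, fp):
            if fp in self.cache:
                return True                   # served from RAM, no disk I/O
            if fp in self.pos:                # stands in for the on-disk index lookup
                i = self.pos[fp]
                self.cache.update(self.prev[i:i + self.window])  # exploit block locality
                return True
            return False                      # genuinely new chunk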
“…Currently, the eWave cache implements a simple LRU eviction policy. However, other policies could be introduced to improve superchunk caching effectiveness [18,19].…”
Section: Energy-aware Data Management
confidence: 99%
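
For reference, here is a minimal LRU eviction policy of the kind the statement mentions, applied to superchunk entries (a sketch under that assumption, not eWave's code), built on Python's OrderedDict:

    from collections import OrderedDict

    class LRUCache:
        """Evicts the least-recently-used entry once capacity is exceeded."""

        def __init__(self, capacity: int):
            self.capacity = capacity
            self.entries = OrderedDict()

        def get(self, key):
            if key not in self.entries:
                return None
            self.entries.move_to_end(key)         # mark as most recently used
            return self.entries[key]

        def put(self, key, value):
            if key in self.entries:
                self.entries.move_to_end(key)
            self.entries[key] = value
            if len(self.entries) > self.capacity:
                self.entries.popitem(last=False)  # drop least recently used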