2012 IEEE 20th International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems
DOI: 10.1109/mascots.2012.32

Assuring Demanded Read Performance of Data Deduplication Storage with Backup Datasets

Abstract: Data deduplication has been widely adopted in contemporary backup storage systems. It not only saves storage space considerably, but also shortens the data backup time significantly. Since the major goal of the original data deduplication lies in saving storage space, its design has been focused primarily on improving write performance by removing as much duplicate data as possible from incoming data streams. Although fast recovery from a system crash relies mainly on read performance provided by deduplication…


Cited by 48 publications (19 citation statements) · References 7 publications
“…They enumerate the spatial area using a selective-duplication threshold value. Their experiments with actual backup datasets show that the proposed scheme achieves the requested read performance in most cases at a realistic cost in write performance [8].…”
Section: International Journal of Computer Applications (0975-8887)
confidence: 99%
“…J. Nam, D. Park, and D. H. Du [2] proposed a novel indicator for a dedupe scheme. Their scheme takes a two-fold approach: first, a novel indicator called the cache-aware Chunk Fragmentation Level (CFL) monitor, and second, selective duplication to improve read performance.…”
Section: Related Work
confidence: 99%
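The cache-aware CFL described above can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the paper's actual monitor: it assumes CFL is the ratio of the optimal number of container reads to the actual number of container reads needed to restore a stream, with a small LRU cache deciding which reads actually hit disk. The function name and parameters are illustrative.

```python
import math
from collections import OrderedDict

def cache_aware_cfl(chunk_containers, chunks_per_container, cache_size):
    """Hypothetical cache-aware Chunk Fragmentation Level (CFL) sketch.

    chunk_containers: container id holding each chunk of the stream,
    listed in restore (read) order.
    CFL = optimal container reads / actual container reads, where a read
    is avoided only when the container is still in a small LRU cache.
    CFL <= 1.0; lower values mean a more fragmented, slower-to-restore stream.
    """
    optimal_reads = math.ceil(len(chunk_containers) / chunks_per_container)
    cache = OrderedDict()           # LRU cache of recently fetched containers
    actual_reads = 0
    for cid in chunk_containers:
        if cid in cache:
            cache.move_to_end(cid)  # cache hit: no disk read needed
        else:
            actual_reads += 1       # miss: fetch the whole container
            cache[cid] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict least recently used
    return optimal_reads / actual_reads
```

A perfectly sequential stream (all chunks in one container) yields CFL = 1.0, while a stream whose chunks ping-pong between containers that no longer fit in cache yields a much lower value, signaling degraded read performance.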
“…The proposed scheme assures the demanded read performance of each data stream while keeping its write performance at a practical level, and also guarantees a target system recovery time. A major drawback of selective duplication is that it requires extra memory space, called the in-memory temp container [2].…”
Section: Comparison
confidence: 99%
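The selective-duplication trade-off discussed above can be sketched as a simple write path. This is an illustrative assumption about the mechanism, not the paper's implementation: when the monitored CFL falls below the demanded threshold, duplicate chunks are rewritten sequentially (sacrificing some deduplication ratio) instead of being referenced in their old, scattered containers. All names and the threshold value are hypothetical.

```python
def selective_write(chunks, store, current_cfl, cfl_threshold=0.6):
    """Hypothetical selective-duplication write path.

    chunks: (fingerprint, data) pairs of an incoming stream.
    store: fingerprint -> data mapping of already-stored chunks.
    When current_cfl >= cfl_threshold, duplicates are deduped as usual;
    below the threshold, duplicates are rewritten to keep the stream's
    on-disk layout sequential at the cost of write performance.
    """
    actions = []
    for fp, data in chunks:
        if fp in store and current_cfl >= cfl_threshold:
            actions.append(("ref", fp))    # normal dedup: reference old copy
        else:
            store[fp] = data               # rewrite (or new chunk): sequential layout
            actions.append(("write", fp))
    return actions
```

The extra memory cost noted in the citation arises because rewritten duplicates must be staged somewhere (the "in-memory temp container") before being flushed as a contiguous unit.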
“…It also provides space-efficient VM image storage, since VM images have high content similarity [5]. However, deduplication has the drawback of introducing fragmentation [6,8,10,14,15], since some blocks of a file may now refer to identical blocks of a different file. To illustrate, Figure 1(a) shows three snapshots of a VM, denoted by VM 1 , VM 2 , and VM 3 , which are to be written to a disk that initially has no data.…”
Section: Introduction
confidence: 99%
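The snapshot example above can be reproduced with a toy placement model (a sketch standing in for Figure 1(a), which is not included here; the snapshot contents are invented). Blocks of each snapshot are appended to the next free disk offsets unless identical data was already stored, in which case the snapshot merely references the earlier copy, so later snapshots end up with non-contiguous layouts.

```python
def place_blocks(snapshots):
    """Toy dedup placement: returns, for each snapshot, the list of disk
    offsets its blocks occupy. Duplicate blocks reference their first copy,
    which is what fragments the layout of later snapshots."""
    disk = []      # block data at each offset, in append order
    seen = {}      # block data -> first disk offset
    layouts = []
    for snap in snapshots:
        layout = []
        for block in snap:
            if block not in seen:
                seen[block] = len(disk)  # new data: append at next offset
                disk.append(block)
            layout.append(seen[block])   # duplicate: point at first copy
        layouts.append(layout)
    return layouts
```

For example, with snapshots ["A","B","C"], ["A","D","C"], ["A","D","E"], the third snapshot's blocks land at offsets [0, 3, 4]: restoring it requires seeks back to data written for earlier snapshots, which is exactly the read-performance penalty the cited works address.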
“…On the other hand, we believe that achieving high read throughput is necessary in any backup system. For instance, a fast restore operation can minimize the system downtime during disaster recovery [10,16].…”
Section: Introduction
confidence: 99%