2012
DOI: 10.1002/cpe.2887

ISABELA for effective in situ compression of scientific data

Abstract: Exploding dataset sizes from extreme-scale scientific simulations necessitate efficient data management and reduction schemes to mitigate I/O costs. Given the discrepancy between I/O bandwidth and computational power, scientists are forced to capture data infrequently, thereby making data collection an inherently lossy process. Although data compression can be an effective solution, the random nature of real-valued scientific datasets renders lossless compression routines ineffective. These techniques also imp…

Cited by 75 publications (52 citation statements)
References 24 publications (50 reference statements)
“…NUMARCK (Chen et al., 2014), for example, approximates the differences between snapshots by vector quantization (VQ). ISABELA (Lakshminarasimhan et al., 2013) converts the multidimensional data to a sorted data series and then performs B-spline interpolation. ZFP (Lindstrom, 2014) involves more complicated techniques such as fixed-point integer conversion, block transform, and binary representation analysis with bit-plane encoding.…”
Section: Research Background and Design Motivation (mentioning; confidence: 99%)
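The sort-then-interpolate idea quoted above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the window length, smoothing parameter, and function names are assumptions, and the real compressor additionally encodes the sorting permutation compactly and enforces a per-point error bound.

    # Minimal sketch of an ISABELA-style transform (illustrative only):
    # sorting a window makes the series monotone and smooth, so a cubic
    # B-spline approximates it with few coefficients; the permutation is
    # kept so the original ordering can be restored on decompression.
    import numpy as np
    from scipy.interpolate import splrep, splev

    def sort_and_fit(window):
        order = np.argsort(window)              # permutation to invert later
        sorted_vals = window[order]
        x = np.arange(len(window), dtype=float)
        # s is an assumed smoothing factor; the real compressor sizes the
        # spline to meet a user-specified error bound instead.
        tck = splrep(x, sorted_vals, k=3, s=float(len(window)))
        return tck, order

    def restore(tck, order):
        x = np.arange(len(order), dtype=float)
        approx_sorted = splev(x, tck)
        out = np.empty_like(approx_sorted)
        out[order] = approx_sorted              # undo the sort
        return out

    rng = np.random.default_rng(0)
    w = rng.standard_normal(1024)
    tck, order = sort_and_fit(w)
    w_hat = restore(tck, order)
    print("max abs error:", np.max(np.abs(w - w_hat)))

Sorting is what makes the scheme effective on "random" scientific data: the sorted series is monotone regardless of how noisy the original window is, so few spline coefficients suffice.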
“…They noted that the characteristics of the compression error must be carefully considered in the context of the underlying physics being modeled. Lakshminarasimhan et al. (2013) proposed the ISABELA lossy compressor. It performs data compression by B-spline interpolation after sorting the data series.…”
Section: Autocorrelation of Compression Errors (mentioning; confidence: 99%)
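A minimal check of the property this excerpt emphasizes is the lag-1 autocorrelation of the residual between original and reconstructed data; the function name and lag choice below are illustrative. A value near zero suggests white, noise-like error, while a large value signals structured error that could bias analyses of the underlying physics.

    import numpy as np

    def lag1_autocorrelation(error):
        # Estimate of sum(e_t * e_{t+1}) / sum(e_t^2) on the centered residual.
        e = error - error.mean()
        return float(np.dot(e[:-1], e[1:]) / np.dot(e, e))

    # E.g., with the sketch above: lag1_autocorrelation(w - w_hat)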
“…In contrast to file-system-level compression, however, the compression has to be performed by the applications themselves; even though the actual process is usually encapsulated in an I/O library such as HDF5, the compression overhead involved can negatively influence application performance. Application-specific knowledge can also be used by lossy compression schemes, such as the floating-point compressors proposed by Lindstrom and Isenburg [20], ISABELA [19], or GRIB and APAX, to achieve larger gains for climate data [15]; this, however, forces the scientist to define boundaries for the tolerated loss in precision [15]. Recently, Baker et al. presented an approach to determine the required precision for climate data; they achieve a compression ratio of 5 without noticeable impact on the scientific result [4].…”
Section: Compression (mentioning; confidence: 99%)
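To illustrate compression encapsulated in an I/O library, as the excerpt describes, the h5py sketch below writes one dataset through HDF5's lossless gzip filter and one through its lossy scale-offset filter. The file name, dataset names, and parameter values are assumptions; the scale-offset precision (here 3 decimal digits) is exactly the kind of loss boundary the scientist must choose.

    import h5py
    import numpy as np

    data = np.random.default_rng(1).standard_normal((1024, 1024)).astype(np.float32)
    with h5py.File("snapshot.h5", "w") as f:
        # Lossless: gzip applied inside HDF5's filter pipeline, transparent
        # to readers of the file.
        f.create_dataset("u_lossless", data=data,
                         compression="gzip", compression_opts=4)
        # Lossy: the scale-offset filter retains ~3 digits after the decimal
        # point; the precision bound is a user decision, not a library default.
        f.create_dataset("u_lossy", data=data, scaleoffset=3)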
“…Their new contribution is to give preference to jobs with higher priority, as defined by a job selection function. They explore two such functions and validate their benefits with simulation experiments. Topic 5 on Parallel and Distributed Data Management is represented by the paper ISABELA for effective in situ compression of scientific data, authored by Sriram Lakshminarasimhan, Neil Shah, Stephane Ethier, Seung-Hoe Ku, C. S. Chang, Scott Klasky, Robert Latham, Robert Ross, and Nagiza F. Samatova [2]. The authors address the fundamental problem of compressing the terabytes of numerical data produced by modern large-scale scientific simulations on high-performance computing (HPC) systems.…”
(mentioning; confidence: 99%)
“…Topic 16 on GPU and Accelerators Computing is represented by the paper Iterative sparse matrix-vector multiplication for accelerating the block Wiedemann algorithm over GF(2) on multi-GPU systems, authored by Bertil Schmidt, Hans Aribowo, and Hoang Vu Dang [6]. Integer factorization constitutes the core of RSA cryptographic methods.…”
(mentioning; confidence: 99%)
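For concreteness, arithmetic over GF(2) reduces addition to XOR, so a sparse matrix-vector product accumulates the parity of the selected vector entries. The sketch below is a plain CPU illustration with an assumed storage scheme (one array of column indices per row); it is unrelated to the cited multi-GPU implementation.

    import numpy as np

    def spmv_gf2(rows, x_bits):
        # rows: one array of column indices per matrix row (the nonzeros);
        # x_bits: 0/1 vector. Over GF(2), each dot product is the parity
        # (XOR) of the selected entries of x.
        y = np.zeros(len(rows), dtype=np.uint8)
        for i, cols in enumerate(rows):
            y[i] = np.bitwise_xor.reduce(x_bits[cols]) if cols.size else 0
        return y

    rows = [np.array([0, 2]), np.array([1]), np.array([], dtype=int)]
    x = np.array([1, 0, 1], dtype=np.uint8)
    print(spmv_gf2(rows, x))  # [0 0 0]: 1 XOR 1 = 0, x[1] = 0, empty row = 0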