2009 Data Compression Conference
DOI: 10.1109/dcc.2009.65

An Adaptive Sub-sampling Method for In-memory Compression of Scientific Data

Abstract: A current challenge in scientific computing is how to curb the growth of simulation datasets without losing valuable information. While wavelet-based methods are popular, they require that data be decompressed before it can be analyzed, for example, when identifying time-dependent structures in turbulent flows. We present Adaptive Coarsening, an adaptive sub-sampling compression strategy that enables the compressed data product to be directly manipulated in memory without requiring costly decompression. We demonst…
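The abstract's central claim is that the coarsened product can be used in place, without a global decompress. Below is a minimal sketch of that idea, assuming a 1-D NumPy array, a fixed block length, factor-of-two sub-sampling, and a max-error acceptance test with linear-interpolation reconstruction; all of these choices are illustrative assumptions, not the paper's exact algorithm.

import numpy as np

BLOCK = 16    # assumed fixed block length for this sketch
FACTOR = 2    # assumed sub-sampling factor

def compress(data, tol):
    # Keep a block sub-sampled only if interpolating it back to full
    # resolution stays within the error tolerance; otherwise store the
    # block at full resolution.
    out = []
    for start in range(0, len(data), BLOCK):
        block = np.asarray(data[start:start + BLOCK], dtype=float)
        coarse = block[::FACTOR]
        xp = np.arange(len(block))[::FACTOR]
        recon = np.interp(np.arange(len(block)), xp, coarse)
        if np.max(np.abs(recon - block)) <= tol:
            out.append(("coarse", coarse))
        else:
            out.append(("full", block))
    return out

def value_at(compressed, i):
    # Read a single sample directly from the compressed blocks,
    # interpolating inside coarsened blocks; no global decompression step.
    kind, payload = compressed[i // BLOCK]
    offset = i % BLOCK
    if kind == "full":
        return payload[offset]
    return np.interp(offset, np.arange(len(payload)) * FACTOR, payload)

Because value_at reconstructs a coarse block with the same interpolation that the acceptance test used, point queries on accepted blocks stay within the tolerance, while blocks stored at full resolution are returned exactly.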

Cited by 11 publications (7 citation statements), with citing publications spanning 2012 to 2023. References 14 publications (17 reference statements).

Citation statements (ordered by relevance):
“…The boundary saving scheme is a solution (Berkhout 1988; Clapp 2008), but at the cost of an additional wavefield extrapolation. Also, techniques based on wavefield compression either temporally or spatially or both are viable alternate solutions maintaining a balance between computational overhead and time (Unat et al. 2009; Dalmau et al. 2014; Boehm et al. 2016). Although promising, these kind of techniques are also not a panacea in complex models, involving trade-offs in the degree of compression and the amount of distortion while decompressing (Mittal & Vetter 2016).…”
Section: Introduction
Confidence: 99%
“…In general, there is no need for the compression error to be much smaller than the discretization or truncation errors of the computation. Lossy compression schemes that have been proposed for scientific floating-point data in different contexts include ISABELA (In-situ Sort-And-B-spline Error-bounded Lossy Abatement) [13], SQE [14], zfp [15][16][17], SZ 1.1 [18] and 1.4 [19,20], multilevel transform coding on unstructured grids (TCUG) [21,22], adaptive thinning (AT) [23,24] and adaptive coarsening (AC) [25,26], TuckerMPI [27,28], TTHRESH [29], MGARD [30], HexaShrink [31], and hybrids of different methods [32].…”
Section: High Entropy Data Necessitates Lossy Compression
Confidence: 99%
“…AC is an extension of the adaptive sub-sampling technique first introduced for transmitting HDTV signals [2], which is based on down-sampling a mesh in areas which can be reconstructed within some error tolerance and storing at full resolution the others. In [12], the authors use AC to compress data on structured grids and compare the results to wavelet methods. Even though AC can potentially be extended for unstructured grids [11], current implementations are limited to structured grids.…”
Section: Related Work
Confidence: 99%
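The quoted description above (sub-sample where the field can be reconstructed within a tolerance, keep full resolution elsewhere) maps onto a simple per-tile decision rule on a structured grid. The following 2-D sketch uses a nearest-neighbour (piecewise-constant) reconstruction and a max-norm test chosen purely for illustration; the paper's own error criterion and reconstruction may differ.

import numpy as np

def coarsen_tile(tile, tol, factor=2):
    # Sub-sample a structured-grid tile, then check whether a cheap
    # piecewise-constant reconstruction stays within the tolerance;
    # if it does not, the tile is kept at full resolution.
    coarse = tile[::factor, ::factor]
    recon = np.kron(coarse, np.ones((factor, factor)))
    recon = recon[:tile.shape[0], :tile.shape[1]]
    if np.max(np.abs(recon - tile)) <= tol:
        return ("coarse", coarse)
    return ("full", tile)

A smoother reconstruction (for example bilinear) would typically accept more tiles at the same tolerance; the piecewise-constant choice here only keeps the sketch short.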