2007 Data Compression Conference (DCC'07)
DOI: 10.1109/dcc.2007.44
High Throughput Compression of Double-Precision Floating-Point Data

Cited by 79 publications (54 citation statements)
References 13 publications
“…gFPC outperforms FPC_size, especially for larger predictor sizes, because FPC_size uses the same configuration for all inputs whereas gFPC individually tunes itself for each input.¹ [¹ The result is not guaranteed to be optimal but is assumed to be at least close.]…”
Section: Compression Ratio Comparison
confidence: 99%
See 2 more Smart Citations
“…gFPC outperforms FPC size , especially for larger predictor sizes, because FPC size uses the same configuration for all inputs whereas gFPC individually tunes itself for each in- 1 The result is not guaranteed to be optimal but is assumed to be at least close. put.…”
Section: Compression Ratio Comparisonmentioning
confidence: 99%
“…gFPC is based on FPC [1,2] and compresses linear sequences of IEEE 754 double-precision floating-point values by sequentially predicting each value, xoring the true value with the predicted value, and leading-zero compressing the result. As illustrated in Figure 1, it uses variants of an fcm [3] and a dfcm [4] value predictor to predict the doubles.…”
Section: The gFPC Algorithm
confidence: 99%
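The core step quoted above (predict a double, XOR the true bits with the predicted bits, then exploit the leading zeros of the residual) can be sketched as follows. This is a minimal illustration of the idea, not the authors' implementation; the function names and the byte-granular zero count are assumptions.

```python
import struct

def xor_residual(true_val: float, predicted_val: float) -> int:
    """XOR the IEEE 754 bit patterns of the true and predicted doubles.

    When the prediction is close, the high-order bits agree, so the
    residual has many leading zero bits."""
    t = struct.unpack("<Q", struct.pack("<d", true_val))[0]
    p = struct.unpack("<Q", struct.pack("<d", predicted_val))[0]
    return t ^ p

def leading_zero_bytes(residual: int) -> int:
    """Count whole leading zero bytes of a 64-bit residual.

    An FPC-style coder emits this count (a few bits) plus only the
    remaining nonzero bytes, instead of the full 8-byte value."""
    for n in range(8):
        if (residual >> (8 * (7 - n))) & 0xFF:
            return n
    return 8

# A perfect prediction compresses to just the zero-byte count.
assert leading_zero_bytes(xor_residual(1.0, 1.0)) == 8
```

A close but imperfect prediction still yields several leading zero bytes, which is where the compression comes from in this scheme.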
“…Each row holds the vertices of an element that span the three spatial dimensions. Though the example in Table 1 is overly simplistic, the runs of constant strides … A more general stride-based approach is the differential finite context method (DFCM) [13], which has been used successfully for trace file and floating-point compression [7,12]. The basic DFCM predictor is a hash table that maps a set of recent strides to the current, predicted stride.…”
Section: Connectivity Compression
confidence: 99%
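The DFCM predictor described in the quotation above (a hash table mapping a context of recent strides to a predicted next stride) can be sketched roughly as follows. Table size, context length, and the hash function here are assumptions for illustration, not the parameters used in [13].

```python
class DFCMPredictor:
    """Minimal differential finite context method (DFCM) sketch.

    A hash of the most recent strides indexes a table; the table
    entry is the stride predicted to follow that context."""

    def __init__(self, table_bits: int = 16, context: int = 2):
        self.table = [0] * (1 << table_bits)  # predicted strides
        self.mask = (1 << table_bits) - 1
        self.strides = [0] * context          # most recent strides
        self.last = 0                         # last value seen

    def _hash(self) -> int:
        # Simple multiplicative hash over the stride context (assumed).
        h = 0
        for s in self.strides:
            h = (h * 0x9E3779B1 + (s & 0xFFFFFFFF)) & 0xFFFFFFFF
        return h & self.mask

    def predict(self) -> int:
        # Predicted next value = last value + stride stored for this context.
        return self.last + self.table[self._hash()]

    def update(self, value: int) -> None:
        # Learn the actual stride for the current context, then shift it in.
        stride = value - self.last
        self.table[self._hash()] = stride
        self.strides = self.strides[1:] + [stride]
        self.last = value

# On a constant-stride sequence the predictor locks on quickly.
p = DFCMPredictor()
for v in [0, 10, 20, 30]:
    p.update(v)
assert p.predict() == 40
```

The same structure predicts floating-point values when "strides" are computed on the integer bit patterns of consecutive doubles, which is how DFCM variants are used in FPC-style compressors.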
“…For example, the methods proposed in [4,5] require memory on the order of several hundreds to over a thousand bytes per hexahedron. Second, for checkpointing purposes and accurate analysis, the compression scheme must be lossless [6,7]. Not only does this requirement rule out quantization, but it also implies that the geometry and connectivity arrays may not be reordered so that the simulation state can be perfectly recovered, e.g.…”
Section: Introduction
confidence: 99%