2018 IEEE International Conference on Big Data (Big Data)
DOI: 10.1109/bigdata.2018.8622520
Error-Controlled Lossy Compression Optimized for High Compression Ratios of Scientific Datasets

Cited by 189 publications (161 citation statements)
References 16 publications
“…The mathematical formula of least-squares linear regression gives the best-fit solution [46]. The least-squares regression defines the values of a and b that will minimize the mean squared residual, $\overline{e^2}$, where e is a residual:…”
Section: PLA Methods (mentioning; confidence: 99%)
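The closed-form least-squares solution quoted above can be sketched numerically. The helper below is illustrative (its name and interface are not from the cited paper): it fits y ≈ a + b·x by minimizing the mean squared residual over the data.

```python
import numpy as np

# Illustrative least-squares linear fit y ≈ a + b*x, choosing a and b
# to minimize the mean squared residual (the mean of e^2, e = y - (a + b*x)).
def least_squares_fit(x, y):
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # Closed-form solution: b = cov(x, y) / var(x), a = mean(y) - b * mean(x)
    b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    a = y.mean() - b * x.mean()
    return a, b

# Example: data lying exactly on y = 1 + 2x recovers a = 1, b = 2.
a, b = least_squares_fit([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```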
“…High-performance compression techniques are essential to today's scientific research due to the massive volume of data, limited storage space, and the limited I/O bandwidth and energy available to access it [2]. Mainstream compression methods fall into two modes: lossless compression and lossy compression.…”
Section: Introduction (mentioning; confidence: 99%)
“…This model tries to predict each data point as accurately as possible based on its neighborhood in the spatial or temporal dimension, and then shrinks the data size with a coding algorithm such as data quantization [62] or bit-plane truncation. A typical example compressor is SZ [44], which involves four compression steps: (1) data prediction, (2) linear-scaling quantization, (3) entropy encoding, and (4) lossless compression. The errors are introduced and controlled at step (2).…”
Section: Data Compression Techniques (mentioning; confidence: 99%)
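The prediction and linear-scaling quantization steps described above can be sketched for a 1-D array. This is a toy illustration under assumed simplifications (a first-order Lorenzo-style predictor, entropy and lossless stages omitted); the names are hypothetical, not SZ's actual API.

```python
import numpy as np

# Toy sketch of SZ-style steps (1) and (2) on a 1-D array:
# predict each point from the previous *reconstructed* value, then
# quantize the prediction error into integer bins of width 2*eb,
# which keeps the pointwise reconstruction error within eb.
def sz_like_compress(data, eb):
    codes = np.empty(len(data), dtype=np.int64)
    recon = np.empty(len(data), dtype=float)
    prev = 0.0  # predictor state, identical on compressor and decompressor
    for i, v in enumerate(data):
        pred = prev                                # 1-D Lorenzo-style prediction
        code = int(round((v - pred) / (2 * eb)))   # linear-scaling quantization bin
        codes[i] = code
        prev = pred + code * 2 * eb                # value the decompressor rebuilds
        recon[i] = prev
    return codes, recon

data = np.linspace(0.0, 1.0, 100)
codes, recon = sz_like_compress(data, 1e-3)
```

Because the predictor runs on reconstructed values, the compressor and decompressor stay in lockstep and the absolute error of every point is bounded by eb.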
“…Unlike traditional compression methods used on DNNs, we perform error-bounded lossy compression on the pruned weights, an approach that can significantly reduce the data size while restricting the loss of inference accuracy. Specifically, we adapt the SZ lossy compression framework developed by us previously [11,28,39] to fit the context of DNN compression. In this compression framework, each data point's value is predicted from its neighboring data points by an adaptive, best-fit prediction method (either a Lorenzo predictor or a linear regression-based predictor [28]). Then, each floating-point weight value is converted to an integer by linear-scaling quantization, based on the difference between the real and predicted values and a specific error bound.…”
Section: Introduction (mentioning; confidence: 99%)
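The adaptive, best-fit selection between a Lorenzo predictor and a regression-based predictor mentioned above can be sketched per data block. The function and selection criterion below are hypothetical simplifications, not SZ's real internals.

```python
import numpy as np

# Hedged sketch of per-block adaptive predictor selection: try a
# Lorenzo-style predictor (predict each point from its left neighbor)
# and a linear regression fit over the block, and keep whichever
# yields the smaller maximum prediction error.
def choose_predictor(block):
    block = np.asarray(block, dtype=float)
    idx = np.arange(len(block), dtype=float)
    lorenzo_pred = np.concatenate(([0.0], block[:-1]))  # left-neighbor prediction
    b, a = np.polyfit(idx, block, 1)                    # value ≈ a + b * index
    reg_pred = a + b * idx
    err_lorenzo = np.abs(block - lorenzo_pred).max()
    err_reg = np.abs(block - reg_pred).max()
    if err_lorenzo <= err_reg:
        return "lorenzo", lorenzo_pred
    return "regression", reg_pred

# A perfectly linear block is captured exactly by the regression predictor.
name, pred = choose_predictor(np.linspace(0.0, 5.0, 16))
```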