2012 Data Compression Conference
DOI: 10.1109/dcc.2012.19

Gipfeli - High Speed Compression Algorithm

Abstract: Gipfeli is a high-speed compression algorithm that uses backward references within a 16-bit sliding window, based on the 1977 paper by Lempel and Ziv and enriched with an ad-hoc entropy coding for both literals and backward references. We have implemented it in C++ and fine-tuned it for very high performance. The compression ratio is similar to that of Zlib in its fastest mode, but Gipfeli is more than three times faster. This positions it as an ideal solution for many bandwidth-bound systems, intermediate data storage …
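The core mechanism the abstract names, backward references found inside a 16-bit (64 KiB) sliding window, can be sketched as follows. This is an illustrative LZ77-style encoder/decoder, not Gipfeli's actual code: the `Token` layout, the minimum copy length of 4, and the brute-force match search are all assumptions made for clarity (Gipfeli additionally entropy-codes the output, which is omitted here).

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical token format: either a literal byte or a backward reference.
struct Token {
    bool is_copy;       // false: literal byte, true: backward reference
    uint8_t literal;    // valid when is_copy == false
    uint32_t distance;  // back-reference distance (<= 65535, the 16-bit window)
    uint32_t length;    // back-reference length
};

std::vector<Token> lz77_encode(const std::string& in) {
    const size_t kWindow = 1u << 16;  // 16-bit sliding window
    std::vector<Token> out;
    size_t pos = 0;
    while (pos < in.size()) {
        size_t best_len = 0, best_dist = 0;
        size_t start = pos > kWindow ? pos - kWindow : 0;
        // O(n^2) demo search; a real encoder uses a hash table of recent positions.
        for (size_t cand = start; cand < pos; ++cand) {
            size_t len = 0;
            while (pos + len < in.size() && in[cand + len] == in[pos + len])
                ++len;
            if (len > best_len) { best_len = len; best_dist = pos - cand; }
        }
        if (best_len >= 4) {  // only emit copies long enough to pay for themselves
            out.push_back({true, 0, (uint32_t)best_dist, (uint32_t)best_len});
            pos += best_len;
        } else {
            out.push_back({false, (uint8_t)in[pos], 0, 0});
            ++pos;
        }
    }
    return out;
}

std::string lz77_decode(const std::vector<Token>& toks) {
    std::string out;
    for (const Token& t : toks) {
        if (t.is_copy)
            for (uint32_t i = 0; i < t.length; ++i)  // byte-wise copy handles overlap
                out += out[out.size() - t.distance];
        else
            out += (char)t.literal;
    }
    return out;
}
```

On repetitive input such as `"abcabcabcabc"`, the encoder emits three literals followed by a single overlapping copy (distance 3, length 9), which the decoder expands back byte by byte.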

Cited by 17 publications (13 citation statements)
References 7 publications
“…This was also experimentally confirmed on real data gathered from network traffic [9]. Note that a combination of memory-assisted compression and parallel compression techniques that achieve a high compression rate as well as a high compression speed makes compression-based redundancy elimination feasible on high-rate links as well [14], [15].…”
Section: Introduction (supporting)
Confidence: 59%
“…The main goal in the design of such high-speed algorithms has been to adapt LZ77 compression to achieve the highest possible speed, trading compression performance for speed in the process. As a result, the compression performance of high-speed algorithms suffers: for example, compressing the first 1 GB of the English Wikipedia with Snappy [42] and Gipfeli [41] resulted in 530 MB (in 2.8 s) and 410 MB (in 4.3 s), respectively. However, the implementation of gzip that we experimented with compressed the same input to 320 MB (in 41.7 s).…”
Section: Compression Complexity (mentioning)
Confidence: 99%
“…Several techniques are available to improve throughput, such as hardware acceleration [3], algorithmic approximations, and computer-architecture optimizations [4][5][6][7]. Although these acceleration, approximation, and optimization techniques may speed up compression, there are many systems where they do not suffice, either because the speed-up is limited or the compression quality is poor.…”
(mentioning)
Confidence: 99%