Proceedings of the Fourth Workshop on General Purpose Processing on Graphics Processing Units 2011
DOI: 10.1145/1964179.1964189

Floating-point data compression at 75 Gb/s on a GPU

Abstract: Numeric simulations often generate large amounts of data that need to be stored or sent to other compute nodes. This paper investigates whether GPUs are powerful enough to make real-time data compression and decompression possible in such environments, that is, whether they can operate at the 32- or 40-Gb/s throughput of emerging network cards. The fastest parallel CPU-based floating-point data compression algorithm operates below 20 Gb/s on eight Xeon cores, which is significantly slower than the network speed …
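
The abstract contrasts CPU-side floating-point compression throughput with emerging network speeds. For readers unfamiliar with how such compressors reach these rates, the following is a minimal serial C sketch of the general technique family (predict a value, XOR it with the prediction, and store only the non-zero residual bytes). It is not the GFC algorithm presented in the paper, nor its GPU kernel; the previous-value predictor and the byte-granular encoding are simplifying assumptions made here purely for illustration.

/* Hedged sketch: predict-XOR-truncate floating-point compression.
 * NOT the paper's GFC algorithm; an illustrative serial example only. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Compress n doubles into out; returns bytes written.
 * Each value is XORed with its predecessor; one header byte records how
 * many leading zero bytes of the residual were dropped. */
static size_t compress(const double *in, size_t n, uint8_t *out) {
    uint64_t prev = 0;
    size_t pos = 0;
    for (size_t i = 0; i < n; i++) {
        uint64_t bits;
        memcpy(&bits, &in[i], sizeof bits);
        uint64_t diff = bits ^ prev;    /* residual against previous value */
        prev = bits;
        int lead = 0;                   /* leading zero bytes in residual  */
        while (lead < 8 && ((diff >> (56 - 8 * lead)) & 0xFF) == 0)
            lead++;
        out[pos++] = (uint8_t)lead;     /* header: bytes skipped           */
        for (int b = lead; b < 8; b++)  /* payload: remaining bytes, MSB first */
            out[pos++] = (uint8_t)(diff >> (56 - 8 * b));
    }
    return pos;
}

static size_t decompress(const uint8_t *in, size_t n, double *out) {
    uint64_t prev = 0;
    size_t pos = 0;
    for (size_t i = 0; i < n; i++) {
        int lead = in[pos++];
        uint64_t diff = 0;
        for (int b = lead; b < 8; b++)
            diff |= (uint64_t)in[pos++] << (56 - 8 * b);
        uint64_t bits = diff ^ prev;    /* undo the XOR prediction */
        prev = bits;
        memcpy(&out[i], &bits, sizeof bits);
    }
    return pos;
}

int main(void) {
    double data[6] = {1.0, 1.0, 1.5, 1.5, 2.0, 2.0};
    uint8_t buf[6 * 9];                 /* worst case: 9 bytes per value */
    double back[6];
    size_t c = compress(data, 6, buf);
    decompress(buf, 6, back);
    printf("compressed %zu -> %zu bytes, roundtrip ok: %d\n",
           sizeof data, c, memcmp(data, back, sizeof data) == 0);
    return 0;
}

Real GPU compressors process the input in independent chunks with one warp or block per chunk and use more elaborate predictors; the sketch deliberately omits that parallel decomposition.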

Cited by 46 publications (16 citation statements); references 9 publications.
“…Our results show that the compression of generic data on the GPU for the purpose of minimizing bus transfer time is far from being a viable option; however, many domain-specific compression techniques on the GPU have proven to be beneficial [13,14,21] and may be a better option.…”
Section: Discussion
confidence: 99%
“…Taking the image compression field as an example, many improvements in image compression and transmission performance have been made by GPU. O'Neil and Burtscher [27] proposed a parallel compression algorithm based on a GPU platform specifically for double precision floating point data (GFC), whose compression speed was raised by about two orders of magnitude compared with BZIP2 and GZIP running on the CPU platform.…”
Section: Related Work
confidence: 99%
“…Compared to CPPCA, the number of bytes used for storing the Bs and Qs is smaller, and the reconstruction only involves matrix-matrix multiplication. The only possible bottleneck might be the residual coding, but the recent development in floating point coding has seen throughputs reaching as much as 75 Gb/s [23] on a graphic processing unit (GPU), while even on an eight-Xeon-core computer we have seen throughput at 20 Gb/s, and both would be sufficiently fast to code the required amount of HSI data within 10 seconds.…”
Section: Compression and Reconstruction of HSI Data by RSVD
confidence: 99%
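
As a rough, hedged sanity check of the timing claim in the last statement: the quoted paper does not state its exact HSI data volume here, so a 25 GB cube is assumed purely for illustration. At the two quoted throughputs,
\[
t_{\text{CPU}} = \frac{25\,\text{GB} \times 8\,\text{b/B}}{20\,\text{Gb/s}} = 10\,\text{s},
\qquad
t_{\text{GPU}} = \frac{200\,\text{Gb}}{75\,\text{Gb/s}} \approx 2.7\,\text{s},
\]
which is consistent with the statement that either rate would code such a volume within 10 seconds.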