2020 IEEE 36th International Conference on Data Engineering (ICDE)
DOI: 10.1109/icde48307.2020.00119

Two-Level Data Compression using Machine Learning in Time Series Database

Cited by 22 publications (10 citation statements)
References 17 publications

“…In time-series data, the pattern can change over time, and a fixed compression scheme may not work well for the entire duration. Yu et al. [55] propose a two-level compression framework in which a scheme space is constructed by extracting global features at the top level and a compression scheme is selected for each point at the bottom level. Their AMMMO framework incorporates compression primitives and control parameters, which together define the compression scheme space.…”
Section: Data Compression
confidence: 99%
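
To make the bottom-level idea concrete, the Python sketch below selects a transform per point from a tiny assumed scheme space (delta and XOR with a zigzag bit-cost estimate); the primitive names and the cost model are illustrative stand-ins, not the paper's exact AMMMO definitions.

def zigzag(v):
    # Map signed integers to non-negative ones so small magnitudes stay small.
    return -2 * v - 1 if v < 0 else 2 * v

def bits_needed(v):
    return max(zigzag(v).bit_length(), 1)

# Assumed two-primitive scheme space (illustrative only).
SCHEMES = {
    "delta": lambda prev, cur: cur - prev,
    "xor":   lambda prev, cur: prev ^ cur,
}

def choose_schemes(values):
    # Bottom level: per point, pick the transform whose output needs the fewest bits.
    chosen, prev = [], 0
    for v in values:
        name = min(SCHEMES, key=lambda n: bits_needed(SCHEMES[n](prev, v)))
        chosen.append((name, SCHEMES[name](prev, v)))
        prev = v
    return chosen

print(choose_schemes([100, 101, 103, 103, 200]))

Picking the cheapest transform per point is the greedy analogue of the paper's bottom-level selection; a real encoder would also have to record which scheme was chosen for each point.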
“…The study in [18] proposes an algorithm that combines a recurrent neural network predictor with a lossless compression method; on genomic and text datasets it achieves around a 20% reduction over the traditional compression method Gzip. The work in [19] presents a two-level approach that selects a compression scheme for each individual data point in a time series, together with a neural network structure that tunes the parameter values automatically. As a result, the framework improves the compression ratio by up to 120% compared to other traditional compression methods.…”
Section: State of the Art
confidence: 99%
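
As a rough illustration of the predictor-plus-coder idea attributed to [18], the sketch below substitutes a trivial linear extrapolator for the recurrent neural network and encodes residuals as zigzag varints; all names are hypothetical and the predictor is deliberately simplistic.

def zigzag(v):
    return -2 * v - 1 if v < 0 else 2 * v

def varint(u):
    # LEB128-style variable-length encoding of a non-negative integer.
    out = bytearray()
    while True:
        byte, u = u & 0x7F, u >> 7
        out.append(byte | (0x80 if u else 0))
        if not u:
            return bytes(out)

def compress(values):
    # Predict each value from its two predecessors; store only the residual.
    out = bytearray()
    for i, v in enumerate(values):
        if i == 0:
            pred = 0
        elif i == 1:
            pred = values[0]
        else:
            pred = 2 * values[i - 1] - values[i - 2]  # linear extrapolation
        out += varint(zigzag(v - pred))
    return bytes(out)

data = [100, 102, 104, 106, 109, 112]
print(len(compress(data)), "bytes vs", 8 * len(data), "bytes raw")

The better the predictor, the smaller the residuals and hence the varints, which is the role the neural network plays in the cited study.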
“…The study in [18] combines a recurrent neural network predictor with a lossless compression method and achieves around a 20% reduction over Gzip on genomic and text datasets. The work in [19] uses a reinforcement learning-based approach to compress time-series data and improves the compression ratio by up to 120% (50% on average) compared to the Gorilla, MO (Middle-Out), and Snappy compression methods. It then studies the method's bandwidth utilization on a combined CPU and GPU platform against Gorilla, MO, and Snappy.…”
Section: F. Comparison With the State-of-the-Art
confidence: 99%
“…Therefore, according to [39], there are many universal compression techniques, such as (static or adaptive) Huffman and arithmetic coding, that are ubiquitous in real-world applications. It is also known that they can be categorized into two types: lossy compression and lossless compression.…”
Section: Data Compression
confidence: 99%
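
For reference, a minimal static Huffman coder over bytes can look like the sketch below (prefix codes only, no bit packing); it is a generic illustration of one of the universal techniques mentioned, not code from [39].

import heapq
from collections import Counter

def huffman_codes(data):
    # Build a prefix code by repeatedly merging the two rarest symbol groups.
    freq = Counter(data)
    if len(freq) == 1:  # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    heap = [(n, [s]) for s, n in freq.items()]
    heapq.heapify(heap)
    codes = {s: "" for s in freq}
    while len(heap) > 1:
        n1, group1 = heapq.heappop(heap)
        n2, group2 = heapq.heappop(heap)
        for s in group1:
            codes[s] = "0" + codes[s]  # left branch of the merge
        for s in group2:
            codes[s] = "1" + codes[s]  # right branch of the merge
        heapq.heappush(heap, (n1 + n2, group1 + group2))
    return codes

text = b"abracadabra"
codes = huffman_codes(text)
encoded = "".join(codes[b] for b in text)
print(len(encoded), "bits vs", 8 * len(text), "bits raw")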
“…In the study [39], the authors proposed a two-level compression model that selects a proper compression scheme for each individual point, making it possible to capture diverse patterns at fine granularity. Based on this model, they designed and implemented the Adaptive Multi-Model Middle-Out (AMMMO) framework, which exposes a set of control parameters for categorizing data patterns.…”
Section: Related Work
confidence: 99%
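
Finally, as a hypothetical illustration of how control parameters can define a scheme space, the fragment below enumerates the cross product of a few invented parameters; the names and value ranges are not AMMMO's actual parameter set.

from itertools import product

# Invented control parameters; each combination is one candidate scheme
# that a top-level search could rank.
CONTROL_PARAMS = {
    "transform":   ["delta", "delta-of-delta", "xor"],
    "offset_bits": [4, 8, 16],
    "subblock":    [16, 32],
}

scheme_space = [dict(zip(CONTROL_PARAMS, combo))
                for combo in product(*CONTROL_PARAMS.values())]
print(len(scheme_space), "candidate schemes; first:", scheme_space[0])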