2009 ETP International Conference on Future Computer and Communication 2009
DOI: 10.1109/fcc.2009.54
Study on ECG Data Lossless Compression Algorithm Based on K-means Cluster

Cited by 10 publications (9 citation statements); References 3 publications
“…However, the K-means cluster of Zhou's method [11] needs to square each point when matching templates, and the Huffman codebook in Zhou's method contains 2048 items to compress the ECG signal of the ARRDB (which has 11-bit resolution), more than the variable number of the proposed S system; the Takagi-Sugeno Fuzzy Neural Network in [12] involves many multiply-accumulate operations when predicting each point; and both [13] and [14] need integer division operations. So the computational complexity of [11]-[14] is higher than that of the proposed method. [3], [5]-[9] used simple prediction and entropy encoding algorithms, but the proposed S system achieves a significantly higher CR than these methods, so it can save more transmission power or storage space.…”
Section: Comparison With Other Methods
confidence: 99%
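The squaring cost the quote attributes to K-means template matching can be seen in a minimal sketch of nearest-template search by squared Euclidean distance. The function and sample templates below are illustrative, not taken from [11]:

```python
import numpy as np

def match_template(segment, templates):
    """Return the index of the closest template by squared Euclidean
    distance; every sample is squared, which is the per-point cost
    the text points out for K-means template matching."""
    dists = [np.sum((segment - t) ** 2) for t in templates]
    return int(np.argmin(dists))

# Hypothetical example: two 4-sample templates
templates = [np.array([0, 1, 2, 1]), np.array([5, 6, 5, 4])]
print(match_template(np.array([0, 1, 2, 2]), templates))  # -> 0
```

Avoiding this per-sample multiply is one way a simpler predictor can lower computational complexity on constrained hardware.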
“…Actually, since only integer operations can be performed in low-power devices, we set t to 4 times the predicted value to improve the precision. Second, k is calculated by (11) to encode M[n]. For low-power ASICs or embedded systems, log2 can be calculated by searching for the most significant bit of t. When k = 0, the code length increases significantly with the increase of M[n], thus causing a great penalty when t/4 is lower than M[n].…”
Section: B Context-based Error Modeling
confidence: 99%
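The most-significant-bit search mentioned in the quote can be sketched with shifts and comparisons only, which is what makes it suitable for integer-only hardware. The quote's equation (11) is not reproduced here, so the function name and the way k would be derived from t are illustrative:

```python
def msb_log2(t):
    """Integer floor(log2(t)) found by scanning for the most
    significant set bit -- only right-shifts and comparisons,
    no floating-point log needed on low-power devices."""
    if t <= 0:
        return 0
    k = 0
    while t > 1:
        t >>= 1
        k += 1
    return k

print(msb_log2(1))   # -> 0
print(msb_log2(16))  # -> 4
print(msb_log2(20))  # -> 4
```

In a Rice/Golomb-style coder this k would then set the number of low-order bits of M[n] stored verbatim.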
See 1 more Smart Citation
“…CS theory is a mathematical framework for acquiring and recovering sparse signals with the help of an incoherent projection basis; it provides insight into how a high-resolution dataset can be inferred from a relatively small and random number of measurements using a simple random linear process [16,17]. Thus, rather than measuring each sample and then computing a compressed representation, CS suggests that we can measure a compressed representation directly [18].…”
Section: Advanced K-means Algorithm
confidence: 99%
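The "measure a compressed representation directly" idea amounts to a random linear projection y = Φx of a sparse signal. A minimal sketch, with purely illustrative sizes and sparsity (recovery of x from y, which needs a sparse solver, is not shown):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: an N-sample signal, M << N random measurements
N, M = 256, 64
x = np.zeros(N)
x[[10, 50, 200]] = [1.0, -2.0, 0.5]   # sparse signal: 3 nonzero samples

Phi = rng.standard_normal((M, N))     # incoherent random projection basis
y = Phi @ x                           # the compressed measurement, taken directly

print(y.shape)  # -> (64,)
```

Each entry of y is a random linear combination of all N samples, so the acquisition step itself performs the compression.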
“…A schematic diagram of run-length coding is shown in Figure 8. A key characteristic of the run-length coding algorithm is its sensitivity to errors: if even one symbol is corrupted, the entire encoded sequence is affected, so the run-length encoding cannot be restored to the original data (Huang et al, 2005; Zhou, 2009). The compression rate obtained by run-length coding depends largely on the characteristics of the data itself.…”
Section: Run-length Coding Algorithm
confidence: 99%
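Both points in the quote (error propagation, and compression depending on the data) can be seen in a minimal run-length coding sketch; the helper names are illustrative:

```python
def rle_encode(data):
    """Encode a sequence as (symbol, run_length) pairs."""
    out = []
    for s in data:
        if out and out[-1][0] == s:
            out[-1][1] += 1        # extend the current run
        else:
            out.append([s, 1])     # start a new run
    return out

def rle_decode(pairs):
    """Expand (symbol, run_length) pairs back into a sequence."""
    return [s for s, n in pairs for _ in range(n)]

encoded = rle_encode("AAABBC")
print(encoded)                        # -> [['A', 3], ['B', 2], ['C', 1]]
print("".join(rle_decode(encoded)))   # -> AAABBC

# Corrupting a single run length desynchronizes everything after it:
encoded[0][1] = 2
print("".join(rle_decode(encoded)))   # -> AABBC (wrong length, shifted data)
```

Highly repetitive data ("AAAA…") compresses well, while data with no repeated symbols actually grows, which is why the achievable rate depends on the data itself.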