2015
DOI: 10.1016/j.dsp.2015.06.006

High-rate compression of ECG signals by an accuracy-driven sparsity model relying on natural basis

Abstract: Long duration recordings of ECG signals require high compression ratios, in particular when storing on portable devices. Most of the ECG compression methods in the literature are based on the wavelet transform, while only a few of them rely on sparsity promotion models. In this paper we propose a novel ECG signal compression framework based on sparse representation using a set of ECG segments as a natural basis. This approach exploits the signal regularity, i.e. the repetition of common patterns, in order to achieve high c…
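The abstract describes encoding an ECG window as a sparse combination of previously observed ECG segments, the "natural basis". A minimal sketch of that idea follows, assuming orthogonal matching pursuit as the sparse solver, a dictionary of raw normalised segments, and a fixed sparsity level; the accuracy-driven part of the paper's model is not reproduced here, and the percent RMS difference (PRD) is computed only as a standard distortion check.

```python
# Minimal sketch: compress an ECG window as a sparse combination of other
# ECG segments used as a "natural basis". Dictionary construction, the OMP
# solver and the fixed sparsity level are illustrative assumptions.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def build_natural_basis(ecg, seg_len=256, n_atoms=64):
    """Stack ECG segments column-wise and normalise them into unit-norm atoms."""
    segments = ecg[: seg_len * n_atoms].reshape(n_atoms, seg_len).T
    return segments / (np.linalg.norm(segments, axis=0) + 1e-12)

def compress_window(window, dictionary, n_nonzero=8):
    """Encode one ECG window as a few (atom index, coefficient) pairs."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(dictionary, window)
    support = np.flatnonzero(omp.coef_)
    return support, omp.coef_[support]

def decompress_window(support, values, dictionary):
    """Rebuild the window from its sparse code."""
    return dictionary[:, support] @ values

# Usage on a synthetic stand-in for a real ECG record.
rng = np.random.default_rng(0)
ecg = np.sin(np.linspace(0, 200 * np.pi, 65 * 256)) + 0.05 * rng.standard_normal(65 * 256)
D = build_natural_basis(ecg[256:])          # atoms drawn from other segments of the record
x = ecg[:256]                               # window to compress
idx, val = compress_window(x, D)
x_hat = decompress_window(idx, val, D)
prd = 100 * np.linalg.norm(x - x_hat) / np.linalg.norm(x)   # percent RMS difference
```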

Cited by 17 publications (8 citation statements)
References 31 publications
“…Sparsity techniques have already been applied to the compression of electrocardiogram (ECG) signals [31, 32]. To highlight the benefit of the dictionary learning approach in this task, we tested the two methods K-SVD and R-SVD, recasting the compression as a problem of sparse approximation with a dictionary.…”
Section: Experiments On Natural Data
confidence: 99%
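The passage above recasts compression as a sparse-approximation problem over a learned dictionary. The sketch below illustrates that viewpoint using scikit-learn's MiniBatchDictionaryLearning as a stand-in for K-SVD and R-SVD (neither of which ships with scikit-learn); the window length, dictionary size, and per-window sparsity are illustrative assumptions rather than values from the cited experiments.

```python
# Compression recast as sparse approximation over a learned dictionary.
# MiniBatchDictionaryLearning stands in for K-SVD / R-SVD; all sizes and
# sparsity settings below are illustrative assumptions.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(1)
# Synthetic stand-in for an ECG record, cut into 200 windows of 128 samples.
ecg = np.sin(np.linspace(0, 400 * np.pi, 200 * 128)) + 0.05 * rng.standard_normal(200 * 128)
windows = ecg.reshape(200, 128)

learner = MiniBatchDictionaryLearning(
    n_components=48,                 # number of learned atoms
    transform_algorithm="omp",       # sparse coding step used at transform time
    transform_n_nonzero_coefs=6,     # sparsity budget per window
    random_state=0,
)
codes = learner.fit_transform(windows)     # sparse codes, shape (200, 48)
recon = codes @ learner.components_        # reconstructed windows

# The compressed stream would keep only the non-zero coefficients and their indices.
avg_nonzeros = (codes != 0).sum(axis=1).mean()
prd = 100 * np.linalg.norm(windows - recon) / np.linalg.norm(windows)
```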
“…The former is aimed at uniformly enhancing the sparseness level through shrinking effects, while the latter projects back into the feasible space of solutions. A motivating reason to use k-LiMapS is that we have already demonstrated in past works its ability to find low-rank approximate solutions in tasks such as biomedical signal compression [35], FR problems with very few training samples [36,37], and FR in the presence of partial occlusions [38]. Here we show how to apply it to the SSPP problem, which is one of the most challenging tasks in the realm of face analysis, as highlighted at the beginning of this paper.…”
Section: Methods
confidence: 99%
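The quoted description alternates a sparsity-enhancing shrinkage with a projection back onto the feasible set of solutions. The following is a generic shrink-then-project skeleton in that spirit, not the exact k-LiMapS update from the cited works; the shrinkage parameter lam and the iteration count are assumptions chosen only to make the toy example run.

```python
# Generic shrink-then-project iteration: promote sparsity by shrinking small
# coefficients, then project back onto the feasible affine set {x : D x = y}.
# Illustrative skeleton only; not the exact k-LiMapS update of the cited papers.
import numpy as np

def shrink_project(D, y, lam=10.0, n_iter=100):
    """Return an approximately sparse x with D @ x equal to y (D is m x n, m < n)."""
    D_pinv = np.linalg.pinv(D)
    x = D_pinv @ y                                  # least-norm starting point
    for _ in range(n_iter):
        x = x * (1.0 - np.exp(-lam * np.abs(x)))    # shrink small coefficients toward zero
        x = x - D_pinv @ (D @ x - y)                # restore feasibility D x = y
    return x

# Toy overcomplete system with a 5-sparse ground truth.
rng = np.random.default_rng(2)
D = rng.standard_normal((64, 256))
x_true = np.zeros(256)
x_true[rng.choice(256, 5, replace=False)] = rng.standard_normal(5)
y = D @ x_true
x_hat = shrink_project(D, y)
```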
“…Procedurally, for EDA we perform a standard continuous decomposition analysis aimed at unbiased scores of phasic and tonic activity, thus retaining only the phasic data. As regards the HRV signal, once the ECG signal has been denoised, we obtain the beat-to-beat fluctuations as an RR-interval time series from the ECG using standard techniques [42]. After the preprocessing stage, which includes signal segmentation dividing the signal into 4-s overlapping windows, we empirically select a suitable level of Daubechies 3 (db3), following the rule…”
Section: Methods
confidence: 99%
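The quoted pipeline involves two concrete steps: deriving the RR-interval series from a denoised ECG and decomposing 4-second overlapping windows with a Daubechies 3 (db3) wavelet. A minimal sketch of both steps follows, assuming a 250 Hz sampling rate, a crude R-peak detector built on scipy.signal.find_peaks, 50% window overlap, and decomposition level 4 with PyWavelets; none of these settings are taken from the cited work.

```python
# Two-step sketch: (1) RR-interval series from R peaks, (2) db3 wavelet
# decomposition of 4-second overlapping windows. Sampling rate, peak
# thresholds, overlap and level are illustrative assumptions.
import numpy as np
import pywt
from scipy.signal import find_peaks

FS = 250  # assumed sampling rate in Hz

def rr_intervals(ecg, fs=FS):
    """Locate R peaks and return the RR-interval series in seconds."""
    peaks, _ = find_peaks(
        ecg,
        height=np.percentile(ecg, 95),   # crude amplitude gate
        distance=int(0.3 * fs),          # ~300 ms refractory period
    )
    return np.diff(peaks) / fs

def db3_decompose(signal, fs=FS, win_sec=4, overlap=0.5, level=4):
    """Cut the signal into overlapping 4-s windows and decompose each with db3."""
    win = int(win_sec * fs)
    hop = int(win * (1 - overlap))
    return [
        pywt.wavedec(signal[start:start + win], "db3", level=level)
        for start in range(0, len(signal) - win + 1, hop)
    ]

# Usage on a synthetic stand-in for a denoised ECG record (~72 bpm spike train).
t = np.arange(0, 60, 1 / FS)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 63
rr = rr_intervals(ecg)
window_coeffs = db3_decompose(ecg)
```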