2022 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn55064.2022.9892600

Analysis of Augmentations for Contrastive ECG Representation Learning

Abstract: This paper presents a systematic investigation into the effectiveness of Self-Supervised Learning (SSL) methods for Electrocardiogram (ECG) arrhythmia detection. We begin by conducting a novel distribution analysis on three popular ECG-based arrhythmia datasets: PTB-XL, Chapman, and Ribeiro. To the best of our knowledge, our study is the first to quantify these distributions in this area. We then perform a comprehensive set of experiments using different augmentations and parameters to evaluate the effectiveness…

Cited by 18 publications (8 citation statements) · References: 52 publications
“…Unlike knowledge distillation (Hinton et al., 2015), mutual learning does not require a powerful teacher network, which is not always available. Mutual learning was first proposed to leverage information from multiple models and allow effective dual knowledge transfer in image processing tasks (Zhang et al., 2018; Zhao et al., 2021). Contrastive learning aims at learning example representations by minimizing the distance between positive pairs in the vector space and maximizing the distance between negative pairs (Saunshi et al., 2019; Liang et al., 2022; Liu et al., 2022a); it was first proposed in the field of computer vision (Chopra et al., 2005; Schroff et al., 2015; Sohn, 2016; Chen et al., 2020a; Wang and Liu, 2021). In the NLP area, contrastive learning is applied to learn sentence embeddings (Giorgi et al., 2021; Yan et al., 2021), translation (Pan et al., 2021; Ye et al., 2022), and summarization (Cao and Wang, 2021).…”
Section: Related Work
Confidence: 99%
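As a concrete illustration of the contrastive objective described in the statement above, here is a minimal NT-Xent-style loss sketch (the SimCLR formulation) in PyTorch: two augmented views of each example form a positive pair, and all other in-batch pairs act as negatives. The batch size, embedding dimension, and temperature below are illustrative assumptions, not details taken from the cited works.

# Minimal NT-Xent-style contrastive loss sketch (PyTorch).
# Pulls the two views of each example together; pushes all other pairs apart.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    # z1, z2: (N, D) embeddings of two augmented views of the same N examples.
    N = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit norm
    sim = z @ z.t() / temperature                       # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # exclude self-similarity
    # For row i, the positive is the other view of the same example.
    targets = torch.cat([torch.arange(N) + N, torch.arange(N)])
    return F.cross_entropy(sim, targets)

# Usage: embeddings from any encoder applied to two augmentations of a batch.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(z1, z2).item())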
“…Mehari et al. [95] compared standard contrastive learning approaches (e.g., SimCLR, BYOL, SwAV, CPC) to assess their ability to extract good representations from the ECG signal, while Soltanieh et al. [97] focused on the efficacy of different data augmentation techniques. Lee et al. [94] likewise compared different contrastive learning approaches, and at the same time proposed VIbCReg, a variant of VICReg.…”
Section: Self-Supervised Learning on ECG
Confidence: 99%
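To make "different data augmentation techniques" concrete for ECG, here is a sketch of generic time-series transforms (jitter, scaling, time masking) often used to create the two views of a positive pair. These particular transforms and parameter values are assumptions chosen for illustration; they are not the exact set evaluated in the cited studies.

# Illustrative ECG augmentations of the kind such studies compare.
import numpy as np

def jitter(x, sigma=0.03):
    # Add Gaussian noise to a 1-D ECG signal.
    return x + np.random.normal(0.0, sigma, size=x.shape)

def scale(x, sigma=0.1):
    # Multiply the whole signal by a random amplitude factor.
    return x * np.random.normal(1.0, sigma)

def time_mask(x, max_frac=0.1):
    # Zero out a random contiguous segment of the signal.
    n = x.shape[-1]
    width = np.random.randint(1, max(2, int(n * max_frac)))
    start = np.random.randint(0, n - width)
    x = x.copy()
    x[..., start:start + width] = 0.0
    return x

# Two independently augmented views of one recording form a positive pair.
ecg = np.sin(np.linspace(0, 8 * np.pi, 1000))  # stand-in for a real ECG trace
view1, view2 = jitter(scale(ecg)), time_mask(jitter(ecg))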
“…Although using negative samples has been effective for text representation learning, e.g., word2vec (Mikolov et al., 2013) and BERT (Devlin et al., 2019), two major challenges remain for it to succeed in concrete language tasks. First, a suitable training objective is critical to avoid performance degradation (Saunshi et al., 2019). Second, it is nontrivial to construct "natural" samples that mimic the diverse errors, varying in words and syntax, made by state-of-the-art systems (Goyal and Durrett, 2021).…”
Section: XSum Article
Confidence: 99%