2021
DOI: 10.1038/s41598-020-79512-7

Delineation of the electrocardiogram with a mixed-quality-annotations dataset using convolutional neural networks

Abstract: Detection and delineation are key steps for retrieving and structuring information of the electrocardiogram (ECG), and are thus crucial for numerous tasks in clinical practice. Digital signal processing (DSP) algorithms are often considered state-of-the-art for this purpose but require laborious rule readaptation to handle unseen morphologies. This work explores the adaptation of the U-Net, a deep learning (DL) network employed for image segmentation, to electrocardiographic data. The model was trained…
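To make the abstract's idea concrete, below is a minimal sketch of a 1D U-Net adapted to single-lead ECG delineation. The layer counts, channel widths, and kernel sizes are illustrative assumptions, not the authors' exact configuration; only the overall encoder-decoder-with-skip-connections structure and the per-sample P/QRS/T output follow the paper's description.

```python
# Minimal 1D U-Net sketch for ECG delineation (illustrative, not the
# paper's exact architecture). Input: (batch, 1, N); output: per-sample
# logits for the P, QRS and T waves.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 1D convolutions; padding keeps the temporal length unchanged.
    return nn.Sequential(
        nn.Conv1d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv1d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class UNet1D(nn.Module):
    def __init__(self, in_ch=1, n_classes=3, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool1d(2)
        self.up2 = nn.ConvTranspose1d(base * 4, base * 2, kernel_size=2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose1d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv1d(base, n_classes, kernel_size=1)

    def forward(self, x):                  # x: (batch, 1, N)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)               # (batch, 3, N) logits

# One sigmoid per channel yields independent P/QRS/T masks.
logits = UNet1D()(torch.randn(2, 1, 512))
masks = torch.sigmoid(logits) > 0.5        # (2, 3, 512) boolean mask
```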

Cited by 43 publications (53 citation statements)
References 25 publications
“…The ordering of operations after the convolutional operations was defined to agree with the image segmentation literature (non-linearity → batch normalization → dropout) [38,39]. All networks were trained using ECG-centered data augmentation, as described elsewhere [10], comprising additive white Gaussian noise, random periodic spikes, amplifier saturation, powerline noise, baseline wander and pacemaker spikes to enhance the model's generalizability. All executions were performed at the Universitat Pompeu Fabra's high performance computing environment, assigning the jobs to either an NVIDIA 1080Ti or NVIDIA Titan Xp GPU, and used the PyTorch library [40].…”
Section: Methods (mentioning)
confidence: 99%
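The quote above pins down two implementation choices: the post-convolution operation ordering (non-linearity → batch normalization → dropout) and a set of six ECG-centered augmentations. The sketch below shows that ordering plus two of the listed augmentations (additive white Gaussian noise and baseline wander); the rates, amplitudes, and sampling frequency are illustrative assumptions, not the cited papers' values.

```python
# Sketch of the quoted operation ordering and two of the six listed
# augmentations. Numeric values are illustrative assumptions.
import torch
import torch.nn as nn

def conv_unit(in_ch, out_ch, p_drop=0.1):
    # Convolution followed by the ordering quoted above:
    # non-linearity -> batch normalization -> dropout.
    return nn.Sequential(
        nn.Conv1d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),   # non-linearity
        nn.BatchNorm1d(out_ch),  # batch normalization
        nn.Dropout(p_drop),      # dropout
    )

def augment(x, noise_std=0.02, wander_amp=0.1, fs=250.0):
    # x: (batch, channels, N). Adds white Gaussian noise and a slow
    # sinusoidal baseline wander (< 0.5 Hz), two of the augmentations
    # named in the quote.
    n = x.shape[-1]
    t = torch.arange(n, dtype=x.dtype, device=x.device) / fs
    freq = 0.05 + 0.45 * torch.rand(1, device=x.device)    # Hz
    phase = 2 * torch.pi * torch.rand(1, device=x.device)
    wander = wander_amp * torch.sin(2 * torch.pi * freq * t + phase)
    return x + noise_std * torch.randn_like(x) + wander

x = augment(torch.randn(4, 1, 1000))
y = conv_unit(1, 16)(x)   # (4, 16, 1000)
```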
“…The data and ground truth, either real or synthesized, were then represented as binary masks for their usage in DL-based segmentation architectures, where a mask of shape {0, 1}^(3×N) was True-valued whenever a specific sample n ∈ N was contained within a P, QRS or T wave (indices 0, 1 and 2, respectively) and False-valued otherwise [10], bridging the gap with the imaging literature. Finally, the joint training database was split into 5-fold cross-validation with strict subject-wise splitting, not sharing beats or leads of the same patient in the training and validation sets [10,31]. Given that the proposed method employs pseudo-synthetic data generation, the pseudo-ECGs were also generated using data uniquely from the training set for each fold, ensuring no cross-fold contamination.…”
Section: Databases (mentioning)
confidence: 99%
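The quote describes two concrete steps: building {0, 1}^(3×N) masks from wave boundaries and splitting folds strictly by subject. Below is a minimal sketch of both. The wave-boundary input format and the use of scikit-learn's GroupKFold are assumptions chosen for illustration; the cited work's actual data structures may differ.

```python
# Sketch of the quoted mask representation and subject-wise 5-fold split.
# Input format and use of GroupKFold are illustrative assumptions.
import numpy as np
from sklearn.model_selection import GroupKFold

def waves_to_mask(n_samples, waves):
    # waves: {"P": [(onset, offset), ...], "QRS": [...], "T": [...]},
    # boundaries in samples. Row 0 = P, row 1 = QRS, row 2 = T;
    # True wherever the sample falls inside a wave, False otherwise.
    mask = np.zeros((3, n_samples), dtype=bool)
    for row, name in enumerate(("P", "QRS", "T")):
        for on, off in waves.get(name, []):
            mask[row, on:off] = True
    return mask

mask = waves_to_mask(1000, {"P": [(100, 140)], "QRS": [(160, 200)], "T": [(260, 340)]})

# Subject-wise splitting: no subject contributes beats or leads to both
# the training and validation sets, mirroring the quote's strict split.
records = np.arange(20)                    # 20 recordings (toy example)
subjects = np.repeat(np.arange(10), 2)     # 2 recordings per subject
for train_idx, val_idx in GroupKFold(n_splits=5).split(records, groups=subjects):
    assert not set(subjects[train_idx]) & set(subjects[val_idx])
```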