ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp39728.2021.9413766

Low-Dimensional Denoising Embedding Transformer for ECG Classification

Abstract: The transformer-based model (e.g., FusingTF) has been employed recently for Electrocardiogram (ECG) signal classification. However, the high-dimensional embedding obtained via 1-D convolution and positional encoding can lead to the loss of the signal's own temporal information and a large number of training parameters. In this paper, we propose a new method for ECG classification, called low-dimensional denoising embedding transformer (LDTF), which contains two components, i.e., low-dimensional denoising embed…
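The abstract above is truncated, so the full LDTF architecture is not specified here. Purely as a hedged, minimal sketch of the general idea (a low-dimensional embedding feeding a transformer encoder for ECG classification), the PyTorch code below is illustrative only: the embedding scheme, the absence of an explicit denoising step, the layer sizes, sequence length, and class count are all assumptions, not the authors' configuration.

import torch
import torch.nn as nn

class LowDimTransformerECG(nn.Module):
    """Toy ECG classifier: a low-dimensional per-sample embedding followed by
    a standard transformer encoder and a linear classification head.
    This is NOT the LDTF model from the paper; all sizes are assumptions."""
    def __init__(self, in_channels=1, embed_dim=16, num_heads=4,
                 num_layers=2, num_classes=5, seq_len=300):
        super().__init__()
        self.embed = nn.Linear(in_channels, embed_dim)               # low-dimensional projection per time step
        self.pos = nn.Parameter(torch.zeros(1, seq_len, embed_dim))  # learned positional encoding
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=num_heads,
                                           dim_feedforward=64, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):                        # x: (batch, seq_len, in_channels)
        h = self.embed(x) + self.pos[:, : x.size(1)]
        h = self.encoder(h)                      # (batch, seq_len, embed_dim)
        return self.head(h.mean(dim=1))          # average-pool over time, then classify

# Usage on a batch of 8 single-lead ECG segments of 300 samples each.
model = LowDimTransformerECG()
logits = model(torch.randn(8, 300, 1))           # -> shape (8, 5)

Keeping embed_dim small (16 here) is what keeps the parameter count low relative to a high-dimensional 1-D-convolution embedding; whether this matches the paper's exact trade-off cannot be confirmed from the truncated abstract.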

Cited by 16 publications (4 citation statements) | References 17 publications
“…Alternatively, hybrid architectures such as CNN-RNN have shown promising results in classifying hand movements [6], [25], as they benefit from the advantages of different modules in extracting temporal and spatial features. Meanwhile, with the advent of the attention mechanism [10], transformers are being considered as a new ML technique for sequential data modeling [37], [38]. Capitalizing on the recent success of transformers in various fields such as machine translation [11], [39], speech recognition [40] and computer vision [12], we aim to examine their applicability and potential for sEMG-based gesture recognition.…”
Section: Related Work (mentioning)
confidence: 99%
“…Alternatively, hybrid architectures such as CNN-RNN have shown promising results in classifying hand movements [6], [16], as they benefit from the advantages of different modules in extracting temporal and spatial features. Meanwhile, with the advent of the attention mechanism [22], transformers are being considered as a new ML technique for sequential data modeling [36], [37]. Capitalizing on the recent success of transformers in various fields such as machine translation [23], [38], speech recognition [39] and computer vision [24], we aim to examine their applicability and potential for sEMG-based gesture recognition.…”
Section: Related Work (mentioning)
confidence: 99%
“…The Transformer is the first sequence transduction model based entirely on attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention [23]. Existing studies have shown that the transformer can not only handle problems in the field of translation, but can also deal with the classification of temporal sequences [23], such as ECG sequences [33,34]. For the first time, we propose a DI-Transformer model to deal with the problem of ECG SQA, and its overall structure is shown in Fig.…”
Section: Proposed Dual-input Transformer Model (mentioning)
confidence: 99%
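As a minimal, hedged illustration of the mechanism this excerpt describes (multi-headed self-attention processing a whole sequence in parallel instead of stepping through it with recurrent layers), the snippet below applies PyTorch's built-in multi-head attention to a dummy ECG-like feature sequence. It is not the cited DI-Transformer; the dimensions and the use of nn.MultiheadAttention are illustrative assumptions.

import torch
import torch.nn as nn

d_model, n_heads = 32, 4
attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=n_heads, batch_first=True)

# Self-attention: query, key, and value are all the same sequence, so every
# time step can attend to every other time step in a single pass.
ecg_tokens = torch.randn(2, 250, d_model)                 # (batch, time steps, features)
out, weights = attn(ecg_tokens, ecg_tokens, ecg_tokens)
print(out.shape, weights.shape)                           # (2, 250, 32) (2, 250, 250)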