2018 IEEE International Workshop on Signal Processing Systems (SiPS)
DOI: 10.1109/sips.2018.8598447
Epileptic Seizure Detection using Deep Convolutional Autoencoder

Cited by 26 publications (9 citation statements)
References 9 publications
“…After passing through the entire input sequence, the outputs of the two blocks are concatenated and their average is computed and used for the classification task. Bi-LSTMs are useful in that they take into account the temporal dependence between the input at a given time step and its previous and subsequent counterparts, which offers a strong advantage for enhancing the classification results (Abdelhameed et al., 2018b). Figure 8 shows a single-layer Bi-LSTM network unrolled over n time steps.…”
Section: Two-Dimensional Deep Convolutional Autoencoder + Bi-LSTM (mentioning)
confidence: 99%
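As a concrete illustration of the bidirectional processing described in the excerpt above, the following is a minimal sketch of a single-layer Bi-LSTM classifier in PyTorch. All sizes (feature dimension, hidden units, number of classes, sequence length) are illustrative assumptions, and the pooling step (averaging the concatenated forward/backward outputs over time) is one plausible reading of the description, not the cited paper's exact configuration.

```python
# Minimal sketch of a single-layer Bi-LSTM classifier (PyTorch).
# Input size, hidden size, number of classes, and sequence length are
# illustrative assumptions, not taken from the cited paper.
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, input_size=64, hidden_size=128, num_classes=2):
        super().__init__()
        # bidirectional=True runs a forward and a backward LSTM over the sequence
        self.bilstm = nn.LSTM(input_size, hidden_size,
                              batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_size, num_classes)

    def forward(self, x):
        # x: (batch, time_steps, input_size)
        out, _ = self.bilstm(x)      # (batch, time_steps, 2 * hidden_size)
        # forward/backward outputs are already concatenated along the last dim;
        # average them over time before classification
        pooled = out.mean(dim=1)     # (batch, 2 * hidden_size)
        return self.fc(pooled)

# Example: a batch of 8 sequences, 23 time steps, 64 features each
logits = BiLSTMClassifier()(torch.randn(8, 23, 64))
```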
“…Using the ReLU activation function has several advantages that have been discussed in previous studies [25,35]; for example, it reduces the probability of the vanishing-gradient problem, which often occurs when the model is deep. It also adds nonlinearity and helps guarantee the robustness of the system against noise in the input signals [2]. The output of the last layer in the decoding part, which uses a linear activation, is the reconstruction of the original input.…”
Section: Architecture (mentioning)
confidence: 99%
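To make the activation choices in this excerpt concrete, here is a minimal sketch of a convolutional autoencoder whose hidden layers use ReLU and whose final decoding layer uses a linear (identity) activation to reconstruct the input. The layer counts, channel sizes, and kernel sizes are illustrative assumptions rather than the architecture of the cited work.

```python
# Minimal sketch of a 1D convolutional autoencoder: ReLU in the hidden layers,
# linear activation on the final reconstruction layer. Channel counts and
# kernel sizes are illustrative assumptions.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),   # ReLU adds nonlinearity and mitigates vanishing gradients
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(32, 16, kernel_size=5, stride=2,
                               padding=2, output_padding=1),
            nn.ReLU(),
            # last layer: linear (no activation), outputs the reconstructed input
            nn.ConvTranspose1d(16, 1, kernel_size=5, stride=2,
                               padding=2, output_padding=1),
        )

    def forward(self, x):
        # x: (batch, 1, signal_length)
        return self.decoder(self.encoder(x))

# Example: reconstruct a batch of 4 single-channel signals of length 256
recon = ConvAutoencoder()(torch.randn(4, 1, 256))
```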
“…They can be divided into seven types of architecture: (1) convolutional neural networks (CNNs), (2) recurrent neural networks (RNNs), (3) deep belief networks (DBNs), (4) autoencoders (AEs), (5) a new architecture formed by combining CNNs with DBNs or AEs, and (6) transformer-based networks; among these, 2D-CNNs are the most popular neural network architecture for automated seizure detection [14]. CNNs were applied to EEG monitoring and seizure detection by transforming the EEG signals into one-dimensional or two-dimensional forms and feeding the transformed signals to the CNN model [15, 16, 17, 18, 19, 20, 21]. RNNs and their extended models, long short-term memory (LSTM) and gated recurrent units (GRU), were used when the signals vary in length [22, 23, 24, 25, 26].…”
Section: Introduction (mentioning)
confidence: 99%
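The step this excerpt describes, transforming EEG signals into a two-dimensional form and feeding them to a CNN, can be illustrated with a small sketch: a multi-channel EEG segment (channels x time samples) is treated as a single-channel 2D input to a 2D CNN. The shapes and layer sizes below are assumptions chosen for illustration only, not the configuration of any of the cited works.

```python
# Minimal sketch of feeding a 2D representation of an EEG segment to a 2D CNN.
# The segment is (channels x time samples) and is treated as a one-channel image.
# All shapes and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 2),      # e.g. seizure vs. non-seizure
)

# One hypothetical 23-channel EEG segment of 256 samples,
# reshaped to (batch, image_channels, height, width)
segment = torch.randn(23, 256)
logits = cnn(segment.view(1, 1, 23, 256))
```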