Proceedings of the 14th International Joint Conference on Biomedical Engineering Systems and Technologies 2021
DOI: 10.5220/0010320500910102
Robust Anomaly Detection in Time Series through Variational AutoEncoders and a Local Similarity Score

Cited by 11 publications (5 citation statements); references 11 publications.
“…We therefore group all four abnormal classes into one class. Thus, the ECG5000 time-series classification task is modeled as a binary classification task between healthy and unhealthy heartbeats; a few authors do the same (Matias et al., 2021; Oluwasanmi et al., 2021; Biloborodova et al., 2022).…”
Section: ECG
confidence: 99%
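
The grouping step quoted above amounts to a simple relabeling of the ECG5000 classes. A minimal sketch follows, assuming the UCR-style layout in which the first column of each row holds the original class label (1 = normal, 2-5 = abnormal); the file name and array layout are assumptions, not the cited authors' code.

# Minimal sketch of the relabeling step described above; layout and
# file name are assumptions, not the cited authors' code.
import numpy as np

def to_binary_labels(data: np.ndarray):
    """Split a UCR-style ECG5000 array into signals and binary labels."""
    labels = data[:, 0].astype(int)      # original 5-class labels
    signals = data[:, 1:]                # one heartbeat per row
    # Group the four abnormal classes (2..5) into a single "unhealthy" class.
    binary = (labels != 1).astype(int)   # 0 = healthy, 1 = unhealthy
    return signals, binary

# Example usage (file name assumed):
# data = np.loadtxt("ECG5000_TRAIN.txt"); X, y = to_binary_labels(data)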
“…Therefore, we decided to opt for an anomaly detection task that requires less data for training. Interestingly, Variational Autoencoders have been successfully employed for anomaly/change/novelty detection in domains such as bird species [19], ultrasound [16], time series [14], and computer networks [11].…”
Section: # of Observations
confidence: 99%
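
The VAE-based anomaly detection referenced in this statement typically follows the pattern sketched below: train a VAE on (mostly) normal windows and flag windows whose reconstruction error is unusually high. This is a generic PyTorch illustration, not the local similarity score proposed in the paper above; the layer sizes, the plain MSE-based score, and all hyperparameters are assumptions.

# Generic sketch of VAE-based anomaly scoring on fixed-length time-series
# windows; scores windows by reconstruction error. Not the cited method.
import torch
import torch.nn as nn

class TimeSeriesVAE(nn.Module):
    def __init__(self, window: int = 140, latent: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(window, 64), nn.ReLU())
        self.mu = nn.Linear(64, latent)
        self.logvar = nn.Linear(64, latent)
        self.decoder = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(),
                                     nn.Linear(64, window))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z from N(mu, sigma^2).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior.
    rec = nn.functional.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

@torch.no_grad()
def anomaly_scores(model: TimeSeriesVAE, x: torch.Tensor) -> torch.Tensor:
    # Windows the model reconstructs poorly (high score) are flagged anomalous.
    x_hat, _, _ = model(x)
    return ((x - x_hat) ** 2).mean(dim=1)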
“…More recently, DL models have started to be applied in time series analysis (specifically for classification and anomaly detection tasks) [26][27][28][29][30] after achieving notable success in the computer vision field [31]. DNN models offer many advantages over classical approaches: they do not require an elaborate data pre-processing pipeline, they efficiently extract relevant feature maps (unlike hand-crafted features, which require expert domain knowledge and can be computationally more expensive to extract [32,33]), and they handle large amounts of data better.…”
Section: Related Work
confidence: 99%
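
The automatic feature extraction mentioned in this statement is typically realized with convolutional filters learned directly from the raw signal. The sketch below shows a minimal 1D CNN classifier in PyTorch; the architecture and hyperparameters are illustrative assumptions and are not taken from the cited works.

# Minimal 1D CNN for fixed-length time-series classification, illustrating
# learned feature maps in place of hand-crafted features. Illustrative only.
import torch
import torch.nn as nn

class Conv1DClassifier(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        # Convolutional feature extractor: learns filters from the raw signal.
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, window) -> add a channel dimension for Conv1d.
        feats = self.features(x.unsqueeze(1)).squeeze(-1)
        return self.head(feats)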