2021
DOI: 10.1002/int.22582

Improved autoencoder for unsupervised anomaly detection

Abstract: Deep autoencoder-based methods constitute the majority of deep anomaly detection approaches. An autoencoder trained on the training data is assumed to produce higher reconstruction error for anomalous samples than for normal samples, and can thus distinguish anomalies from normal data. However, this assumption does not always hold in practice, especially in unsupervised anomaly detection, where the training data is anomaly…
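The reconstruction-error criterion described in the abstract can be sketched with a linear autoencoder (a rank-2 PCA reconstruction) as a stand-in for a trained deep autoencoder; the data, rank, and names below are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Normal data lies near a 2-D subspace of R^5; anomalies do not.
normal = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 5))
anomaly = rng.normal(size=(5, 5)) * 3.0

# "Train": fit a rank-2 linear autoencoder (PCA) on the normal data only.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:2]  # shared encoder/decoder weights

def reconstruction_error(x):
    """Anomaly score: squared error between x and its reconstruction."""
    z = (x - mean) @ components.T   # encode into the 2-D latent space
    x_hat = z @ components + mean   # decode back to R^5
    return np.sum((x - x_hat) ** 2, axis=1)

# Under the abstract's assumption, anomalies score higher than normal samples.
print(reconstruction_error(normal).mean() < reconstruction_error(anomaly).mean())
```

Thresholding this score yields an anomaly detector; the paper's point is that the assumption behind it can fail when the training set itself is contaminated with anomalies.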

Cited by 47 publications (25 citation statements)
References 24 publications
“…In order to investigate the detection performance of the proposed algorithm for time-frequency overlapped interference, one method based on improved autoencoder (IAE) [28] and another method based on improved generative adversarial network (IGAN) [29] are compared in this paper, which are commonly used for anomaly detection in deep learning.…”
Section: Results
Mentioning confidence: 99%
“…This contribution mainly investigates the use of the autoencoder (AE) for novelty detection [2,11]. The AE is a type of artificial neural network that learns the data representation in an unsupervised manner.…”
Section: Autoencoder
Mentioning confidence: 99%
“…Details are discussed in the following sections. if merging ℓ^(1) and ℓ^(2) increases the total GCD do ⊳ ℓ^(1) and ℓ^(2) as in Equation (21)…”
Section: Kernel-Based Online Dependence Clustering
Mentioning confidence: 99%
“…If merging the two candidates, ℓ^(1) and ℓ^(2), increases the total GCD, the two clusters become one, and the number of clusters decreases to l − 1. For every iteration, the merger step searches for the two clusters with the smallest separation and determines whether they can be merged to increase the total GCD.…”
Section: Classifier Update
Mentioning confidence: 99%
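The merger step quoted above can be sketched as a greedy loop: pick the two closest clusters and keep the merge only if it improves a global objective. Everything below is a hypothetical illustration — `objective` is a simple stand-in score (tightness minus a cluster-count penalty), not the cited paper's GCD.

```python
import numpy as np

def objective(clusters, penalty=1.0):
    # Stand-in score, NOT the paper's GCD: rewards tight clusters and
    # penalizes the cluster count, so merging near-duplicate clusters helps.
    within = sum(np.var(c, axis=0).sum() * len(c) for c in clusters)
    return -within - penalty * len(clusters)

def merge_step(clusters):
    """One merger iteration: find the two clusters with the smallest centroid
    separation and merge them only if doing so increases the objective."""
    if len(clusters) < 2:
        return clusters, False
    centroids = [c.mean(axis=0) for c in clusters]
    i, j = min(
        ((a, b) for a in range(len(clusters)) for b in range(a + 1, len(clusters))),
        key=lambda ab: np.linalg.norm(centroids[ab[0]] - centroids[ab[1]]),
    )
    candidate = (clusters[:i] + clusters[i + 1:j] + clusters[j + 1:]
                 + [np.vstack([clusters[i], clusters[j]])])
    if objective(candidate) > objective(clusters):
        return candidate, True   # merge accepted: l clusters become l - 1
    return clusters, False       # merge rejected: keep the current partition

rng = np.random.default_rng(0)
a = rng.normal(0.0, 0.1, (10, 2))   # two fragments of the same tight cluster
b = rng.normal(0.0, 0.1, (10, 2))
c = rng.normal(10.0, 0.1, (10, 2))  # a well-separated cluster

clusters, merged = merge_step([a, b, c])
print(merged, len(clusters))        # the two fragments merge: True 2
clusters, merged = merge_step(clusters)
print(merged, len(clusters))        # merging with the far cluster is rejected: False 2
```

Iterating `merge_step` until no merge is accepted mirrors the quoted behavior: the cluster count only decreases when the merge increases the total objective.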