2021
DOI: 10.1007/978-3-030-68790-8_44
Supervised Autoencoder Variants for End to End Anomaly Detection

Cited by 4 publications (5 citation statements)
References 41 publications
“…While ATA incorporates not only toxic but also non-toxic samples in the training procedure, OCA's training routine exposes the model only to toxic samples, leading to underperformance even compared to the random BASE baseline. This is a well-known problem which arises especially when the inliers correlate with outliers in feature space [3,4]. Interestingly, all methods perform better on the insult test set than on the others.…”
Section: Results
confidence: 95%
“…In order to address this issue of diverse and previously unknown toxicity types, we present a comparative analysis of classification and outlier detection methods. In this work, we consider three different types of methods for toxicity detection, namely a) representation-learning-based outlier detectors, b) ensemble methods, and c) traditional deep neural networks. In the first case, a representation of the normal class (here, the toxic class) is learned, and any sample that is very dissimilar from this representation is rejected as an outlier [3,4,5,6]. In practice, this methodology has been successfully applied within a wide spectrum of domains, such as medicine [7], fraud detection [8], or intrusion detection [9].…”
Section: Introduction
confidence: 99%
“…In the related OCC setting with its focus on outlier detection, DNN-based approaches have been researched from three angles: (1) combining kernel methods [65] with DNN methods [14,60,71], (2) outlier detectors based on generative models (e.g., generative adversarial networks [22] or variational autoencoders [35]) [50,64,72], and (3) approaches based on (semi-)supervised autoencoders [8,10,26,44,45,47]. Here, the key idea is to learn a representation of the inlier distribution and subsequently to estimate the outlierness of a sample via its reconstruction error.…”
Section: Related Work
confidence: 99%
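The reconstruction-error idea quoted above can be sketched with a linear autoencoder, which is equivalent to PCA. The data, the latent rank `k`, and all function names below are illustrative assumptions, not taken from the cited papers:

```python
import numpy as np

def fit_linear_autoencoder(X, k):
    """PCA as a linear autoencoder: the top-k principal directions act
    as the encoder, their transpose as the decoder."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def reconstruction_error(X, mu, W):
    Z = (X - mu) @ W.T        # encode into the k-dim inlier subspace
    X_hat = Z @ W + mu        # decode back to input space
    return np.square(X - X_hat).sum(axis=1)

rng = np.random.default_rng(0)
# inliers lie (approximately) on a 3-dim subspace of a 10-dim space
inliers = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 10))
mu, W = fit_linear_autoencoder(inliers, k=3)

# samples off that subspace reconstruct poorly, i.e. score as outliers
outliers = rng.normal(size=(50, 10))
e_in = reconstruction_error(inliers, mu, W).mean()
e_out = reconstruction_error(outliers, mu, W).mean()
```

A deep autoencoder replaces the linear encoder/decoder with neural networks, but the scoring principle (higher reconstruction error means more outlier-like) is the same.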
“…Similar to existing autoencoder-based approaches for outlier detection [8,26,45], the decoupling autoencoder (DAE) method learns the outlierness of a sample via its reconstruction error. Existing approaches estimate the decision boundary via brute-force algorithms [1,45] or learn it via a subsequent downstream layer [44]. In contrast, DAE learns the decision boundary end-to-end while optimizing for a pessimistic decision boundary that lies as close as possible to the inlier samples without compromising generalization performance.…”
Section: Decoupling Autoencoders
confidence: 99%
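The brute-force boundary estimation this statement contrasts with can be illustrated by the simplest member of that family: fixing the threshold at a quantile of the inlier reconstruction errors. The numbers and names below are illustrative; the DAE described above instead learns this boundary jointly with the autoencoder:

```python
import numpy as np

def pick_threshold(inlier_scores, q=0.95):
    # brute-force-style decision boundary: a fixed quantile of the
    # inlier reconstruction errors; higher scores are flagged as outliers
    return np.quantile(inlier_scores, q)

scores = np.array([0.10, 0.15, 0.20, 0.25, 0.30, 5.0])  # toy errors
tau = pick_threshold(scores[:5])   # threshold fit on the inlier scores only
flags = scores > tau               # the far-out last sample is flagged
```

Quantile or grid-search thresholds like this are fit after training and depend on a held-out choice of `q`; learning the boundary end-to-end removes that separate post-hoc step.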