2018 IEEE International Conference on Data Mining (ICDM) 2018
DOI: 10.1109/icdm.2018.00088
Adversarially Learned Anomaly Detection

Abstract: Anomaly detection is a significant and hence well-studied problem. However, developing effective anomaly detection methods for complex and high-dimensional data remains a challenge. As Generative Adversarial Networks (GANs) are able to model the complex high-dimensional distributions of real-world data, they offer a promising approach to address this challenge. In this work, we propose an anomaly detection method, Adversarially Learned Anomaly Detection (ALAD), based on bi-directional GANs, that derives adversar…

Cited by 325 publications (279 citation statements)
References 14 publications
“…Existing deep anomaly detection methods [2,7,19,20,22,29,30] address these two challenges by using unsupervised deep learning to model the normal class in a two-step approach (i.e., the pipeline (a) in Figure 1): they first learn to represent data with new representations, e.g., intermediate representations in autoencoders [2,7,30], latent spaces in generative adversarial networks (GANs) [22,29], or distance metric spaces in [19,20]; and then they use the learned representations to define anomaly scores using reconstruction errors [2,7,22,29,30] or distance-based measures [19,20]. However, in most of these methods [2,7,22,29,30], the representation learning is separate from anomaly detection methods, so it may yield representations that are suboptimal or even irrelevant w.r.t. specific anomaly detection methods.…”
Section: Introduction
confidence: 99%
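The two-step pipeline quoted above (learn a representation of the normal class, then score points by reconstruction error) can be sketched with a toy linear "autoencoder" fit by PCA. This is an illustrative assumption, not the paper's method: the data, dimensions, and the linear encoder/decoder are made up for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Normal data lies near a 2-D subspace of a 5-D space (toy assumption).
normal = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 5))

# Step 1 — "representation learning": the top-2 principal directions
# of the normal class act as encoder/decoder.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:2]

def anomaly_score(x):
    """Step 2 — reconstruction error ||x - decode(encode(x))||."""
    z = (x - mean) @ components.T      # encode
    x_hat = z @ components + mean      # decode
    return np.linalg.norm(x - x_hat)

inlier = normal[0]                      # lies in the learned subspace
outlier = rng.normal(size=5) * 10       # far from the subspace
assert anomaly_score(inlier) < anomaly_score(outlier)
```

The criticism in the excerpt is visible even here: the subspace is fit without reference to the downstream scoring rule, so nothing guarantees the learned representation is optimal for detection.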
“…GANs [6] created a new branch in the development of image anomaly detection. GAN-based approaches [7][8][9][10][11] differ in two parts: (i) how to find latent vectors that correspond to the input images, (ii) how to estimate abnormality based… [Figure 2: Comparison of four anomaly detection models. G denotes the generator, E the encoder, D* the discriminators, "rec.…]”
Section: Related Work
confidence: 99%
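The two design axes named in the excerpt, (i) recovering a latent vector z for an input x and (ii) scoring abnormality from the result, can be sketched with a toy *linear* generator G(z) = Wz, so that the latent search reduces to least squares. This is a hedged stand-in: real GAN-based methods solve (i) by gradient descent in z (AnoGAN-style) or with a learned encoder (as in ALAD); W, the dimensions, and the samples here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 3))   # toy generator: 3-D latents -> 8-D data

def recover_z(x):
    """Axis (i): find z minimising ||G(z) - x||^2 for the linear G."""
    z, *_ = np.linalg.lstsq(W, x, rcond=None)
    return z

def abnormality(x):
    """Axis (ii): residual left after best-effort reconstruction."""
    return np.linalg.norm(W @ recover_z(x) - x)

on_manifold = W @ rng.normal(size=3)   # sample the generator can produce
off_manifold = rng.normal(size=8) * 10
assert abnormality(on_manifold) < abnormality(off_manifold)
```

Points the generator can reproduce get near-zero scores; points off its range cannot be reconstructed and score high, which is the shared intuition behind the methods the excerpt compares.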
“…For ADGAN [8], OCGAN [9], ALAD [11], GPND [20], LSA [21] we used results as reported in the corresponding publications. Results for OC-SVM, KDE, AnoGAN were obtained from [21].…”
Section: Algorithm 3 Select Weighting Parameter
confidence: 99%
“…After training, the generator produces examples from the distribution of the original data. The discriminator, in contrast, can detect novelties and outliers in the data [20]. Often, GANs employ the Wasserstein distance [21] or the binary cross entropy (BCE) [22] as the loss function.…”
Section: Generative Adversarial Network
confidence: 99%
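A minimal sketch of the binary cross entropy mentioned above, as used for a GAN discriminator: real samples get target 1, generated samples target 0. The probabilities below are illustrative stand-ins for discriminator outputs, not values from any trained model.

```python
import math

def bce(p, y):
    """Binary cross entropy for one prediction p in (0, 1), target y."""
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

# A confident, correct discriminator incurs low loss...
low = bce(0.95, 1) + bce(0.05, 0)
# ...while one fooled by the generator incurs high loss.
high = bce(0.05, 1) + bce(0.95, 0)
assert low < high
```

The adversarial game pushes the generator to raise this loss for the discriminator, while the discriminator's learned decision boundary is what outlier-detection uses after training.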