2019 18th IEEE International Conference on Machine Learning and Applications (ICMLA)
DOI: 10.1109/icmla.2019.00171
An Encoder-Decoder Based Approach for Anomaly Detection with Application in Additive Manufacturing

Abstract: We present a novel unsupervised deep learning approach that utilizes the encoder-decoder architecture for detecting anomalies in sequential sensor data collected during industrial manufacturing. Our approach is designed not only to detect whether there exists an anomaly at a given time step, but also to predict what will happen next in the (sequential) process. We demonstrate our approach on a dataset collected from a real-world Additive Manufacturing (AM) testbed. The dataset contains infrared (IR) images col…
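The abstract only summarizes the method, but as a rough, hedged illustration of the general idea (reconstructing windows of sequential sensor data with an encoder-decoder and scoring anomalies by reconstruction error), the sketch below uses a small LSTM encoder-decoder in PyTorch. The layer sizes, the 20-step window, the 16 synthetic sensor channels, and the use of an LSTM are assumptions made for illustration; this is not the authors' architecture, and it covers only the detection side, not the paper's prediction of the next step in the process.

# Hedged sketch: an encoder-decoder that reconstructs windows of sensor data.
# All sizes and the LSTM choice are illustrative assumptions, not the paper's model.
import torch
import torch.nn as nn

class SeqEncoderDecoder(nn.Module):
    def __init__(self, n_features=16, hidden=64, latent=8):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.to_latent = nn.Linear(hidden, latent)
        self.from_latent = nn.Linear(latent, hidden)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):
        # x: (batch, window, n_features)
        _, (h, _) = self.encoder(x)                # final hidden state summarizes the window
        z = self.to_latent(h[-1])                  # compressed representation
        dec_seed = self.from_latent(z).unsqueeze(1)
        dec_in = dec_seed.repeat(1, x.size(1), 1)  # repeat the seed for each time step
        dec_out, _ = self.decoder(dec_in)
        return self.out(dec_out)                   # reconstructed window

model = SeqEncoderDecoder()
x = torch.randn(32, 20, 16)                        # batch of 20-step windows, 16 channels
recon = model(x)
score = ((recon - x) ** 2).mean(dim=(1, 2))        # per-window reconstruction error

In practice, the per-window reconstruction error would be compared against a threshold calibrated on anomaly-free data to decide whether a given time step is anomalous.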

Cited by 28 publications (24 citation statements) · References 15 publications (15 reference statements)
“…The architecture of the autoencoder in Fig. 2 was derived by a grid-search-based hyperparameter optimization (parameters: layers, code size, window size, regularization by batch normalization and/or dropout), using as a starting point an architecture proposed by Tan et al. [36]. The objectives of the optimization were to decrease the reconstruction error for normal data and increase it for abnormal data.…”
Section: Temporal Context
confidence: 99%
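As a hedged sketch of the grid-search idea described in this citation statement (choosing autoencoder hyperparameters so that reconstruction error stays low on normal data and grows on abnormal data), the snippet below searches a toy grid over code size and depth using scikit-learn's MLPRegressor as a stand-in autoencoder. The grid values, the synthetic data, and the MLP-based autoencoder are illustrative assumptions; the cited work also searches over window size and batch-normalization/dropout regularization, which are omitted here for brevity.

# Hedged sketch of grid-search hyperparameter selection for an autoencoder.
# Objective: large gap between abnormal and normal reconstruction error.
import itertools
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X_normal = rng.normal(0.0, 1.0, size=(500, 30))    # stand-in for normal windows
X_abnormal = rng.normal(3.0, 1.5, size=(100, 30))  # stand-in for abnormal windows

grid = {
    "code_size": [4, 8, 16],   # bottleneck width
    "n_hidden": [1, 2],        # encoder depth (mirrored in the decoder)
}

def build_layers(code_size, n_hidden):
    enc = [64 // (2 ** i) for i in range(n_hidden)]
    return tuple(enc + [code_size] + enc[::-1])

best = None
for code_size, n_hidden in itertools.product(grid["code_size"], grid["n_hidden"]):
    ae = MLPRegressor(hidden_layer_sizes=build_layers(code_size, n_hidden),
                      max_iter=300, random_state=0)
    ae.fit(X_normal, X_normal)                      # train to reconstruct normal data only
    err_norm = np.mean((ae.predict(X_normal) - X_normal) ** 2)
    err_abn = np.mean((ae.predict(X_abnormal) - X_abnormal) ** 2)
    separation = err_abn - err_norm                 # objective: maximize the gap
    if best is None or separation > best[0]:
        best = (separation, code_size, n_hidden)

print("best separation %.3f with code_size=%d, n_hidden=%d" % best)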
“…1. An AE consists of two main stages, an encoder and a decoder [23]. An encoder maps the given input into a compressed representation, and a decoder transforms the compressed data back into the original input.…”
Section: Architecture Recurrent Outlier Detection
confidence: 99%
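To make the encoder/decoder roles in this statement concrete, here is a minimal sketch in which the "encoder" is a linear projection onto a few principal directions and the "decoder" maps the compressed representation back, with an anomaly flagged when reconstruction error exceeds a quantile of the error seen on normal training data. The linear (PCA-style) mapping and the 99th-percentile threshold are illustrative simplifications; the works discussed here use learned neural autoencoders.

# Hedged sketch: encoder = compress, decoder = reconstruct, score = reconstruction error.
# A linear projection stands in for a learned autoencoder purely for illustration.
import numpy as np

rng = np.random.default_rng(1)
X_train = rng.normal(size=(1000, 20))               # normal sensor windows

# "Encoder": project onto the top-5 principal directions (compressed representation).
mean = X_train.mean(axis=0)
_, _, Vt = np.linalg.svd(X_train - mean, full_matrices=False)
W = Vt[:5]                                          # 20 features -> 5-dimensional code

def encode(X):
    return (X - mean) @ W.T

def decode(Z):
    return Z @ W + mean                             # map the code back to the input space

# Anomaly score = reconstruction error; threshold calibrated on normal data.
train_err = np.mean((decode(encode(X_train)) - X_train) ** 2, axis=1)
threshold = np.quantile(train_err, 0.99)

x_new = rng.normal(loc=4.0, size=(1, 20))           # clearly out-of-distribution sample
err = np.mean((decode(encode(x_new)) - x_new) ** 2, axis=1)
print("anomaly" if err[0] > threshold else "normal")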
“…Meanwhile, image features have also been classified using convolutional neural networks (CNNs) with labelled data (Zhang et al. 2018). Furthermore, autoencoders have been used to detect anomalies (Tan et al. 2019). Though these studies use signals from a highly dynamic process, an i.i.d.…”
Section: Literature Review
confidence: 99%