2018
DOI: 10.1016/j.promfg.2018.10.023

A Convolutional Autoencoder Approach for Feature Extraction in Virtual Metrology

Cited by 74 publications (23 citation statements)
References 22 publications
“…First, a CAE model learns the pattern of normal signal data. CAE is one of the most advanced algorithms in the field of time series anomaly detection [37] and can capture the pattern of high-dimensional time series data without manual feature engineering in advance [39]. Convolutional layers learn spatially invariant features and capture spatially local correlations.…”
Section: A Channelwise Reconstruction
confidence: 99%
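To make the quoted description concrete, below is a minimal sketch (PyTorch, not code from the cited papers) of a 1D convolutional autoencoder trained to reconstruct windows of normal signal data; at test time, a large reconstruction error flags a potential anomaly. The window length, channel counts, and kernel sizes are illustrative assumptions.

import torch
import torch.nn as nn

class ConvAutoencoder1D(nn.Module):
    def __init__(self, in_channels: int = 1):
        super().__init__()
        # Encoder: convolutions capture spatially local correlations in the signal.
        self.encoder = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
        )
        # Decoder: transposed convolutions reconstruct the input window.
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(32, 16, kernel_size=5, stride=2, padding=2, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(16, in_channels, kernel_size=5, stride=2, padding=2, output_padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Train on windows of normal signals only (placeholder random data here);
# at test time, a high reconstruction error flags a potential anomaly.
model = ConvAutoencoder1D()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 1, 128)                  # batch of normal signal windows
loss = nn.functional.mse_loss(model(x), x)  # reconstruction error
optimizer.zero_grad()
loss.backward()
optimizer.step()

The convolutional encoder extracts local features directly from the raw series, without manual feature engineering; its latent feature maps can also serve as learned features for downstream models.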
“…There are many variants of the CNN architecture (Terzi et al., 2017; Lee and Kim, 2018; Maggipinto et al., 2018a; Tsutsui and Matsuzawa, 2019) for chemical processes in the literature, but the general structure of a CNN mainly comprises two parts. The first part is used for feature extraction and is made up of convolution and sub-sampling layers arranged alternately, which are then followed by an activation function and a batch normalization layer, altogether forming multi-layer convolving filters (Figure 1(a)).…”
Section: Preliminaries: CNN
confidence: 99%
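As an illustration of the two-part structure described above, the following is a minimal sketch (PyTorch) of the feature-extraction part: convolution and sub-sampling layers arranged alternately, with each convolution followed by an activation function and a batch normalization layer. Channel counts, kernel sizes, and the 1D input shape are illustrative assumptions, not details from the cited works.

import torch
import torch.nn as nn

feature_extractor = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=3, padding=1),   # convolving filters
    nn.ReLU(),                                    # activation function
    nn.BatchNorm1d(16),                           # batch normalization layer
    nn.MaxPool1d(2),                              # sub-sampling layer
    nn.Conv1d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.BatchNorm1d(32),
    nn.MaxPool1d(2),
)

signals = torch.randn(4, 1, 64)            # placeholder 1D process signals
features = feature_extractor(signals)      # shape (4, 32, 16): learned feature maps
# The second part of the CNN (e.g. fully connected layers) would consume
# these feature maps for the regression or classification task.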
“…More importantly, they are tuned to extract appropriate features for building suitable regression models. However, the applications of deep structures in VM (Terzi et al., 2017; Lee and Kim, 2018; Maggipinto et al., 2018a, 2018b) focus on single-stage data. Indeed, the integration of deep structures with VM applications for raw multi-stage process data is worth investigating.…”
Section: Introduction
confidence: 99%
“…37,38 Convolutional DAEs, which use convolutional layers as hidden layers, have received increasing attention for feature extraction in image classification. 39,40 Actually, for time series, it has been demonstrated that the advantages of RNNs over 1D CNNs are largely absent in practice. 41 Therefore, 1D convolutional layers are used as the main hidden layers in DCDAE, where each convolutional layer is followed by a pooling layer.…”
Section: DCDAE
confidence: 99%
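Below is a minimal sketch (PyTorch) of the idea described in the statement above, assuming a denoising setup in which the network reconstructs the clean series from a corrupted copy; the hidden layers are 1D convolutions, each followed by a pooling layer. Layer sizes, the noise level, and the upsampling-based decoder are illustrative assumptions, not the cited DCDAE configuration.

import torch
import torch.nn as nn

class DenoisingConvAE1D(nn.Module):
    def __init__(self):
        super().__init__()
        # Hidden layers: 1D convolutions, each followed by a pooling layer.
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Decoder: upsampling plus convolutions to restore the original length.
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2),
            nn.Conv1d(32, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Upsample(scale_factor=2),
            nn.Conv1d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DenoisingConvAE1D()
clean = torch.randn(8, 1, 100)                        # placeholder time series
noisy = clean + 0.1 * torch.randn_like(clean)         # corrupted input
loss = nn.functional.mse_loss(model(noisy), clean)    # reconstruct the clean series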