2023
DOI: 10.1016/j.engappai.2023.105964
Time-series anomaly detection with stacked Transformer representations and 1D convolutional network

Cited by 45 publications (17 citation statements)
References 10 publications
“…Transformer-based methods [15], [38] that capture temporal dependencies in sequential data have been proposed in various fields. Kim et al. [38] proposed a time-series anomaly detection method that employs a stacked transformer encoder and a one-dimensional convolutional neural network (1D-CNN) [39]-based decoder to capture global trends and local variability in time-series data. ALSP [15] is a stock selection method that uses a transformer encoder to capture long- and short-term stock patterns.…”
Section: Transformer-based Methods
confidence: 99%
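
The architecture summarized in the statement above can be sketched in a few lines. This is a minimal illustration only: the module sizes, the linear embedding, and the reconstruction-error scoring below are assumptions for this sketch, not the cited paper's exact design.

# Minimal sketch (assumed design, not the authors' code): stacked Transformer
# encoder layers followed by a 1D-CNN decoder that reconstructs the input
# window; a large reconstruction error marks a timestep as anomalous.
import torch
import torch.nn as nn

class TransformerConvDetector(nn.Module):
    def __init__(self, n_features, d_model=64, n_heads=4, n_layers=3, kernel_size=3):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)            # project features to model width
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, n_layers)   # stacked self-attention
        self.decoder = nn.Sequential(                          # 1D-CNN decoder over the time axis
            nn.Conv1d(d_model, d_model, kernel_size, padding=kernel_size // 2),
            nn.ReLU(),
            nn.Conv1d(d_model, n_features, kernel_size, padding=kernel_size // 2),
        )

    def forward(self, x):                    # x: (batch, time, features)
        h = self.encoder(self.embed(x))      # global trends via self-attention
        y = self.decoder(h.transpose(1, 2))  # local variability via convolution
        return y.transpose(1, 2)             # reconstruction, same shape as x

model = TransformerConvDetector(n_features=8)
window = torch.randn(1, 100, 8)
score = (model(window) - window).abs().mean(dim=-1)  # per-timestep anomaly score

Under this reading, the self-attention stack covers the global trend of the window while the convolutional decoder emphasizes local variability, and timesteps with large reconstruction error are flagged as anomalous.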
“…To make full use of temporal patterns, a frequency attention module is designed to extract periodic oscillation features. Kim, J. et al. [28] proposed an unsupervised prediction-based time-series anomaly detection method using the transformer, which learns the dynamic patterns of sequential data through a self-attention mechanism. The output representations of each transformer layer are accumulated in the encoder to obtain a multi-level, information-rich representation.…”
Section: The Methods for Spatial-Temporal Correlation Fusion
confidence: 99%
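
The layer-accumulation idea mentioned in this statement can be sketched as follows, assuming a simple elementwise sum of per-layer outputs; the actual aggregation in the cited method may differ.

# Sketch (assumed aggregation): run each Transformer encoder layer in turn and
# accumulate every layer's output so the final representation mixes low- and
# high-level views of the sequence.
import torch
import torch.nn as nn

layers = nn.ModuleList(
    [nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True) for _ in range(3)]
)

def stacked_representation(x):   # x: (batch, time, 64)
    acc = torch.zeros_like(x)
    h = x
    for layer in layers:
        h = layer(h)             # self-attention learns dynamic temporal patterns
        acc = acc + h            # accumulate this layer's representation
    return acc                   # multi-level, information-rich representation

out = stacked_representation(torch.randn(2, 50, 64))   # -> (2, 50, 64)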
“…This is because each subsequence undergoes the same input modification. Temporal translation invariance [29] allows a pattern learned at one point in a sequence to be recognized at a different location later.…”
Section: PV Power Prediction Model, 2.2.1 One-Dimensional Convolution
confidence: 99%
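
The translation-invariance property referred to here can be seen in a toy example. The pattern, kernel, and sequence length below are arbitrary choices for illustration.

# Toy demonstration: PyTorch's conv1d is a sliding cross-correlation, so the
# same kernel gives the same response to a pattern wherever it appears.
import torch
import torch.nn.functional as F

pattern = torch.tensor([1.0, -1.0, 1.0])
kernel = pattern.view(1, 1, -1)                 # the pattern itself acts as a matched filter

early = torch.zeros(1, 1, 20)
early[0, 0, 2:5] = pattern                      # pattern near the start of the sequence
late = torch.zeros(1, 1, 20)
late[0, 0, 14:17] = pattern                     # same pattern near the end

r_early = F.conv1d(early, kernel)
r_late = F.conv1d(late, kernel)
print(r_early.max().item(), r_late.max().item())  # identical peak responses

Because the kernel slides over the whole sequence with shared weights, the peak response is the same wherever the pattern occurs, which is the sense in which a pattern learned at one position is recognized at another.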