2022
DOI: 10.48550/arxiv.2201.04828
Preprint
Multi-Scale Adaptive Graph Neural Network for Multivariate Time Series Forecasting

Abstract: Multivariate time series (MTS) forecasting plays an important role in the automation and optimization of intelligent applications. It is a challenging task, as we need to consider both complex intra-variable dependencies and inter-variable dependencies. Existing works only learn temporal patterns with the help of single inter-variable dependencies. However, there are multi-scale temporal patterns in many real-world MTS. Single inter-variable dependencies make the model prefer to learn one type of prominent and…
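The abstract's central idea, modeling multi-scale temporal patterns together with scale-specific inter-variable dependencies, can be made concrete with a small sketch. The code below is an illustrative assumption, not the paper's implementation: it builds coarser temporal views of a multivariate series by average pooling and learns one adaptive adjacency matrix per scale from node embeddings (a construction common in adaptive spatio-temporal GNNs). The names `build_scales` and `AdaptiveAdjacency` are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveAdjacency(nn.Module):
    """Learn a dense inter-variable adjacency from node embeddings (illustrative sketch)."""
    def __init__(self, num_nodes: int, emb_dim: int = 16):
        super().__init__()
        self.emb = nn.Parameter(torch.randn(num_nodes, emb_dim))

    def forward(self) -> torch.Tensor:
        # Pairwise similarity of learned node embeddings -> row-normalized adjacency.
        scores = self.emb @ self.emb.t()                 # (N, N)
        return F.softmax(F.relu(scores), dim=-1)

def build_scales(x: torch.Tensor, num_scales: int = 3) -> list:
    """x: (batch, num_vars, time). Return progressively coarser temporal views."""
    views = [x]
    for _ in range(num_scales - 1):
        x = F.avg_pool1d(x, kernel_size=2, stride=2)     # halve the temporal resolution
        views.append(x)
    return views

# Usage: 8 variables, 64 time steps, 3 scales, one adaptive graph per scale.
x = torch.randn(4, 8, 64)
graphs = nn.ModuleList([AdaptiveAdjacency(num_nodes=8) for _ in range(3)])
for view, g in zip(build_scales(x), graphs):
    A = g()                                              # (8, 8) scale-specific adjacency
    mixed = torch.einsum("nm,bmt->bnt", A, view)         # propagate information across variables
    print(view.shape, mixed.shape)
```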

Cited by 3 publications (15 citation statements) | References 21 publications
“…Hewage et al. applied a temporal convolutional network (TCN) to weather forecasting [18]. To overcome the grid operation of CNN, Chen et al. proposed a multi-scale adaptive graph neural network that jointly considers intra- and inter-variable dependencies [19].…”
Section: A. Deep Learning Methods for Time-Series Forecasting
Confidence: 99%
“…Three main T-operator families are identified: (1) CNN-based T-operators [8,13,21,28,46,61,62,65], specifically Temporal Convolutional Networks (TCNs), that apply dilated causal convolutions to time series data; (2) RNN-based T-operators, such as long short-term memory networks (LSTMs) [49] and gated recurrent unit networks (GRUs) [4,6,9,13,36], that process time series based on a recursive mechanism; and (3) Transformer-based T-operators that adopt the attention mechanism to establish self-interactions of input time steps, enabling weighted temporal information extraction over long sequences. While all T-operator families have the same space complexity, the time complexity of the operators in the Transformer family is larger than those of the operators in the CNN and RNN families because of their large-size matrix multiplication [60].…”
Section: S/T-Operators
Confidence: 99%
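As a concrete reference for the three T-operator families named in the citation above, here is a minimal, hypothetical sketch of one operator per family acting on the same (batch, time, variables) tensor. Layer sizes are arbitrary and the modules are illustrative, not reconstructions of any cited model.

```python
import torch
import torch.nn as nn

B, T, N, d = 4, 24, 8, 32          # batch, time steps, variables, hidden size
x = torch.randn(B, T, N)

# (1) CNN-based T-operator: dilated causal convolution over time (TCN-style).
causal = nn.Conv1d(N, d, kernel_size=3, dilation=2)
pad = (causal.kernel_size[0] - 1) * causal.dilation[0]          # left-pad so no future leakage
h_cnn = causal(nn.functional.pad(x.transpose(1, 2), (pad, 0)))  # (B, d, T)

# (2) RNN-based T-operator: recursive processing with a GRU.
gru = nn.GRU(input_size=N, hidden_size=d, batch_first=True)
h_rnn, _ = gru(x)                                               # (B, T, d)

# (3) Transformer-based T-operator: self-attention among time steps,
#     giving each step a weighted view of the whole sequence (O(T^2) time).
proj = nn.Linear(N, d)
attn = nn.MultiheadAttention(embed_dim=d, num_heads=4, batch_first=True)
z = proj(x)
h_att, _ = attn(z, z, z)                                        # (B, T, d)

print(h_cnn.shape, h_rnn.shape, h_att.shape)
```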
“…There are roughly two S-operator families: (1) GCN-based S-operators, specifically Chebyshev GCNs [8,13,21,46] or Diffusion GCNs [36,60,62], utilize predefined or learned spatial adjacency matrices to capture high-order spatial correlations and (2) Transformer-based S-operators [20,44,60,64] cast attention operations across different time series to obtain their weighted spatial correlations. Theoretically, GCNs and Transformers incur the same space and time complexities for S-operators (see Table 2).…”
Section: S/T-Operators
Confidence: 99%
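For the two S-operator families, the following hypothetical sketch contrasts a one-hop graph convolution that mixes variables through an adjacency matrix with attention-based mixing across series. A single normalized adjacency stands in for a full Chebyshev or diffusion expansion, so this is a simplified assumption rather than either cited formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

B, T, N, d = 4, 24, 8, 32            # batch, time, variables (graph nodes), hidden size
h = torch.randn(B, T, N, d)          # per-variable features after some T-operator

# (1) GCN-based S-operator: propagate features along a (predefined or learned) adjacency.
A = F.softmax(torch.randn(N, N), dim=-1)         # stand-in for a normalized spatial adjacency
W = nn.Linear(d, d)
h_gcn = W(torch.einsum("nm,btmd->btnd", A, h))   # one-hop graph convolution

# (2) Transformer-based S-operator: attention *across variables* at each time step.
attn = nn.MultiheadAttention(embed_dim=d, num_heads=4, batch_first=True)
flat = h.reshape(B * T, N, d)                    # treat each time step as a length-N "sequence"
h_att, weights = attn(flat, flat, flat)          # weights: (B*T, N, N) spatial correlations
h_att = h_att.reshape(B, T, N, d)

print(h_gcn.shape, h_att.shape, weights.shape)
```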