2020
DOI: 10.1109/access.2019.2963449
A Gated Dilated Causal Convolution Based Encoder-Decoder for Network Traffic Forecasting

Abstract: The accurate estimation of future network traffic is a key enabler for early warning of network degradation and automated orchestration of network resources. The long short-term memory neural network (LSTM) is a popular architecture for network traffic forecasting, and has been successfully used in many applications. However, it has been observed that LSTMs suffer from limited memory capacity problems when the sequence is long. In this paper, we propose a gated dilated causal convolution based encoder-decoder …
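The truncated abstract names the core building block; below is a minimal sketch of what a gated dilated causal convolution layer might look like, assuming WaveNet-style tanh/sigmoid gating and a residual connection. These are illustrative choices, not the paper's exact architecture, and the channel count, kernel size, and dilation schedule are likewise assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedDilatedCausalConv1d(nn.Module):
    """One gated dilated causal convolution block (WaveNet-style gating assumed)."""
    def __init__(self, channels: int, kernel_size: int = 2, dilation: int = 1):
        super().__init__()
        # Left padding of (kernel_size - 1) * dilation keeps the convolution causal
        # and the output the same length as the input.
        self.left_pad = (kernel_size - 1) * dilation
        self.filter_conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.gate_conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        padded = F.pad(x, (self.left_pad, 0))   # pad only on the left (the past)
        gated = torch.tanh(self.filter_conv(padded)) * torch.sigmoid(self.gate_conv(padded))
        return x + gated                        # residual connection (assumed)

# Example: encode a batch of traffic sequences with an exponentially dilated stack.
encoder = nn.Sequential(*[GatedDilatedCausalConv1d(16, dilation=2 ** i) for i in range(4)])
traffic = torch.randn(8, 16, 96)                # (batch, features, time steps)
print(encoder(traffic).shape)                   # torch.Size([8, 16, 96])
```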

Cited by 21 publications (5 citation statements)
References 23 publications
“…Causal convolution [26] serves two essential functions in TCN [27]: ensuring that the network generates an output of the same length as the input and preventing leakage from future to past. Causal convolution differs from a standard convolutional neural network in that it utilizes a unidirectional structure, ensuring that the model input and output are the same size.…”
Section: Causal Convolution
confidence: 99%
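The two properties this statement attributes to causal convolution can be checked with a small sketch: a left-padded Conv1d in PyTorch preserves the input length, and perturbing future time steps leaves earlier outputs unchanged. The kernel size and the perturbation test are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Check that a left-padded Conv1d is causal: the output at step t must not change
# when inputs after t change (no future-to-past leakage), and the output length
# must equal the input length.
torch.manual_seed(0)
kernel_size = 3
conv = nn.Conv1d(1, 1, kernel_size)

x = torch.randn(1, 1, 10)
y = conv(F.pad(x, (kernel_size - 1, 0)))              # pad only the past (left) side

x_future = x.clone()
x_future[..., 6:] += 1.0                              # perturb only future steps
y_future = conv(F.pad(x_future, (kernel_size - 1, 0)))

print(y.shape)                                        # torch.Size([1, 1, 10]) -> same length
print(torch.allclose(y[..., :6], y_future[..., :6]))  # True -> steps 0..5 unaffected
```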
“…This formula generalizes readily to multi-dimensional signals, but for brevity it is not included here. To keep the output the same length as the input, padding (zero or replicate) of size k − 1 is added to the left end of the input sequence [26]. To give every element a larger receptive field, several causal convolution layers are stacked.…”
Section: Dilated Causal Convolution
confidence: 99%
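The quoted padding rule and the receptive-field argument can be sketched as follows; kernel size 2 and dilations 1, 2, 4, 8 are assumed values. Each layer left-pads by (k − 1) × dilation so the sequence length is preserved, and the stacked receptive field grows to 1 + Σ (k − 1)·d.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedCausalStack(nn.Module):
    """Stack of dilated causal convolutions with the left-padding rule quoted above."""
    def __init__(self, kernel_size: int = 2, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.kernel_size = kernel_size
        self.dilations = dilations
        self.convs = nn.ModuleList(
            nn.Conv1d(1, 1, kernel_size, dilation=d) for d in dilations
        )

    def forward(self, x):
        for conv, d in zip(self.convs, self.dilations):
            x = conv(F.pad(x, ((self.kernel_size - 1) * d, 0)))  # zero-pad the past only
        return x

    def receptive_field(self):
        # 1 + sum over layers of (kernel_size - 1) * dilation
        return 1 + sum((self.kernel_size - 1) * d for d in self.dilations)

stack = DilatedCausalStack()
print(stack(torch.randn(1, 1, 64)).shape)  # torch.Size([1, 1, 64]): length preserved
print(stack.receptive_field())             # 16 time steps visible to each output
```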
“…LSTM, an extension of RNN, was proposed to resolve the vulnerability of standard RNNs to the exploding/vanishing gradient problem caused by long-term dependencies [45-47]. With architectural innovations including a triple-gate mechanism that controls the inputs to each cell and a feedback loop for data retention, LSTM can learn long-term dependencies and discard invalid inputs that would perturb the cell's outputs [6,48]. In practice, an implemented LSTM model usually consists of a set of blocks, where each block contains several LSTM cells.…”
Section: Techniques For NTP
confidence: 99%
“…Different RNN-based architectures can be defined according to the adopted activation function and how the neurons connect to each other, namely Fully Recurrent Neural Networks (FRNN), Bidirectional Neural Networks (BNN), stochastic neural networks, and the well-known Long Short-Term Memory (LSTM) paradigm [44]. LSTM, an extension of RNN, was proposed to resolve the vulnerability of standard RNNs to the exploding/vanishing gradient problem caused by long-term dependencies [45-47]. With architectural innovations including a triple-gate mechanism that controls the inputs to each cell and a feedback loop for data retention, LSTM can learn long-term dependencies and discard invalid inputs that would perturb the cell's outputs [6,48].…”
Section: Basic Concepts
confidence: 99%
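The triple-gate mechanism and recurrent feedback loop described in the two statements above can be illustrated with PyTorch's built-in LSTMCell; the input size, hidden size, and sequence length below are arbitrary assumptions, not values from the cited works.

```python
import torch
import torch.nn as nn

# Input, forget, and output gates decide what enters the cell, what is retained in
# the recurrent (feedback) cell state, and what is exposed as the hidden output.
cell = nn.LSTMCell(input_size=4, hidden_size=8)

h = torch.zeros(1, 8)   # hidden state (cell output)
c = torch.zeros(1, 8)   # cell state: the "feedback loop" that retains information

for t in range(24):                  # e.g. 24 past traffic measurements
    x_t = torch.randn(1, 4)          # features observed at time step t
    h, c = cell(x_t, (h, c))         # gates decide what to keep and what to drop

print(h.shape)  # torch.Size([1, 8]) -> summary of the whole sequence
```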