2019
DOI: 10.1016/j.future.2019.04.014

TIDE: Time-relevant deep reinforcement learning for routing optimization

Cited by 84 publications (33 citation statements)
References 10 publications
“…1b. Prior studies with indirect routing representations, e.g., through link weights, have typically only considered single-path routing [19], [20], see Fig. 1a.…”
Section: B Contribution: Direct Flow Routing Representation For Flow
confidence: 99%
“…The so-called "sample inefficiency" of deep reinforcement learning [9], [10] (in contrast to classical tabular Q-Learning [7]) leads to very long training times for the deep neural network, resulting in potentially excessive solution computation times. For instance, the recent TIDE study [19] considered around one hundred training episodes, each with one thousand time steps, whereby each time step is on the order of one second, resulting in training times on the order of days. Novel computation strategies can reduce the time complexity with appropriate initialization [24], [25]; however, timely deep reinforcement learning continues to be a challenge.…”
Section: B Contribution: Direct Flow Routing Representation For Flow
confidence: 99%
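The training-time claim in the excerpt above follows from simple arithmetic. A minimal sketch, using only the figures quoted there (episode count, steps per episode, and seconds per step are the excerpt's approximate values, not exact measurements):

```python
# Back-of-envelope estimate of TIDE-style DRL training time,
# using the approximate figures quoted in the citing paper.
EPISODES = 100            # "around one hundred training episodes"
STEPS_PER_EPISODE = 1000  # "each with one thousand time steps"
SECONDS_PER_STEP = 1.0    # "each time step is on the order of one second"

total_seconds = EPISODES * STEPS_PER_EPISODE * SECONDS_PER_STEP
total_days = total_seconds / 86_400  # seconds per day

print(f"{total_seconds:.0f} s ≈ {total_days:.2f} days")  # 100000 s ≈ 1.16 days
```

With these order-of-magnitude inputs the estimate lands at roughly a day, consistent with the excerpt's "training times on the order of days" once overhead and multiple training runs are factored in.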
“…Mestres et al. [112] […] Sun et al. [114] proposed an intelligent network control architectural framework that employs DRL to dynamically optimize routing plans in an SDN-enabled network without the need for human involvement. The proposed framework is called TIDE.…”
Section: B ML And DL Techniques For Routing Optimization In SDN
confidence: 99%
“…Based on the supervised ML module, an LSTM-RNN algorithm is employed to extract short-term network data traffic variabilities and periodicities, resulting in meaningful features that are combined at the integration step to ensure traffic flow prediction and energy-efficient routing with guaranteed QoS performance. On the other hand, the DRL module performs learning from existing historical data or from scratch by iteratively interfacing with the defined network setting [114]–[118], [130]. Using a publicly available dataset, the module can be evaluated in terms of accuracy and convergence speed.…”
Section: B Description Of The Supervised ML And DRL Framework
confidence: 99%
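The excerpt above describes extracting short-term variability and periodicity features from traffic data before feeding a supervised predictor. A hypothetical sketch of that feature-extraction step, with illustrative window sizes and function names not taken from the cited framework:

```python
# Hypothetical feature extraction for traffic prediction: from a traffic
# time series, derive a short-term feature (mean of a recent window) and a
# periodicity feature (value one period earlier, e.g. same hour yesterday).
# An LSTM-RNN or other supervised model would consume such features.
def extract_features(traffic, t, short_window=5, period=24):
    # Recent window captures short-term variability; fall back to the
    # first sample if the window would be empty (t == 0).
    recent = traffic[max(0, t - short_window):t] or traffic[:1]
    short_term = sum(recent) / len(recent)
    # Same-time-last-period sample captures periodicity.
    periodic = traffic[t - period] if t >= period else traffic[0]
    return short_term, periodic

# Usage: synthetic hourly traffic over two days with a 24-hour cycle.
traffic = [10 + (h % 24) for h in range(48)]
features = extract_features(traffic, t=30)  # (13.0, 16)
```

The two features would then be concatenated at the integration step the excerpt mentions, alongside whatever representation the sequence model learns directly.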
“…With the explosive growth of next-generation Internet of Things (IoT) applications, users' delay requirements for content are becoming increasingly stringent, as in virtual reality applications [1], [2]. However, the distance between the content server and users causes high communication delay, which cannot meet these delay requirements [3]. Fortunately, with the introduction of the edge cloud, content can be cached at the edge to satisfy the user's quality of experience (QoE) [4], [5].…”
Section: Introduction
confidence: 99%