2019
DOI: 10.1109/access.2019.2953498
Reinforcement Learning for Service Function Chain Reconfiguration in NFV-SDN Metro-Core Optical Networks

Abstract: The work leading to these results has been supported by the European Community under grant agreement no. 761727 Metro-Haul Project.

Cited by 39 publications (17 citation statements); references 34 publications (50 reference statements).
“…The action performed by the agent will produce another state of the environment and a reward at time t + 1, and so on (Fig. 1.3: schema of the proposed RL system [18]). The environment, namely the network environment, consists of a multi-layer model formulation based on Mixed ILP (MILP) that, given a set of SFC requests, finds the optimal VNF placement and Routing and Wavelength Assignment (RWA).…”
Section: Reinforcement Learning For Adaptive Network Resource Allocationmentioning
confidence: 99%
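The agent-environment loop described above can be sketched minimally as follows. This is a hedged illustration, not the authors' implementation: the `NetworkEnv` class, its load-level state, and the placeholder random policy are all hypothetical stand-ins for the MILP-based network environment of [18].

```python
import random

class NetworkEnv:
    """Toy stand-in environment: state is a load level in 0..4;
    lower residual load yields a higher reward."""

    def __init__(self):
        self.state = 2

    def step(self, action):
        # action: -1 = release resources, 0 = keep, +1 = add capacity
        drift = random.choice([-1, 0, 1])          # random traffic variation
        self.state = max(0, min(4, self.state + drift - action))
        reward = -self.state                       # reward r_{t+1}
        return self.state, reward                  # next state s_{t+1}

env = NetworkEnv()
state = env.state
for t in range(5):
    action = random.choice([-1, 0, 1])             # placeholder policy
    state, reward = env.step(action)               # env yields s_{t+1}, r_{t+1}
```

In the actual system the `step` transition would be driven by the MILP solver's VNF placement and RWA decisions rather than a random drift.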
“…The methodology and the detailed results can be found in Chap. 6 of the Ph.D. thesis and in the following paper: [18].…”
Section: Reinforcement Learning For Adaptive Network Resource Allocationmentioning
confidence: 99%
“…Sang Il Kim et al. considered the consumption of CPU and memory resources and utilized reinforcement learning to solve the SFC optimization problem dynamically [17], yet the algorithm did not satisfy delay-sensitive SFC requirements. Sebastian Troia et al. investigated the application of reinforcement learning to dynamic SFC resource allocation in NFV-SDN enabled metro-core optical networks [18], deciding how to reconfigure the SFCs according to the state of the network and historical traffic traces. However, reinforcement learning has the limitation that it cannot handle networks with large-scale state spaces.…”
Section: Related Workmentioning
confidence: 99%
“…However, the traditional backup scheme is too simple to satisfy complex SFC requests. On the other hand, more advanced and powerful reinforcement learning models have recently achieved significant performance gains on the SFC deployment problem [16, 17, 18]. However, Q-learning, a classical reinforcement learning algorithm, needs to maintain a very large Q-table because of the large state and action sets, so the algorithm's efficiency suffers from wasted CPU and memory.…”
Section: Introductionmentioning
confidence: 99%
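The Q-table scalability limit noted above can be seen in a minimal tabular Q-learning sketch. The states and actions here are hypothetical placeholders; in a real network the table acquires an entry for every (state, action) pair it visits, which is exactly what becomes intractable at scale.

```python
import random
from collections import defaultdict

alpha, gamma, eps = 0.5, 0.9, 0.1              # learning rate, discount, exploration
actions = ["keep", "reconfigure"]
Q = defaultdict(float)                         # Q[(state, action)] -> value

def choose(state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(s, a, r, s_next):
    """Standard Q-learning update: Q(s,a) += alpha * (r + gamma*max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

# One illustrative transition: reconfiguring out of a congested state is rewarded.
update("congested", "reconfigure", 1.0, "balanced")
```

With only two toy states the table stays tiny, but each distinct network state multiplies the rows, motivating the function-approximation (deep RL) approaches the surrounding citations turn to.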
“…The policy determines the action to be taken when the perceived states of the environment are given. The authors of [4]–[6] have proven the efficiency of DRL in the field of SFC.…”
Section: Introductionmentioning
confidence: 99%