2023
DOI: 10.1155/2023/4140594

Resource Allocation in Multicore Elastic Optical Networks: A Deep Reinforcement Learning Approach

Abstract: A deep reinforcement learning (DRL) approach is applied, for the first time, to solve the routing, modulation, spectrum, and core allocation (RMSCA) problem in dynamic multicore fiber elastic optical networks (MCF-EONs). To do so, a new environment was designed and implemented to emulate the operation of MCF-EONs, taking into account the modulation format-dependent reach and intercore crosstalk (XT), and four DRL agents were trained to solve the RMSCA problem. The blocking performance of the trained agents w…

Cited by 6 publications (2 citation statements)
References 57 publications
“…In these works, DRL employs deep neural networks (DNNs) to extract network state information and optimise a long-term cumulative reward. In MCF-based SDM networks, the DRL-RMSCA algorithm [8] presented by Pinto-Ríos et al selects only aligned cores, failing to exploit the full core switching capability of MCF-EON. On the other hand, when developing a DRL-based RMSCA scheme able to switch between cores, handling the cardinality of the observation and the action spaces becomes challenging.…”
Section: Introduction (mentioning, confidence: 99%)
“…On the other hand, when developing a DRL-based RMSCA scheme able to switch between cores, handling the cardinality of the observation and the action spaces becomes challenging. Additionally, with an extremely large action space, it might not be possible to guarantee that a trained DRL agent avoids invalid actions, which negatively impacts the blocking probability (BP), or avoiding them will require a higher training cost [8].…”
Section: Introduction (mentioning, confidence: 99%)
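The second citing statement concerns guaranteeing that a trained agent never selects invalid actions (e.g., spectrum slots that would violate XT or contiguity constraints). A common way to enforce this, independent of the cited works' actual implementations, is invalid-action masking: set the policy logits of invalid actions to negative infinity before the softmax, so they receive zero probability. The sketch below is illustrative only; the function name and mask representation are assumptions, not taken from the paper.

```python
import numpy as np

def masked_softmax(logits: np.ndarray, valid: np.ndarray) -> np.ndarray:
    """Return action probabilities with invalid actions forced to zero.

    logits: raw policy-network outputs, one per action.
    valid:  boolean mask, True where the action is allowed.
    """
    # Invalid actions get -inf, so exp(-inf) = 0 after the softmax.
    masked = np.where(valid, logits, -np.inf)
    # Subtract the max for numerical stability before exponentiating.
    z = np.exp(masked - masked.max())
    return z / z.sum()
```

With masking, blocking caused by the agent proposing infeasible allocations is eliminated by construction, at the cost of computing the validity mask for every state; without it, the agent must learn to avoid invalid actions from reward penalties alone, which is the higher training cost the citation refers to.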