2021
DOI: 10.3390/fi13110278

Online Service Function Chain Deployment for Live-Streaming in Virtualized Content Delivery Networks: A Deep Reinforcement Learning Approach

Abstract: Video delivery is exploiting 5G networks to enable higher server consolidation and deployment flexibility. Performance optimization is also a key target in such network systems. We present a multi-objective optimization framework for service function chain deployment in the particular context of Live-Streaming in virtualized content delivery networks using deep reinforcement learning. We use an Enhanced Exploration, Dense-reward mechanism over a Dueling Double Deep Q Network (E2-D4QN). Our model assumes to use…


Cited by 12 publications (6 citation statements) · References: 44 publications
“…A CDN can be a potent tool for enhancing website speed, guaranteeing greater availability, cutting expenses associated with bandwidth, and enhancing user experience. CDNs can assist in ensuring that web material is accessible and available to people worldwide, independent of their location or the device they are using, by dispersing it across a global network of servers [37].…”
Section: Parameter Based Comparison Of Protocols
confidence: 99%
“…Equally often, DQN solutions adopt prioritized experience replay, which assigns higher probabilities to actions that lead to higher rewards when sampling from the experience replay buffer. Finally, different DQN variants have been proposed to mitigate the reward overestimation tendencies of the algorithm [62], such as the Dueling-DQN [63] and Double-DQN [64].…”
Section: Algorithm 3 Grey-Wolf Optimization (GWO)
confidence: 99%
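
For readers unfamiliar with the DQN variants named in the statement above, the snippet below is a minimal sketch of the two ideas: a dueling head that decomposes Q-values into a state value plus action advantages, and the Double-DQN target in which the online network selects the bootstrap action while the target network evaluates it. The function names, toy numbers, and use of NumPy are illustrative assumptions; this is not code from the cited paper or from E2-D4QN.

```python
import numpy as np

rng = np.random.default_rng(0)

def dueling_q(value, advantages):
    """Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    return value + advantages - advantages.mean()

def double_dqn_target(reward, done, q_online_next, q_target_next, gamma=0.99):
    """Double-DQN target: the online net picks the action, the target net evaluates it."""
    best_action = int(np.argmax(q_online_next))   # selection by the online network
    bootstrap = q_target_next[best_action]        # evaluation by the target network
    return reward + gamma * (1.0 - done) * bootstrap

# Toy Q-values for four actions in the next state (illustrative numbers only).
q_online_next = dueling_q(value=1.0, advantages=rng.normal(size=4))
q_target_next = dueling_q(value=0.9, advantages=rng.normal(size=4))
print(double_dqn_target(reward=0.5, done=0.0,
                        q_online_next=q_online_next,
                        q_target_next=q_target_next))
```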
“…Whereas, in the online scenario, the SFC orchestration strategy is adapted according to dynamic network load and the related problem is solved using migration algorithms on real time basis. In such scenario, a PSN under various resource availability constraints is considered with a set of already deployed SFC requests and the new incoming SFC requests are processed on sequential basis (in a one-by-one modality) [10]. In online scenario, adopted in this paper, the challenge for each SFC request is to find a suitable SFC instance meeting its requirements.…”
Section: NFV-RA and SFC Orchestration: Scenarios, Strategies And Deplo...
confidence: 99%
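
As a concrete illustration of the one-by-one (online) processing described in the statement above, the sketch below admits SFC requests sequentially against the remaining node capacity of a physical substrate network and rejects any request whose chain cannot be fully placed. The node names, CPU demands, and first-fit placement rule are hypothetical assumptions for illustration only; they do not reproduce the migration algorithms or placement strategy of the cited works.

```python
def place_sfc(request, node_capacity):
    """Try to map each VNF of one SFC request onto a node with enough spare CPU."""
    placement = []
    tentative = dict(node_capacity)            # don't commit until the whole chain fits
    for vnf_demand in request["vnf_cpu"]:
        node = next((n for n, cap in tentative.items() if cap >= vnf_demand), None)
        if node is None:
            return None                        # reject: no feasible node for this VNF
        tentative[node] -= vnf_demand
        placement.append(node)
    node_capacity.update(tentative)            # commit resources for the accepted chain
    return placement

node_capacity = {"edge-1": 8, "edge-2": 6, "core-1": 16}   # hypothetical PSN nodes
incoming = [{"id": 1, "vnf_cpu": [2, 3, 4]},
            {"id": 2, "vnf_cpu": [10, 9]}]                  # arrives after request 1

for req in incoming:                           # requests handled sequentially, as they arrive
    result = place_sfc(req, node_capacity)
    print(req["id"], "accepted" if result else "rejected", result)
```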