GLOBECOM 2022 - 2022 IEEE Global Communications Conference
DOI: 10.1109/globecom48099.2022.10001412

Optimal Offloading Strategies for Edge-Computing via Mean-Field Games and Control

Abstract: Both data ferrying with disruption-tolerant networking (DTN) and mobile cellular base stations constitute important techniques for UAV-aided communication in crisis situations where standard communication infrastructure is unavailable. For optimal use of a limited number of UAVs, we propose providing both DTN and a cellular base station on each UAV. Here, DTN is used for large amounts of low-priority data, while capacity-constrained cell coverage remains reserved for emergency calls or command and control. …
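As a concrete illustration of the proposed split between the two bearers, here is a minimal toy sketch in Python: emergency and command-and-control messages are admitted to the capacity-constrained cell, and everything else is deferred to the DTN ferry. The Message type, the priority labels, and the capacity budget are illustrative assumptions, not the paper's model or strategy.

from dataclasses import dataclass

@dataclass
class Message:
    size_bytes: int
    priority: str  # "emergency", "command", or "bulk" (assumed labels)

CELL_CAPACITY_BYTES = 1_000_000       # assumed per-UAV cellular budget
PRIORITY_ORDER = {"emergency": 0, "command": 1, "bulk": 2}

def route(messages):
    """Reserve the cell for emergency/C2 traffic; ferry the rest via DTN."""
    cell, dtn, budget = [], [], CELL_CAPACITY_BYTES
    for m in sorted(messages, key=lambda m: PRIORITY_ORDER[m.priority]):
        if m.priority in ("emergency", "command") and m.size_bytes <= budget:
            cell.append(m)
            budget -= m.size_bytes
        else:
            dtn.append(m)             # store-and-forward on a later ferry pass
    return cell, dtn

For example, route([Message(10_000, "emergency"), Message(5_000_000, "bulk")]) places the call on the cell and the bulk transfer on the DTN bearer.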

Cited by 5 publications (9 citation statements) | References 59 publications
“…The finite time horizon can be considered the time until a cure is found. The original problem without major players has been used as a benchmark for MFG learning (Cui and Koeppl 2021; […]).…”
Section: SIS Epidemics Control (mentioning, confidence: 99%)
“…The general idea is to summarize many similar agents (players) and their interaction through their state distribution, the mean field (MF). Owing to the amenable complexity of MFGs, many recent efforts have formulated equilibrium learning algorithms for MFGs, including approaches based on regularization (Cui and Koeppl 2021; Guo, Xu, and Zariphopoulou 2022), optimization (Guo, Hu, and Zhang 2023; […]), fictitious play (Perrin et al 2020; Geist et al 2022) and online mirror descent (Pérolat et al 2022; Yardim et al 2023). For less-familiar readers, we refer to the survey of […].…”
Section: Introduction (mentioning, confidence: 99%)
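To make the fictitious-play family mentioned in this passage concrete, here is a minimal, self-contained sketch for a toy two-state, crowd-averse mean-field game with deterministic transitions. The game, horizon, and reward are illustrative assumptions, not taken from the cited papers; the loop alternates a best response (computed by backward induction) against the averaged mean-field flow with an averaging step.

import numpy as np

S, A, T = 2, 2, 10        # toy sizes: states, actions, horizon
P = np.zeros((S, A, S))   # deterministic dynamics: action a moves the agent to state a
for s in range(S):
    for a in range(A):
        P[s, a, a] = 1.0

def reward(s, mu_t):
    return -mu_t[s]       # crowd aversion: crowded states are costly

def best_response(mu):
    """Backward induction against a fixed mean-field flow mu[t, s]."""
    pi, V = np.zeros((T, S), dtype=int), np.zeros(S)
    for t in reversed(range(T)):
        Q = np.array([[reward(s, mu[t]) + P[s, a] @ V for a in range(A)]
                      for s in range(S)])
        pi[t], V = Q.argmax(axis=1), Q.max(axis=1)
    return pi

def induced_flow(pi, mu0):
    """Mean-field flow obtained when all agents play policy pi."""
    mu = np.zeros((T, S))
    mu[0] = mu0
    for t in range(T - 1):
        for s in range(S):
            mu[t + 1, pi[t, s]] += mu[t, s]
    return mu

mu0 = np.array([1.0, 0.0])       # everyone starts in state 0
mu_avg = np.tile(mu0, (T, 1))    # initial guess for the flow
for k in range(1, 200):
    br = induced_flow(best_response(mu_avg), mu0)
    mu_avg += (br - mu_avg) / k  # fictitious-play averaging
print(mu_avg[-1])                # tends to an even split, about [0.5, 0.5]

Online mirror descent, by contrast, would accumulate Q-values across iterations and play a softmax of the sum, rather than averaging the mean-field flow.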
“…A few works have used deep RL methods to compute the best response. For example, DDPG has been used in [84], soft actor-critic (SAC) has been used for a flocking model in [208], while deep Q-learning or some variants of it have been used in [71, 207, 178]. Recently, several works have studied the advantages and the limitations brought by the regularization of the policy through penalization terms in the cost function [10, 71, 113]. We refer to [177] for a survey of learning algorithms and reinforcement learning methods to approximate MFG solutions.…”
Section: Reinforcement Learning for Mean-Field Games (mentioning, confidence: 99%)
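As a small illustration of the penalization idea discussed here, the following is a tabular sketch of an entropy-regularized ("soft") Q-update, where the log-sum-exp backup acts like a penalty on deterministic policies; the toy function and its parameters are assumptions for exposition, not the cited deep RL implementations.

import numpy as np

def soft_q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99, temp=0.1):
    """One soft Q-learning step: the log-sum-exp ("soft max") backup
    adds an entropy bonus, i.e. penalizes near-deterministic policies."""
    soft_v = temp * np.log(np.exp(Q[s_next] / temp).sum())
    Q[s, a] += alpha * (r + gamma * soft_v - Q[s, a])
    return Q

Q = np.zeros((2, 2))
soft_q_update(Q, s=0, a=1, r=-0.5, s_next=1)

As temp tends to zero the backup recovers the ordinary max, so the temperature controls how strongly the regularization smooths the learned policy.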
“…With the exception of [Angiuli et al., 2022b], which is the basis of the present paper, these methods focus on solving one of the two types of problems, MFG or MFC. On the one hand, to learn MFG solutions, two classical families of methods are those relying on strict contraction and fixed-point iterations (e.g., [Guo et al., 2019, Cui and Koeppl, 2021, Anahtarci et al., 2023] with tabular Q-learning or deep RL), and those relying on monotonicity and the structure of the game (e.g., […, Perrin et al., 2020, Laurière et al., 2022] using fictitious play and tabular or deep RL). Two-timescale analysis to learn MFG solutions has been used in [Mguni et al., 2018, Subramanian and Mahajan, 2019].…”
Section: Introduction (mentioning, confidence: 99%)
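The two-timescale idea mentioned at the end can be sketched compactly: the Q-table is updated with a fast learning rate while the mean-field estimate tracks the state distribution with a slow one. The toy dynamics, rates, and crowd-averse reward below are illustrative assumptions, not the exact algorithms of the cited works.

import numpy as np

rng = np.random.default_rng(0)
S, A = 2, 2
Q = np.zeros((S, A))          # action values
mu = np.full(S, 0.5)          # mean-field estimate: distribution over states
rho_q, rho_mu = 0.1, 0.01     # fast (Q) vs slow (mean-field) learning rates
gamma, eps = 0.95, 0.1
s = 0
for _ in range(50_000):
    a = int(rng.integers(A)) if rng.random() < eps else int(Q[s].argmax())
    s_next = a                # toy dynamics: the action picks the next state
    r = -mu[s_next]           # crowd-averse reward against the estimate
    Q[s, a] += rho_q * (r + gamma * Q[s_next].max() - Q[s, a])
    mu += rho_mu * (np.eye(S)[s_next] - mu)   # slow mean-field update
    s = s_next
print(Q, mu)                  # mu drifts toward an even split here

Reversing which of the two rates is faster is, loosely, what lets such schemes target either the game (MFG) or the control (MFC) solution.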