2019 International Conference on Information Networking (ICOIN) 2019
DOI: 10.1109/icoin.2019.8718194
Optimal Trajectory Learning for UAV-BS Video Provisioning System: A Deep Reinforcement Learning Approach

Cited by 11 publications (7 citation statements). References 6 publications.
“…III-B2. The proposed method assumes that the joint mobile-charging and coverage-time-extension computation is conducted at a centralized controller connected to charging towers in order to guarantee a stabilized power supply (refer to [4], [8], [11], [12], [23], [24], [28], [30] and references therein for a detailed description).…”
Section: Algorithm Design Rationale
Confidence: 99%
“…As mentioned above, drones are widely used for many applications, e.g., video provisioning [23], mobile edge computing [24], and aerial information sensing [25], and can be a key solution for MBS services in cellular networks. To use drones in cellular networks, theoretical performance analysis that accounts for practical antenna configurations [26] and mobility patterns [27] is essential.…”
Section: Introduction
Confidence: 99%
“…Based on the definitions of (10) and (11), the rewards R_delay in (7) and R_energy in (8) are defined as the difference between the value when the tasks are performed locally and the value obtained by the DQN-based decision algorithm. As a result, learning minimizes D_dqn and E_dqn while maximizing R_delay and R_energy.…”
Section: DQN-Based Offloading and Compression Decision
Confidence: 99%
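The reward structure described in this citation statement can be sketched as follows. This is a minimal illustration of the stated idea only: the reward is the saving achieved by the DQN decision relative to purely local execution. The function names and the numeric values are illustrative assumptions, not taken from the cited paper.

```python
# Hypothetical sketch: rewards defined as the difference between the
# cost of local-only execution and the cost under the DQN decision.
# Positive reward means the decision algorithm improved on local execution.

def delay_reward(d_local: float, d_dqn: float) -> float:
    """R_delay: delay saved versus local-only processing (illustrative)."""
    return d_local - d_dqn

def energy_reward(e_local: float, e_dqn: float) -> float:
    """R_energy: energy saved versus local-only processing (illustrative)."""
    return e_local - e_dqn

# Example values (assumed): offloading cuts delay from 120 ms to 80 ms
# and energy from 3.0 J to 1.8 J, so both rewards are positive.
r_d = delay_reward(120.0, 80.0)
r_e = energy_reward(3.0, 1.8)
```

Maximizing these rewards is then equivalent to minimizing the delay and energy achieved by the decision algorithm, matching the quoted description.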
“…Among them, this paper considers reinforcement-learning-based approaches because the given problem is one of stochastic sequential offloading decision-making. Among the many deep reinforcement learning (DRL) methodologies, such as Q-learning, Markov decision processes (MDP) [5], deep Q-networks (DQN), and deep deterministic policy gradient (DDPG) [6,7], this paper designs a sequential offloading decision-making algorithm based on DQN. DQN is chosen because it approximates the Q-function of Q-learning with a deep neural network (DNN), which makes it suitable for large-scale problem settings.…”
Confidence: 99%
“…Recently, many research results have been introduced on the use of mobile stations to enhance communications and networking performance, e.g., (i) mobile computing for surveillance monitoring and (ii) cellular network coverage extension using drones. Among these, drone-based wireless communications and networking is promising, and many related research contributions are now available for various applications and settings [1][2][3][4][5][6][7][8][9].…”
Section: Introduction
Confidence: 99%