2020
DOI: 10.3390/s20041146

Spectrum Handoff Based on DQN Predictive Decision for Hybrid Cognitive Radio Networks

Abstract: Spectrum handoff is one of the key techniques in a cognitive radio system. In order to improve the agility and the reliability of spectrum handoffs as well as the system throughput in hybrid cognitive radio networks (HCRNs) combining interweave mode with underlay mode, a predictive (or proactive) spectrum handoff scheme based on a deep Q-network (DQN) for HCRNs is proposed in this paper. In the proposed spectrum handoff approach, spectrum handoff success rate is introduced into an optimal spectrum resource alloc…
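The abstract is truncated and gives no implementation details, so the following is only an illustrative sketch of the decision step a DQN-driven handoff scheme performs: score each candidate channel with a learned Q-value and hand off only when the best alternative clearly beats the current channel. The function, channel names, Q-values, and the hysteresis margin are all hypothetical, not taken from the paper.

```python
# Illustrative stand-in for the DQN decision step in predictive spectrum
# handoff: pick the channel whose estimated Q-value is highest, and hand
# off only if it beats the current channel by a margin (all values here
# are hypothetical placeholders for a trained network's outputs).

def choose_handoff_target(q_values, current_channel, margin=0.05):
    """Return the channel to use next given per-channel Q-value estimates.

    q_values: dict mapping channel id -> estimated long-term reward
              (in the paper this would come from the trained deep Q-network).
    margin:   hysteresis to avoid ping-ponging between near-equal channels.
    """
    best_channel = max(q_values, key=q_values.get)
    # Stay on the current channel unless the best alternative is clearly better.
    if q_values[best_channel] > q_values[current_channel] + margin:
        return best_channel
    return current_channel

estimates = {"ch0": 0.42, "ch1": 0.61, "ch2": 0.40}
print(choose_handoff_target(estimates, "ch0"))  # hands off to ch1
print(choose_handoff_target({"ch0": 0.42, "ch1": 0.43}, "ch0"))  # stays on ch0
```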

Cited by 10 publications (7 citation statements)
References 23 publications
“…Specifically, Q-learning approaches are developed in [63]–[65] to find the optimal handoff strategies for SUs. To speed up the learning process of the newly joined SUs, both [63] and [64] employ the offline Learning from Demonstration TL strategy. Particularly, the authors propose to transfer the complete Q-table of the nearest SU to serve as an initial point for the new SU.…”
Section: A Cognitive Radio Network
confidence: 99%
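The transfer-learning idea quoted above, initializing a new SU's Q-table from the nearest SU's complete table, can be sketched as a warm start for tabular Q-learning. The states, actions, rewards, and values below are hypothetical placeholders, not the cited papers' actual setup.

```python
import copy

# Warm-start a new secondary user's (SU's) Q-table from an expert SU's
# table, in the spirit of the Learning-from-Demonstration transfer
# described above. States/actions/values are hypothetical.

def transfer_q_table(expert_q):
    """Copy the expert SU's full Q-table to serve as the new SU's initial table."""
    return copy.deepcopy(expert_q)

def q_update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """One standard tabular Q-learning update on the (transferred) table."""
    best_next = max(q[next_state].values())
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])

expert = {"idle": {"stay": 0.8, "handoff": 0.2},
          "busy": {"stay": 0.1, "handoff": 0.9}}

new_su = transfer_q_table(expert)        # new SU starts from expert knowledge
q_update(new_su, "busy", "handoff", 1.0, "idle")
print(new_su["busy"]["handoff"])         # refined away from the transferred 0.9
print(expert["busy"]["handoff"])         # expert's own table is left untouched
```

The deep copy matters: the new SU refines its own table without mutating the expert's, which is the point of using the transfer only as an initial point.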
“…Particularly, the authors propose to transfer the complete Q-table of the nearest SU to serve as an initial point for the new SU. Simulation results show that the TL approaches in [63] and [64] can improve the convergence rate of the new SU's learning process by up to 30% and 14%, respectively. Unlike [63] and [64], [65] proposes to transfer the knowledge of an expert SU selected based on its similarities with the new SU in terms of channel statistics, node statistics, and application statistics.…”
Section: A Cognitive Radio Network
confidence: 99%
“…From the perspective of computational complexity, DQN and Q-learning have obvious advantages, because their mapping from state to action has already been trained. Since there is no recursive calculation, the time complexities of DQN and Q-learning are both O(1). For h-DQN, the meta-controller and sub-controller separately calculate the Q value and the reward for each action in the next possible state.…”
Section: Computational Complexity
confidence: 99%
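The O(1) claim quoted above amounts to this: once training is done, acting is a fixed-cost lookup (or one network forward pass) whose cost does not depend on how many episodes were spent training. A minimal tabular illustration, with hypothetical states and actions:

```python
# After training, acting with Q-learning (or a DQN forward pass) is a
# fixed-cost operation: one argmax over a constant-size action set,
# independent of training length. The policy below is hypothetical.

trained_q = {
    ("slot0", "busy"): {"stay": 0.1, "handoff": 0.9},
    ("slot0", "idle"): {"stay": 0.8, "handoff": 0.2},
}

def act(state):
    """Constant cost per decision: a single table/network evaluation."""
    return max(trained_q[state], key=trained_q[state].get)

print(act(("slot0", "busy")))  # handoff
print(act(("slot0", "idle")))  # stay
```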
“…In wireless networks, spectrum resources are becoming increasingly scarce due to the growing demand for wireless communication [1]. It is necessary to address the problem of spectrum underutilization and inefficiency.…”
Section: Introduction
confidence: 99%