2015
DOI: 10.1007/s11227-015-1420-1

Power control with reinforcement learning in cooperative cognitive radio networks against jamming

Citation types: 0 supporting, 40 mentioning, 0 contrasting
Citing publications: 2017–2023
Cited by 70 publications (40 citation statements)
References 20 publications
“…The regret of the CUJ algorithm is the same as that of the CNJ algorithm (1), where the values of T_C, T_O, and T_J are given in (17), (18), and (19), respectively, with both tuning constants set to γ·exp(1)·K. Proof: Channel Ranking Estimation. This part is similar to the channel ranking estimation of Section 6.…”
Section: Appendix C, Analysis of CUJ Algorithm (mentioning)
confidence: 99%
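
For reference, the regret that the quoted bounds control is the gap between the best fixed channel and the channels the algorithm actually selects. A minimal LaTeX sketch of that standard definition, using generic bandit symbols (T, K, μ*, a_t) that are not taken from the cited analysis:

```latex
% Pseudo-regret over T slots with K channels. \mu^* is the mean reward
% (e.g., throughput) of the best channel and a_t the channel chosen in
% slot t; generic bandit notation, not the cited paper's.
R(T) = T\,\mu^{*} - \mathbb{E}\left[\sum_{t=1}^{T} \mu_{a_t}\right]
```
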
“…For instance, the Colonel Blotto anti-jamming game presented in [14] provides a power allocation strategy that improves worst-case performance against jamming in cognitive radio networks. The power control Stackelberg game presented in [15] formulates the interactions among a source node, a relay node, and a jammer that choose their transmit powers in sequence without interfering with primary users. The transmission Stackelberg game developed in [16] helps build a power allocation strategy that maximizes the SINR of signals in wireless networks.…”
Section: Related Work (mentioning)
confidence: 99%
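
A minimal Python sketch of the sequential (Stackelberg) power-control interaction described in the snippet, assuming a single source and jammer on a discretized power grid; the gains, noise power, and cost weights (H_S, H_J, NOISE, C_S, C_J) are illustrative assumptions, not values from [15], and the relay node is omitted for brevity:

```python
import numpy as np

# Illustrative parameters, not taken from [15]: channel gains, noise
# power, and per-unit energy costs for the source and the jammer.
H_S, H_J, NOISE = 1.0, 0.8, 0.1
C_S, C_J = 0.4, 0.3
POWERS = np.linspace(0.0, 2.0, 201)  # discretized transmit power levels

def sinr(p_s, p_j):
    """SINR at the receiver for source power p_s and jamming power p_j."""
    return (H_S * p_s) / (NOISE + H_J * p_j)

def jammer_best_response(p_s):
    """Follower move: the jammer trades SINR suppression against energy cost."""
    u_j = -sinr(p_s, POWERS) - C_J * POWERS  # jammer utility per power level
    return POWERS[np.argmax(u_j)]

def source_utility(p_s):
    """Leader move: the source anticipates the jammer's best response."""
    return sinr(p_s, jammer_best_response(p_s)) - C_S * p_s

# Stackelberg equilibrium via backward induction over the leader's grid.
p_s_star = max(POWERS, key=source_utility)
p_j_star = jammer_best_response(p_s_star)
print(f"source power {p_s_star:.2f}, jammer response {p_j_star:.2f}")
```

Backward induction mirrors the sequential moves of the game: the leader evaluates each power level against the follower's best response rather than against a fixed jammer.
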
“…where α is the learning rate that represents the weight of the current Q-function. As a benchmark, we propose a greedy-based 2-D anti-jamming mobile communication scheme, which updates the score of each feasible communication strategy accordingly: receive SINR^(k) and ψ^(k+1) on the feedback channel, and obtain u^(k) and s^(k+1) = [SINR^(k), ψ^(k+1)].…”
Section: Application and Performance Evaluation (mentioning)
confidence: 99%
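
A minimal Python sketch of the Q-learning update the snippet quotes, where the next state is assembled from the SINR^(k) and ψ^(k+1) feedback; the constants, the action set, and the quantize/choose_action helpers are hypothetical stand-ins, not the cited scheme:

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration (illustrative)
ACTIONS = [0, 1, 2, 3]                 # e.g., discretized power/frequency choices (assumed)
Q = defaultdict(float)                 # Q[(state, action)] -> estimated value

def quantize(sinr_k, psi_next):
    """Build the discrete state s^(k+1) = [SINR^(k), psi^(k+1)] from feedback."""
    return (round(sinr_k, 1), round(psi_next, 1))

def choose_action(state):
    """Epsilon-greedy policy over the learned Q-function."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(state, action, utility, next_state):
    """One Q-learning step: ALPHA weights the new sample against the
    stored estimate, matching the role of alpha in the quoted snippet."""
    target = utility + GAMMA * max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] = (1 - ALPHA) * Q[(state, action)] + ALPHA * target
```

The greedy benchmark in the snippet can be read as the EPSILON = 0 special case that scores each strategy by its observed utility instead of a discounted target.
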
“…The difference from [28] is that they consider relay nodes that help the source counteract a smart jammer. Furthermore, in [29] reinforcement learning is applied to determine transmission powers against a jammer in a dynamic environment without knowing the underlying game model. In [1], the authors propose an anti-jamming Bayesian Stackelberg game with incomplete information.…”
Section: Related Work (mentioning)
confidence: 99%