2016
DOI: 10.1109/twc.2015.2510643
Jamming Bandits—A Novel Learning Method for Optimal Jamming

Abstract: Can an intelligent jammer learn and adapt to unknown environments in an electronic warfare-type scenario? In this paper, we answer this question in the positive by developing a cognitive jammer that adaptively and optimally disrupts the communication between a victim transmitter-receiver pair. We formalize the problem using a multiarmed bandit framework where the jammer can choose various physical layer parameters such as the signaling scheme, power level, and the on-off/pulsing duration in an attempt…
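The abstract frames jamming as a multi-armed bandit over physical-layer configurations. As a minimal sketch only, here is one classical bandit strategy, UCB1, applied to a discrete set of jamming "arms"; the paper's actual algorithm is more elaborate, and the arm set and reward function below are illustrative assumptions, not the authors' setup:

```python
import math
import random

def ucb1_jammer(arms, reward_fn, rounds=1000):
    """UCB1 over a discrete set of hypothetical jamming configurations.

    arms      -- list of candidate jamming configurations (assumed here)
    reward_fn -- observed reward in [0, 1] for playing an arm, e.g. the
                 victim link's induced error rate (an assumption)
    """
    counts = [0] * len(arms)    # times each arm was played
    totals = [0.0] * len(arms)  # cumulative reward per arm

    for t in range(1, rounds + 1):
        if t <= len(arms):
            i = t - 1  # play each arm once to initialize its estimate
        else:
            # pick the arm maximizing empirical mean + exploration bonus
            i = max(range(len(arms)),
                    key=lambda a: totals[a] / counts[a]
                    + math.sqrt(2.0 * math.log(t) / counts[a]))
        counts[i] += 1
        totals[i] += reward_fn(arms[i])

    # return the arm with the best empirical mean reward
    return arms[max(range(len(arms)), key=lambda a: totals[a] / counts[a])]
```

With stochastic rewards, the exploration bonus shrinks as an arm accumulates plays, so the jammer gradually concentrates on the configuration with the highest observed disruption.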

Cited by 68 publications (59 citation statements)
References 35 publications
“…We model the caching problem as a multi-armed bandit problem. Multi-armed bandit problems [35] have been applied to various scenarios in wireless communications before [36], such as cognitive jamming [37] or mobility management [38]. Our algorithm is based on contextual multi-armed bandit algorithms [39]- [42].…”
Section: Related Work
confidence: 99%
“…Taking the communicators who adopt MQAM as an example [1]-[3], we assume that P_T = 100 W, the noise power N_0/2 = 1 W [13], P_J ∈ [50, 300] W, and ζ_E = 0.38. To verify the jamming performance of the OD method, this method is compared with the AWGN jamming, 16QAM jamming, BPSK jamming, and jamming bandit learning (JB learning) algorithms [12]. In the PSO algorithm, we set C_1 = 2, C_2 = 2, and employ a linear strategy for decreasing the inertia weight w [19].…”
Section: Simulations
confidence: 99%
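The excerpt above reports a PSO configuration with C_1 = C_2 = 2 and a linearly decreasing inertia weight. A minimal one-dimensional PSO sketch under those settings follows; the objective function, bounds, and the w_max/w_min range are illustrative assumptions, not the cited paper's utility function:

```python
import random

def pso_max(f, lo, hi, n_particles=20, iters=100,
            c1=2.0, c2=2.0, w_max=0.9, w_min=0.4):
    """Particle swarm search for the scalar (e.g. jamming power) maximizing f
    on [lo, hi]. c1 = c2 = 2 follows the excerpt; the linear decrease of the
    inertia weight w from w_max to w_min is a common choice, assumed here.
    """
    x = [random.uniform(lo, hi) for _ in range(n_particles)]
    v = [0.0] * n_particles
    pbest, pval = x[:], [f(p) for p in x]       # per-particle best
    best_i = max(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[best_i], pval[best_i]   # swarm-wide best

    for t in range(iters):
        w = w_max - (w_max - w_min) * t / max(iters - 1, 1)  # linear decrease
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            v[i] = (w * v[i] + c1 * r1 * (pbest[i] - x[i])
                    + c2 * r2 * (gbest - x[i]))
            x[i] = min(max(x[i] + v[i], lo), hi)  # clamp to the search box
            fx = f(x[i])
            if fx > pval[i]:
                pbest[i], pval[i] = x[i], fx
                if fx > gval:
                    gbest, gval = x[i], fx
    return gbest
```

For instance, maximizing a concave placeholder objective over the excerpt's P_J ∈ [50, 300] W range returns a value near the objective's peak.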
“…Although unconventional jamming schemes have better performance, they cannot be obtained by optimization, but only by trial and error. In this section, we apply the proposed new jamming schemes to [12] and compare the process of JB learning with conventional and unconventional jamming schemes. In the simulation, we assume that P_J(min) = 10 W and P_J(max) = 210 W, and the interval [10, 210] W is evenly divided into 21 levels.…”
Section: Learning the Best Jamming Scheme by Trial and Error
confidence: 99%
“…Additional literature on antijamming using an RL framework that was published after the material in this chapter includes [131]-[134].…”
Section: Related Work
confidence: 99%