2021
DOI: 10.1109/mic.2021.3049190

Examining Machine Learning for 5G and Beyond Through an Adversarial Lens

Abstract: Spurred by the recent advances in deep learning to harness rich information hidden in large volumes of data and to tackle problems that are hard to model/solve (e.g., resource allocation problems), there is currently tremendous excitement in the mobile networks domain around the transformative potential of data-driven AI/ML based network automation, control and analytics for 5G and beyond. In this article, we present a cautionary perspective on the use of AI/ML in the 5G context by highlighting the adversarial…

Cited by 25 publications (10 citation statements)
References 11 publications
“…In [268], the authors present a cautionary view on using ML in 5G/6G by highlighting the adversarial dimension spanning multiple types of ML (SL, UL, or RL) and support this through several case studies. They also examine various approaches to mitigate such adversarial ML attacks by evaluating ML models' robustness and calling attention to ML-oriented research issues in 5G/6G.…”
Section: End-to-End Aspects (mentioning)
confidence: 86%
“…Attacks against AI/ML-based modulation recognition models are probably the most well-studied category of attacks against wireless communication systems based on adversarial example generation [74]-[77]. In a real-world scenario, the adversary would have access to neither the exact input of the receiver nor the modulation type selected by the target model.…”
Section: B. Adversarial Examples in 5G (mentioning)
confidence: 99%
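The statement above describes gradient-based adversarial examples against modulation classifiers. A minimal FGSM-style sketch of that idea follows; the model interface, tensor shapes, and epsilon budget are illustrative assumptions, not the cited works' code.

```python
# Minimal FGSM-style sketch of an adversarial perturbation against a modulation
# classifier. All names and shapes are assumptions: `model` is a pretrained
# PyTorch classifier mapping I/Q frames of shape (batch, 2, 128) to class logits.
import torch
import torch.nn.functional as F

def fgsm_perturbation(model, iq_batch, labels, epsilon=0.01):
    """Craft an additive perturbation that increases the classifier's loss."""
    iq_batch = iq_batch.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(iq_batch), labels)
    loss.backward()
    # Step along the gradient sign; epsilon stands in for the adversary's
    # power budget on the wireless channel.
    return epsilon * iq_batch.grad.sign()

# Illustrative usage: the perturbation is added to the classifier's input signal.
# delta = fgsm_perturbation(model, iq_batch, true_labels)
# adversarial_iq = iq_batch + delta
```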
“…To attack intelligent channel decoding frameworks, the adversary can try to generate a perturbation that causes decoding errors at the receiver [74], [78]. In white-box settings, this perturbation can be carried out e.g.…”
Section: B. Adversarial Examples in 5G (mentioning)
confidence: 99%
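The cited statement sketches a white-box perturbation that induces decoding errors at a neural receiver. Below is a hypothetical iterative (PGD-style) version of such an attack; the decoder interface, tensor types, and hyperparameters are assumptions for illustration, not the cited works' implementation.

```python
# Hypothetical white-box sketch: iterative gradient ascent on a neural decoder's
# bit-wise loss, with the perturbation projected onto an L-infinity ball.
# `decoder` maps a received signal tensor to per-bit logits; `true_bits` is a
# float tensor of 0s/1s. All names and parameters are illustrative assumptions.
import torch
import torch.nn.functional as F

def decoder_attack(decoder, rx_signal, true_bits, epsilon=0.05, steps=10, lr=0.01):
    """Find a bounded perturbation of the received signal that induces decoding errors."""
    delta = torch.zeros_like(rx_signal, requires_grad=True)
    for _ in range(steps):
        bit_logits = decoder(rx_signal + delta)
        loss = F.binary_cross_entropy_with_logits(bit_logits, true_bits)
        loss.backward()
        with torch.no_grad():
            delta += lr * delta.grad.sign()   # ascend the decoding loss
            delta.clamp_(-epsilon, epsilon)   # respect the perturbation/power budget
            delta.grad.zero_()
    return delta.detach()
```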
“…In terms of adversarial attacks, adversarial examples may refer to fake data instances that are carefully crafted by adversaries. For example, Usama et al. [30] utilized crafted noise to fool the modulation classifier. In addition to such intuitive changes, a DNN can also be used to generate fake data for attacks.…”
Section: Adversarial Attacks (mentioning)
confidence: 99%
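Since the original article also examines evaluating ML models' robustness against such crafted noise, a simple evaluation loop (an assumed sketch, not the cited works' protocol) could measure how classifier accuracy degrades as the perturbation budget grows, reusing the fgsm_perturbation sketch above.

```python
# Illustrative robustness evaluation: accuracy of a (hypothetical) modulation
# classifier on clean vs. FGSM-perturbed inputs over a range of budgets.
# Reuses fgsm_perturbation from the sketch above; all names are assumptions.
import torch

def accuracy(model, iq, labels):
    with torch.no_grad():
        return (model(iq).argmax(dim=1) == labels).float().mean().item()

def robustness_curve(model, iq, labels, budgets=(0.001, 0.005, 0.01, 0.05)):
    """Map each perturbation budget epsilon to the classifier's accuracy under attack."""
    results = {0.0: accuracy(model, iq, labels)}
    for eps in budgets:
        delta = fgsm_perturbation(model, iq, labels, epsilon=eps)
        results[eps] = accuracy(model, iq + delta, labels)
    return results
```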