2022
DOI: 10.1155/2022/8951961

Machine and Deep Learning for IoT Security and Privacy: Applications, Challenges, and Future Directions

Abstract: The integration of the Internet of Things (IoT) connects a number of intelligent devices that can interact with one another with minimal human interference. IoT is rapidly emerging across the areas of computer science. However, the cross-cutting design of the multidisciplinary elements and IoT systems involved in deploying such schemes poses new security problems. The implementation of security protocols, i.e., authentication, encryption, application security, and access network for IoT syste…

Cited by 26 publications (21 citation statements) | References: 307 publications
“…• Man-in-the-Middle. One of the earliest kinds of cyber threats was the man-in-the-middle (MiTM) assault (5). Impersonation and spoofing are examples of MiTM attacks.…”
Section: Security Threats
confidence: 99%
“…Because IoT development deals with huge volumes of information, a privacy hole is created when data are improperly checked and insecurely transmitted. Such an inherent weakness is prone to invite the unique botnet "Mirai", the subject of widespread distributed denial-of-service (DDoS) attacks (4,5). The four DL and ML privacy technologies viz.…”
Section: Introduction
confidence: 99%
“…Another type of adversarial attack is known as a poisoning attack [83], where the attacker attempts to manipulate the training data used to develop the machine learning model. This can involve introducing malicious data into the training set or modifying existing data to bias the model towards certain outcomes [12].…”
Section: Adversarial Attacks
confidence: 99%
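
To make the poisoning mechanism concrete, here is a minimal sketch of one common variant, a label-flipping attack against a toy classifier. It is an illustration only, not the specific attacks discussed in [83] or [12]; the synthetic dataset, the scikit-learn LogisticRegression model, and the flip rates are all assumptions chosen for demonstration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Toy binary-classification task standing in for an IoT security dataset.
X, y = make_classification(n_samples=2000, n_features=10, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

def poison(y_train, rate):
    """Label-flipping poisoning: corrupt a random fraction of training labels."""
    y_p = y_train.copy()
    idx = rng.choice(len(y_p), size=int(rate * len(y_p)), replace=False)
    y_p[idx] = 1 - y_p[idx]
    return y_p

# Train on clean vs. poisoned labels and evaluate on clean test data.
for rate in (0.0, 0.1, 0.3):
    clf = LogisticRegression(max_iter=1000).fit(X_tr, poison(y_tr, rate))
    print(f"poison rate {rate:.0%}: test accuracy {clf.score(X_te, y_te):.3f}")
```

As the flip rate grows, the learned decision boundary is pulled toward the attacker's preferred labeling, which is the biasing effect the passage describes.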
“…Then the central server distributes the aggregated model to each device for the next round of local SGD, and the whole procedure repeats until certain termination conditions are met. Although FedSGD solved the challenges of data transmission and privacy leakage of sensitive data (Bharati and Podder, 2022; Bharati et al., 2022), frequent model uploading and distribution greatly constrain the performance of federated learning, causing slow convergence and low accuracy and resulting in efficiency problems.…”
Section: Related Work
confidence: 99%
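
The FedSGD round structure described above can be simulated in a few lines. The sketch below is a single-machine toy under assumed details (a logistic-regression model, synthetic per-client data, size-weighted gradient aggregation); it is not the citing paper's implementation, but it shows the compute-aggregate-redistribute loop whose communication cost the passage criticizes.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))  # clip avoids overflow

def make_client(n, d=5):
    # Synthetic local dataset; labels come from a shared ground-truth model.
    X = rng.normal(size=(n, d))
    y = (X @ np.arange(1, d + 1) > 0).astype(float)
    return X, y

clients = [make_client(n) for n in (100, 150, 50)]
total = sum(len(y) for _, y in clients)

def local_gradient(w, X, y):
    # Gradient of the logistic loss on one client's private data.
    return X.T @ (sigmoid(X @ w) - y) / len(y)

w = np.zeros(5)            # global model held by the central server
lr = 0.5
for _ in range(200):       # one communication round per iteration
    # Each device computes a gradient locally; raw data never leaves it.
    grads = [local_gradient(w, X, y) for X, y in clients]
    # The server aggregates gradients, weighted by client dataset size...
    agg = sum(len(y) / total * g for g, (_, y) in zip(grads, clients))
    # ...then updates and redistributes the model for the next round.
    w -= lr * agg

X_all = np.vstack([X for X, _ in clients])
y_all = np.concatenate([y for _, y in clients])
print("global model accuracy:", np.mean((sigmoid(X_all @ w) > 0.5) == y_all))
```

Every round here costs one full gradient upload and one model download per device, which is exactly the upload/distribution overhead the passage identifies as the efficiency bottleneck.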