2020
DOI: 10.1109/access.2020.3022793
Impacts of Mobility Models on RPL-Based Mobile IoT Infrastructures: An Evaluative Comparison and Survey

Abstract: With the widespread use of IoT applications and the increasing number of connected smart devices, routing has become very challenging. In this regard, the IPv6 Routing Protocol for Low-Power and Lossy Networks (RPL) was standardized for adoption in IoT networks. Nevertheless, while mobile IoT domains have gained significant popularity in recent years, RPL was fundamentally designed for stationary IoT applications and cannot adjust well to the dynamic fluctuations in mobi…

Cited by 39 publications (14 citation statements)
References 253 publications (324 reference statements)
“…An extended survey of a significant number of mobility models and their impact on the RPL protocol is presented in [25]. The authors provide a comprehensive taxonomy and classification of the mobility models, and conduct a comparison based on their main specifications.…”
Section: Related Work
confidence: 99%
“…By using historical data and learning from past events, it can improve performance under dynamic changes [28]. The Q-learning/SARSA technique, which has recently been used in many emerging applications such as robotics and Unmanned Aerial Vehicles (UAVs) [29,30], uses RL to perform runtime management/optimization of system properties in single- or multi-core processors. The general Q-learning/SARSA technique consists of three main components [31,32]: (1) a discrete set of states S = {s1, s2, ..., sl}, (2) a discrete set of actions A = {a1, a2, ..., ak}, and (3) a reward function R. The states and actions determine the rows and columns of the Q-table of the learning-based algorithm, respectively (shown in Figure 2).…”
Section: Learning-Based System Properties Optimization
confidence: 99%
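As a minimal illustration of the tabular structure described in the quoted passage, the sketch below builds a Q-table indexed by states (rows) and actions (columns) and applies the standard Q-learning update. The states, actions, and parameter values are hypothetical placeholders, not taken from the cited works [28-32].

```python
import random

# Illustrative placeholders: a discrete state set S and action set A.
states = ["s1", "s2", "s3"]
actions = ["a1", "a2"]
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration

# Q-table: one row per state, one column per action, as in the passage.
Q = {s: {a: 0.0 for a in actions} for s in states}

def choose_action(s):
    # Epsilon-greedy policy over the row of the Q-table for state s.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(Q[s], key=Q[s].get)

def q_update(s, a, r, s_next):
    # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s', a').
    best_next = max(Q[s_next].values())
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
```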
“…These protocols are not able to handle rapid topological changes in the network in a timely and accurate manner. Further, it has been shown that various mobility patterns significantly impact the performance of the standard RPL protocol [1]. This occurs due to the continuous relocation of mobile nodes and the delayed readjustments of RPL by the Trickle algorithm.…”
Section: Introduction
confidence: 99%
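Since the quoted passage attributes RPL's delayed readjustment to the Trickle algorithm, the simplified sketch below illustrates the Trickle timer's interval-doubling behavior (per RFC 6206): in a stable topology the interval grows toward its maximum, so a mobile node's relocation can go unannounced for a long time until an inconsistency resets the timer. Parameter names and default values here are illustrative, not taken from the cited paper.

```python
import random

class TrickleTimer:
    """Simplified sketch of the RPL Trickle timer (RFC 6206)."""

    def __init__(self, i_min=1.0, i_max_doublings=8, k=3):
        self.i_min = i_min                            # minimum interval (s)
        self.i_max = i_min * (2 ** i_max_doublings)   # maximum interval (s)
        self.k = k                                    # redundancy constant
        self.reset()

    def reset(self):
        # Called on an inconsistency, e.g., a DODAG change caused by mobility.
        self.interval = self.i_min
        self._new_interval()

    def _new_interval(self):
        self.c = 0  # count of consistent messages heard in this interval
        self.t = random.uniform(self.interval / 2, self.interval)

    def on_consistent_message(self):
        self.c += 1  # enough consistent traffic suppresses our own DIO

    def interval_expired(self):
        # Transmit a DIO only if fewer than k consistent messages were heard,
        # then double the interval (capped at i_max). In a stable network
        # the interval keeps doubling, which is why topology changes
        # propagate slowly until reset() is triggered.
        transmit = self.c < self.k
        self.interval = min(2 * self.interval, self.i_max)
        self._new_interval()
        return transmit
```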