2021
DOI: 10.1155/2021/5589145
EER-RL: Energy-Efficient Routing Based on Reinforcement Learning

Abstract: Wireless sensor devices are the backbone of the Internet of Things (IoT), enabling real-world objects and human beings to connect to the Internet and interact with each other to improve citizens' living conditions. However, IoT devices are memory- and power-constrained and cannot run computationally intensive applications, yet routing is what makes an object part of an IoT network, despite being a power-hungry task. Therefore, energy efficiency is a crucial factor to consider when d…

Cited by 33 publications (19 citation statements)
References 32 publications
“…The base station then sends out a message to communicate its location coordinates. Individual nodes then store the position of the base station after accepting the packet and use the given equations to calculate the initial Q-value from the residual energy, E_min, E_max, N_H, and the probabilistic parameter p. This work proposes a modest extension to [16] by introducing a Shannon-entropy-inspired modification for finding the initial Q-value. We also assume that all nodes have varying degrees of energy.…”
Section: Network Initialization (mentioning)
confidence: 99%
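The excerpt above outlines the network-initialization step but does not reproduce the actual equations from [16]. As a rough illustration only, the sketch below assumes the initial Q-value blends normalized residual energy with inverse hop count, weighted by the parameter p; the real EER-RL formula may differ.

```python
def initial_q_value(e_res: float, e_min: float, e_max: float,
                    n_h: int, p: float) -> float:
    """Illustrative initial Q-value for a node (assumed formula, not
    the exact one from EER-RL [16]).

    e_res        : node's residual energy
    e_min, e_max : minimum/maximum energy levels in the network
    n_h          : hop count to the base station (>= 1)
    p            : probabilistic parameter weighting energy vs. distance
    """
    energy_term = (e_res - e_min) / (e_max - e_min)  # normalized to [0, 1]
    distance_term = 1.0 / max(n_h, 1)                # nearer nodes score higher
    return p * energy_term + (1.0 - p) * distance_term
```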
“…The action space is defined as the collection of all feasible neighbours via which packets can be relayed to the sink, and the way the devices in the network (the agents) behave is defined as a policy. Mutombo et al. [16] and Mutombo et al. [17] suggested that policy iteration is then used to evaluate and improve the given policy, which maps state-action pairs, maximizing the long-term reward to obtain the best policy.…”
(mentioning)
confidence: 99%
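To make the formulation above concrete, here is a minimal tabular sketch of next-hop selection and value updating in this style; the reward shape, learning rate, and exploration rate are illustrative assumptions, not the parameters used by Mutombo et al. [16, 17].

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # assumed hyperparameters

# Q maps a (node, neighbour) state-action pair to its estimated long-term reward.
Q = defaultdict(float)

def choose_next_hop(node, neighbours):
    """Epsilon-greedy policy over the feasible-neighbour action space."""
    if random.random() < EPSILON:
        return random.choice(neighbours)                 # explore
    return max(neighbours, key=lambda n: Q[(node, n)])   # exploit

def update(node, next_hop, reward, next_neighbours):
    """One-step update: evaluate the chosen action, then improve the policy."""
    best_next = max((Q[(next_hop, n)] for n in next_neighbours), default=0.0)
    Q[(node, next_hop)] += ALPHA * (reward + GAMMA * best_next - Q[(node, next_hop)])
```

Note that this sketch uses Q-learning-style bootstrapping rather than full policy iteration; the quoted papers describe alternating policy evaluation and improvement over the same state-action mapping.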
“…ETALGOR achieves an overall network lifetime increase of 30.54% over LOADng. ETALGOR also attains a 7.69% longer lifetime than Energy Efficient Routing based on Reinforcement Learning (EER-RL) (6) . A network lifetime improvement of 71.64% is attained by REERS (7) .…”
Section: Fig. 8 PDR Ratio of ETALGOR (mentioning)
confidence: 99%
“…Thus, energy-efficient routing methods manage device energy usage and increase network lifespan. The demerit of this method is the higher energy consumption incurred by the data aggregation process in every round (6) . Heterogeneous sensor network-enabled applications have diverse performance requirements such as low energy usage and low latency.…”
Section: Introduction (mentioning)
confidence: 99%
“…Additionally, individually modelling each task not only hampers the network's scalability but also restricts a generic model from simultaneously addressing multiple routing optimization tasks. Therefore, it is anticipated that RL [22–29] will offer a fresh approach to solving this issue. Deep learning models [30–33] are often able to provide better performance, but it is not always necessary or appropriate to use them.…”
Section: Introduction (mentioning)
confidence: 99%