2020 5th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP) 2020
DOI: 10.1109/atsip49331.2020.9231883
Efficient Routing Protocol for Wireless Sensor Network based on Reinforcement Learning

Cited by 24 publications (8 citation statements)
References 19 publications
“…To compare the performance of DRSIR and RSIR, we also evaluate the CPU and storage memory consumption of its DQN and Q-learning agents. Table I briefly summarizes the work related to DRSIR, showing the type of routing control employed, the RL/DRL learning approach, the RL/DRL action adopted, and the metrics evaluated. The work in [13]-[17] explored RL techniques, such as Q-learning, in SDN, either to employ actions for choosing the proper routing protocol for the environment state or at the next-hop node when building a routing path. In the following paragraphs, we compare DRSIR with the routing solutions proposed in [20]-[30], which employed DRL techniques for routing in SDN.…”
Section: CPU and Memory Analysis
confidence: 99%
“…The solutions in [13]-[17] have employed Reinforcement Learning (RL) to optimize the selection of routing algorithms. Compared with supervised ML techniques, RL learns by trial and error through interaction with the environment and thus does not depend on labeled datasets.…”
Section: Introduction
confidence: 99%
“…The experimental evaluation confirmed that the developed algorithm had an enhanced stability period, prolonged network lifetime, and reduced energy consumption in WSN. Bouzid et al. [14] presented a novel routing protocol based on distributed reinforcement learning for WSN. The presented routing protocol effectively optimized energy consumption and the lifetime of WSN.…”
Section: Related Work
confidence: 99%
“…Q-routing is not designed for wireless sensor networks (WSN). Bouzid et al. [12] proposed adapting Q-routing to the WSN context. To improve Q-routing, they changed the reward formula.…”
Section: Reinforcement Learning for LT Optimisation
confidence: 99%
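The excerpts above describe adapting Q-routing to WSN by changing the reward formula. For context, the classic Q-routing rule (Boyan and Littman) updates a node's estimated delivery time to a destination via a neighbor from the queueing delay, the transmission delay, and the neighbor's own best estimate. The sketch below shows that original update; the function and table names are illustrative, not the cited paper's implementation:

```python
def q_routing_update(Q, node, dest, neighbor, q_delay, tx_delay, alpha=0.5):
    """One Q-routing update step.

    Q[x][d][y] estimates the time for node x to deliver a packet
    to destination d when forwarding through neighbor y.
    """
    # The neighbor reports its best remaining-delay estimate toward dest
    neighbor_table = Q[neighbor].get(dest, {})
    t = min(neighbor_table.values()) if neighbor_table else 0.0

    old = Q[node][dest][neighbor]
    # Q-routing target: queue wait + transmission time + neighbor's estimate
    Q[node][dest][neighbor] = old + alpha * ((q_delay + tx_delay + t) - old)
    return Q[node][dest][neighbor]


# Tiny two-hop example: A forwards to D via B, B's best estimate via C is 4.0
Q = {
    "A": {"D": {"B": 10.0}},
    "B": {"D": {"C": 4.0}},
}
q_routing_update(Q, "A", "D", "B", q_delay=1.0, tx_delay=2.0)
# target = 1.0 + 2.0 + 4.0 = 7.0; new estimate = 10.0 + 0.5*(7.0 - 10.0) = 8.5
```

The WSN adaptations cited above modify the target term (the "reward") to fold in QoS and energy coefficients rather than delay alone.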
“…Q2-routing kept the delay; however, other coefficients computed with QoS metrics were added. In [4] and [12], the modified Q-routing offers the best performance compared to the original Q-routing. Finally, the quality of the measured delay was not discussed in any of these propositions.…”
Section: E. Conclusion
confidence: 99%