Abstract: Traditional handover decision methods depend on the handover threshold and measurement reports, which cannot efficiently resolve the frequent-handover issue and ping-pong effect in 5G (5th-generation) ultradense networks. To reduce unnecessary handovers and improve QoS (quality of service), combined with an analysis of dwell time, we propose a state-aware prioritized experience replay (SA-PER) handover decision method. First, the cell dwell time is computed by geometrical analysis of real-ti…
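The abstract's SA-PER method is not fully specified in this excerpt, but the prioritized experience replay (PER) mechanism it builds on can be sketched generically. The class below is an illustrative proportional-PER buffer, not the paper's state-aware variant; `capacity`, `alpha`, and `eps` are assumed hyperparameters.

```python
import random

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay (generic PER
    sketch, not the paper's SA-PER implementation)."""

    def __init__(self, capacity, alpha=0.6, eps=1e-5):
        self.capacity = capacity
        self.alpha = alpha          # how strongly TD error skews sampling
        self.eps = eps              # keeps every priority strictly positive
        self.buffer, self.priorities = [], []

    def add(self, transition, td_error=1.0):
        priority = (abs(td_error) + self.eps) ** self.alpha
        if len(self.buffer) >= self.capacity:   # evict oldest when full
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append(priority)

    def sample(self, batch_size):
        # sample indices with probability proportional to priority
        total = sum(self.priorities)
        probs = [p / total for p in self.priorities]
        idx = random.choices(range(len(self.buffer)), weights=probs, k=batch_size)
        return [self.buffer[i] for i in idx], idx

    def update(self, indices, td_errors):
        # refresh priorities after the learner recomputes TD errors
        for i, err in zip(indices, td_errors):
            self.priorities[i] = (abs(err) + self.eps) ** self.alpha
```

Transitions with large TD error are replayed more often, which is the property the handover agent exploits to learn faster from rare, informative events.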
“…Appendix A presents the detailed search results, methodology, and applied criteria. We could verify that many works in the literature apply well-known RL methods to approach different classes of complex and exciting problems, such as: (i) geographical routing-decision process to assign sensing tasks to mobile users, (ii) anomaly detection in smart environments, (iii) cellular-connected unmanned aerial vehicle networks, (iv) nonlinearities and uncertainties of biochemical reactions in wastewater treatment process control, (v) robotic lever control, (vi) handover decision in 5G ultradense networks, and (vii) automation of software testing (Zhang and Qiu, 2022; Tao and Hafid, 2020; Fährmann et al., 2022; Koroglu and Sen, 2022; Crowder et al., 2021; Li et al., 2022b; Wu et al., 2022; Remman and Lekkas, 2021; Rosenbauer et al., 2020). However, we are more interested in those works that propose changes or new methods using Experience Replay.…”
Section: Research on Experience Replay and Some Directions for Future...
From the first theoretical propositions in the ’50s, inspired by neuroscience and psychology studies of the learning processes of human beings and animals, to its application in real-world problems of learning to act, Reinforcement Learning (RL) remains a fascinating, rich, and complex class of machine learning algorithms. In particular, we start by reviewing its fundamental principles and develop a discussion of how a technique called Experience Replay (ER) has been of fundamental importance in making a variety of methods more data efficient across most of the relevant problems and domains, using agent experiences to improve performance. We present some of the more relevant methods in the literature, on which most recent research on improving RL with ER is based. Finally, we draw from the recent literature some of the main trends, challenges, and advances focused on reviewing and discussing ER's formal foundations and how to improve its propositions to make it even more efficient in different methods and domains.
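The core ER mechanism this survey reviews, a bounded store of past transitions sampled at random for off-policy training, can be sketched minimally. This is a generic illustration of the classic scheme, not any specific method from the surveyed works:

```python
import random
from collections import deque

class ReplayBuffer:
    """Classic (uniform) experience replay: a bounded FIFO store of
    transitions from which training minibatches are drawn at random,
    breaking the temporal correlation of consecutive experiences."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest transitions evicted first

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # uniform sampling; each stored experience can be reused many times
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)
```

Prioritized variants differ only in replacing the uniform `sample` with one biased toward more informative transitions.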
“…There are various uncertainties in the running process of trains, which bring potential risks to the handover of high-speed trains [21]. To solve this problem, a handover risk assessment model based on prospect theory is constructed, and the handover risk of trains at different handover positions is evaluated using prospect theory [22].…”
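Prospect theory scores outcomes asymmetrically around a reference point. The quoted work's calibrated risk model is not reproduced in this excerpt; the function below is the standard Tversky–Kahneman value function with commonly cited parameter values, given purely as an illustration:

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Tversky-Kahneman value function: concave for gains, convex and
    steeper for losses (loss aversion, lam > 1) relative to a reference
    point at 0. Parameter values are the textbook estimates, not the
    paper's fitted ones."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)
```

Because losses are weighted more heavily than equal-sized gains, a handover position with even a modest downside risk scores noticeably worse than its expected value alone would suggest.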
Section: Handover Risk Assessment Based on Prospect Theory
“…The nearness degree obtained by the improved CRITIC-TOPSIS joint handover decision method and the handover risk prospect value obtained by prospect theory are normalized and summed by equation (21), and the comprehensive utility values of trains in different high-speed rail scenarios are obtained, as shown in Figure 7.…”
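Equation (21) itself is not reproduced in this excerpt, so the exact normalization is unknown. As an assumed, illustrative stand-in, a plain min-max normalization of each criterion over the candidate handover positions followed by a per-position sum would look like:

```python
def comprehensive_utility(nearness, prospect):
    """Min-max normalize each criterion across candidate handover
    positions, then sum per position. An illustrative stand-in for the
    quoted equation (21), whose exact form is not given here."""
    def minmax(v):
        lo, hi = min(v), max(v)
        return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in v]
    return [a + b for a, b in zip(minmax(nearness), minmax(prospect))]
```

The position with the highest combined score would then be recommended as the handover point.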
Section: Solving the Comprehensive Utility Value of Handover
In the 5G-R wireless communication system of the next-generation high-speed railway, existing handover algorithms consider only a single criterion. In a high-speed environment, handover is also easily affected by handover risk, which leads to a low handover success rate. To solve these problems, this study proposes a next-generation high-speed railway handover decision algorithm based on improved Criteria Importance Through Intercriteria Correlation and Technique for Order Preference by Similarity to Ideal Solution (CRITIC-TOPSIS) theory. First, considering reference signal received power (RSRP), reference signal received quality (RSRQ), and co-frequency interference, an improved CRITIC-TOPSIS multi-attribute joint handover decision method is proposed, which overcomes the single-criterion limitation of existing handover decisions. Then, a handover risk assessment model based on prospect theory is constructed, and the handover risks of trains triggered at different positions are analyzed. Finally, the comprehensive utility value of train handover is obtained by normalization, and the optimal handover position is recommended according to the comprehensive utility value, completing the handover. The experimental results show that the success rate of train handover exceeds 99.5% in viaduct, urban, open, and mountainous areas. In addition, under different operation scenarios, when the train runs at speeds from 200 km/h to 500 km/h, the handover success rate ranges between 99.51% and 99.68%. The proposed method meets the requirement that the quality-of-service (QoS) success rate of the 5G-R wireless communication system be greater than 99.5%. The research results provide a theoretical reference for the evolution of the next-generation 5G-R high-speed railway system.
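TOPSIS ranks alternatives by their relative closeness to an ideal solution. The sketch below is a minimal generic version treating all columns as benefit criteria with fixed weights; the paper's improved CRITIC weighting, which derives weights from contrast intensity and inter-criteria correlation, is not reproduced here:

```python
import math

def topsis(matrix, weights):
    """Rank alternatives (rows, e.g. candidate cells) on benefit criteria
    (columns, e.g. RSRP, RSRQ) by relative closeness to the ideal
    solution. Returns one closeness score in [0, 1] per row."""
    ncols = len(weights)
    # vector-normalize each column, then apply the criterion weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncols)]
    v = [[weights[j] * row[j] / norms[j] for j in range(ncols)] for row in matrix]
    ideal = [max(col) for col in zip(*v)]       # best value per criterion
    worst = [min(col) for col in zip(*v)]       # worst value per criterion
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((a - b) ** 2 for a, b in zip(row, ideal)))
        d_neg = math.sqrt(sum((a - b) ** 2 for a, b in zip(row, worst)))
        scores.append(d_neg / (d_pos + d_neg))  # closer to ideal -> higher
    return scores
```

In the quoted algorithm, the resulting nearness degree is then combined with the prospect-theory risk value to pick the handover position.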