Given the increasing importance of cloud computing in modern IT architecture, efficient resource allocation algorithms are crucial. This work empirically investigates several reinforcement learning (RL) methods for optimising resource allocation in cloud computing environments. Our objective is to assess the efficiency of RL algorithms in a dynamic environment with fluctuating workloads, focusing on resource utilisation, cost effectiveness, and optimality. By varying cloud settings, this research translates theoretical reinforcement learning concepts into a practical resource management system.
The literature review considers traditional resource allocation techniques and their inability to accommodate changing demand. We also analyse existing research that applies machine learning approaches, with particular attention to RL for resource distribution in cloud computing. The methodology describes the research design, detailing the RL algorithms employed: Q-learning, Deep Q-Networks (DQN), and Proximal Policy Optimization (PPO). We explain the data collection procedure, which spans varied workloads and scenarios designed to mimic real environments.
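To make the core interaction loop concrete, the sketch below applies tabular Q-learning to a toy allocation problem. The environment, reward coefficients, and discretisation are illustrative assumptions, not the paper's actual experimental setup: demand is a discrete level, the agent chooses how many servers to allocate, over-provisioning incurs a small idle cost, and under-provisioning incurs a larger SLA penalty.

```python
import numpy as np

# Toy cloud environment (illustrative assumption, not the study's setup):
# the state is a discrete demand level 0..4 and the action is the number
# of servers allocated, also 0..4.
N_LEVELS = 5

def reward(demand, allocated):
    over = max(0, allocated - demand)    # idle servers (cheap)
    under = max(0, demand - allocated)   # unmet demand (expensive)
    return -(1.0 * over + 2.0 * under)

def train_q_learning(steps=50_000, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    rng = np.random.default_rng(seed)
    q = np.zeros((N_LEVELS, N_LEVELS))   # Q[state, action]
    s = int(rng.integers(N_LEVELS))
    for _ in range(steps):
        # epsilon-greedy action selection
        if rng.random() < eps:
            a = int(rng.integers(N_LEVELS))
        else:
            a = int(np.argmax(q[s]))
        r = reward(s, a)
        s_next = int(rng.integers(N_LEVELS))  # demand fluctuates randomly
        # standard Q-learning temporal-difference update
        q[s, a] += alpha * (r + gamma * np.max(q[s_next]) - q[s, a])
        s = s_next
    return q

q = train_q_learning()
policy = np.argmax(q, axis=1)  # greedy allocation per demand level
```

Under this cost structure, the learned greedy policy allocates exactly the demanded capacity at each level. DQN and PPO keep the same interaction loop but replace the table with neural function approximators, which is what allows them to scale to the continuous, high-dimensional workload traces used in the experiments.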
The experimental results present the performance of each RL algorithm in terms of resource utilisation, cost efficiency, and system responsiveness. Testing Q-learning, DQN, and PPO side by side clarifies their respective strengths and weaknesses. The discussion that follows interprets these findings, highlighting the challenges encountered along the way and possible directions for future inquiry. This research thus contributes to the evolving landscape of cloud computing by demonstrating the adaptability and effectiveness of RL algorithms for resource allocation in dynamic environments.