Autonomous driving is a research field that has received growing attention in recent years, with increasing applications of reinforcement learning (RL) algorithms. It is impractical to train an autonomous vehicle thoroughly in physical space, i.e., the so-called "real world"; therefore, simulators are used in almost all training of autonomous driving algorithms. Numerous autonomous driving simulators exist, but very few are specifically targeted at RL. Training RL-based vehicles is challenging, in part because of the wide variety of possible reward functions. There is also a lack of simulators addressing the central RL research tasks within autonomous driving, such as scene understanding, localization and mapping, planning and driving policies, and control, which have diverse requirements and goals. It is therefore challenging to prototype new RL projects across different simulators, especially when several reward functions must be examined at once. This paper introduces a modified simulator, based on the Udacity simulator, designed for RL-based autonomous cars. It provides reward functions and sensors that together form a baseline implementation for RL-based vehicles. The modified simulator also resets the vehicle when it becomes stuck or enters a non-terminating loop, making training more reliable. Overall, the paper aims to simplify the prototyping and testing of new RL-based autonomous driving systems.
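As a rough illustration of the two mechanisms summarized above, the following Python sketch shows how a simple reward function and a stuck-vehicle auto-reset might be layered on top of a simulator's step loop. All names here (CarState, StuckResetWrapper, the reset()/step() signatures) are hypothetical and do not reflect the modified simulator's actual interface; the sketch only conveys the general idea.

```python
# Minimal, assumed-interface sketch: a speed/centering reward plus an
# auto-reset when the vehicle stops making progress along the track.
from dataclasses import dataclass

@dataclass
class CarState:
    speed: float     # current speed (m/s)
    cte: float       # cross-track error: distance from lane centre (m)
    progress: float  # cumulative distance driven along the track (m)

def reward(state: CarState, max_cte: float = 2.0) -> float:
    """Reward forward speed, penalise drifting away from the lane centre."""
    if abs(state.cte) > max_cte:   # off the track: terminal penalty
        return -10.0
    return state.speed * (1.0 - abs(state.cte) / max_cte)

class StuckResetWrapper:
    """Ends the episode when the car is stuck or circling without advancing."""

    def __init__(self, env, patience: int = 200, min_progress: float = 0.5):
        self.env = env                    # any simulator exposing reset()/step()
        self.patience = patience          # steps allowed without progress
        self.min_progress = min_progress  # metres that count as progress
        self._best_progress = 0.0
        self._stall_steps = 0

    def reset(self) -> CarState:
        self._best_progress = 0.0
        self._stall_steps = 0
        return self.env.reset()

    def step(self, action):
        state, done = self.env.step(action)  # assumed return signature
        if state.progress > self._best_progress + self.min_progress:
            self._best_progress = state.progress
            self._stall_steps = 0
        else:
            self._stall_steps += 1
        if self._stall_steps >= self.patience:  # stuck or looping: force reset
            done = True
        return state, reward(state), done
```

In this sketch the stall counter, rather than a fixed episode length, decides when to terminate, so an agent that wedges itself against a barrier or loops in place is reset promptly instead of wasting the remainder of the episode.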