In a mobility-on-demand system, travel requests are served by a fleet of shared vehicles in an on-demand fashion. An important factor determining the operational efficiency and service level of such a system is its operational policy, which assigns available vehicles to open passenger requests and relocates idle vehicles. Previously described operational policies are based on control-theoretic approaches, most notably receding horizon control. In this work, we employ reinforcement learning techniques to design an operational policy for a mobility-on-demand system. In particular, we propose a cascaded learning framework that reduces the number of state-action pairs and thereby allows for more efficient learning. We train our model using the AMoDeus simulation environment and real taxi trip data from the city of San Francisco. Finally, we demonstrate that our reinforcement learning based operational policy for mobility-on-demand systems outperforms state-of-the-art fleet operational policies based on conventional control-theoretic approaches.
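The abstract does not specify the structure of the cascaded learning framework, but the claimed benefit, fewer state-action pairs, can be illustrated with a simple counting argument. The sketch below is a hypothetical illustration: it assumes the cascade first selects a decision mode (assign to a request vs. relocate) and then a target within that mode, rather than learning over the flat joint action space. All names and numbers are illustrative assumptions, not taken from the paper.

```python
# Hypothetical illustration of how a cascaded (hierarchical) decision
# structure can shrink the number of state-action pairs compared to a
# flat policy. Numbers are arbitrary examples.

n_states = 100      # assumed number of discretized fleet/demand states
n_requests = 20     # assumed number of open requests a vehicle may serve
n_zones = 20        # assumed number of candidate relocation zones

# Flat policy: one decision over all (request, relocation-zone) combinations.
flat_actions = n_requests * n_zones
flat_pairs = n_states * flat_actions

# Cascaded policy: stage 1 picks a mode (assign vs. relocate),
# stage 2 picks a target within the chosen mode.
stage1_actions = 2
stage2_actions = max(n_requests, n_zones)
cascaded_pairs = n_states * (stage1_actions + stage2_actions)

print(flat_pairs)      # 40000
print(cascaded_pairs)  # 2200
```

The reduction grows multiplicatively with the size of each decision dimension, which is one common motivation for hierarchical action decompositions in reinforcement learning.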