Discovering the principles underlying the neural and biomechanical control of animal behavior requires a tight dialogue between real experiments and data-driven neuromechanical models. Until now, such models have primarily been used to further our understanding of lower-level motor control. For most whole-animal simulations, we still lack an effective framework for studying how the brain processes environmental signals to regulate motor behavior. The adult fly, Drosophila melanogaster, is well suited for data-driven modeling and can be simulated using the neuromechanical model NeuroMechFly. However, this simulation framework previously did not permit the exploration of full hierarchical sensorimotor loops. Here we present NeuroMechFly 2.0, a framework that greatly expands whole-animal modeling of Drosophila by enabling visual and olfactory processing as well as complex three-dimensional environments that can be navigated using leg adhesion. To illustrate its capabilities, we explore the effectiveness of biologically inspired leg controllers for navigating diverse terrain and show how one can use reinforcement learning to train an end-to-end hierarchical model with multimodal sensory processing, descending commands, and low-level motor control in closed loop. NeuroMechFly 2.0 can accelerate the discovery of explanatory models of the nervous system and the development of machine learning models for controlling autonomous artificial agents and robots.
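The hierarchical sensorimotor loop mentioned above (multimodal sensing, descending commands, low-level motor control, all in closed loop) can be sketched schematically in Python. Everything here is an illustrative assumption, not the NeuroMechFly 2.0 API: `descending_command` stands in for a "brain" module mapping bilateral odor intensities to left/right drive signals, and `cpg_step` stands in for a low-level oscillator-based leg controller whose frequency is modulated by those drives.

```python
import math

def descending_command(odor_left, odor_right):
    """Toy 'brain' module (hypothetical, not the paper's model): steer
    toward the stronger odor by sending asymmetric descending drives."""
    asym = odor_right - odor_left
    return 1.0 + 0.5 * asym, 1.0 - 0.5 * asym  # (left_drive, right_drive)

def cpg_step(phase, drive, dt=0.01, base_freq=2.0):
    """Toy low-level controller: advance one oscillator whose stepping
    frequency is scaled by the descending drive it receives."""
    return (phase + 2.0 * math.pi * base_freq * drive * dt) % (2.0 * math.pi)

# Closed loop: sense -> descend -> actuate, repeated every timestep.
phase_l = phase_r = 0.0
for _ in range(1000):
    odor_l, odor_r = 0.4, 0.6          # mock sensory input, stronger on the right
    drive_l, drive_r = descending_command(odor_l, odor_r)
    phase_l = cpg_step(phase_l, drive_l)
    phase_r = cpg_step(phase_r, drive_r)
    # Joint-angle targets would be read out from the oscillator phases:
    angle_l = math.sin(phase_l)
    angle_r = math.sin(phase_r)
```

In the actual framework, the motor outputs would drive a physics-simulated fly body whose sensors close the loop back to the high-level module; a learned policy can replace the hand-coded `descending_command`.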