As deep space exploration missions become increasingly complex, the mobility and adaptability of traditional wheeled or tracked probe robots with high functional density are constrained in harsh, dangerous, or unknown environments. A practical solution to these challenges is a probe robot designed for preliminary exploration of unknown areas, characterized by robust adaptability, a simple structure, light weight, and minimal volume. Compared with traditional deep space probe robots, a spherical robot with a geometrically symmetrical structure adapts better to complex ground environments. Given the uncertainty of the detection environment, the spherical robot should brake rapidly after jumping to avoid running into obstacles. Moreover, since it is equipped with optical modules for deep space exploration missions, the spherical robot must maintain motion stability while rolling to ensure the quality of the photos and videos it captures. However, owing to the nonlinear coupling and parameter uncertainty of the spherical robot, tuning controller parameters is tedious, and the adaptability of fixed-parameter controllers is limited. This paper proposes an adaptive proportion–integration–differentiation (PID) control method based on reinforcement learning for a multi-motion mode spherical probe robot (MMSPR) capable of rolling and jumping. The method uses the soft actor–critic (SAC) algorithm to adjust the parameters of the PID controller and introduces a switching control strategy to reduce static error. Simulation results show that the method enables the MMSPR to converge within 0.02 s in terms of motion stability. In addition, regarding braking, it enables an MMSPR with a random initial speed to brake within a convergence time of 0.045 s and a displacement of 0.0013 m.
Compared with a fixed-parameter PID method, the braking displacement of the MMSPR is reduced by about 38% and the convergence time by about 20%, demonstrating better universality and adaptability.
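The core idea summarized above, a policy that retunes PID gains from the robot's state at every control step, can be illustrated with a minimal sketch. Here a hand-written stub (`policy_gains`) stands in for a trained SAC actor, and the plant is a toy one-dimensional point mass being braked to zero velocity; the mass, time step, gain ranges, and simulation horizon are illustrative assumptions, not values from the paper.

```python
def policy_gains(error, error_rate):
    """Stub standing in for a trained SAC actor: state -> (Kp, Ki, Kd).
    A real implementation would evaluate the learned policy network here."""
    # Illustrative schedule: larger errors get a more aggressive
    # proportional gain (an assumption, not the paper's learned policy).
    kp = 5.0 + 10.0 * min(abs(error), 1.0)
    ki = 0.5
    kd = 0.8
    return kp, ki, kd

def simulate_braking(v0, mass=1.0, dt=0.001, steps=2000):
    """Brake a 1-D point mass from initial velocity v0 with an adaptive PID
    whose gains are refreshed from the state at every step."""
    v = v0
    integral = 0.0
    prev_error = -v0            # initial velocity error (target is zero)
    displacement = 0.0
    for _ in range(steps):
        error = 0.0 - v          # velocity error w.r.t. the braking target
        integral += error * dt
        derivative = (error - prev_error) / dt
        kp, ki, kd = policy_gains(error, derivative)
        force = kp * error + ki * integral + kd * derivative
        v += (force / mass) * dt  # integrate the toy dynamics
        displacement += v * dt
        prev_error = error
    return v, displacement

final_v, disp = simulate_braking(v0=0.5)
```

In this sketch the gain schedule plays the role of the SAC policy's action; in the paper's method the mapping from state to gains is learned, and a switching control strategy additionally handles the residual static error near the setpoint.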