Currently, the autonomous positioning of unmanned ground vehicles (UGVs) still faces the problems of insufficient persistence and poor reliability, especially in challenging scenarios where satellite signals are denied or sensing modalities such as vision or laser are degraded. Based on multimodal information fusion and failure detection (FD), this article proposes a high-robustness, low-drift state estimation system suitable for multiple scenes, which integrates light detection and ranging (LiDAR), inertial measurement units (IMUs), a stereo camera, encoders, and an attitude and heading reference system (AHRS) in a loosely coupled way. First, a state estimator with a variable fusion mode is designed based on error-state extended Kalman filtering (ES-EKF); it can fuse the encoder-AHRS subsystem (EAS), the visual-inertial subsystem (VIS), and the LiDAR subsystem (LS), and it can change its integration structure online by selecting a fusion mode. Second, to improve the robustness of the whole system in challenging environments, an information manager is created, which judges the health status of the subsystems by degeneration metrics and then selects online the appropriate information sources and variables to enter the estimator according to their health status. Finally, the proposed system is extensively evaluated on datasets collected from six typical scenes: street, field, forest, forest-at-night, street-at-night, and tunnel-at-night. The experimental results show that our framework achieves better or comparable accuracy and robustness relative to existing publicly available systems.

Index Terms-Error-state extended Kalman filter (ES-EKF), failure detection (FD) and handling, light detection and ranging (LiDAR)-inertial-visual-encoder odometry, multimodal information fusion, state estimation.

I. INTRODUCTION

Unmanned ground vehicles (UGVs) have been widely deployed in various real-world applications, such as