High-speed rail is a large-scale mode of public transport, so its driving safety has a profound impact on public health. In this study, we identified the most efficient multi-modal warning interface for the automatic driving of a high-speed train and proposed suggestions for its optimization and improvement. Forty-eight participants took part in a simulated 350 km/h high-speed train driving experiment equipped with a multi-modal warning interface. Eye-movement and behavioral parameters were then analyzed using independent-samples Kruskal–Wallis tests and one-way analysis of variance. The results showed that although the current level 3 warning visual interface of a high-speed train presented the most abundant warning graphic information, it did not increase the driver's takeover efficiency. The level 2 warning visual interface attracted drivers' attention more effectively than the level 1 warning visual interface, but it still needs optimization in the relevance of, and guidance between, its graphic and text elements. The multi-modal warning interface yielded faster responses than the single-modal warning interface. The auditory–visual multi-modal interface achieved the highest takeover efficiency and is therefore suitable for the most urgent (level 3) high-speed train warnings. Adding an auditory interface increased the efficiency of a purely visual interface, whereas adding a tactile interface did not. These findings can serve as a basis for the warning interface design of automatically driven high-speed trains and help improve their active safety, which is of great significance for protecting public health and safety.