Before fully self-driving vehicles are realized, systems in which the driver shares control and authority with the intelligent vehicle are believed to offer the most effective solution. Understanding driving intention is key to building such a collaborative autonomous driving system. In this study, the proposed method fuses the spatiotemporal features of driver behavior and the forward-facing traffic scene through a feature extraction module; the joint representation is then fed into an inference module to estimate driver intention. The feature extraction module is a two-stream structure based on a deep three-dimensional convolutional neural network. To accommodate the differences between the video captured inside and outside the cab, the two-stream network consists of a slow pathway that processes driver behavior data at a low frame rate and a fast pathway that processes traffic scene data at a high frame rate. A gated recurrent unit and a fully connected layer then form the intention inference module, which estimates the driver's lane-change and turning intentions. The public Brain4Cars dataset was used to validate the proposed method. The results show that, compared with modeling driver behavior data alone, intention inference improves significantly after traffic scene information is integrated. The overall accuracy across the five intention classes was 84.92% at 1 s before the maneuver, indicating that making full use of traffic scene information is an effective way to improve inference performance.
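The abstract above does not include implementation details, but a minimal sketch of the described pipeline can clarify the data flow: a slow 3D-CNN pathway over low-frame-rate in-cab video, a fast pathway over high-frame-rate traffic video, a fused per-timestep representation, and a GRU with a fully connected head over five intention classes. All layer widths, the temporal-alignment step, and the fusion choice below are illustrative assumptions, not the authors' released code.

```python
# Sketch of a two-stream (slow/fast) 3D-CNN feature extractor + GRU intention head.
# Dimensions and fusion scheme are assumptions for illustration only.
import torch
import torch.nn as nn


class Pathway3D(nn.Module):
    """Small stack of 3D convolutions over a (B, C, T, H, W) clip."""
    def __init__(self, in_channels=3, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, width, kernel_size=3, padding=1),
            nn.BatchNorm3d(width),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),
            nn.Conv3d(width, 2 * width, kernel_size=3, padding=1),
            nn.BatchNorm3d(2 * width),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d((None, 1, 1)),  # pool space, keep the temporal axis
        )

    def forward(self, x):
        f = self.net(x)                      # (B, F, T, 1, 1)
        return f.flatten(2).transpose(1, 2)  # (B, T, F)


class TwoStreamIntentNet(nn.Module):
    def __init__(self, num_intents=5, hidden=128):
        super().__init__()
        self.slow = Pathway3D()   # driver behavior, low frame rate
        self.fast = Pathway3D()   # traffic scene, high frame rate
        self.gru = nn.GRU(input_size=128, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_intents)

    def forward(self, driver_clip, scene_clip):
        slow_feat = self.slow(driver_clip)   # (B, T_slow, 64)
        fast_feat = self.fast(scene_clip)    # (B, T_fast, 64)
        # Resample the fast stream to the slow stream's length before concatenation
        # (one of several plausible fusion choices; the paper's exact scheme may differ).
        fast_feat = nn.functional.interpolate(
            fast_feat.transpose(1, 2), size=slow_feat.size(1)
        ).transpose(1, 2)
        joint = torch.cat([slow_feat, fast_feat], dim=-1)  # (B, T_slow, 128)
        _, h = self.gru(joint)
        return self.head(h[-1])              # (B, num_intents) logits


if __name__ == "__main__":
    model = TwoStreamIntentNet()
    driver = torch.randn(2, 3, 8, 64, 64)    # 8 in-cab frames (low frame rate)
    scene = torch.randn(2, 3, 32, 64, 64)    # 32 traffic-scene frames (high frame rate)
    print(model(driver, scene).shape)        # torch.Size([2, 5])
```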
Precise recognition of driving status is a prerequisite for human–vehicle collaborative driving systems aimed at sustainable road safety. In this study, a simulated driving platform was built to capture multimodal information simultaneously, including vision-modal data representing driver behaviour and sensor-modal data representing vehicle motion. The multisource data are used to quantify distracted-driving risk at four levels (safe driving, slight risk, moderate risk, and severe risk) rather than to detect action categories. A multimodal fusion method called the vision-sensor fusion transformer (V-SFT) is proposed to incorporate the vision-modal data of driver behaviour and the sensor-modal data of vehicle motion. Feature concatenation is employed to aggregate the representations of the different modalities; successive internal interactions are then performed to model the spatiotemporal dependencies; finally, the representations are clipped and mapped into the four risk-level label spaces. The proposed approach was evaluated with different modality inputs on the collected dataset and compared with several baseline methods. The results show that V-SFT achieved the best performance, with a recognition accuracy of 92.0%. They also indicate that fusing multimodal information effectively improves driving status understanding and that V-SFT's extensibility makes it well suited to integrating additional modal data.
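Again, the abstract does not specify the architecture in detail, but a minimal sketch can illustrate the fusion scheme it describes: vision-modal and sensor-modal token sequences are projected to a shared width, concatenated, passed through a transformer encoder for the internal interactions, and a pooled representation is mapped to the four risk levels. All dimensions, the learnable classification token, and the pooling choice are illustrative assumptions rather than the authors' V-SFT implementation.

```python
# Sketch of a vision-sensor fusion transformer for four-level risk recognition.
# Projection widths, token counts, and the [CLS]-style pooling are assumptions.
import torch
import torch.nn as nn


class VisionSensorFusion(nn.Module):
    def __init__(self, vision_dim=512, sensor_dim=16, d_model=128,
                 num_layers=2, num_risk_levels=4):
        super().__init__()
        self.vision_proj = nn.Linear(vision_dim, d_model)
        self.sensor_proj = nn.Linear(sensor_dim, d_model)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, d_model))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, dim_feedforward=256, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, num_risk_levels)

    def forward(self, vision_tokens, sensor_tokens):
        # vision_tokens: (B, T_v, vision_dim) per-frame driver-behaviour features
        # sensor_tokens: (B, T_s, sensor_dim) vehicle-motion readings
        v = self.vision_proj(vision_tokens)
        s = self.sensor_proj(sensor_tokens)
        cls = self.cls_token.expand(v.size(0), -1, -1)
        tokens = torch.cat([cls, v, s], dim=1)   # feature concatenation across modalities
        tokens = self.encoder(tokens)            # cross-modal spatiotemporal interaction
        return self.head(tokens[:, 0])           # (B, 4): safe / slight / moderate / severe


if __name__ == "__main__":
    model = VisionSensorFusion()
    vision = torch.randn(2, 16, 512)   # e.g. 16 frame embeddings from a CNN backbone
    sensor = torch.randn(2, 50, 16)    # e.g. 50 CAN/IMU samples
    print(model(vision, sensor).shape) # torch.Size([2, 4])
```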