A vehicle motion state prediction algorithm that integrates temporal multiview point cloud features with multitarget interaction information is proposed in this work to effectively predict the motion states of traffic participants around intelligent vehicles in complex scenes. The algorithm accounts for the way an object's motion is affected by the surrounding environment and by the interactions of nearby objects, and it builds on a complex-traffic-environment perception system based on dual multiline light detection and ranging (LiDAR). Time-sequence bird's-eye-view maps and time-sequence front-view depth maps are obtained from the real-time point cloud information perceived by the LiDAR. Time-sequence high-level abstract combined features of the multiview scene are then extracted by an improved VGG19 network and fused, through a one-dimensional convolutional neural network, with the latent spatiotemporal interaction features extracted from the operating-state data of the multiple targets detected by the LiDAR. The resulting temporal feature vector serves as the input to a bidirectional long short-term memory (BiLSTM) network, which is trained to learn the desired input-output mapping and predict the motion states of traffic participants. Test results show that the proposed BiLSTM model based on point cloud multiview features and vehicle interaction information outperforms other methods in predicting the state of target vehicles. The results can support research on evaluating the operational risk of the environment around intelligent vehicles.
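A minimal sketch of the fusion pipeline described above, written in PyTorch with assumed module names, feature dimensions, and output parameters (the abstract does not specify them): a shared VGG19 backbone encodes the bird's-eye-view and front-view depth maps per frame, a one-dimensional CNN encodes the multitarget interaction state vector, and a BiLSTM maps the fused temporal sequence to a predicted motion state.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19

class MotionStatePredictor(nn.Module):
    """Illustrative fusion model, not the authors' implementation."""
    def __init__(self, interact_dim=16, hidden=128, out_dim=4):
        super().__init__()
        # Shared VGG19 feature extractor for both LiDAR views
        self.backbone = vgg19(weights=None).features
        self.pool = nn.AdaptiveAvgPool2d(1)
        # 1-D convolution over the per-frame multitarget interaction state vector
        self.interact_conv = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8), nn.Flatten())
        self.bilstm = nn.LSTM(512 * 2 + 8 * 8, hidden,
                              batch_first=True, bidirectional=True)
        # Assumed output: e.g. x, y, speed, heading of the target vehicle
        self.head = nn.Linear(2 * hidden, out_dim)

    def forward(self, bev_seq, fv_seq, interact_seq):
        # bev_seq, fv_seq: (B, T, 3, H, W); interact_seq: (B, T, interact_dim)
        B, T = bev_seq.shape[:2]
        feats = []
        for t in range(T):
            f_bev = self.pool(self.backbone(bev_seq[:, t])).flatten(1)   # (B, 512)
            f_fv = self.pool(self.backbone(fv_seq[:, t])).flatten(1)     # (B, 512)
            f_int = self.interact_conv(interact_seq[:, t].unsqueeze(1))  # (B, 64)
            feats.append(torch.cat([f_bev, f_fv, f_int], dim=1))
        seq = torch.stack(feats, dim=1)        # (B, T, fused feature dim)
        out, _ = self.bilstm(seq)
        return self.head(out[:, -1])           # predicted motion state at the last step
```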
To effectively evaluate the risk posed to intelligent vehicles by surrounding traffic participants in complex scenes, a complex-traffic-environment perception technology based on dual multiline light detection and ranging (LiDAR) is proposed in this work. The vehicle motion state is predicted by fusing temporal multiview point cloud features with multitarget interaction information, and a risk assessment model is constructed via artificial potential field theory. The real-time point cloud information is used to obtain time-sequence bird's-eye views and range images. An improved VGG19 network model extracts the time-sequence high-level abstract combined features of the multiview scene. The constructed time-sequence feature vector is fed into an attention mechanism, and an attention-bidirectional long short-term memory (Attention-BiLSTM) model is trained to form the desired input-output mapping. The motion state of the target vehicle can therefore be updated, and static and dynamic risk fields of the traffic participants surrounding the vehicle can be established based on artificial potential field theory, thereby allowing the operational risk of the intelligent vehicle to be evaluated. Experimental results demonstrate that the proposed model predicts the target vehicle state parameters better than the compared models and that the prediction of the intelligent vehicle's operational risk field based on multiview point cloud features and vehicle interaction information performs well.
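An illustrative sketch of the artificial-potential-field risk evaluation step, with assumed functional forms and coefficients rather than the paper's exact formulation: each surrounding participant contributes a static risk field that decays with distance and a dynamic risk field that is amplified in the direction of its predicted velocity, and the risk at the ego vehicle is their sum.

```python
import numpy as np

def static_field(dx, dy, k=1.0, sigma=2.0):
    """Static risk of an obstacle at relative position (dx, dy) [m] (assumed Gaussian form)."""
    return k * np.exp(-(dx**2 + dy**2) / (2.0 * sigma**2))

def dynamic_field(dx, dy, vx, vy, k=1.0, sigma=2.0, gamma=0.3):
    """Dynamic risk: scaled by speed and amplified toward the obstacle's heading."""
    speed = np.hypot(vx, vy)
    if speed < 1e-6:
        return 0.0
    # Cosine of the angle between the obstacle->ego direction and the obstacle velocity
    cos_theta = (dx * vx + dy * vy) / (np.hypot(dx, dy) * speed + 1e-9)
    return (k * speed
            * np.exp(-(dx**2 + dy**2) / (2.0 * sigma**2))
            * np.exp(gamma * cos_theta))

def total_risk(ego_xy, targets):
    """targets: iterable of (x, y, vx, vy) predicted by the motion-state model."""
    risk = 0.0
    for x, y, vx, vy in targets:
        dx, dy = ego_xy[0] - x, ego_xy[1] - y
        risk += static_field(dx, dy) + dynamic_field(dx, dy, vx, vy)
    return risk

# Example: one predicted target 5 m ahead, approaching the ego vehicle at 3 m/s
print(total_risk((0.0, 0.0), [(5.0, 0.0, -3.0, 0.0)]))
```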