Motion planning, a fundamental technology for the automatic navigation of autonomous vehicles, remains an open and challenging problem in real-life traffic situations and is still mostly addressed with model-based approaches. However, due to the complexity of traffic situations and the uncertainty of edge cases, it is hard to devise a general motion planning system for autonomous vehicles. In this paper, we propose a deep learning based motion planning model (named the spatiotemporal LSTM network) that generates real-time control responses through spatiotemporal information extraction. Specifically, the model has three main components. First, a Convolutional Long Short-Term Memory (Conv-LSTM) network extracts hidden features from sequential image data. Then, a 3D Convolutional Neural Network (3D-CNN) extracts spatiotemporal information from the multi-frame features. Finally, fully connected neural networks construct a control model for the steering angle of the autonomous vehicle. Experiments demonstrate that the proposed method generates robust and accurate visual motion planning results for autonomous vehicles.
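The three-stage pipeline above can be sketched at the shape level. The concrete dimensions below (sequence length, frame size, channel counts, kernel and stride) are illustrative assumptions, not values reported in the paper; the sketch only shows how a stack of frames flows through the Conv-LSTM, 3D-CNN, and fully connected stages.

```python
# Shape-level sketch of the spatiotemporal LSTM pipeline (all dims assumed).
# Stage 1: Conv-LSTM keeps one hidden feature map per input frame.
# Stage 2: a 3D convolution shrinks the (time, height, width) volume.
# Stage 3: fully connected layers regress a single steering angle.

def conv_lstm_stage(shape):
    """(T, H, W, C_in) -> (T, H, W, C_hidden): per-frame hidden maps."""
    t, h, w, _ = shape
    c_hidden = 64  # assumed hidden channel count
    return (t, h, w, c_hidden)

def cnn3d_stage(shape, k=3, stride=2):
    """Strided 3D convolution over the spatiotemporal volume."""
    t, h, w, _ = shape
    out = lambda n: (n - k) // stride + 1
    return (out(t), out(h), out(w), 128)  # 128 assumed output channels

def fc_stage(shape):
    """Flatten and regress to one output: the steering angle."""
    t, h, w, c = shape
    return t * h * w * c, 1  # (flattened feature count, one output)

seq = (8, 64, 64, 3)          # assumed: 8 RGB frames of 64x64 pixels
feat = conv_lstm_stage(seq)
vol = cnn3d_stage(feat)
flat, n_out = fc_stage(vol)
print(feat, vol, flat, n_out)
```

The single regression output reflects the abstract's framing of the control model as a steering-angle predictor; a real implementation would replace these shape functions with trainable layers.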
High-level driving behavior decision-making is an open, challenging problem for connected vehicle technology, especially in heterogeneous traffic scenarios. In this paper, a deep reinforcement learning based high-level driving behavior decision-making approach is proposed for connected vehicles in heterogeneous traffic situations. The model is composed of three main parts: a data preprocessor that maps hybrid data into a format called the hyper-grid matrix, a two-stream deep neural network that extracts hidden features, and a deep reinforcement learning network that learns the optimal policy. Moreover, a simulation environment comprising different heterogeneous traffic scenarios is built to train and test the proposed method. The results demonstrate that the model can learn an optimal high-level driving policy, such as driving quickly through heterogeneous traffic without unnecessary lane changes. Furthermore, two separate models are compared with the proposed model, and their performance is analyzed in detail.
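The abstract does not define the hyper-grid matrix format, but one plausible reading of "mapping hybrid data into a grid" can be sketched as follows. The grid extent, lane width, cell length, channel layout, and speed normalization below are all illustrative assumptions: surrounding vehicles, given as (relative longitudinal offset, relative lateral offset, speed), are rasterized into an occupancy channel and a normalized-speed channel.

```python
# Hypothetical hyper-grid preprocessor: rasterize surrounding vehicles into
# a fixed ego-centric grid with occupancy and normalized-speed channels.
ROWS, COLS = 8, 3          # assumed: 8 longitudinal cells x 3 lanes
CELL_LEN, LANE_W = 10.0, 3.5
MAX_SPEED = 30.0           # m/s, assumed normalization constant

def to_hyper_grid(vehicles):
    """vehicles: list of (dx, dy, speed) relative to the ego vehicle."""
    occ = [[0.0] * COLS for _ in range(ROWS)]
    spd = [[0.0] * COLS for _ in range(ROWS)]
    for dx, dy, speed in vehicles:
        row = int(dx // CELL_LEN)
        col = int(dy // LANE_W) + COLS // 2   # ego lane maps to middle column
        if 0 <= row < ROWS and 0 <= col < COLS:
            occ[row][col] = 1.0
            spd[row][col] = min(speed / MAX_SPEED, 1.0)
    return occ, spd

# one vehicle 12 m ahead in the ego lane, one 35 m ahead one lane to the left
occ, spd = to_hyper_grid([(12.0, 0.0, 15.0), (35.0, 4.0, 27.0)])
```

A fixed-size grid like this gives the downstream two-stream network a constant input shape regardless of how many vehicles are nearby, which is the usual motivation for such a representation.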
To enhance the realism of Connected and Autonomous Vehicle (CAV) kinematic simulation scenarios and to guarantee the accuracy and reliability of verification, this paper proposes a four-layer CAV kinematic simulation framework composed of a road network layer, a vehicle operating layer, an uncertainties modelling layer, and a demonstrating layer. Properties of intersections are defined to describe the road network. A target-position-based vehicle position updating method is designed to simulate vehicle behaviors such as lane changing and turning. Vehicle kinematic models maintain the status of the vehicles as they move toward their target positions. Priorities for individual vehicle control are assigned to the different layers. Operating mechanisms for CAV uncertainties, defined in this paper as position error and communication delay, are implemented in the simulation to enhance its realism. A simulation platform is developed based on the proposed methodology, and a comparison of simulated and theoretical vehicle delay is analyzed to establish the validity and credibility of the platform. A rear-end collision avoidance scenario is conducted to verify the uncertainty operating mechanisms, and a slot-based intersection (SI) control strategy is implemented and verified on the platform to demonstrate its support for CAV kinematic simulation and verification.
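The target-position-based updating idea can be illustrated with a minimal kinematic step: each simulation tick, the vehicle heads toward its current target position (for example, the end point of a lane change) and advances by at most speed times the tick length. The step size, tick length, and arrival tolerance below are illustrative assumptions, not parameters from the paper.

```python
# Minimal sketch of target-position-based vehicle position updating.
import math

def step_toward(pos, target, speed, dt=0.1, tol=0.5):
    """Advance pos toward target one tick; returns (new_pos, arrived)."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist <= tol:
        return target, True           # snap to target, e.g. lane-change end
    travel = min(speed * dt, dist)    # never overshoot the target
    return (pos[0] + travel * dx / dist, pos[1] + travel * dy / dist), False

# drive from the origin to a point one lane over and 10 m ahead
pos, arrived, ticks = (0.0, 0.0), False, 0
while not arrived:
    pos, arrived = step_toward(pos, (10.0, 3.5), speed=10.0)
    ticks += 1
```

In the framework described above, a lane change or turn would be expressed as a sequence of such target positions, with the kinematic model maintaining the vehicle state between them.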
Cooperative Vehicle and Infrastructure System (CVIS) and Autonomous Vehicle (AV) are two mainstream technologies for improving urban traffic efficiency and vehicle safety in the Intelligent Transportation System (ITS). However, significant obstacles must be overcome before fully unmanned applications are ready for widespread adoption in a transportation system. To achieve fully driverless driving, the perception ability of the vehicle should be accurate, fast, continuous, and wide-ranging. In this paper, an interactive perception framework is proposed that combines the visual perception of the AV with the information interaction of the CVIS. Based on this framework, an interactive perception-based multiple object tracking (IP-MOT) method is presented. IP-MOT can be divided into two parts. First, a Lidar-only multiple object tracking (L-MOT) method obtains the status of the surroundings using a voxel cluster algorithm. Second, the preliminary tracking result is fused with the interactive information to generate the trajectories of target vehicles. Two simulation platforms are established to verify the proposed methods: a CVIS simulation platform and a Virtual Reality (VR) test platform. The L-MOT algorithm is tested on a public dataset, and the IP-MOT algorithm is tested on our simulation platform. The results show that the IP-MOT algorithm can improve the accuracy of object tracking as well as expand the vehicle perception range by combining CVIS and AV.
Index Terms: Cooperative vehicle and infrastructure system, autonomous vehicle, perception mode, multiple object tracking.
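The voxel-cluster step of L-MOT can be sketched in a simplified form: Lidar points are binned into fixed-size voxels, and occupied voxels that touch (26-neighborhood) are merged into one object cluster by flood fill. The voxel size and the neighborhood rule are illustrative assumptions; the paper's actual algorithm may differ.

```python
# Illustrative voxel clustering: bin 3D points into voxels, then merge
# touching occupied voxels into object clusters via flood fill.
from collections import defaultdict, deque
from itertools import product

def voxel_cluster(points, voxel=0.5):
    """points: list of (x, y, z). Returns a list of point clusters."""
    grid = defaultdict(list)
    for p in points:
        key = tuple(int(c // voxel) for c in p)   # voxel index per axis
        grid[key].append(p)
    seen, clusters = set(), []
    for start in grid:
        if start in seen:
            continue
        seen.add(start)
        queue, cluster = deque([start]), []
        while queue:                               # flood fill over voxels
            v = queue.popleft()
            cluster.extend(grid[v])
            for off in product((-1, 0, 1), repeat=3):
                n = (v[0] + off[0], v[1] + off[1], v[2] + off[2])
                if n in grid and n not in seen:
                    seen.add(n)
                    queue.append(n)
        clusters.append(cluster)
    return clusters

# two well-separated point blobs should yield two clusters
pts = [(0.0, 0.0, 0.0), (0.3, 0.2, 0.1), (5.0, 5.0, 0.0), (5.2, 4.9, 0.1)]
clusters = voxel_cluster(pts)
```

Each resulting cluster would then feed the tracking stage, where the fusion with CVIS interactive information described above refines the per-object trajectories.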