A new car-following model, termed the multiple headway, velocity, and acceleration difference (MHVAD) model, is proposed to describe traffic phenomena; it further extends the existing full velocity difference (FVD) and full velocity and acceleration difference (FVAD) models. Stability analysis shows that the critical value of the sensitivity in the MHVAD model decreases and the stable region is markedly enlarged compared with the FVD model and other previous models. Finally, simulation results demonstrate that the dynamic performance of the proposed MHVAD model is better than that of the FVD and FVAD models.
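For context, the base model being extended can be sketched. The FVD model is commonly written as follows; the multi-vehicle form below is an illustrative generalization of the kind the abstract describes, not the paper's exact MHVAD equation:

```latex
% FVD model (standard form): optimal-velocity relaxation plus a
% velocity-difference term with respect to the leading vehicle.
\frac{dv_n(t)}{dt} = \kappa\bigl[V(\Delta x_n(t)) - v_n(t)\bigr] + \lambda\,\Delta v_n(t)

% Illustrative multi-vehicle generalization (an assumption): headway,
% velocity, and acceleration differences to the m nearest leaders
% enter with weights \beta_j, \lambda_j, \gamma_j.
\frac{dv_n(t)}{dt} = \kappa\Bigl[V\Bigl(\sum_{j=1}^{m}\beta_j\,\Delta x_{n+j-1}(t)\Bigr) - v_n(t)\Bigr]
  + \sum_{j=1}^{m}\lambda_j\,\Delta v_{n+j-1}(t)
  + \sum_{j=1}^{m}\gamma_j\,\Delta a_{n+j-1}(t)
```

Here $\Delta x_n$, $\Delta v_n$, and $\Delta a_n$ denote the headway, velocity difference, and acceleration difference to the vehicle ahead, $V(\cdot)$ is the optimal velocity function, and $\kappa$ is the sensitivity whose critical value the stability analysis concerns.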
Driving intention prediction is a key technology for advanced driver assistance systems (ADAS); it can greatly reduce traffic accidents caused by lane changes and ensure driving safety. In this paper, a predictive method based on Multi-LSTM (Long Short-Term Memory) networks is proposed to predict lane-change intention effectively. First, training and test sets are built from the real-road NGSIM (Next Generation SIMulation) dataset, considering the ego vehicle's driving state and the influence of surrounding vehicles. Second, the Multi-LSTM-based prediction controller is constructed to learn vehicle behavior characteristics and the time-series relations among the various states involved in a lane change. Then, the influence of changes to the prediction model structure and the data structure on the test results is verified. Finally, verification tests based on HIL (Hardware-in-the-Loop) simulation are constructed. The results show that the proposed model can accurately predict lane-change intention in highway scenarios, with a maximum prediction accuracy of 83.75%, which is higher than that of the common SVM (Support Vector Machine) method.
INDEX TERMS: Intelligent vehicle, lane change, driving intention prediction, advanced driver assistance systems, multi-LSTM.
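To make the prediction pipeline concrete, here is a minimal NumPy sketch of an LSTM-based intention classifier: a sequence of per-frame driving features is rolled through an LSTM cell and the final hidden state is mapped to probabilities over lane-change intentions. All names (`LSTMCell`, `predict_intention`), dimensions, and the three-class output are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Single LSTM cell, forward pass only (illustrative, untrained)."""
    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        # Stacked weights for the input, forget, candidate, and output gates.
        self.W = rng.normal(0.0, 0.1, (4 * hidden_dim, input_dim + hidden_dim))
        self.b = np.zeros(4 * hidden_dim)
        self.hidden_dim = hidden_dim

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        H = self.hidden_dim
        i = sigmoid(z[0:H])          # input gate
        f = sigmoid(z[H:2 * H])      # forget gate
        g = np.tanh(z[2 * H:3 * H])  # candidate cell state
        o = sigmoid(z[3 * H:4 * H])  # output gate
        c_new = f * c + i * g
        h_new = o * np.tanh(c_new)
        return h_new, c_new

def predict_intention(sequence, cell, W_out, b_out):
    """Run a (T x input_dim) feature sequence through the LSTM and
    softmax the final hidden state into {keep lane, left, right}."""
    h = np.zeros(cell.hidden_dim)
    c = np.zeros(cell.hidden_dim)
    for x in sequence:
        h, c = cell.step(x, h, c)
    logits = W_out @ h + b_out
    e = np.exp(logits - logits.max())
    return e / e.sum()
```

In use, each frame's feature vector would hold quantities such as lateral offset, speed, and gaps to surrounding vehicles, as suggested by the abstract's description of the training data.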
Face parsing is an important computer vision task that requires accurate pixel-level segmentation of facial parts (such as the eyes, nose, and mouth), providing a basis for further face analysis, modification, and other applications. In this paper, we introduce a simple, end-to-end face parsing framework, the STN-aided iCNN (STN-iCNN), which extends the interlinked Convolutional Neural Network (iCNN) by adding a Spatial Transformer Network (STN) between its two isolated stages. The STN-iCNN uses the STN to provide a trainable connection in the original two-stage iCNN pipeline, making end-to-end joint training possible. Moreover, as a by-product, the STN also provides more precisely cropped parts than the original cropper. Owing to these two advantages, our approach significantly improves the accuracy of the original model.
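The key property that makes the STN a trainable connection is differentiable cropping: an affine matrix produces a sampling grid, and the crop is obtained by bilinear interpolation, so gradients can flow through the crop parameters. A minimal NumPy sketch of that sampling step (function names are illustrative; real STNs implement this with framework primitives):

```python
import numpy as np

def affine_grid(theta, H, W):
    """Build an (H, W, 2) sampling grid in normalized [-1, 1] coordinates
    from a 2x3 affine matrix theta, as in a spatial transformer."""
    ys = np.linspace(-1.0, 1.0, H)
    xs = np.linspace(-1.0, 1.0, W)
    gx, gy = np.meshgrid(xs, ys)
    coords = np.stack([gx, gy, np.ones_like(gx)], axis=-1)  # (H, W, 3)
    return coords @ theta.T                                 # (H, W, 2): (x, y)

def bilinear_sample(img, grid):
    """Sample a single-channel image at normalized grid locations with
    bilinear interpolation; coordinates are clamped to the image border."""
    H, W = img.shape
    x = (grid[..., 0] + 1.0) * 0.5 * (W - 1)
    y = (grid[..., 1] + 1.0) * 0.5 * (H - 1)
    x0 = np.clip(np.floor(x).astype(int), 0, W - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, H - 2)
    wx, wy = x - x0, y - y0
    top = img[y0, x0] * (1 - wx) + img[y0, x0 + 1] * wx
    bot = img[y0 + 1, x0] * (1 - wx) + img[y0 + 1, x0 + 1] * wx
    return top * (1 - wy) + bot * wy
```

A scaling matrix such as `theta = [[0.5, 0, 0], [0, 0.5, 0]]` zooms into the image center, which is exactly the kind of learned crop the STN contributes between the two iCNN stages.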
To address the long matching time, high descriptor dimensionality, and high mismatch rate of traditional scale-invariant feature transform (SIFT) descriptors, this paper proposes an improved SIFT algorithm that adds a stability factor for image feature matching. First, the stability factor is applied during construction of the scale space to eliminate unstable keypoints, speed up image processing, and reduce the descriptor dimension and the amount of computation. Then, the algorithm was experimentally verified and showed excellent results on two datasets. Compared with other algorithms, the proposed algorithm improved SIFT efficiency, shortened image-processing time, and reduced matching error.
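The matching stage this abstract refers to can be sketched as nearest-neighbour descriptor matching with Lowe's ratio test; the per-keypoint stability filter below is a hypothetical stand-in for the paper's stability factor, included only to show where such a filter would cut work before matching:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8,
                      stability_a=None, min_stability=0.0):
    """Nearest-neighbour matching of descriptor sets A -> B with Lowe's
    ratio test. stability_a is an optional per-keypoint score for A;
    keypoints below min_stability are discarded before matching."""
    matches = []
    for i, d in enumerate(desc_a):
        if stability_a is not None and stability_a[i] < min_stability:
            continue  # skip unstable keypoints entirely
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # Ratio test: accept only if the best match is clearly better
        # than the second best, which suppresses ambiguous matches.
        if dists[best] < ratio * dists[second]:
            matches.append((i, best))
    return matches
```

Filtering before the distance computation is what yields the speed-up: every discarded keypoint removes a full pass over the other image's descriptors.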
Countries are increasingly interested in spacecraft surveillance and recognition, which play an important role in on-orbit maintenance, space docking, and other applications. Traditional detection methods, including radar, have many restrictions, such as excessive cost and energy-supply problems. For many on-orbit servicing spacecraft, image recognition is a simple but relatively accurate method for obtaining sufficient position and direction information to offer services. However, to the best of our knowledge, few practical machine-learning models focusing on the recognition of spacecraft feature components have been reported. In addition, it is difficult to find enough on-orbit images with which to train or evaluate such a model. In this study, we first created a new dataset containing numerous artificial images of on-orbit spacecraft with labeled components. Our base images were derived from 3D Max and STK software and include many types of satellites and satellite postures. Considering real-world illumination conditions and imperfect camera observations, we developed a degradation algorithm that enabled us to produce thousands of artificial spacecraft images. The feature components of the spacecraft in all images were labeled manually. We found that direct use of the DeepLab V3+ model leads to poor edge recognition; poorly defined edges provide imprecise position or direction information and degrade the performance of on-orbit services. Thus, the edge information of the target was taken as a supervisory guide and used to develop the proposed Edge Auxiliary Supervision DeepLab Network (EASDN). The main idea of EASDN is to provide a new edge auxiliary loss by computing the L2 loss between the predicted edge masks and the ground-truth edge masks during training. Our extensive experiments demonstrate that the network performs well both on our benchmark and on real on-orbit spacecraft images from the Internet.
Furthermore, the device usage and processing time meet the demands of engineering applications.
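The edge auxiliary supervision described above can be sketched in a few lines: derive a ground-truth edge mask from the segmentation labels, then add an L2 penalty between predicted and ground-truth edge masks to the segmentation loss. Function names and the loss weight are assumptions for illustration, not EASDN's exact implementation.

```python
import numpy as np

def edge_mask(label_map):
    """Binary edge mask from an integer segmentation label map: a pixel
    is an edge if any 4-neighbour carries a different label."""
    edges = np.zeros(label_map.shape, dtype=float)
    v = (label_map[:-1, :] != label_map[1:, :]).astype(float)  # vertical
    edges[:-1, :] = np.maximum(edges[:-1, :], v)
    edges[1:, :] = np.maximum(edges[1:, :], v)
    h = (label_map[:, :-1] != label_map[:, 1:]).astype(float)  # horizontal
    edges[:, :-1] = np.maximum(edges[:, :-1], h)
    edges[:, 1:] = np.maximum(edges[:, 1:], h)
    return edges

def easdn_loss(seg_loss, pred_edges, gt_edges, weight=1.0):
    """Total loss = segmentation loss + weighted L2 edge auxiliary loss,
    following the idea described for EASDN (weight is an assumption)."""
    edge_loss = np.mean((pred_edges - gt_edges) ** 2)
    return seg_loss + weight * edge_loss
```

During training, the edge term penalizes blurry component boundaries directly, which is the mechanism the abstract credits for the improved edge recognition.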