To improve automotive active safety and reduce traffic collisions, significant efforts have been made to develop vehicle coordinated collision avoidance systems. However, most current solutions work only in simple driving conditions and cannot be dynamically optimized as driving experience grows. In this study, a novel self-learning control framework for coordinated collision avoidance is proposed to address these gaps. First, a dynamic decision model is designed to provide initial braking and steering control inputs based on real-time traffic information. Then, a multilayer artificial neural network controller is developed to optimize the braking and steering control inputs. Next, a proportional–integral–derivative (PID) feedback controller is used to track the optimized control inputs. The effectiveness of the proposed self-learning control method is evaluated using hardware-in-the-loop tests in different scenarios. Experimental results indicate that the proposed method provides effective collision avoidance control. Furthermore, vehicle stability during coordinated collision avoidance control is gradually improved by the self-learning method as driving experience grows.
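The tracking stage described above can be sketched as a discrete PID loop. The gains, sampling time, and first-order actuator model below are illustrative assumptions, not the paper's tuned values.

```python
# Minimal sketch of the tracking stage: a discrete PID feedback
# controller driving an actuator toward an optimized control input.
# Gains and plant dynamics are illustrative assumptions.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, reference, measurement):
        error = reference - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Track a constant steering-angle reference with a simple first-order plant.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
state = 0.0
for _ in range(2000):
    u = pid.update(reference=0.2, measurement=state)
    state += (u - state) * 0.01  # first-order actuator dynamics

print(state)  # converges close to the 0.2 reference
```

The integral term removes the steady-state offset that a pure proportional loop would leave against the plant's own restoring dynamics.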
Robust human gesture recognition is essential for effective collision avoidance in autonomous vehicles. Compared with visible-spectrum cameras, infrared imaging enables more robust human gesture recognition in complex environments. However, gesture recognition in infrared images has not been extensively investigated. In this work, we propose a model to detect human gestures based on an improved YOLO-V3 network that takes a saliency map as a second input channel to enhance feature reuse and improve network performance. Three DenseNet blocks are added before the residual components of the YOLO-V3 network to enhance convolutional feature propagation. The saliency maps are obtained by multiscale superpixel segmentation, superpixel block clustering, and cellular-automata saliency detection. The resulting five-scale saliency maps are fused using a Bayesian fusion algorithm to generate the final saliency image. Infrared images covering four main gesture classes are collected, each of which contains several morphologically similar gestures. Training and testing datasets are generated, including original and augmented infrared images with a resolution of 640 × 480. The experimental results show that the proposed approach enables real-time human gesture detection for autonomous vehicles, with an average detection accuracy of 86.2%.
INDEX TERMS Human gesture recognition, autonomous vehicles, deep learning approach, infrared images, saliency maps.
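The fusion of the per-scale saliency maps can be illustrated with a simple pixel-wise Bayesian scheme. The exact fusion rule of the paper is not reproduced here; treating each map value as an independent saliency probability is an assumption made for the sketch.

```python
import numpy as np

def bayesian_fuse(saliency_maps):
    """Fuse per-scale saliency maps by treating each pixel value as a
    probability of being salient and combining the maps under a naive
    independence assumption (a common Bayesian fusion heuristic)."""
    maps = [np.clip(m, 1e-6, 1 - 1e-6) for m in saliency_maps]
    fg = np.prod(maps, axis=0)                    # foreground evidence
    bg = np.prod([1 - m for m in maps], axis=0)   # background evidence
    return fg / (fg + bg)                         # pixel-wise posterior

# Two toy 2x2 saliency maps standing in for different scales.
m1 = np.array([[0.9, 0.2], [0.5, 0.5]])
m2 = np.array([[0.8, 0.1], [0.5, 0.5]])
fused = bayesian_fuse([m1, m2])
print(fused)
```

Pixels that several scales agree on are pushed toward 0 or 1, while pixels with no evidence either way stay near 0.5.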
Environment perception is a basic and necessary technology for autonomous vehicles to ensure safe and reliable driving. Many studies have focused on ideal environments, while much less work has addressed the perception of low-observable targets, whose features may not be obvious in complex environments. However, autonomous vehicles inevitably drive in conditions such as rain, snow, and night-time, in which target features are not obvious and detection models trained on images with significant features fail to detect low-observable targets. This article studies efficient and intelligent recognition algorithms for low-observable targets in complex environments, focuses on developing an engineering method for low-observable target recognition in dual-modal (color–infrared) images, and explores the applications of infrared and color imaging for intelligent perception systems in autonomous vehicles. A dual-modal deep neural network is established to fuse the color and infrared images and detect low-observable targets in the dual-modal images. A manually labeled color–infrared image dataset of low-observable targets is built. The deep neural network is trained to optimize its internal parameters so that the system can recognize both pedestrians and vehicles in complex environments. The experimental results indicate that the dual-modal deep neural network detects and recognizes low-observable targets in complex environments better than traditional methods.
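As a minimal illustration of combining the two modalities, an early-fusion input can be formed by stacking co-registered color and infrared images into one four-channel tensor. The normalization and shapes below are assumptions, and the paper's network may fuse the modalities at a deeper stage.

```python
import numpy as np

def stack_dual_modal(color_img, ir_img):
    """Stack an RGB image (H, W, 3) with a single-channel infrared
    image (H, W) into one 4-channel array, the simplest early-fusion
    input for a dual-modal network. The [0, 1] normalization is an
    illustrative assumption."""
    assert color_img.shape[:2] == ir_img.shape, "modalities must be co-registered"
    color = color_img.astype(np.float32) / 255.0
    ir = ir_img.astype(np.float32) / 255.0
    return np.concatenate([color, ir[..., None]], axis=-1)

# Toy co-registered pair at 640 x 480 resolution.
fused = stack_dual_modal(
    np.zeros((480, 640, 3), dtype=np.uint8),
    np.full((480, 640), 128, dtype=np.uint8),
)
print(fused.shape)  # (480, 640, 4)
```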
Recent advancements in environmental perception for autonomous vehicles have been driven by deep learning-based approaches. However, effective traffic target detection in complex environments remains a challenging task. This paper presents a novel dual-modal instance segmentation deep neural network (DM-ISDNN) that merges camera and LIDAR data, efficiently addressing target detection in complex environments through multi-sensor data fusion. Due to the sparseness of the LIDAR point cloud data, we propose a weight assignment function that assigns different weight coefficients to different feature pyramid convolutional layers of the LIDAR sub-network. We compare and analyze early-, middle-, and late-stage fusion architectures in depth. By comprehensively considering detection accuracy and detection speed, the middle-stage fusion architecture with the weight assignment mechanism, which performs best, is selected. This work has great significance for exploring the best feature fusion scheme for a multi-modal neural network. In addition, we apply a mask distribution function to improve the quality of the predicted masks. A dual-modal traffic object instance segmentation dataset is established from 7,481 camera and LIDAR data pairs of the KITTI dataset, with 79,118 manually annotated instance masks. To the best of our knowledge, no existing instance annotation for the KITTI dataset matches this quality and volume. A novel dual-modal dataset, composed of 14,652 camera and LIDAR data pairs, is collected using our own autonomous vehicle under different environmental conditions in real driving scenarios, for which a total of 62,579 instance masks are obtained using a semi-automatic annotation method. This dataset can be used to validate the detection performance of instance segmentation networks under complex environmental conditions.
Experimental results on the dual-modal KITTI Benchmark demonstrate that DM-ISDNN using middle-stage data fusion and the weight assignment mechanism has better detection performance than single- and dual-modal networks with other data fusion strategies, which validates the robustness and effectiveness of the proposed method. Meanwhile, compared to the state-of-the-art instance segmentation networks, our method shows much better detection performance, in terms of AP and F1 score, on the dual-modal dataset collected under complex environmental conditions, which further validates the superiority of our method.
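The weight assignment function itself is not specified in this abstract. The sketch below shows one plausible form, weighting each feature-pyramid level by the (log) density of LIDAR points it receives, purely as an illustration of the idea.

```python
import math

def pyramid_weights(point_counts, temperature=1.0):
    """Hypothetical weight assignment: give each feature-pyramid level
    of the LIDAR sub-network a coefficient that grows with the log of
    the number of LIDAR points projected onto it, normalized with a
    softmax. This is an illustrative sketch, not the paper's function."""
    scores = [math.log(1 + n) / temperature for n in point_counts]
    z = max(scores)                        # stabilize the softmax
    exps = [math.exp(s - z) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Coarser pyramid levels typically receive fewer projected LIDAR points.
w = pyramid_weights([20000, 5000, 1200, 300])
print(w)
```

The weights sum to one and decrease monotonically for sparser levels, so dense levels contribute more to the fused features.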
The double-row self-aligning ball bearing commonly runs with angular misalignment between the inner and outer rings, and this misalignment significantly affects the bearing performance. However, the effect of angular misalignment on the dynamic characteristics of the double-row self-aligning ball bearing has not been studied thoroughly. This paper investigates the effect of angular misalignment on the stiffness of the double-row self-aligning ball bearing. A quasi-static model of the bearing is established with five degrees of freedom, namely, three translational displacements along the x, y, and z directions and two tilting angles around the x-axis and y-axis. The internal clearance between the balls and raceways is included in the model. The formulation of the three-dimensional stiffness matrix of the bearing is analytically derived and verified by comparison with available data in the published literature. Finally, the stiffness of the bearing under various angular misalignment conditions is analyzed systematically. The results show that the tilting angles change the contact angles and contact forces of the compressed balls, thereby affecting the bearing stiffness; that the bearing stiffness decreases as the internal clearance increases; and that angular misalignment significantly impacts the stiffness of a bearing running at low speed.
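The stiffness matrix of such a quasi-static model is the Jacobian of the restoring forces with respect to the displacements. The toy two-degree-of-freedom Hertzian-type load-deflection law below is an illustrative stand-in for the full five-degree-of-freedom bearing model; its coefficients are assumptions.

```python
import numpy as np

def restoring_force(d):
    """Toy nonlinear load-deflection law standing in for the quasi-static
    bearing model: Hertzian-type F = k * delta^{3/2} in two radial
    directions with a small linear cross-coupling (illustrative only)."""
    k = 1.0e8
    f = np.zeros(2)
    f[0] = k * np.sign(d[0]) * abs(d[0]) ** 1.5 + 0.001 * k * d[1]
    f[1] = k * np.sign(d[1]) * abs(d[1]) ** 1.5 + 0.001 * k * d[0]
    return f

def stiffness_matrix(d0, eps=1e-9):
    """Stiffness as the Jacobian dF/d(delta), evaluated by central
    finite differences about the operating displacement d0."""
    n = len(d0)
    K = np.zeros((n, n))
    for j in range(n):
        dp, dm = d0.copy(), d0.copy()
        dp[j] += eps
        dm[j] -= eps
        K[:, j] = (restoring_force(dp) - restoring_force(dm)) / (2 * eps)
    return K

# Evaluate the stiffness at a small radial operating displacement.
K = stiffness_matrix(np.array([1e-4, 2e-4]))
print(K)
```

Because the load-deflection law is nonlinear, the stiffness depends on the operating point, which is why clearance and misalignment (which shift the loaded-ball deflections) change the bearing stiffness.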
Accurate knowledge of the vehicle states is the foundation of vehicle motion control. In real implementations, however, sensory signals are always corrupted by delays and noise. Network-induced time-varying delays and measurement noise can be hazardous to the active safety of over-actuated electric vehicles (EVs). In this paper, a brain-inspired proprioceptive system based on state-of-the-art deep learning and data fusion techniques is proposed to solve this problem in autonomous four-wheel-actuated EVs. A deep recurrent neural network (RNN) is trained on the noisy and delayed measurement signals to make accurate predictions of the vehicle motion states. An unscented Kalman predictor, an adaptation of the unscented Kalman filter to time-varying-delay situations, then combines the RNN predictions with the corrupted sensory signals to provide a better perception of the locomotion. Simulations with a high-fidelity CarSim full-vehicle model demonstrate the effectiveness of the RNN framework and the entire proprioceptive system.
Index Terms: Deep learning (DL), four-wheel independently actuated (FWIA) autonomous electric vehicles, network-induced delays, recurrent neural networks (RNNs), unscented Kalman predictor (UKP).
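As a minimal stand-in for the unscented Kalman predictor, the sketch below uses a linear constant-velocity Kalman filter that corrects a d-step-delayed measurement and then re-predicts to the current time. The state model and noise levels are assumptions, not the paper's vehicle model.

```python
import numpy as np

# Linear stand-in for the unscented Kalman predictor (UKP): correct a
# d-step-old state with the delayed measurement, then propagate the
# corrected state forward d steps so the estimate refers to "now".

dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
H = np.array([[1.0, 0.0]])              # only position is measured
Q = np.eye(2) * 1e-4                    # process noise (assumed)
R = np.array([[1e-2]])                  # measurement noise (assumed)

def ukp_step(x, P, z_delayed, d):
    """x, P describe the state at the delayed measurement's timestamp.
    Apply the standard Kalman correction, then re-predict d steps."""
    y = z_delayed - H @ x                       # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    for _ in range(d):                          # propagate to current time
        x = F @ x
        P = F @ P @ F.T + Q
    return x, P

# A position reading 5 steps old, fused with a prior of [0, 1 m/s].
x, P = ukp_step(np.array([0.0, 1.0]), np.eye(2), np.array([0.05]), d=5)
print(x)
```

The full UKP replaces the linear predict/correct equations with the unscented transform so the same delay-compensation structure works for nonlinear vehicle dynamics.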