Vision-based vehicle detection has been widely applied in autonomous driving systems and advanced driver assistance systems; however, it faces great challenges because partial observations regularly occur owing to occlusion by infrastructure or dynamic objects, or to a limited field of view. This paper presents a two-stage detector based on Faster R-CNN for heavily occluded vehicle detection, in which we integrate a part-aware region proposal network to capture global and local visual knowledge across different vehicle attributes. This enables the model to simultaneously generate part-level proposals and instance-level proposals in the first stage. Then, parts belonging to the same vehicle are encoded and reconfigured into a compositional whole-vehicle proposal via part affinity fields, allowing the model to generate integral candidates and mitigate the impact of occlusion to the greatest extent. Extensive experiments on the KITTI benchmark show that our method outperforms most machine-learning-based vehicle detection methods and achieves high recall in severely occluded scenarios.
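The part-to-instance reconfiguration step can be illustrated with a minimal sketch. Assuming each part proposal is an axis-aligned box (x1, y1, x2, y2) and that parts have already been grouped to one vehicle (the grouping itself, done in the paper via part affinity fields, is not reproduced here), the merged instance-level candidate is simply the tight union of its parts:

```python
def merge_part_boxes(part_boxes):
    """Merge part-level proposals (x1, y1, x2, y2) belonging to one
    vehicle into a single instance-level box: the tight union of the parts."""
    x1 = min(b[0] for b in part_boxes)
    y1 = min(b[1] for b in part_boxes)
    x2 = max(b[2] for b in part_boxes)
    y2 = max(b[3] for b in part_boxes)
    return (x1, y1, x2, y2)

# e.g. a visible front wheel and a partially occluded cabin
parts = [(10, 40, 60, 90), (45, 20, 120, 80)]
instance = merge_part_boxes(parts)  # (10, 20, 120, 90)
```

The point of such a composition is that even when an occluder hides part of the vehicle, the visible parts still vote for an integral candidate box.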
Accurate in-field fruit recognition is one of the key technologies for fruit-picking agricultural robots. Aiming at the large collection workload and low secondary-transfer efficiency of fruits such as palm fruit, durian, and pineapple grown in complex field environments, this paper proposes an improved Single Shot Multi-Box Detector (SSD) model based on the color and morphological characteristics of the fruit. A RealSense D435i binocular depth camera was used to collect images of the fruit to be picked in the field. The VGG16 backbone of the SSD model, implemented in the TensorFlow deep learning framework, was replaced with MobileNet to reduce the amount of convolution operations required for feature extraction, and a spatial positioning system for pineapple fruit was designed. Experiments showed that the improved SSD detection model has a smaller size and is thus easier to deploy on the mobile end of agricultural robots, while achieving high accuracy in recognizing fruits to be picked under weed occlusion and in overlapping scenes. The frame rate of video reading and detection with the binocular depth camera reached 16.74 frames per second (FPS), demonstrating good robustness and real-time performance and providing a practical solution for automatic in-field picking by agricultural robots.
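The computational saving from a MobileNet-style backbone can be made concrete. MobileNet's core idea is to replace each standard convolution with a depthwise-separable one; the sketch below compares multiply-accumulate (MAC) counts for a single layer. It illustrates the general principle only, with illustrative layer dimensions, not the paper's exact configuration:

```python
def standard_conv_macs(h, w, c_in, c_out, k):
    # standard k x k convolution over an h x w feature map
    return h * w * c_in * c_out * k * k

def depthwise_separable_macs(h, w, c_in, c_out, k):
    # depthwise k x k conv + 1 x 1 pointwise conv (MobileNet building block)
    return h * w * c_in * k * k + h * w * c_in * c_out

std = standard_conv_macs(38, 38, 256, 256, 3)
sep = depthwise_separable_macs(38, 38, 256, 256, 3)
print(f"reduction factor: {std / sep:.1f}x")  # roughly 1 / (1/c_out + 1/k**2)
```

For a 3 x 3 kernel this yields close to a ninefold reduction per layer, which is why the modified model is smaller and better suited to the mobile end of a robot.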
Lane detection serves as one of the pivotal techniques for local navigation and HD map building in autonomous driving. However, lane detection remains an unresolved problem owing to the challenges of detection accuracy in diverse driving scenarios and the computational limitations of on-board devices, let alone other road guidance markings. In this paper, we go beyond these limitations and propose a segmentation-by-detection method for road marking extraction. The architecture consists of three modules: pre-processing, road marking detection, and segmentation. In the pre-processing stage, image enhancement is used to heighten the contrast, especially between road markings and the road background; to reduce computational complexity, the road region is cropped using a vanishing-point detection algorithm. Then, a lightweight network specifically designed for road marking detection is applied. To enhance the network's sensitivity to road markings and improve detection accuracy, we further incorporate a Siamese attention module that integrates channel and spatial attention maps into the network. In the segmentation module, unlike semantic segmentation by a neural network, our method is mainly based on conventional image morphological algorithms, which are less computationally expensive yet still achieve pixel-level accuracy. Additionally, sliding search box and maximally stable extremal regions (MSER) algorithms are utilized to compensate for missed detections and bounding-box position errors. In experiments, the proposed method delivers outstanding performance across datasets and achieves real-time speed on embedded devices.
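A minimal, dependency-free sketch of the kind of conventional morphological post-processing the segmentation module relies on: binarizing a grayscale patch inside a detected bounding box and labeling 4-connected foreground regions as candidate marking masks. The actual pipeline additionally uses MSER and a sliding search box, which are not reproduced here:

```python
from collections import deque

def segment_regions(patch, threshold=128):
    """Binarize a grayscale patch and return 4-connected foreground regions
    as lists of (row, col) pixels -- a stand-in for pixel-level marking masks."""
    h, w = len(patch), len(patch[0])
    mask = [[patch[r][c] >= threshold for c in range(w)] for r in range(h)]
    seen = [[False] * w for _ in range(h)]
    regions = []
    for r in range(h):
        for c in range(w):
            if mask[r][c] and not seen[r][c]:
                region, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                regions.append(region)
    return regions

patch = [
    [200, 200,  10,  10],
    [200,  10,  10, 220],
    [ 10,  10, 220, 220],
]
print(len(segment_regions(patch)))  # two bright regions
```

Because thresholding and connected-component labeling are cheap per-pixel operations over a small cropped box, they suit embedded devices better than running a second neural network for segmentation.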
Neglecting the driver behavioral model in lane-departure warning systems has become a primary cause of false warnings in human–machine interfaces. We propose a machine-learning-based mechanism that identifies drivers' unintended lane-departure behaviors and simultaneously predicts the possibility of proactive driver correction after a slight departure. First, a deep residual network for driving-state feature extraction is established by combining time-series sensor data with three serial ReLU residual modules. On top of this feature network, an online extreme learning machine is employed to identify the driver's behavioral intention, such as unconscious lane departure versus intentional lane changing. Once the system senses an unconscious lane departure before the vehicle crosses the outermost warning boundary, an ϵ-greedy LSTM module running in shadow mode is activated to estimate the chances of the driver steering the vehicle back to the original lane. Only unconscious lane departures with no proactive driver correction are passed to the warning module, guaranteeing a limited false-alarm rate. In addition, naturalistic driving data from twenty-one drivers were collected to validate the system performance. Compared with the basic time-to-line-crossing (TLC) method and the TLC-DSPLS method, the proposed warning mechanism reduces the false-alarm rate by 12.9% while maintaining a competitive accuracy of about 98.8%.
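For reference, the baseline TLC criterion that the proposed mechanism is compared against can be sketched as follows. This is a simplified straight-road form assuming constant lateral drift velocity; the variable names and the warning threshold are illustrative, not taken from the paper:

```python
def time_to_line_crossing(distance_to_lane_line_m, lateral_velocity_ms):
    """Simplified TLC: time until the vehicle crosses the lane line,
    assuming it keeps drifting at the current lateral velocity."""
    if lateral_velocity_ms <= 0.0:       # not drifting toward the line
        return float("inf")
    return distance_to_lane_line_m / lateral_velocity_ms

def tlc_warning(distance_to_lane_line_m, lateral_velocity_ms, threshold_s=1.0):
    # raise a departure warning when the predicted crossing is imminent
    return time_to_line_crossing(distance_to_lane_line_m, lateral_velocity_ms) < threshold_s

print(tlc_warning(0.6, 0.3))  # TLC = 2.0 s -> no warning
print(tlc_warning(0.2, 0.5))  # TLC = 0.4 s -> warning
```

Because this baseline ignores whether the driver is about to correct the drift, it warns on every imminent crossing; the driver-intention modules described above are what filter out those false alarms.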