Abstract: A vision-based vehicle guidance system for road vehicles can have three main roles: 1) road detection; 2) obstacle detection; and 3) sign recognition. The first two have been studied for many years with good results, but traffic sign recognition is a less-studied field. Traffic signs provide drivers with very valuable information about the road in order to make driving safer and easier. We believe that traffic signs must play the same role for autonomous vehicles. They are designed to be easily recognized by human drivers, mainly because their colors and shapes are very different from natural environments. The algorithm described in this paper takes advantage of these features. It has two main parts: the first, for detection, uses color thresholding to segment the image and shape analysis to detect the signs; the second, for classification, uses a neural network. Results from natural scenes are shown. In addition, the algorithm can detect other kinds of marks that would tell a mobile robot to perform some task at that place. Index Terms: Advanced driver information systems, color/shape processing, computer vision, neural networks, traffic sign recognition.
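The color-thresholding step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the hue/saturation/value thresholds below are hypothetical, chosen only to show how red sign pixels might be segmented in HSV space (H in [0, 180), S and V in [0, 255]).

```python
import numpy as np

def red_mask(hsv):
    """Return a boolean mask of plausibly 'red' pixels in an HSV image.

    Hypothetical thresholds for illustration; the paper's values differ.
    """
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    # Red hue wraps around 0, so accept both ends of the hue axis.
    hue_ok = (h < 10) | (h > 170)
    return hue_ok & (s > 100) & (v > 50)

# Tiny synthetic example: one "red" pixel and one "blue" pixel.
img = np.array([[[5, 200, 200], [120, 200, 200]]], dtype=np.uint8)
print(red_mask(img).tolist())  # [[True, False]]
```

The resulting binary mask would then feed the shape-analysis stage, which checks connected regions for triangular, circular, or rectangular outlines.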
Understanding driving situations regardless of the conditions of the traffic scene is a cornerstone on the path towards autonomous vehicles; however, although common sensor setups already include complementary devices such as LiDAR or radar, most research on perception systems has traditionally focused on computer vision. We present a LiDAR-based 3D object detection pipeline comprising three stages. First, laser information is projected into a novel cell encoding for bird's-eye-view projection. Next, both the object's location on the plane and its heading are estimated through a convolutional neural network originally designed for image processing. Finally, 3D oriented detections are computed in a post-processing phase. Experiments on the KITTI dataset show that the proposed framework achieves state-of-the-art results among comparable methods. Further tests with different LiDAR sensors in real scenarios assess the multi-device capabilities of the approach.
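A bird's-eye-view cell encoding of this kind can be sketched in a few lines. This is a generic illustration under assumed parameters (grid extent, cell size, and max-height/density channels); the paper's actual encoding and channels differ.

```python
import numpy as np

def bev_encode(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), cell=0.5):
    """Project (x, y, z) LiDAR points into a 2D grid.

    Returns two per-cell channels: maximum height and point density.
    Grid extents and cell size are illustrative assumptions.
    """
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    height = np.full((nx, ny), -np.inf)   # max z per cell
    density = np.zeros((nx, ny))          # point count per cell
    for x, y, z in points:
        if x_range[0] <= x < x_range[1] and y_range[0] <= y < y_range[1]:
            i = int((x - x_range[0]) / cell)
            j = int((y - y_range[0]) / cell)
            height[i, j] = max(height[i, j], z)
            density[i, j] += 1
    return height, density

pts = [(1.2, 0.3, 0.5), (1.3, 0.4, 1.1), (50.0, 0.0, 0.0)]  # last point is out of range
h, d = bev_encode(pts)  # the two in-range points share cell (2, 40)
```

The stacked channels form an image-like tensor, which is what lets a CNN designed for image processing operate on LiDAR data.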
This paper deals with object recognition in outdoor environments. In this type of environment, lighting conditions cannot be controlled or predicted, objects can be partially occluded, and their position and orientation are not known a priori. The chosen type of object is traffic or road signs, due to their usefulness for sign maintenance and inventory in highways and cities, Driver Support Systems, and Intelligent Autonomous Vehicles. A genetic algorithm is used for the detection step, allowing localisation that is invariant to changes in position, scale, rotation, weather conditions, partial occlusion, and the presence of other objects of the same colour. A neural network performs the classification. The global system not only recognises the traffic sign but also provides information about its condition or state.
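The genetic-algorithm detection step can be illustrated with a minimal sketch. Here each chromosome is a hypothesised (x, y, scale) for a sign; the fitness function below is a stand-in placeholder for the paper's colour/shape matching score, and the population size, operators, and mutation noise are all illustrative assumptions.

```python
import random

def fitness(ind, target=(30, 40, 1.5)):
    """Stand-in fitness: negative squared error to a hidden 'true' sign pose."""
    return -sum((a - b) ** 2 for a, b in zip(ind, target))

def ga(pop_size=30, generations=100, seed=0):
    rng = random.Random(seed)
    # Random initial population of (x, y, scale) hypotheses.
    pop = [(rng.uniform(0, 64), rng.uniform(0, 64), rng.uniform(0.5, 3.0))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # selection: keep the fittest half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = tuple((x + y) / 2 + rng.gauss(0, 0.5)   # crossover + mutation
                          for x, y in zip(a, b))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = ga()  # converges near the target pose
```

Because the chromosome encodes position and scale directly, the search is naturally tolerant of the variations listed above; occlusion tolerance comes from the matching score, not the GA itself.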
A growing number of applications require precise calibration of cameras to perform accurate measurements on objects located within images, and an automatic algorithm would reduce this time-consuming calibration procedure. The method proposed in this article uses a pattern similar to that of a chess board, which is found automatically in each image even when no information regarding the number of rows or columns is supplied to aid its detection. This is carried out by means of a combined analysis of two Hough transforms, image corners, and invariant properties of the perspective transformation. Comparative analysis with more commonly used algorithms demonstrates the viability of the proposed algorithm as a valuable tool for camera calibration.
Garcia, F.; Cerri, P.; Escalera, A.; Armingol, J. M. (2012). Data fusion for overtaking vehicle detection based on radar and optical flow.
This paper presents an experimental study on pedestrian classification and detection in far-infrared (FIR) images. The study includes an in-depth evaluation of several combinations of features and classifiers, including features previously used for daylight scenarios, a new descriptor (HOPE: Histograms of Oriented Phase Energy) specifically targeted at infrared images, and a new adaptation of a latent-variable SVM approach to FIR images. The presented results are validated on a new classification and detection dataset of FIR images collected in outdoor environments from a moving vehicle. The classification set contains 16,152 pedestrian and 65,440 background samples evenly selected from several sequences acquired at different temperatures and under different illumination conditions. The detection dataset consists of 15,224 images with ground-truth information. The authors are making this dataset public for benchmarking new detectors in the area of intelligent vehicles and field robotics applications.