Understanding driving situations regardless of the conditions of the traffic scene is a cornerstone on the path towards autonomous vehicles; however, although common sensor setups already include complementary devices such as LiDAR or radar, most research on perception systems has traditionally focused on computer vision. We present a LiDAR-based 3D object detection pipeline comprising three stages. First, laser information is projected into a novel cell encoding for bird's eye view projection. Then, both the object's location on the plane and its heading are estimated through a convolutional neural network originally designed for image processing. Finally, 3D oriented detections are computed in a post-processing phase. Experiments on the KITTI dataset show that the proposed framework achieves state-of-the-art results among comparable methods. Further tests with different LiDAR sensors in real scenarios assess the multi-device capabilities of the approach.
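The first stage above can be illustrated with a minimal sketch of a bird's eye view cell encoding. The three channels used here (maximum height, strongest intensity return, capped point density) and all grid parameters are common choices assumed for illustration; the abstract does not specify the paper's actual encoding.

```python
import numpy as np

def bev_encode(points, x_range=(0.0, 50.0), y_range=(-25.0, 25.0), cell=0.1):
    """Project an (N, 4) LiDAR cloud [x, y, z, intensity] into a 3-channel
    bird's eye view grid. Channel choice and ranges are illustrative
    assumptions, not necessarily the paper's encoding."""
    nx = round((x_range[1] - x_range[0]) / cell)
    ny = round((y_range[1] - y_range[0]) / cell)
    height = np.zeros((nx, ny))
    intensity = np.zeros((nx, ny))
    density = np.zeros((nx, ny))
    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell).astype(int)
    keep = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    for cx, cy, z, ref in zip(ix[keep], iy[keep], points[keep, 2], points[keep, 3]):
        height[cx, cy] = max(height[cx, cy], z)          # tallest point per cell
        intensity[cx, cy] = max(intensity[cx, cy], ref)  # strongest return per cell
        density[cx, cy] += 1                             # raw point count
    density = np.minimum(density / 8.0, 1.0)             # normalized count (assumed cap)
    return np.stack([height, intensity, density])        # (3, nx, ny) image-like tensor
```

The resulting tensor can then be fed to a standard 2D convolutional detector, which is what allows reusing networks originally designed for image processing.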
Sensor setups consisting of a combination of 3D range scanner lasers and stereo vision systems are becoming a popular choice for on-board perception systems in vehicles; however, the combined use of both sources of information requires a tedious calibration process. We present a method for the extrinsic calibration of LiDAR-stereo camera pairs without user intervention. Our calibration approach is designed to cope with the constraints commonly found in automotive setups, such as low resolution and specific sensor poses. To demonstrate the performance of our method, we also introduce a novel approach for the quantitative assessment of calibration results, based on a simulation environment. Tests using real devices have been conducted as well, proving the usability of the system and its improvement over existing approaches. Code is available at http://wiki.ros.org/velo2cam_calibration.
One of the most challenging problems in the domain of autonomous aerial vehicles is designing a robust real-time obstacle detection and avoidance system. The problem is especially complex for micro and small aerial vehicles due to their Size, Weight and Power (SWaP) constraints, which make lightweight sensors (e.g., a digital camera) the best choice compared with alternatives such as laser or radar. For real-time applications, many works rely on stereo cameras to obtain a 3D model of the obstacles or to estimate their depth. Instead, this paper proposes a method that mimics the human behavior of detecting the collision state of approaching obstacles using a monocular camera. The key of the proposed algorithm is to analyze the size changes of the detected feature points, combined with the expansion ratios of the convex hull constructed around the detected feature points in consecutive frames. During Unmanned Aerial Vehicle (UAV) motion, the detection algorithm estimates the changes in the size of the area of the approaching obstacles. First, the method detects the feature points of the obstacles and extracts those obstacles that are likely to move towards the UAV. Second, by comparing the area ratio of the obstacle with the position of the UAV, the method decides whether the detected obstacle may cause a collision. Finally, by estimating the obstacle's 2D position in the image and combining it with the tracked waypoints, the UAV performs the avoidance maneuver. The proposed algorithm was evaluated in real indoor and outdoor flights, and the results show its accuracy compared with other related works.
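The expansion-ratio check at the core of the approach can be sketched as follows. The monotone-chain hull construction and the 1.2 growth threshold are illustrative assumptions; the abstract does not state the paper's exact hull algorithm or threshold value.

```python
def convex_hull(points):
    """Monotone-chain convex hull of 2D feature points (iterable of (x, y))."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):  # z-component of (a-o) x (b-o)
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def hull_area(points):
    """Shoelace area of the convex hull around tracked feature points."""
    h = convex_hull(points)
    if len(h) < 3:
        return 0.0
    a = sum(h[i][0]*h[(i+1) % len(h)][1] - h[(i+1) % len(h)][0]*h[i][1]
            for i in range(len(h)))
    return abs(a) / 2.0

def is_approaching(prev_pts, curr_pts, ratio_threshold=1.2):
    """Flag an obstacle whose hull grows faster than the (assumed) threshold
    between consecutive frames, suggesting it is closing on the camera."""
    prev_a, curr_a = hull_area(prev_pts), hull_area(curr_pts)
    return prev_a > 0 and curr_a / prev_a > ratio_threshold
```

In practice the feature points would come from a tracker matched across consecutive frames, so the same physical obstacle contributes both point sets.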
A driver behaviour analysis tool is presented. The proposal offers a novel contribution based on low-cost hardware and advanced data-fusion software capabilities. The device takes advantage of the information provided by the in-vehicle sensors through the Controller Area Network bus (CAN bus), an Inertial Measurement Unit (IMU) and a GPS receiver. By fusing this information, the system can infer the behaviour of the driver and detect aggressive driving. By means of accurate GPS-based localization, the system is able to add context information, such as digital map data and speed limits. Several parameters and signals, in both the temporal and frequency domains, are taken into account to provide real-time behaviour detection. The system was tested in urban, interurban and highway scenarios.
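As a toy sketch of one temporal cue commonly used in aggressive-driving detection, longitudinal jerk (the derivative of acceleration) can be thresholded on the IMU signal. The threshold and sample rate here are illustrative assumptions; the actual system fuses CAN bus, IMU and GPS signals in both time and frequency domains.

```python
def aggressive_events(accel, dt=0.02, jerk_threshold=8.0):
    """Return indices of samples where longitudinal jerk (m/s^3) exceeds a
    threshold. `dt` (50 Hz sampling) and the 8 m/s^3 threshold are
    illustrative assumptions, not values taken from the paper.

    accel: sequence of longitudinal accelerations in m/s^2."""
    events = []
    for i in range(1, len(accel)):
        jerk = (accel[i] - accel[i-1]) / dt  # finite-difference derivative
        if abs(jerk) > jerk_threshold:
            events.append(i)
    return events
```

A real system would additionally debounce consecutive detections and cross-check them against context such as the map-derived speed limit.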
Road safety applications demand the most reliable sensor systems. In recent years, advances in information technologies have led to more complex road safety applications able to cope with a wide variety of situations. These applications have strong sensing requirements that a single sensor, with the available technology, cannot fulfil. Recent research in Intelligent Transport Systems (ITS) tries to overcome the limitations of individual sensors by combining them. But sensor information alone does not give a good and robust representation of the road environment; context information plays a key role in enabling reliable safety applications to provide dependable detection and complete situation assessment. This paper presents a novel approach for pedestrian detection based on the fusion of laser scanner and computer vision data. The application also takes advantage of context information, providing a danger estimate for each detected pedestrian. Closing the loop, this danger estimate is later used, together with context information, as feedback to enhance the pedestrian detection process.