An autonomous vehicle requires a more robust perception system than those of conventional intelligent vehicles. In particular, single-sensor-based perception systems have been widely studied using cameras and laser radar sensors, the most representative perception sensors, which provide object information such as distance and object features. The distance information of the laser radar sensor is used to perceive road structures, vehicles, and pedestrians, while the camera image is used for visual recognition of lanes, crosswalks, and traffic signs. However, single-sensor-based perception systems suffer from false positives and false negatives caused by sensor limitations and road environments. Accordingly, information fusion is essential to ensure the robustness and stability of perception systems in harsh environments. This paper describes a perception system for autonomous vehicles that performs information fusion to recognize road environments. In particular, vision and laser radar sensors are fused to detect lanes, crosswalks, and obstacles. The proposed perception system was validated with an autonomous vehicle on various roads and under various environmental conditions.
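As a minimal sketch of the fusion idea, the following toy example attaches lidar range estimates to camera detections by collecting lidar returns that project inside each detection box and taking their median range. The class names, the point-in-box association rule, and the median aggregation are illustrative assumptions, not the paper's actual fusion method.

```python
# Toy camera-lidar fusion sketch: lidar points are assumed to be already
# projected into the image plane; each camera detection is assigned the
# median range of the lidar returns falling inside its bounding box.
from dataclasses import dataclass
from statistics import median
from typing import List, Optional

@dataclass
class Detection:                     # camera detection: pixel-space box
    x1: float; y1: float; x2: float; y2: float
    label: str
    distance_m: Optional[float] = None   # filled in by fusion

@dataclass
class LidarPoint:                    # lidar return projected into the image
    u: float; v: float               # pixel coordinates
    range_m: float                   # measured distance in meters

def fuse(detections: List[Detection], points: List[LidarPoint]) -> List[Detection]:
    """Attach a robust (median) lidar range to each camera detection."""
    for det in detections:
        ranges = [p.range_m for p in points
                  if det.x1 <= p.u <= det.x2 and det.y1 <= p.v <= det.y2]
        if ranges:                   # fuse only when lidar evidence exists
            det.distance_m = median(ranges)
    return detections

dets = [Detection(100, 50, 200, 150, "pedestrian")]
pts = [LidarPoint(150, 100, 12.1), LidarPoint(160, 110, 11.9),
       LidarPoint(400, 300, 30.0)]  # last point lies outside the box
fused = fuse(dets, pts)
print(fused[0].distance_m)           # 12.0
```

The median keeps a single spurious lidar return (e.g. a point from the background) from corrupting the fused distance, which is one simple way such a system can gain robustness over either sensor alone.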
This paper proposes a robust lane detection algorithm with a cascade particle filter that incorporates a model decomposition approach. Despite the sophisticated tracking mechanism of a particle filter, conventional particle-filter-based lane detection systems suffer from limited estimation accuracy and a high computational load. To improve the robustness and the computation time of lane detection, the proposed cascade particle filter decomposes the lane model into two submodels: a straight model and a curve model. By dividing the lane model, not only can the computation time be decreased, but the accuracy of the lane state estimation can also be increased. The proposed lane detection algorithm and the cascade particle filter were evaluated on various roads and environmental conditions with the autonomous vehicle A1, which won the 2010 and 2012 Autonomous Vehicle Competitions in the Republic of Korea organized by the Hyundai Motor Group. The proposed algorithm proved to be sufficiently robust and fast to be applied to autonomous vehicles as well as to intelligent vehicles for improving vehicle safety.
Camera-based object detection systems must satisfy recognition-performance requirements as well as real-time constraints. In safety-critical systems such as Autonomous Emergency Braking (AEB) in particular, the real-time constraints significantly affect system performance. Recently, multi-core processors and system-on-chip technologies have been widely used to accelerate object detection algorithms by distributing computational loads. However, although the additional hardware improves real-time performance, it increases the complexity of the system architecture. This increased complexity also makes it difficult to migrate existing algorithms and to develop new ones. In this paper, a task scheduling strategy is proposed to improve both the real-time performance and the design complexity of visual object tracking systems. The real-time performance of the vision algorithm is increased by applying pipelining to task scheduling on a multi-core processor. Finally, the proposed task scheduling algorithm is applied to a crosswalk detection and tracking system to demonstrate the effectiveness of the proposed strategy.
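The pipelining idea can be illustrated with a minimal two-stage pipeline in which each stage runs in its own worker connected by FIFO queues, so stage 2 can process frame i-1 while stage 1 works on frame i. The stage functions below are stand-in placeholders, not the paper's crosswalk detector, and threads stand in for the cores of a multi-core processor purely to illustrate the scheduling structure.

```python
# Toy pipelined task scheduling: stages communicate via queues; a None token
# propagates shutdown down the pipeline.
import queue
import threading

def stage(fn, q_in, q_out):
    """Consume items from q_in, apply fn, and forward results to q_out."""
    while True:
        item = q_in.get()
        if item is None:
            q_out.put(None)          # propagate the shutdown signal
            break
        q_out.put(fn(item))

def preprocess(frame):               # placeholder, e.g. resize/grayscale
    return frame * 2

def detect(frame):                   # placeholder, e.g. candidate detection
    return frame + 1

q0, q1, q2 = queue.Queue(), queue.Queue(), queue.Queue()
threads = [threading.Thread(target=stage, args=(preprocess, q0, q1)),
           threading.Thread(target=stage, args=(detect, q1, q2))]
for t in threads:
    t.start()

for frame in range(5):               # feed five synthetic "frames"
    q0.put(frame)
q0.put(None)                         # signal end of stream

results = []
while (out := q2.get()) is not None:
    results.append(out)
for t in threads:
    t.join()

print(results)                       # [1, 3, 5, 7, 9]
```

With pipelining, the per-frame latency is unchanged but the throughput approaches one frame per slowest-stage interval rather than one frame per whole-pipeline interval; in a real deployment each stage would be pinned to its own core (or run as a separate process) to obtain true parallelism.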