Abstract-This paper provides a review of the literature in on-road vision-based vehicle detection, tracking, and behavior understanding. Over the past decade, vision-based surround perception has progressed from its infancy into maturity. We provide a survey of recent works in the literature, placing vision-based vehicle detection in the context of sensor-based on-road surround analysis. We detail advances in vehicle detection, discussing monocular, stereo vision, and active sensor-vision fusion for on-road vehicle detection. We discuss vision-based vehicle tracking in the monocular and stereo-vision domains, analyzing filtering, estimation, and dynamical models. We discuss the nascent branch of intelligent vehicles research concerned with utilizing spatiotemporal measurements, trajectories, and various features to characterize on-road behavior. We provide a discussion on the state of the art, detail common performance metrics and benchmarks, and provide perspective on future research directions in the field.
Abstract-This paper introduces a general active-learning framework for robust on-road vehicle recognition and tracking. This framework takes a novel active-learning approach to building vehicle-recognition and tracking systems. A passively trained recognition system is built using conventional supervised learning. Using the query and archiving interface for active learning (QUAIL), the passively trained vehicle-recognition system is evaluated on an independent real-world data set, and informative samples are queried and archived to perform selective sampling. A second round of learning is then performed to build an active-learning-based vehicle recognizer. Particle filter tracking is integrated to build a complete multiple-vehicle tracking system. The active-learning-based vehicle-recognition and tracking (ALVeRT) system has been thoroughly evaluated on static images and roadway video data captured in a variety of traffic, illumination, and weather conditions. Experimental results show that this framework yields a robust, efficient on-board vehicle recognition and tracking system with high precision, high recall, and good localization.

Index Terms-Active safety, computer vision, intelligent driver-assistance systems, machine learning.
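The selective-sampling step described above can be sketched as follows. This is a minimal illustration of querying "informative" samples near the detector's decision boundary; the function name, confidence-band thresholds, and patch representation are illustrative assumptions, not details from the ALVeRT paper or the QUAIL interface.

```python
# Hedged sketch of selective sampling for active learning: samples whose
# detector confidence is ambiguous are queried and archived for relabeling,
# then used in a second round of training. Thresholds are assumptions.

def query_informative_samples(samples, scores, low=0.4, high=0.6):
    """Return samples whose confidence falls in the ambiguous band.

    samples: candidate image patches (stand-in identifiers here)
    scores:  detector confidence in [0, 1] for each sample
    """
    return [s for s, p in zip(samples, scores) if low <= p <= high]

# Confident detections (0.95) and confident rejections (0.10) are skipped;
# only the ambiguous sample is queried for human labeling.
queried = query_informative_samples(
    ["patch_a", "patch_b", "patch_c"], [0.95, 0.50, 0.10]
)
```

The retrained recognizer would then be built from the original training set augmented with the relabeled queried samples.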
Abstract-In this paper, we introduce a synergistic approach to integrated lane and vehicle tracking for driver assistance. The approach presented in this paper results in a final system that improves on the performance of both the lane-tracking and vehicle-tracking modules. Further, it introduces a novel method for localizing and tracking other vehicles on the road with respect to lane position, which provides contextual information of higher relevance than either the lane tracker or the vehicle tracker can provide by itself. Improvements in lane tracking and vehicle tracking have been extensively quantified. Integrated system performance has been validated on real-world highway data. Without specific hardware and software optimizations, the fully implemented system runs at near-real-time speeds of 11 frames per second.
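The lane-relative vehicle localization described above can be sketched with a simple geometric computation: given the estimated left and right lane-boundary positions at a tracked vehicle's image row, report where the vehicle's center lies within the lane. The function name and the normalization convention are illustrative assumptions, not the paper's actual formulation.

```python
# Hedged sketch of localizing a tracked vehicle with respect to lane
# position. Inputs are image-plane x-coordinates; 0.0 maps to the left
# lane boundary, 1.0 to the right, 0.5 to the lane center (assumed scheme).

def lane_relative_position(vehicle_cx, left_x, right_x):
    """Return the vehicle center's normalized position within the lane."""
    width = right_x - left_x
    if width <= 0:
        raise ValueError("right boundary must lie to the right of left")
    return (vehicle_cx - left_x) / width

# A vehicle centered between boundaries at x=300 and x=500 yields 0.5.
pos = lane_relative_position(vehicle_cx=400.0, left_x=300.0, right_x=500.0)
```

Thresholding this value (e.g., outside [0, 1]) would indicate a vehicle occupying an adjacent lane, which is the kind of contextual cue neither module provides alone.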
Abstract-This document provides a review of the past decade's literature in on-road vision-based vehicle detection. Over the past decade, vision-based surround perception has matured significantly from its infancy. We detail advances in vehicle detection, discussing representative works from the monocular and stereo-vision domains. We provide a discussion of the state of the art and offer perspective on future research directions in the field.
In this paper, we introduce a novel stereo-monocular fusion approach to on-road localization and tracking of vehicles. Utilizing a calibrated stereo-vision rig, the proposed approach combines monocular detection with stereo vision for on-road vehicle localization and tracking for driver assistance. The system initially acquires synchronized monocular frames and calculates depth maps from the stereo rig. The system then detects vehicles in the image plane using an active-learning-based monocular approach. Using the image coordinates of detected vehicles, the system then localizes the vehicles in real-world coordinates using the calculated depth map. The vehicles are tracked both in the image plane and in real-world coordinates, fusing information from the monocular and stereo modalities. Vehicle states are estimated and tracked using Kalman filtering. Quantitative analysis of tracks is provided. The full system takes 46 ms to process a single frame.
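The Kalman-filter tracking step described above can be sketched with a standard constant-velocity model over real-world ground-plane coordinates. The state layout, frame interval, and noise covariances below are illustrative assumptions for a single tracked vehicle, not the paper's actual parameterization.

```python
import numpy as np

# Hedged sketch of one Kalman predict/update cycle for a tracked vehicle.
# State: [x, z, vx, vz] on the ground plane; measurements are stereo-
# localized (x, z) positions. Model and noise levels are assumptions.

dt = 0.046  # seconds per frame (matching the reported 46 ms per frame)
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)  # constant-velocity transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # we observe position only
Q = 0.01 * np.eye(4)  # process noise covariance (assumed)
R = 0.25 * np.eye(2)  # measurement noise covariance (assumed)

def kalman_step(x, P, z):
    """One predict + update cycle; x is the state, P its covariance."""
    x = F @ x                          # predict state forward one frame
    P = F @ P @ F.T + Q                # propagate uncertainty
    y = z - H @ x                      # innovation (measurement residual)
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y                      # correct state toward measurement
    P = (np.eye(4) - K @ H) @ P        # shrink covariance
    return x, P

x0 = np.array([0.0, 10.0, 0.0, 0.0])   # vehicle 10 m ahead, initially at rest
P0 = np.eye(4)
x1, P1 = kalman_step(x0, P0, np.array([0.1, 10.5]))
# The updated estimate lies between the prediction and the measurement.
```

In the fused system described above, one such filter per vehicle would consume the stereo-localized positions of monocular detections frame by frame.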
This paper details the research, development, and demonstrations of real-world systems intended to assist the driver in urban environments, as part of the Urban Intelligent Assist (UIA) research initiative. A 3-year collaboration between