This paper presents the TerraMax vision systems used during the 2007 DARPA Urban Challenge. First, a description of the different vision systems is provided, focusing on their hardware configuration, calibration method, and tasks. Then, each component is described in detail, focusing on the algorithms and sensor-fusion opportunities: obstacle detection, road-marking detection, and vehicle detection. The conclusions summarize the lessons learned from the development of the passive sensing suite and its successful fielding in the Urban Challenge.
Autonomous driving in complex urban environments, including merging into traffic, four-way stops, overtaking, etc., requires a very wide range of sensing capabilities, in both angle and distance. This paper presents a vision system designed to assist merging into traffic at two-way intersections, able to provide a long detection distance (over 100 m) for oncoming vehicles. The system consists of two high-resolution wide-angle cameras, each looking laterally (70 degrees) with respect to the direction of travel, and applies a background-subtraction-based detection technique together with tracking and speed estimation. The system operates while the vehicle is stopped at intersections and is triggered by the high-level vehicle manager. It has been developed and tested on the Oshkosh Team's vehicle TerraMax™, one of the 11 robots admitted to the DARPA Urban Challenge 2007 Final Event.
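Because the vehicle is stationary when the lateral cameras run, a background model of the intersection can be maintained and moving vehicles appear as foreground. The abstract does not give the exact algorithm, so the following is only a minimal sketch of the general idea using a running-average background model in NumPy; the function name and parameters are illustrative, not the TerraMax implementation:

```python
import numpy as np

def detect_moving_regions(frames, alpha=0.05, threshold=30):
    """Running-average background subtraction (illustrative sketch).

    frames: iterable of grayscale images as 2D uint8 arrays, captured
    while the ego vehicle is stopped. Returns one boolean foreground
    mask per frame; True pixels belong to moving objects.
    """
    background = None
    masks = []
    for frame in frames:
        f = frame.astype(np.float32)
        if background is None:
            background = f.copy()
        # Pixels that differ strongly from the background are foreground
        mask = np.abs(f - background) > threshold
        # Slowly absorb the current frame into the background model
        background = (1 - alpha) * background + alpha * f
        masks.append(mask)
    return masks
```

In a real system the foreground masks would then be grouped into blobs, tracked over time, and combined with the camera calibration to estimate distance and speed of the oncoming vehicles, as the abstract describes.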
Abstract: This paper presents the TerraMax autonomous vehicle, which competed in the DARPA Urban Challenge 2007. The sensing system is based mainly on passive sensors; in particular, four vision subsystems are used to cover a 360° area around the vehicle and to cope with the challenges of navigating complex traffic scenes. A trinocular system derived from the one used during the 2005 Grand Challenge performs obstacle and lane detection, twin stereo systems (one in the front and one in the back) monitor the area close to the truck, two lateral cameras detect oncoming vehicles at intersections, and a rear-view system monitors the lanes next to the truck looking for overtaking vehicles. Data fusion between laser scanners and vision will be discussed, focusing on the benefits of this approach.
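The abstract mentions fusing laser-scanner and vision data but does not specify the method. One common and simple scheme is gated nearest-neighbor association of the two obstacle lists in a shared vehicle frame; the sketch below is a hypothetical illustration of that idea, not the TerraMax fusion module:

```python
import numpy as np

def fuse_detections(vision_obstacles, lidar_obstacles, gate=1.5):
    """Gated nearest-neighbor fusion of two obstacle lists (hypothetical
    helper for illustration).

    Each input is an (N, 2) array of obstacle positions (x, y) in a common
    vehicle frame. A vision and a lidar detection closer than `gate` metres
    are merged by averaging; unmatched detections are kept as-is.
    """
    fused, used = [], set()
    for v in vision_obstacles:
        d = np.linalg.norm(lidar_obstacles - v, axis=1)
        j = int(np.argmin(d)) if len(d) else -1
        if j >= 0 and d[j] < gate and j not in used:
            fused.append((v + lidar_obstacles[j]) / 2)  # confirmed by both sensors
            used.add(j)
        else:
            fused.append(v)  # vision-only detection
    for j, l in enumerate(lidar_obstacles):
        if j not in used:
            fused.append(l)  # lidar-only detection
    return np.array(fused)
```

Detections confirmed by both sensors can be given higher confidence downstream, which is one of the benefits of fusing complementary passive and active sensors.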
Reliable perception of terrain slope and terrain traversability is a key feature for any off-road unmanned ground vehicle, as well as for any driver-assistance system designed to work in extreme environments, such as mining. In this paper we present an innovative technique to build, in real time, a 3D elevation map of the traversable terrain from a dense 3D data set of the world. The 3D points are grouped into equally spaced lateral and longitudinal slices, then projected onto the corresponding slices' reference planes. The projections are then analyzed by a biologically inspired optimization algorithm that segments points into terrain inliers and outliers; the resulting 2D terrain slopes represent an optimal terrain approximation along each slice. Finally, the 2D approximations are merged to create the overall 3D terrain surface. The algorithm has been successfully tested with 3D data provided by a stereo camera system mounted on a Cat® wheel loader operating in a mining environment.
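The slice-and-fit structure described above can be sketched compactly. The abstract's biologically inspired optimization step is not specified, so the example below substitutes a plain least-squares line fit per slice as a stand-in; the function name, parameters, and tolerances are assumptions for illustration only:

```python
import numpy as np

def slice_terrain_profile(points, slice_width=0.5, inlier_tol=0.2):
    """Sketch of a slice-based terrain approximation.

    points: (N, 3) array of (x, y, z) points, x = longitudinal distance,
    z = height. Points are binned into longitudinal slices of width
    `slice_width`; in each slice a 2D line z = a*x + c is fitted
    (least squares here, standing in for the paper's optimization step)
    and points within `inlier_tol` of the line are marked terrain inliers.
    Returns (slopes, inlier_mask): per-slice slope and a boolean mask.
    """
    x, z = points[:, 0], points[:, 2]
    bins = np.floor(x / slice_width).astype(int)
    inlier = np.zeros(len(points), dtype=bool)
    slopes = {}
    for b in np.unique(bins):
        idx = np.where(bins == b)[0]
        if len(idx) < 2:
            continue  # not enough points to fit a line in this slice
        a, c = np.polyfit(x[idx], z[idx], 1)  # line z = a*x + c
        slopes[b] = a
        inlier[idx] = np.abs(z[idx] - (a * x[idx] + c)) < inlier_tol
    return slopes, inlier
```

Merging the per-slice lines (here, the slope dictionary) across both the lateral and longitudinal slicing directions would then yield the overall 3D terrain surface described in the paper.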