Abstract: The Navlab group at Carnegie Mellon University has a long history of developing automated vehicles and intelligent systems for driver assistance. The group's earlier work concentrated on road following, cross-country driving, and obstacle detection. The new focus is on short-range sensing, looking all around the vehicle for safe driving. The current system uses video sensing, laser rangefinders, a novel light-stripe rangefinder, software to process each sensor individually, a map-based fusion system, …
Abstract-We present a method for identifying drivable surfaces in difficult unpaved and off-road terrain conditions as encountered in the DARPA Grand Challenge robot race. Instead of relying on a static, pre-computed road appearance model, this method adjusts its model to changing environments. It achieves robustness by combining sensor information from a laser range finder, a pose estimation system, and a color camera. Using the first two modalities, the system first identifies a nearby patch of drivable surface. Computer vision then takes this patch and uses it to construct appearance models that extend the detection of drivable surface into the far range. This information is put into a drivability map for the vehicle path planner. In addition to evaluating the method's performance using a scoring framework run on real-world data, the system was entered in, and won, the 2005 DARPA Grand Challenge. Post-race log-file analysis showed that without the computer vision algorithm, the vehicle would not have driven fast enough to win.
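The self-supervised scheme this abstract describes, taking a laser-verified nearby patch as ground truth for a color appearance model and then classifying the far range, can be sketched roughly as follows. The single-Gaussian color model and the fixed Mahalanobis threshold here are simplifying assumptions for illustration; the actual system's appearance models are richer.

```python
import numpy as np

def fit_patch_model(patch_pixels):
    """Fit a Gaussian color model (mean, inverse covariance) to the RGB
    pixels of a patch the laser/pose system flagged as drivable.
    patch_pixels: (n, 3) float array of RGB samples."""
    mean = patch_pixels.mean(axis=0)
    cov = np.cov(patch_pixels, rowvar=False) + 1e-6 * np.eye(3)
    return mean, np.linalg.inv(cov)

def classify_image(image, mean, inv_cov, threshold=9.0):
    """Label each pixel drivable if its squared Mahalanobis distance to
    the patch color model falls below the (assumed) threshold.
    Returns a boolean mask with the image's spatial shape."""
    diff = image.reshape(-1, 3).astype(float) - mean
    # Batched quadratic form: d2[i] = diff[i] @ inv_cov @ diff[i]
    d2 = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)
    return (d2 < threshold).reshape(image.shape[:2])
```

In use, the mask would be projected into the vehicle's drivability map; pixels near the patch's color distribution extend the laser's short-range verdict outward.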
“…Research in this area includes the search for adequate sensors and actuators [6], vehicle control algorithms [4], and assessment of the usability of such vehicles [2]. This has led to a number of results, including a practical demonstration of driverless vehicles following a road lane, overtaking a slower vehicle, and crossing an unsignalised junction [3], [7].…”
Abstract-Autonomous vehicles seem to be a promising approach to both reducing traffic congestion and improving road safety. However, for such vehicles to coexist safely, they will need to coordinate their behaviour to ensure that they do not collide with each other. This coordination will typically be based on (wireless) communication between vehicles and will need to satisfy stringent real-time constraints. However, real-time message delivery cannot be guaranteed in dynamic wireless networks, which means that existing coordination models that rely on continuous connectivity cannot be employed. In this paper, we present a novel coordination model for autonomous vehicles that does not require continuous real-time connectivity between participants in order to ensure that system safety constraints are not violated. This coordination model builds on a real-time communication model for wireless networks that provides feedback to entities about the state of communication. The coordination model uses this feedback to ensure that vehicles always satisfy safety constraints, by adapting their behaviour when communication is degraded. We show that this model can be used to coordinate vehicles crossing an unsignalised junction.
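One way to read the abstract's degrade-on-feedback idea is as a small state machine: whenever the communication layer reports that timely delivery failed, or the last positive feedback is older than a deadline, the vehicle falls back to a behaviour that is safe without coordination. The class name, deadline value, and feedback API below are hypothetical illustrations, not the paper's protocol.

```python
from time import monotonic

SAFE_FALLBACK = "stop_before_junction"   # safe without coordination
COORDINATED = "proceed_per_agreement"    # requires fresh, confirmed comms

class JunctionCoordinator:
    """Hypothetical sketch: choose a vehicle action based on
    communication-layer feedback and its freshness."""

    def __init__(self, deadline_s=0.2):
        self.deadline_s = deadline_s
        self.last_feedback = None  # (timestamp, delivered_ok)

    def on_comm_feedback(self, delivered_ok, now=None):
        """Record the latest delivery report from the comms layer."""
        ts = monotonic() if now is None else now
        self.last_feedback = (ts, delivered_ok)

    def action(self, now=None):
        """Coordinate only while feedback is positive and fresh;
        otherwise degrade to the safe fallback."""
        now = monotonic() if now is None else now
        if self.last_feedback is None:
            return SAFE_FALLBACK
        ts, ok = self.last_feedback
        if not ok or now - ts > self.deadline_s:
            return SAFE_FALLBACK
        return COORDINATED
```

The design point is that safety never depends on a message arriving: silence and negative feedback both map to the fallback.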
“…If a robot is running on a well-structured road, such as freeways or the roads in an urban area [3], the primary focus of research is lane detection [4] using surface and boundary features, and road following [5], which detects road trends. Since the road has a relatively uniform surface and clear lane markings, techniques such as road segmentation, road edge detection [6], and curve-fitting [7] are often used to generate vehicle control inputs.…”
Section: Related Work
“…Motion blurring and vibration caused by a fast moving vehicle further degrade image quality. To address these issues, researchers approach the problem using different strategies such as color vision [10], [16], prior knowledge [6], pixel voting [15], classifier fusion [14], optical flow [21], neural networks [3], and machine learning [20], [21].…”
Abstract-We report our development of a vision-based motion planning system for an autonomous motorcycle designed for desert terrain, where uniform road surface and lane markings are not present. The motion planning is based on a vision vector space (V²-Space), a set of unit vectors that represents local collision-free directions in the image coordinate system. V²-Space is constructed by extracting the vectors based on the similarity of adjacent pixels, which captures both the color information and the directional information from prior vehicle tire tracks and pedestrian footsteps. We report how V²-Space is constructed to reduce the impact of varying lighting conditions in outdoor environments. We also show how V²-Space can be used to incorporate vehicle kinematic, dynamic, and time-delay constraints in motion planning to fit the highly dynamic requirements of the motorcycle. The combined algorithm of the V²-Space construction and the motion planning runs in O(n) time, where n is the number of pixels in the captured image. Experiments show that our algorithm outputs correct robot motion commands more than 90% of the time.
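The adjacent-pixel-similarity construction this abstract describes can be sketched as follows: for each image column, grow a ray upward from the bottom row while neighbouring pixels remain color-similar, and normalize the reached direction to a unit vector. The per-column ray growth and the fixed similarity threshold are illustrative assumptions; the paper's actual construction (and its lighting compensation) is more involved, but this sketch keeps the O(n)-over-pixels flavour.

```python
import numpy as np

def v2_vectors(image, sim_threshold=30.0):
    """Hypothetical sketch of V²-Space construction: one unit vector per
    image column, pointing from the bottom-center toward the farthest
    pixel reachable through color-similar neighbours. Visits each pixel
    at most once, so it runs in O(n) for n pixels."""
    h, w, _ = image.shape
    img = image.astype(float)
    vectors = []
    for col in range(w):
        reach = 0
        for row in range(h - 1, 0, -1):
            # Stop growing the ray at a sharp color discontinuity.
            if np.linalg.norm(img[row, col] - img[row - 1, col]) > sim_threshold:
                break
            reach += 1
        dx, dy = col - w / 2.0, -float(reach)  # image y grows downward
        norm = np.hypot(dx, dy) or 1.0
        vectors.append((dx / norm, dy / norm))
    return vectors
```

A downstream planner could then pick, among these collision-free directions, the one best matching the vehicle's kinematic and time-delay constraints.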