Abstract-We present a real-time algorithm that enables an autonomous car to comfortably follow other cars at various speeds while keeping a safe distance. We focus on highway scenarios. A velocity and distance regulation approach is presented that depends on the position as well as the velocity of the followed car. Radar sensors provide reliable information on straight lanes but fail in curves due to their restricted field of view. Lidar sensors, on the other hand, are able to cover the regions of interest in almost all situations but do not provide precise speed information. We combine the advantages of both sensors with a sensor fusion approach in order to provide permanent and precise spatial and dynamical data. Our results in highway experiments with real traffic are described in detail.
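The complementary-sensor idea above can be illustrated with a small sketch (not the authors' implementation): a 1-D Kalman filter tracks the lead vehicle's distance and relative velocity, where a lidar measurement constrains distance only and a radar measurement constrains both distance and velocity. All numeric values and noise covariances are assumed for illustration.

```python
# Illustrative sketch of radar/lidar fusion: not the paper's actual
# filter, just a standard Kalman measurement update applied twice.
import numpy as np

def kalman_update(x, P, z, H, R):
    """Standard Kalman measurement update for state x with covariance P."""
    y = z - H @ x                     # innovation
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# State: [distance to lead car (m), relative velocity (m/s)]; assumed prior.
x = np.array([30.0, 0.0])
P = np.diag([4.0, 4.0])

# Lidar: wide coverage but measures distance only.
H_lidar = np.array([[1.0, 0.0]])
x, P = kalman_update(x, P, np.array([28.5]), H_lidar, np.array([[0.04]]))

# Radar: narrow field of view but precise distance and velocity.
H_radar = np.eye(2)
x, P = kalman_update(x, P, np.array([28.4, -1.2]), H_radar,
                     np.diag([0.25, 0.01]))
```

After both updates, the fused estimate is close to the precise lidar distance and the precise radar velocity, and the state uncertainty shrinks in both components.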
In this paper we present a novel approach to estimating the position of objects tracked by a team of mobile robots and to using these objects for improved self-localization. Moving objects are commonly modeled in a robot-centric coordinate frame, because this information is sufficient for most low-level robot control and is independent of the quality of the current robot localization. For multiple robots to cooperate and share information, though, they need to agree on a global, allocentric frame of reference. When the egocentric object model is transformed into a global one, it inherits the localization error of the robot in addition to the error associated with the egocentric model. We propose using the relation of objects detected in camera images to other objects in the same camera image as a basis for estimating the position of the object in a global coordinate system. The spatial relation of objects with respect to stationary objects (e.g., landmarks) offers several advantages: a) Errors in feature detection are correlated and not assumed independent; furthermore, the error of relative positions of objects within a single camera frame is comparably small. b) The information is independent of robot localization and odometry. c) As a consequence of the above, it provides a highly efficient method for communicating information about a tracked object, and communication can be asynchronous. d) As the modeled object is independent of robot-centric coordinates, its position can be used for self-localization of the observing robot. We present experimental evidence showing how two robots are able to infer the position of an object within a global frame of reference even though they are not localized themselves, and then use this object information for self-localization.
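As a minimal geometric sketch of the landmark-relative idea (an assumption for illustration, not the authors' exact formulation): if a robot observes two landmarks with known global positions and an object in the same camera frame, the object's global position follows from the rigid transform between the robot-relative and global frames, without the robot ever localizing itself.

```python
# Illustrative sketch: recover an object's global position from its
# position relative to two known landmarks seen in the same image.
import math

def object_to_global(l1_rel, l2_rel, obj_rel, l1_glob, l2_glob):
    """Map robot-relative object coordinates to the global frame using
    two landmarks observed in the same camera frame."""
    # Frame rotation from the landmark-to-landmark direction.
    a_rel = math.atan2(l2_rel[1] - l1_rel[1], l2_rel[0] - l1_rel[0])
    a_glob = math.atan2(l2_glob[1] - l1_glob[1], l2_glob[0] - l1_glob[0])
    theta = a_glob - a_rel
    c, s = math.cos(theta), math.sin(theta)
    # Object offset from landmark 1, rotated into the global frame.
    dx = obj_rel[0] - l1_rel[0]
    dy = obj_rel[1] - l1_rel[1]
    return (l1_glob[0] + c * dx - s * dy,
            l1_glob[1] + s * dx + c * dy)
```

Because only the offset of the object from the landmarks matters, the result is unaffected by the robot's own pose estimate, which is exactly why such observations can be communicated between unlocalized robots.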
Abstract-This paper explores how the absence of an expected sensor reading can be used to improve Markov localization. This negative information is usually not used in localization because it yields less information than positive information (i.e., sensing a landmark), and because a sensor often fails to detect a landmark even if it falls within its sensing range. We address these difficulties by carefully modeling the sensor to avoid false negatives. This can also be thought of as adding an additional sensor that detects the absence of an expected landmark. We show how such modeling is done and how it is integrated into Markov localization. In real-world experiments, we demonstrate that a robot is able to localize in positions where it otherwise could not, and we quantify our findings using the entropy of the particle distribution. Exploiting negative information leads to greatly improved localization performance and reactivity.
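The core update can be sketched in a few lines of particle-filter code. This is an illustrative sketch under assumed parameters, not the paper's sensor model: particles that would expect to see the undetected landmark are down-weighted by the modeled detection probability, while all other particles keep their weight.

```python
# Illustrative sketch: incorporating negative information (an expected
# but missing landmark detection) into a particle filter.
import math

def negative_update(particles, weights, landmark, sensing_range, p_detect):
    """Down-weight particles from whose pose the landmark should have
    been detected but was not. p_detect is the assumed probability of
    detecting a landmark inside the sensing range; keeping it below 1
    guards against false negatives."""
    new_w = []
    for (x, y, _theta), w in zip(particles, weights):
        dist = math.hypot(landmark[0] - x, landmark[1] - y)
        if dist <= sensing_range:      # particle expects a detection...
            w *= (1.0 - p_detect)      # ...but none occurred
        new_w.append(w)
    total = sum(new_w)
    return [w / total for w in new_w]
```

In effect, the missing reading acts like an extra sensor: hypotheses near the landmark lose probability mass, concentrating the distribution and lowering its entropy.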
Collision detection in a quadruped robot based on the comparison of sensor readings (actual motion) to actuator commands (intended motion) is described. We show how such incidents can be detected using just the sensor readings from the servo motors of the robot's legs; dedicated range sensors or collision detectors are not used. Comparing motor commands with the actual movement, as sensed by the servos' position sensors, allowed the robot to reliably detect collisions and obstructions. Minor modifications made the system robust enough for the RoboCup domain, enabling it to cope with the arbitrary movements and accelerations apparent in this highly dynamic environment. A sample behavior that utilizes the collision information is outlined. Further emphasis was put on keeping the process of calibration for different robot gaits simple and manageable.
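The comparison of intended and actual motion can be sketched as follows (an illustrative simplification, not the paper's calibrated detector; the threshold would be tuned per gait, as the abstract notes):

```python
# Illustrative sketch: flag a collision when commanded joint angles and
# the angles reported by the servos' position sensors diverge too much.
def detect_collision(commanded, sensed, threshold):
    """Return True if the mean absolute discrepancy between the
    commanded and sensed joint angles exceeds a gait-specific threshold."""
    error = sum(abs(c - s) for c, s in zip(commanded, sensed)) / len(commanded)
    return error > threshold
```

A small discrepancy is normal servo lag; a large, sustained one indicates that a leg is blocked by an obstacle or another robot, which a behavior can then react to.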
Modern cars are equipped with a variety of sensors, advanced driver assistance systems, and user interfaces. To benefit from these systems and to optimally support the driver in monitoring and decision making, efficient human-machine interfaces play an important role. This paper describes the second release of iDriver, an iPad software solution developed to navigate and remote-control autonomous cars and to give access to live sensor data as well as useful information about the car state, such as current speed, engine state, and gear state. The software was used and evaluated in our two fully autonomous research cars "Spirit of Berlin" and "Made in Germany".