This paper reports on AnnieWAY, an autonomous vehicle that is capable of driving through urban scenarios and that successfully entered the finals of the 2007 DARPA Urban Challenge competition. After describing the main challenges posed by the competition and the major hardware components, we outline the underlying software structure and focus on selected algorithms. Environmental perception mainly relies on a recent laser scanner that delivers both range and reflectivity measurements. Whereas range measurements are used to provide three-dimensional scene geometry, measuring reflectivity allows for robust lane marker detection. Mission and maneuver planning is conducted using a hierarchical state machine that generates behavior in accordance with California traffic laws. We conclude with a report of the results achieved during the competition.
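The hierarchical state machine mentioned for mission and maneuver planning can be illustrated with a small sketch. The state names, two-level hierarchy, and transition below are hypothetical assumptions made for illustration; they are not AnnieWAY's actual behavior logic.

```python
# Minimal sketch of a hierarchical state machine for maneuver selection.
# State names and the triggering event are illustrative, not AnnieWAY's.

class State:
    def __init__(self, name, substates=None):
        self.name = name
        self.substates = substates or {}
        self.active_substate = None

    def enter(self, initial=None):
        """Activate a named substate (if any) when this state becomes active."""
        if initial and initial in self.substates:
            self.active_substate = self.substates[initial]

    def path(self):
        """Return the chain of currently active states, top to bottom."""
        if self.active_substate is None:
            return [self.name]
        return [self.name] + self.active_substate.path()


# A tiny two-level hierarchy: top-level driving modes, each with maneuvers.
follow_lane = State("FollowLane")
stop_at_line = State("StopAtStopLine")
drive = State("Drive", {"FollowLane": follow_lane, "StopAtStopLine": stop_at_line})
intersection = State("Intersection", {"WaitForRightOfWay": State("WaitForRightOfWay")})
root = State("Root", {"Drive": drive, "Intersection": intersection})

root.enter("Drive")
drive.enter("FollowLane")
print(" -> ".join(root.path()))   # Root -> Drive -> FollowLane

# A perception event (e.g. a detected stop line) switches the active maneuver
# while the parent driving mode stays unchanged.
drive.enter("StopAtStopLine")
print(" -> ".join(root.path()))   # Root -> Drive -> StopAtStopLine
```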
We present rosbridge, a middleware abstraction layer that provides robotics technology with a standard, minimalist application development framework accessible to application programmers who are not themselves roboticists. Rosbridge provides simple, socket-based programmatic access to robot interfaces and algorithms provided (for now) by ROS, the open-source "Robot Operating System" and the current state of the art in robot middleware. In particular, it facilitates the use of web technologies such as Javascript to broaden the use and usefulness of robotic technology. We demonstrate potential applications in interface design, education, human-robot interaction, and remote laboratory environments.
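As a rough illustration of the socket-plus-JSON access pattern described above, the sketch below sends JSON-encoded messages to a rosbridge-style server from Python. The port number and the message fields ("op", "topic", "msg") are assumptions for illustration and are not taken from the paper's protocol specification.

```python
# Sketch of socket-based, JSON-message access to a robot middleware,
# in the spirit of rosbridge. The endpoint and message schema below are
# illustrative assumptions, not the documented rosbridge protocol.
import json
import socket

HOST, PORT = "localhost", 9090  # assumed rosbridge-style server endpoint

def send_message(sock, message):
    """Serialize a dict as JSON and send it over the socket."""
    sock.sendall((json.dumps(message) + "\n").encode("utf-8"))

with socket.create_connection((HOST, PORT)) as sock:
    # Ask the server to relay messages from a topic (hypothetical schema).
    send_message(sock, {"op": "subscribe", "topic": "/odom"})
    # Publish a velocity command through the same channel.
    send_message(sock, {"op": "publish",
                        "topic": "/cmd_vel",
                        "msg": {"linear": {"x": 0.2}, "angular": {"z": 0.0}}})
```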
In this article we investigate the representation and acquisition of Semantic Object Maps (SOMs) that can serve as information resources for autonomous service robots performing everyday manipulation tasks in kitchen environments. These maps provide the robot with information about its operation environment that enables it to perform fetch-and-place tasks more efficiently and reliably. To this end, the semantic object maps can answer queries such as the following: "What do parts of the kitchen look like?", "How can a container be opened and closed?", "Where do objects of daily use belong?", "What is inside of cupboards/drawers?", etc.

The semantic object maps presented in this article, which we call SOM+, extend the first generation of SOMs presented by Rusu et al. [1] in that the representation of SOM+ is designed more thoroughly and that SOM+ also include knowledge about the appearance and articulation of furniture objects. Also, the acquisition methods for SOM+ substantially advance those developed in [1] in that SOM+ are acquired autonomously and with low-cost (Kinect) sensors instead of very accurate (laser-based) 3D sensors. In addition, the perception methods are more general and are demonstrated to work in different kitchen environments.

I. INTRODUCTION

Robots that do not know where objects are have to search for them. Robots that do not know how objects look have to guess whether they have fetched the right one. Robots that do not know the articulation models of drawers and cupboards have to open them very carefully in order not to damage them. Thus, robots should store and maintain knowledge about their environment that enables them to perform their tasks more reliably and efficiently. We call the collection of this knowledge the robot's maps and consider maps to be models of the robot's operation environment that serve as information resources for better task performance.

Robots build environment maps for many purposes. Most robot maps so far have been proposed for navigation. Robot maps for navigation enable robots to estimate their position in the environment, to check the reachability of the destination, and to compute navigation plans. Depending on their purpose, maps have to store different kinds of information in different forms. Maps might represent the occupancy of the environment in 2D or 3D grid cells, they might contain landmarks, or they might represent the topological structure of the environment. They might model objects of daily use as well as indoor, outdoor, underwater, extraterrestrial, and aerial environments.
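To make the kind of queries listed above concrete, here is a minimal, hypothetical sketch of a semantic object map as a queryable data structure. The class names, fields, and articulation encoding are illustrative assumptions and not the SOM+ representation described in the paper.

```python
# Illustrative sketch of a semantic object map as a queryable data structure.
# The classes and fields are hypothetical, not the SOM+ representation itself.
from dataclasses import dataclass, field

@dataclass
class ArticulationModel:
    kind: str              # e.g. "prismatic" (drawer) or "rotational" (cupboard door)
    handle_pose: tuple     # 3D position of the handle in the map frame
    opening_range: float   # metres for prismatic joints, radians for rotational

@dataclass
class FurnitureObject:
    name: str
    articulation: ArticulationModel
    contents: list = field(default_factory=list)  # object classes stored inside

class SemanticObjectMap:
    def __init__(self):
        self.furniture = {}

    def add(self, obj: FurnitureObject):
        self.furniture[obj.name] = obj

    def where_does_it_belong(self, object_class: str):
        """Answer 'where do objects of daily use belong?' by content lookup."""
        return [f.name for f in self.furniture.values() if object_class in f.contents]

    def how_to_open(self, furniture_name: str) -> ArticulationModel:
        """Answer 'how can a container be opened?' via its articulation model."""
        return self.furniture[furniture_name].articulation

som = SemanticObjectMap()
som.add(FurnitureObject("left_drawer",
                        ArticulationModel("prismatic", (0.8, 1.2, 0.7), 0.45),
                        contents=["cutlery"]))
print(som.where_does_it_belong("cutlery"))   # ['left_drawer']
print(som.how_to_open("left_drawer").kind)   # 'prismatic'
```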
In this paper, we describe a remote lab system that allows remote groups to access a shared PR2. This lab will enable a larger and more diverse group of researchers to participate directly in state-of-the-art robotics research and will improve the reproducibility and comparability of robotics experiments. We identify a set of requirements that apply to all web-based remote laboratories and focus on solutions to these requirements. Specifically, we present solutions to interface, control, and design difficulties in the client- and server-side software when implementing a remote laboratory architecture. The combination of shared physical hardware and shared middleware software allows for experiments that build upon and compare against results on the same platform and in the same environment for common tasks. We describe how researchers can interact with the PR2 and its environment remotely through a web interface, and how they can develop similar interfaces to visualize and run experiments.
We describe our efforts to create infrastructure that enables web interfaces for robotics. Such interfaces will enable researchers and users to remotely access robots through the internet, as well as expand the types of robotic applications available to users with web-enabled devices. This paper centers on rosjs, a lightweight Javascript binding for ROS, Willow Garage's robot middleware framework. rosjs exposes many of the capabilities of ROS, allowing application developers to write controllers that are executed through a web browser. We discuss how rosjs extends ROS and briefly review some of the features it provides. rosjs has been instrumental in the creation of remote laboratories featuring the iRobot Create and the PR2. These facilities will be available to the community as experimental resources. We describe the overall goals of this project and provide a brief description of how rosjs was used to help create web interfaces for these facilities.
Robot manipulator designs are increasingly focused on low-cost approaches, especially those envisioned for use in unstructured environments such as households, office spaces, and hazardous environments. The cost of angular sensors varies based on the precision offered, and for tasks in these environments, millimeter-order manipulation errors are unlikely to cause a drastic reduction in performance. In this paper, we estimate the joint angles of a manipulator using low-cost triaxial accelerometers by taking the difference between consecutive acceleration vectors. The accelerometer-based angle is compensated with a uniaxial gyroscope using a complementary filter to give robust measurements. Three compensation strategies are compared: a complementary filter, a time-varying complementary filter, and an extended Kalman filter. This sensor setup can also accurately track the joint angle even when the joint axis is parallel to gravity and the accelerometer data does not provide useful information. In order to analyze this strategy, accelerometers and gyroscopes were mounted on one arm of a PR2 robot. The arm was manually moved smoothly through different trajectories in its workspace while the joint angle readings from the on-board optical encoders were compared against the joint angle estimates from the accelerometers and gyroscopes. The low-cost angle estimation strategy has a mean error of 1.3° over the three estimated joints, resulting in mean end-effector position errors of 6.1 mm or less. This system provides an effective angular measurement as an alternative to high-precision encoders in low-cost manipulators and as a redundant measurement for safety in other manipulators.
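The fusion strategy described above can be sketched in a few lines: an angle derived from the pair of acceleration vectors is blended with the gyro-propagated angle by a complementary filter. The filter gain, the angle-from-acceleration step, and the sample values below are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of a complementary filter fusing an accelerometer-derived
# joint angle with an integrated gyroscope rate. Gains, the angle-from-
# acceleration step, and the sample readings are illustrative assumptions.
import numpy as np

def angle_from_accels(a_proximal, a_distal):
    """Estimate the joint angle as the angle between acceleration vectors
    measured on the two adjacent links (assumed approach)."""
    cos_theta = np.dot(a_proximal, a_distal) / (
        np.linalg.norm(a_proximal) * np.linalg.norm(a_distal))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

def complementary_filter(theta_prev, gyro_rate, theta_accel, dt, alpha=0.98):
    """Blend the gyro-propagated angle (smooth, drifts slowly) with the
    accelerometer angle (noisy, but drift-free)."""
    return alpha * (theta_prev + gyro_rate * dt) + (1.0 - alpha) * theta_accel

# Example update at 100 Hz with made-up sensor readings.
dt = 0.01
theta = 0.0
a_proximal = np.array([0.0, 0.3, 9.7])  # accelerometer on the proximal link (m/s^2)
a_distal = np.array([0.0, 4.8, 8.5])    # accelerometer on the distal link (m/s^2)
gyro_rate = 0.05                        # measured rate about the joint axis (rad/s)
theta = complementary_filter(theta, gyro_rate,
                             angle_from_accels(a_proximal, a_distal), dt)
print(f"estimated joint angle: {np.degrees(theta):.1f} deg")
```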