We present a human-robot interface (HRI) for semi-autonomous, human-in-the-loop control that aims to tackle some of the challenges robotics faces in unstructured environments. Our HRI lets the user specify desired object alignments as geometric overlays on images in an image editor. The interface is based on the technique of visual task specification [1], which provides a well-studied theoretical framework. Tasks are completed using uncalibrated image-based visual servoing (UVS). Our interface is shown to be effective for a versatile set of tasks spanning both coarse and fine manipulation: inserting a marker into its cap, inserting a small cube into a shape sorter, grasping a circular lid, following a line, grasping a screw, cutting along a line, picking and placing a box, and grasping a cylinder, all using a Barrett WAM arm and hand.
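As a rough illustration of the uncalibrated visual servoing loop referenced above, the sketch below estimates the image Jacobian online with a Broyden rank-one update and drives the feature error to zero with a damped pseudo-inverse control law. This is a minimal, generic UVS sketch, not the authors' implementation; all function names, gains, and the update rule's constants are illustrative assumptions.

```python
import numpy as np

def broyden_update(J, dq, ds, alpha=0.1):
    """Rank-one Broyden update of the estimated image Jacobian J.

    dq: observed change in joint angles, ds: observed change in image
    features. No camera or robot calibration is required.
    """
    denom = dq @ dq
    if denom > 1e-9:
        J = J + alpha * np.outer(ds - J @ dq, dq) / denom
    return J

def uvs_step(J, s, s_star, gain=0.5):
    """One servoing step: reduce the feature error e = s - s_star using
    the pseudo-inverse of the current Jacobian estimate."""
    e = s - s_star
    return -gain * np.linalg.pinv(J) @ e  # commanded joint velocity
```

In such a loop, the overlays specified in the image editor would supply the goal features s_star, and the Jacobian estimate is refined after every motion from the observed (dq, ds) pair.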
Long-term metric self-localization is an essential capability of autonomous mobile robots, but it remains challenging for vision-based systems due to appearance changes caused by lighting, weather, or seasonal variations. While experience-based mapping has proven to be an effective technique for bridging the 'appearance gap,' the number of experiences required for reliable metric localization over days or months can be very large, and methods for reducing the number of necessary experiences are needed for this approach to scale. Taking inspiration from color constancy theory, we learn a nonlinear RGB-to-grayscale mapping that explicitly maximizes the number of inlier feature matches for images captured under different lighting and weather conditions, and we use it as a pre-processing step in a conventional single-experience localization pipeline to improve its robustness to appearance change. We train this mapping by approximating the target non-differentiable localization pipeline with a deep neural network, and we find that incorporating a learned low-dimensional context feature can further improve cross-appearance feature matching. Using synthetic and real-world datasets, we demonstrate substantial improvements in localization performance across day-night cycles, enabling continuous metric localization over a 30-hour period using a single mapping experience and allowing experience-based localization to scale to long deployments with dramatically reduced data requirements.
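A minimal sketch of the pre-processing idea follows: a learned pointwise nonlinear RGB-to-grayscale mapping, optionally conditioned on a low-dimensional per-image context feature, whose output replaces standard grayscale conversion before feature extraction. The architecture, layer sizes, and context dimension here are assumptions for illustration, not the authors' exact model.

```python
import torch
import torch.nn as nn

class RGBToGray(nn.Module):
    """Pointwise nonlinear RGB-to-grayscale mapping with an optional
    learned context feature (dimensions are illustrative)."""

    def __init__(self, context_dim=8, hidden=32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + context_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, rgb, context):
        # rgb: (H, W, 3) in [0, 1]; context: (context_dim,) per image.
        h, w, _ = rgb.shape
        ctx = context.expand(h, w, -1)        # broadcast context per pixel
        x = torch.cat([rgb, ctx], dim=-1)
        return self.mlp(x).squeeze(-1)        # (H, W) grayscale image
```

Because the downstream localization pipeline is non-differentiable, training such a mapping end-to-end requires a differentiable surrogate, which is the role the paper assigns to the approximating deep network.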
We introduce image-based visual servoing (IBVS) into a shared-autonomy grasping system to improve its performance. Visual servoing is a technique that uses visual input to control a dynamic system, such as a robot. Autonomous grasp planning computes stable grasps, simplifying the user's control of the robot hand to a single degree of freedom (DOF), open and close, rather than control of every finger. Visual servoing then improves execution of the planned grasp by using visual feedback to move some of the robot fingers to their grasp points. In this paper, we detail what we have accomplished, the challenges we have faced, and what we have learned from them throughout the development of our system.
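One plausible form of this shared-autonomy loop is sketched below: the user issues a single open/close command while per-finger IBVS corrections nudge each finger toward the image location of its planned grasp point. The blending scheme, data layout, and function names are illustrative assumptions, not the system described in the paper.

```python
import numpy as np

def finger_velocity(J_finger, feat, feat_goal, gain=0.3):
    """IBVS correction for one finger: move so its tracked image feature
    approaches the image location of its planned grasp point."""
    e = feat - feat_goal
    return -gain * np.linalg.pinv(J_finger) @ e

def shared_autonomy_step(user_cmd, fingers, gain_user=1.0):
    """Blend the 1-DOF user open/close command with per-finger IBVS.

    user_cmd: scalar in [-1, 1] (close/open). fingers: list of dicts,
    each with an image Jacobian "J", tracked feature "feat", goal
    feature "goal", and a joint-space closing direction.
    """
    commands = []
    for f in fingers:
        dq_servo = finger_velocity(f["J"], f["feat"], f["goal"])
        dq_user = gain_user * user_cmd * f["close_direction"]
        commands.append(dq_user + dq_servo)
    return commands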
The SAE AutoDrive Challenge is a three-year competition to develop a Level 4 autonomous vehicle by 2020. The first set of challenges was held in April 2018 in Yuma, Arizona, where our team (aUToronto/Zeus) placed first. In this paper, we describe our complete system architecture and the specialized algorithms that enabled us to win. We show that it is possible to develop a vehicle with basic autonomy features in just six months by relying on simple, robust algorithms. We do not make use of a prior map; instead, we developed a multi-sensor visual localization solution. All of our algorithms run in real time using CPUs only. We also evaluate the closed-loop performance of our system in detail in several experiments.