Abstract-The Cloud infrastructure and its extensive set of Internet-accessible resources have the potential to provide significant benefits to robots and automation systems. We consider robots and automation systems that rely on data or code from a network to support their operation, i.e., where not all sensing, computation, and memory is integrated into a standalone system. This survey is organized around four potential benefits of the Cloud: 1) Big Data: access to libraries of images, maps, trajectories, and descriptive data; 2) Cloud Computing: access to parallel grid computing on demand for statistical analysis, learning, and motion planning; 3) Collective Robot Learning: robots sharing trajectories, control policies, and outcomes; and 4) Human Computation: use of crowdsourcing to tap human skills for analyzing images and video, classification, learning, and error recovery. The Cloud can also improve robots and automation systems by providing access to: a) datasets, publications, models, benchmarks, and simulation tools; b) open competitions for designs and systems; and c) open-source software. This survey includes over 150 references on results and open challenges. A website with new developments and updates is available at: http://goldberg.berkeley.edu/cloud-robotics/

Note to Practitioners-Most robots and automation systems still operate independently using onboard computation, memory, and programming. Emerging advances and the increasing availability of networking in the "Cloud" suggest new approaches where processing is performed remotely, with access to dynamic global datasets, to support a range of functions. This paper surveys research to date.
Abstract-Rapidly expanding Internet resources and wireless networking have the potential to liberate robots and automation systems from limited onboard computation, memory, and software. "Cloud Robotics" describes an approach that recognizes the wide availability of networking and incorporates open-source elements to greatly extend earlier concepts of "Online Robots" and "Networked Robots". In this paper, we consider how cloud-based data and computation can facilitate 3D robot grasping. We present a system architecture, implemented prototype, and initial experimental data for a cloud-based robot grasping system that incorporates a Willow Garage PR2 robot with onboard color and depth cameras, Google's proprietary object recognition engine, the Point Cloud Library (PCL) for pose estimation, Columbia University's GraspIt! toolkit and OpenRAVE for 3D grasping, and our prior approach to sampling-based grasp analysis to address uncertainty in pose. We report data from experiments in recognition (a recall rate of 80% for the objects in our test set), pose estimation (failure rate under 14%), and grasping (failure rate under 23%), and initial results on recall and false positives in larger data sets using confidence measures.
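The recall and false-positive trade-off described above can be sketched as a simple confidence-threshold evaluation. This is an illustrative toy, not the paper's pipeline; the `evaluate` helper and the sample detections are assumptions for demonstration.

```python
# Hypothetical sketch: scoring recognition results at a confidence threshold.
# Each detection is (confidence, is_correct); data below is illustrative only.

def evaluate(detections, threshold):
    """Return (recall, false_positives) for detections at or above threshold.

    Recall = correct detections accepted / all correct detections.
    """
    accepted = [d for d in detections if d[0] >= threshold]
    true_pos = sum(1 for conf, ok in accepted if ok)
    false_pos = sum(1 for conf, ok in accepted if not ok)
    total_correct = sum(1 for conf, ok in detections if ok)
    recall = true_pos / total_correct if total_correct else 0.0
    return recall, false_pos

dets = [(0.9, True), (0.8, True), (0.6, False), (0.4, True), (0.3, False)]
print(evaluate(dets, 0.5))  # raising the threshold trades recall for fewer false positives
```

Sweeping the threshold over a held-out set is one way to pick an operating point for larger data sets, as the abstract's confidence-measure experiments suggest.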
Abstract-Automating repetitive surgical subtasks such as suturing, cutting, and debridement can reduce surgeon fatigue and procedure times and facilitate supervised tele-surgery. Programming is difficult because human tissue is deformable and highly specular. Using the da Vinci Research Kit (DVRK) robotic surgical assistant, we explore a "Learning By Observation" (LBO) approach where we identify, segment, and parameterize sub-trajectories ("surgemes") and sensor conditions to build a finite state machine (FSM) for each subtask. The robot then executes the FSM repeatedly to tune parameters and, if necessary, update the FSM structure. We evaluate the approach on two surgical subtasks: debridement of 3D Viscoelastic Tissue Phantoms (3d-DVTP), in which small target fragments are removed from a 3D viscoelastic tissue phantom, and Pattern Cutting of 2D Orthotropic Tissue Phantoms (2d-PCOTP), a step in the standard Fundamentals of Laparoscopic Surgery training suite, in which a specified circular area must be cut from a sheet of orthotropic tissue phantom. We describe the approach and physical experiments, which yielded a success rate of 96% for 50 trials of the 3d-DVTP subtask and 70% for 20 trials of the 2d-PCOTP subtask.
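The surgeme-FSM idea above can be illustrated with a minimal state machine whose transitions are driven by sensor observations. This is a toy sketch under assumed names (`run_fsm`, the "grasp retry" subtask), not the authors' DVRK implementation.

```python
# Illustrative sketch (not the authors' code): a finite state machine whose
# states run parameterized surgemes and whose transitions depend on sensing.

def run_fsm(states, transitions, start, sensor, max_steps=20):
    """states: name -> surgeme function(sensor) -> observation.
    transitions: (state, observation) -> next state; 'done' terminates."""
    state, trace = start, []
    for _ in range(max_steps):
        trace.append(state)
        if state == "done":
            return trace
        obs = states[state](sensor)
        state = transitions[(state, obs)]
    return trace

# Toy debridement-like subtask: approach, grasp (retry on a miss), retract.
def grasp(s):
    s["attempts"] += 1
    return "held" if s["attempts"] >= 2 else "missed"

states = {"approach": lambda s: "at_target", "grasp": grasp,
          "retract": lambda s: "clear"}
transitions = {("approach", "at_target"): "grasp",
               ("grasp", "missed"): "approach",
               ("grasp", "held"): "retract",
               ("retract", "clear"): "done"}
sensor = {"attempts": 0}
print(run_fsm(states, transitions, "approach", sensor))
```

Repeated execution with parameter tuning, as the abstract describes, would wrap this loop and adjust the surgeme parameters or the transition table between trials.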
Abstract-Robotic surgical assistants (RSAs) enable surgeons to perform delicate and precise minimally invasive surgery. Currently these devices are primarily controlled by surgeons in a local tele-operation (master-slave) mode. Introducing autonomy of surgical sub-tasks has the potential to assist surgeons, reduce fatigue, and facilitate supervised autonomy for remote tele-surgery. This paper considers the sub-task of surgical debridement: removing dead or damaged tissue fragments to allow the remaining healthy tissue to heal. We present an implemented automated surgical debridement system that uses the Raven, an open-architecture surgical robot with two cable-driven 7 DOF arms. Our system combines stereo vision for 3D perception, trajopt, an optimization-based motion planner, and model predictive control (MPC). Experiments with autonomous sensing, grasping, and removal of over 100 fragments suggest that it is possible for an autonomous surgical robot to achieve robustness comparable to human levels for a surgically-relevant subtask, although for our current implementation, execution time is 2-3× slower than human levels, primarily due to replanning times for MPC. This paper provides three contributions: (i) introducing debridement as a surgically-relevant sub-task for robotics, (ii) designing and implementing an autonomous multilateral surgical debridement system that uses both arms of the Raven surgical robot, and (iii) providing experimental data that highlights the importance of accurate state estimation for future research.
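The MPC-style replanning that the abstract identifies as the main time cost can be sketched as a receding-horizon loop: at each step, replan from the latest state estimate and execute one control. The 1-D dynamics, noise model, and function names below are toy assumptions, not the Raven/trajopt system.

```python
import random

# Illustrative sketch (not the authors' system): a receding-horizon loop in
# the spirit of MPC, replanning toward a target from the latest state estimate.

def mpc_step(state, target, max_step=0.5):
    """Greedy one-step-horizon plan: move toward target, saturated at max_step."""
    error = target - state
    return max(-max_step, min(max_step, error))

def run_mpc(start, target, noise, steps=50, tol=0.05, seed=0):
    rng = random.Random(seed)
    state = start
    for t in range(steps):
        if abs(target - state) < tol:
            return state, t
        u = mpc_step(state, target)
        state += u + rng.gauss(0, noise)  # execution noise forces replanning
    return state, steps

state, iters = run_mpc(0.0, 3.0, noise=0.05)
print(round(state, 3), iters)
```

Each iteration here corresponds to a full replan in the real system, which is why replanning latency dominates execution time when many corrective steps are needed.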
Abstract-Precise control of industrial automation systems with non-linear kinematics due to joint elasticity, variation in cable tensioning, or backlash is challenging, especially in systems that can only be controlled through an interface with an imprecise internal kinematic model. Cable-driven Robotic Surgical Assistants (RSAs) are one example of such an automation system, as they are designed for master-slave teleoperation. We consider the problem of learning a function to modify commands to the inaccurate control interface such that executing the modified command on the system results in a desired state. To achieve this, we must learn a mapping that accounts for the non-linearities in the kinematic chain that are not accounted for by the system's internal model. Gaussian Process Regression (GPR) is a data-driven technique that can estimate this non-linear correction in a task-specific region of state space, but it is sensitive to corruption of training examples due to partial occlusion or lighting changes. In this paper, we extend the use of GPR to learn a non-linear correction for cable-driven surgical robots by using i) velocity as a feature in the regression and ii) removing corrupted training observations based on rotation limits and the magnitude of velocity. We evaluate this approach on the Raven II Surgical Robot on the task of grasping foam "damaged tissue" fragments, using the PhaseSpace LED-based motion capture system to track the Raven end-effector. Our main result is a reduction in the norm of the mean position error from 2.6 cm to 0.2 cm and the norm of the mean angular error from 20.6 degrees to 2.8 degrees when correcting commands for a set of held-out trajectories. We also use the learned mapping to achieve a 3.8× speedup over past results on the task of autonomous surgical debridement. Further information on this research, including data, code, photos, and video, is available at http://rll.berkeley.edu/surgical.
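The two ingredients named in the abstract, velocity as a regression feature and velocity-based filtering of corrupted observations, can be sketched with a minimal GPR implementation. The RBF kernel, the 1-DOF toy error model, and all thresholds below are illustrative assumptions, not the paper's actual features or data.

```python
import numpy as np

# Illustrative sketch (not the authors' implementation): GPR with an RBF
# kernel, regressing the command error from (position, velocity) features,
# after dropping samples whose velocity magnitude suggests corrupted tracking.

def rbf(A, B, ell=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def gpr_fit(X, y, noise=1e-3):
    K = rbf(X, X) + noise * np.eye(len(X))
    return np.linalg.solve(K, y)          # kernel weights alpha

def gpr_predict(Xtrain, alpha, Xtest):
    return rbf(Xtest, Xtrain) @ alpha

# Toy 1-DOF data: commanded position x at velocity v; the "true" unmodeled
# error is 0.1*sin(x) + 0.05*v (an assumption for this sketch).
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 60)
v = rng.uniform(-1, 1, 60)
err = 0.1 * np.sin(x) + 0.05 * v
speed_ok = np.abs(v) < 0.9               # filter implausibly fast samples
X = np.column_stack([x, v])[speed_ok]
alpha = gpr_fit(X, err[speed_ok])

pred = gpr_predict(X, alpha, np.array([[0.5, 0.2]]))[0]
print(pred)
```

At runtime, the predicted error would be subtracted from each command before it is sent to the imprecise control interface.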
Computing grasps for an object is challenging when the object geometry is not known precisely. In this paper, we explore the use of Gaussian process implicit surfaces (GPISs) to represent shape uncertainty from RGBD point cloud observations of objects. We study the use of GPIS representations to select grasps on previously unknown objects, measuring grasp quality by the probability of force closure. Our main contribution is GP-GPIS-OPT, an algorithm for computing grasps for parallel-jaw grippers on 2D GPIS object representations. Specifically, our method optimizes an approximation to the probability of force closure subject to antipodal constraints on the parallel jaws using Sequential Convex Programming (SCP). We also introduce GPIS-Blur, a method for visualizing 2D GPIS models based on blending shape samples from a GPIS. We test the algorithm on a set of 8 planar objects with transparency, translucency, and specularity. Our experiments suggest that GP-GPIS-OPT computes grasps with higher probability of force closure than a planner that does not consider shape uncertainty on our test objects and may converge to a grasp plan up to 5.7× faster than using Monte-Carlo integration, a common method for grasp planning under shape uncertainty. Furthermore, initial experiments on the Willow Garage PR2 robot suggest that grasps selected with GP-GPIS-OPT are up to 90% more successful than those planned assuming a deterministic shape. Our dataset, code, and videos of our experiments are available at http://rll.berkeley.edu/icra2015grasping/.
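The GPIS representation at the heart of the abstract can be sketched as a GP regressing signed distance from noisy observations: the posterior mean's zero level set estimates the contour, and the posterior variance quantifies shape uncertainty. The kernel, rings of samples, and helper names below are assumptions for illustration, not the paper's models.

```python
import numpy as np

# Hypothetical sketch: a 2D Gaussian process implicit surface (GPIS).
# The GP regresses signed distance d(x); mean ~ 0 marks the surface estimate,
# and posterior variance grows away from the observed point cloud.

def rbf(A, B, ell=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def gpis(X, d, noise=1e-2):
    K = rbf(X, X) + noise * np.eye(len(X))
    Kinv = np.linalg.inv(K)

    def posterior(Xq):
        Kq = rbf(Xq, X)
        mean = Kq @ Kinv @ d
        var = 1.0 - np.einsum("ij,jk,ik->i", Kq, Kinv, Kq)
        return mean, var

    return posterior

# Observations of the unit circle: d = ||x|| - 1, sampled on and off surface.
theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)
ring = np.column_stack([np.cos(theta), np.sin(theta)])
X = np.vstack([ring, 1.3 * ring, 0.7 * ring])
d = np.concatenate([np.zeros(8), 0.3 * np.ones(8), -0.3 * np.ones(8)])

post = gpis(X, d)
mean, var = post(np.array([[1.0, 0.0], [0.0, 0.0], [2.0, 2.0]]))
# Near the surface the mean is ~0; far from the data the variance grows.
```

A grasp optimizer like GP-GPIS-OPT would consume this posterior, e.g. penalizing jaw placements where the shape variance makes force closure uncertain.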
Abstract-This paper explores how Cloud Computing can facilitate grasping with shape uncertainty. We consider the most common robot gripper: a pair of thin parallel jaws, and a class of objects that can be modeled as extruded polygons. We model a conservative class of push-grasps that can enhance object alignment. The grasp planning algorithm takes as input an approximate object outline and Gaussian uncertainty around each vertex and center of mass. We define a grasp quality metric based on a lower bound on the probability of achieving force closure. We present a highly-parallelizable algorithm to compute this metric using Monte Carlo sampling. The algorithm uses Coulomb frictional grasp mechanics and a fast geometric test for conservative conditions for force closure. We run the algorithm on a set of sample shapes and compare the grasps with those from a planner that does not model shape uncertainty. We report computation times with single and multi-core computers and sensitivity analysis on algorithm parameters. We also describe physical grasp experiments using the Willow Garage PR2 robot.
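The Monte Carlo quality metric described above can be sketched for the simplest case: two contacts under Gaussian position uncertainty, with the classic antipodal test (the line between contacts must lie inside both Coulomb friction cones). This simplification, with fixed normals and toy geometry, is an assumption of the sketch, not the paper's full push-grasp mechanics.

```python
import numpy as np

# Illustrative sketch (simplified): Monte Carlo estimate of the probability
# of force closure for a parallel-jaw grasp under Gaussian contact uncertainty.

def antipodal_force_closure(p1, n1, p2, n2, mu):
    """Classic two-contact test: contact line inside both friction cones."""
    line = p2 - p1
    line = line / np.linalg.norm(line)
    half_angle = np.arctan(mu)              # friction cone half-angle
    ang1 = np.arccos(np.clip(np.dot(line, n1), -1.0, 1.0))
    ang2 = np.arccos(np.clip(np.dot(-line, n2), -1.0, 1.0))
    return ang1 <= half_angle and ang2 <= half_angle

def prob_force_closure(p1, n1, p2, n2, mu, sigma, n_samples=2000, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_samples):
        q1 = p1 + sigma * rng.standard_normal(2)   # perturb contact points
        q2 = p2 + sigma * rng.standard_normal(2)
        hits += antipodal_force_closure(q1, n1, q2, n2, mu)
    return hits / n_samples

# Nominal antipodal grasp on opposite faces of a part.
p1, n1 = np.array([-1.0, 0.0]), np.array([1.0, 0.0])
p2, n2 = np.array([1.0, 0.0]), np.array([-1.0, 0.0])
print(prob_force_closure(p1, n1, p2, n2, mu=0.5, sigma=0.1))
```

Because each sample is independent, this estimator parallelizes trivially across cores or Cloud workers, which is the property the abstract exploits.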
Abstract-We explore setting bounds on part tolerances based on an adaptive Cloud-based algorithm that estimates lower bounds on the probability of achieving force closure during grasping. We consider the most common robot gripper: a pair of thin parallel jaws, and a conservative class of push-grasps allowing slip that can enhance part alignment for parts that can be modeled as extruded polygons. The grasp analysis algorithm takes as input a set of candidate grasps and perturbations of a nominal part shape. We define a grasp quality metric based on a lower bound on the probability of achieving force closure. We present two extensions to our previous highly-parallelizable algorithm that adaptively reduce the number of grasp evaluations and improve the lower bound by including slip. We develop a procedure for finding the effect of increasing tolerance in vertices on grasp quality, which allows part tolerances to be bounded to ensure minimum grasp quality levels. We find that including slip improves grasp quality estimates by 16%, and our adaptive extension reduces grasp evaluations by 91.5% while maintaining 92.6% of grasp quality.
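One way to realize the adaptive reduction in grasp evaluations is early pruning: stop sampling a candidate grasp once a confidence bound shows it cannot beat the best quality seen so far. The Hoeffding-bound rule and toy Bernoulli quality model below are a hypothetical stand-in for the paper's adaptive extension.

```python
import math
import random

# Hypothetical sketch of the adaptive idea: prune a candidate grasp once a
# Hoeffding bound shows its force-closure probability cannot beat the best.

def adaptive_best_grasp(candidates, sample_quality, budget=500, delta=0.05):
    """candidates: grasp ids; sample_quality(g) -> Bernoulli force-closure draw."""
    best, best_mean, evals = None, -1.0, 0
    for g in candidates:
        total, n = 0, 0
        for _ in range(budget):
            total += sample_quality(g)
            n += 1
            evals += 1
            bound = math.sqrt(math.log(2 / delta) / (2 * n))
            if total / n + bound < best_mean:   # provably worse: prune early
                break
        if total / n > best_mean:
            best, best_mean = g, total / n
    return best, best_mean, evals

random.seed(1)
true_p = {"g0": 0.9, "g1": 0.3, "g2": 0.5}   # toy force-closure probabilities
best, q, evals = adaptive_best_grasp(list(true_p),
                                     lambda g: random.random() < true_p[g])
print(best, round(q, 2), evals)  # weak grasps are pruned well before the full budget
```

Compared with evaluating every candidate for the full budget (1500 draws here), pruning spends most samples on the contenders, mirroring the large reduction in evaluations the abstract reports.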