Manipulation from a free-flying vehicle has applications in space and undersea teleoperation. Both environments allow a vehicle to move freely in all six degrees of freedom. For many operations, such as inspection and servicing, the ability to manipulate from an undocked teleoperator will be essential. The major contribution of this research is the development of a control algorithm, coordinated control, which allows the simultaneous reduced-order control of a vehicle and attached manipulator. The entire telerobot system is controlled by commanding the end effector inertially with respect to the task. This is accomplished through a unified treatment of the vehicle and manipulator as a single dynamic system, based on considering the free-flying teleoperator as a redundant manipulator. The vehicle controller minimizes fuel expenditure while maintaining a desirable manipulator configuration. The coordinated trajectory algorithm is a blend of two modes: gradient pseudo-inverse trajectory control, which uses both vehicle thrust and manipulator motion, and reaction-compensation trajectory control, which allows the base to react freely to manipulator interaction torques. Blending between these modes occurs as a function of the teleoperator's configuration potential. The potential incorporates kinematic functions such as singularity avoidance, joint limits, and collision avoidance.
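The gradient pseudo-inverse mode described above is an instance of the standard redundancy-resolution scheme: track the commanded end-effector velocity with the Jacobian pseudo-inverse and descend a configuration potential H in the Jacobian's null space. A minimal sketch (the Jacobian, velocities, and gain below are illustrative assumptions, not values from the paper):

```python
import numpy as np

def redundant_rates(J, xdot, grad_H, k=1.0):
    """Rates for a redundant system: pseudo-inverse tracking of the
    end-effector velocity plus a null-space term that descends a
    configuration potential H (e.g. joint limits, singularity avoidance).
    The null-space term moves the configuration without disturbing the task."""
    J_pinv = np.linalg.pinv(J)                     # minimum-norm task solution
    null_proj = np.eye(J.shape[1]) - J_pinv @ J    # projector onto null(J)
    return J_pinv @ xdot - k * null_proj @ grad_H

# Illustrative 2-task, 4-DOF system (vehicle + manipulator freedoms combined).
J = np.array([[1.0, 0.5, 0.0, 0.2],
              [0.0, 1.0, 0.3, 0.1]])
xdot = np.array([0.1, -0.05])                 # commanded end-effector velocity
grad_H = np.array([0.0, 0.2, -0.1, 0.0])      # gradient of configuration potential
qdot = redundant_rates(J, xdot, grad_H)
```

Because the potential gradient is projected onto the null space of J, the end-effector command is tracked exactly: `J @ qdot` equals `xdot` regardless of `grad_H`.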
This paper presents an analysis of stopping distances for an unmanned ground vehicle achievable with selected ladar and stereo video sensors. Based on a stop-to-avoid response to detected obstacles, current passive stereo technology and existing ladars provide equivalent safe driving speeds. Only a proposed high-resolution ladar can detect small (8 inch) obstacles far enough ahead to allow driving speeds in excess of 10 miles per hour. The stopping distance analysis relates safe vehicle velocity to obstacle and sensor pixel sizes.
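The stop-to-avoid relationship above can be sketched in a few lines: total stopping distance is the distance covered during the perception/reaction delay plus the braking distance, and the safe speed is the largest speed whose stopping distance fits within the sensor's detection range. The reaction time and deceleration below are illustrative assumptions (in SI units), not values from the paper:

```python
def stopping_distance(v, reaction_time=0.5, decel=3.0):
    """Stop-to-avoid distance (m): travel during the reaction delay (s)
    plus braking distance at constant deceleration (m/s^2)."""
    return v * reaction_time + v**2 / (2.0 * decel)

def max_safe_speed(detection_range, reaction_time=0.5, decel=3.0):
    """Largest speed (m/s) whose stopping distance fits within the range
    at which the sensor can resolve the obstacle (simple bisection)."""
    lo, hi = 0.0, 100.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if stopping_distance(mid, reaction_time, decel) <= detection_range:
            lo = mid
        else:
            hi = mid
    return lo
```

A sensor that resolves an obstacle at twice the range permits more than twice the safe speed only when the reaction-time term dominates; once braking dominates, safe speed grows roughly with the square root of detection range.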
When teleoperating a robot using video from a remote camera, it is difficult for the operator to gauge depth and orientation from a single view. In addition, there are situations where a camera mounted for viewing by the teleoperator during a teleoperation task may not be able to see the tool tip, or the viewing angle may not be intuitive, requiring extensive training to reduce the risk of incorrect or dangerous moves by the teleoperator. A machine-vision-based teleoperator aid is presented which uses the operator's camera view to compute an object's pose (position and orientation), and then overlays onto the operator's screen information on the object's current and desired positions. The operator can choose to display orientation and translation information as graphics and/or text. This aid provides easily assimilated depth and relative-orientation information to the teleoperator. The camera may be mounted at any known orientation relative to the tool tip. A preliminary experiment with human operators was conducted and showed that task accuracies were significantly greater with than without this aid.

Keywords: Machine Vision, Teleoperation, Telerobotics, Pose Estimation.

INTRODUCTION

Telerobotics has the potential to greatly benefit many space applications by reducing the great cost and hazards associated with manned flight operations. For example, space assembly, maintenance, and inspection tasks can potentially be done remotely using robots instead of extra-vehicular activity (EVA). Teleoperation is an attractive method of controlling such robots due to the availability and maturity of the technology. Unfortunately, using remote camera views degrades the operator's sense of perception as compared to actually having the operator physically on the scene. This paper describes how artificial intelligence (specifically, machine vision) can be used to implement a teleoperator aid that improves the operator's sense of perception.
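Once the object's pose is known, generating the on-screen overlay reduces to projecting the object's 3-D model points through a pinhole camera model. A minimal sketch, assuming a known camera intrinsic matrix and a hypothetical object pose (all numeric values below are illustrative, not from the paper):

```python
import numpy as np

def project_points(points_3d, R, t, K):
    """Project 3-D model points (object frame) into the image plane
    using a pinhole model: x ~ K (R X + t), then perspective divide."""
    cam = (R @ points_3d.T).T + t            # object frame -> camera frame
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]            # pixel coordinates (u, v)

K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])        # assumed intrinsics (640x480)
R = np.eye(3)                                # object aligned with camera axes
t = np.array([0.0, 0.0, 2.0])                # object 2 m in front of the camera
corners = np.array([[-0.1, -0.1, 0.0],
                    [ 0.1, -0.1, 0.0],
                    [ 0.1,  0.1, 0.0],
                    [-0.1,  0.1, 0.0]])      # 20 cm square face of the object
overlay = project_points(corners, R, t, K)   # pixel positions to draw
```

Drawing the projected current pose and the projected desired pose on the same image gives the operator the depth and orientation cues the abstract describes; the gap between the two overlays shrinks to zero as the alignment error vanishes.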
The Problem of Perception in Teleoperation

In this paper, we are concerned with the class of teleoperation tasks that involves placing the end-effector of the robot in a certain pose (position and orientation) relative to some other object in the scene. This class includes most manipulation tasks, since an object must generally be grasped or manipulated in a specific manner, so accurate placement of the end-effector with respect to that object is a required precondition. In addition to manipulation requirements, the end-effector must be moved accurately around the workspace to avoid collisions. We are also concerned with the class of tasks in which the identity, geometry, and appearance of the object to be manipulated are well known in advance, but its location is only approximately known. Finally, we are concerned with tasks where the end-effector must be placed relative to the object with an accuracy more stringent than the initial a priori knowledge of that object's location. This situation is common when the task and environment are fa...
John Spofford, John Blitch, William Klarquist, Robin Murphy
Center for Intelligent Systems, Science Applications International Corp.
Advanced Technology Office, Defense Advanced Research Projects Agency
Computer Science and Engineering, University of South Florida

ABSTRACT

Teams of heterogeneous mobile robots are a key aspect of future unmanned systems for operations in complex and dynamic urban environments, such as that envisioned by DARPA's Tactical Mobile Robotics program. One example of an interaction among such team members is the docking of a small robot of limited sensory and processing capability with a larger, more capable robot. Applications for such docking include the transfer of power, data, and material, as well as physically combined maneuver or manipulation. A two-robot system is considered in this paper. The smaller "throwable" robot contains a video camera capable of imaging the larger "packable" robot and transmitting the imagery. The packable robot can both sense the throwable robot through an onboard camera and sense itself through the throwable robot's transmitted video, and is capable of processing imagery from either source. This paper describes recent results in the development of control and sensing strategies for automatic mid-range docking of these two robots. Decisions addressed include the selection of which robot's image sensor to use and which robot to maneuver. Initial experimental results are presented for docking using sensor data from each robot.

Keywords: heterogeneous, tactical mobile robots, vision, docking, marsupial, heterobotic

SPIE Vol. 3839