We address the problem of executing tool-using manipulation skills when the objects involved may vary. We assume that point clouds of the tool and the target object can be obtained, but no interpretation or further knowledge about these objects is provided. The system must interpret the point clouds and decide how to use the tool to complete a manipulation task with the target object, adjusting its motion trajectories accordingly. We tackle three everyday manipulations: scraping material from a tool into a container, cutting, and scooping from a container. Our solution encodes these manipulation skills generically, with parameters that are filled in at runtime via queries to a robot perception module; the perception module abstracts the functional parts of the tool and extracts the key parameters needed for the task. The approach is evaluated in simulation and with selected examples on a PR2 robot.
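To make the parameterized-skill idea concrete, here is a minimal Python sketch of how a generic scraping skill might be filled in at runtime. The `perception.query` interface, the parameter names, and the waypoint offsets are hypothetical illustrations under assumed conventions, not the paper's actual API.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ScrapeParams:
    # Parameters a perception module might extract from raw point clouds.
    # All names here are illustrative assumptions, not the paper's scheme.
    blade_edge: np.ndarray     # point on the tool's scraping edge
    container_rim: np.ndarray  # center of the target container opening
    approach_dir: np.ndarray   # unit vector for the scraping stroke

def plan_scrape(perception, tool_cloud, target_cloud):
    """Fill a generic scraping-skill template with runtime parameters.

    `perception` is assumed to expose queries that abstract the functional
    parts of the tool and target from their point clouds.
    """
    params = ScrapeParams(
        blade_edge=perception.query("scraping_edge", tool_cloud),
        container_rim=perception.query("opening_center", target_cloud),
        approach_dir=perception.query("stroke_direction", tool_cloud),
    )
    # Generic skill template: waypoints are expressed relative to the
    # extracted parameters, so the same trajectory generator adapts to
    # differently shaped tools and containers.
    start = params.container_rim + 0.10 * params.approach_dir
    end = params.container_rim - 0.05 * params.approach_dir
    return np.linspace(start, end, num=20)  # straight stroke, 20 waypoints
```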
The paper reviews the factors that limit the accuracy of locating a fiber-optic cable fault with an optical time-domain reflectometer (OTDR) and describes an error-estimation method for typical use cases. The primary source of error is the complex relationship between the optical fiber length measured by the OTDR and the physical location of the fault along the cable route, which depends on the routing, the cable design, the type of installation (duct, directly buried, or aerial), and the spare lengths kept for service purposes. Techniques that considerably improve the accuracy of fault localization are presented; the importance of accurate network documentation and of referencing the fault location to the nearest splice rather than to the end of the line is discussed, as is the absence of the cable helix factor from data sheets.
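The fiber-length-to-route-distance relationship lends itself to a short worked example. The Python sketch below converts an OTDR fiber reading into an estimated distance along the cable route; the helix factor and slack values are illustrative assumptions, since, as the paper notes, real helix factors are often absent from data sheets.

```python
def cable_distance_from_otdr(fiber_m, helix_factor=1.01, slack_m=0.0):
    """Estimate the distance along the cable route from an OTDR reading.

    The OTDR measures optical fiber length, which exceeds the cable
    (sheath) length because fibers are stranded helically inside the
    cable. The helix factor of 1.01 (1% excess fiber length) and the
    slack term (spare coils at splice closures) are assumed values for
    illustration; real ones come from cable data sheets and network
    documentation.
    """
    cable_m = fiber_m / helix_factor  # remove helical excess length
    return cable_m - slack_m          # subtract documented spare loops

# Example: a fault reported at 12,430 m of fiber, with 60 m of
# documented slack between the OTDR and the fault:
print(f"{cable_distance_from_otdr(12_430, 1.01, 60):.0f} m along the route")
```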
Multi-purpose service robots must execute their tasks reliably in varied situations, learn from humans, and explain their plans to them. We address these issues by introducing a knowledge representation scheme that facilitates skill generalization and explainability. The scheme represents the robot's understanding of both the scene and the task it performs, and we present techniques for extracting this knowledge from raw data; such representation and extraction methods have received little attention in previous research. Our approach requires no prior knowledge or 3D models of the objects involved, and the representation is easy for humans to understand. The system is modular, so new recognition or reasoning routines can be added without changing the basic architecture. We developed a computer vision system and a task reasoning module that operate on this knowledge representation. The efficacy of our approach is demonstrated on two tasks: hanging items on pegs and stacking one item on another. We also formalize the knowledge representation scheme and show how the system can learn from only a few demonstrations.
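As an illustration of what such a human-readable, modular representation could look like, here is a minimal Python sketch of a symbolic scene/task record; the class names, attributes, and relation vocabulary are assumptions for illustration, not the paper's actual scheme.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str                                       # e.g. "mug", "peg"
    attributes: dict = field(default_factory=dict)  # e.g. {"has_handle": True}

@dataclass
class Relation:
    subject: str
    predicate: str  # e.g. "hangs_on", "stacked_on"
    obj: str

@dataclass
class SceneKnowledge:
    entities: list = field(default_factory=list)
    relations: list = field(default_factory=list)

    def explain(self):
        # Because the scheme is symbolic, an explanation of the robot's
        # understanding is just a readout of the stored relations.
        for r in self.relations:
            print(f"{r.subject} {r.predicate} {r.obj}")

scene = SceneKnowledge(
    entities=[Entity("mug", {"has_handle": True}), Entity("peg")],
    relations=[Relation("mug", "hangs_on", "peg")],
)
scene.explain()  # -> mug hangs_on peg
```

New recognition routines would only need to emit `Entity` and `Relation` records, which is one way the modularity described above could be realized.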