Abstract: The need for combined task and motion planning in robotics is well understood. Solutions to this problem have typically relied on special-purpose, integrated implementations of task planning and motion planning algorithms. We propose a new approach that uses off-the-shelf task planners and motion planners and makes no assumptions about their implementation. Doing so enables our approach to directly build on, and benefit from, the vast literature and latest advances in task planning and motion planning. It uses a novel representational abstraction and requires only that failures in computing a motion plan for a high-level action be identifiable and expressible in the form of logical predicates at the task level. We evaluate the approach and illustrate its robustness through a number of experiments using a state-of-the-art robotics simulator and a PR2 robot. These experiments show the system accomplishing a diverse set of challenging tasks, such as taking advantage of a tray when laying out a table for dinner and picking objects from cluttered environments where other objects need to be rearranged before the target object can be reached.
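The central mechanism this abstract describes — feeding motion-planning failures back to the task planner as logical predicates — can be sketched as a simple loop. The following is an illustrative sketch only, not the paper's implementation; all names (`task_planner`, `motion_planner`, `action.apply`, the failure tuples) are hypothetical stand-ins for whichever off-the-shelf planners are plugged in.

```python
def plan_task_and_motion(task_planner, motion_planner, init_state, goal):
    """Alternate between task and motion planning: whenever the motion
    planner fails to refine a high-level action, the failure is recorded
    as a logical predicate and the task planner replans with it."""
    facts = set()  # predicates describing discovered motion failures
    while True:
        task_plan = task_planner(init_state, goal, facts)
        if task_plan is None:
            return None  # no symbolic plan under the accumulated facts
        state, refined = init_state, []
        for action in task_plan:
            trajectory, failure = motion_planner(state, action)
            if failure is not None:
                # e.g. failure = ("Obstructs", "can2", "can1")
                facts.add(failure)  # expose the failure at the task level
                break  # replan symbolically with the new fact
            refined.append((action, trajectory))
            state = action.apply(state)
        else:
            return refined  # every action was refined into a trajectory
```

The point of the sketch is that neither planner's internals are touched: the task planner only needs to accept extra predicates, and the motion planner only needs to report why refinement failed.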
Differences in the horizontal positions of retinal images (binocular disparity) provide important cues for three-dimensional object recognition and manipulation. We investigated the neural coding of three-dimensional shape defined by disparity in the anterior intraparietal (AIP) area. Robust selectivity for disparity-defined slanted and curved surfaces was observed in a high proportion of AIP neurons, emerging at relatively short latencies. The large majority of AIP neurons preserved their three-dimensional shape preference over different positions in depth, a hallmark of higher-order disparity selectivity. Yet both stimulus type (concave versus convex) and position in depth could be reliably decoded from the AIP responses. The neural coding of three-dimensional shape was based on first-order (slanted surfaces) and second-order (curved surfaces) disparity selectivity. Many AIP neurons tolerated the presence of disparity discontinuities in the stimulus, but the population of AIP neurons provided reliable information on the degree of curvedness of the stimulus. Finally, AIP neurons preserved their three-dimensional shape preference over different positions in the frontoparallel plane. Thus, AIP neurons extract or have access to three-dimensional object information defined by binocular disparity, consistent with previous functional magnetic resonance imaging data. Unlike the known representation of three-dimensional shape in inferior temporal cortex, the neural representation in AIP appears to emphasize object parameters required for the planning of grasping movements.
The analysis of object shape is critical for both object recognition and grasping. Areas in the intraparietal sulcus of the rhesus monkey are important for the visuomotor transformations underlying actions directed toward objects. The lateral intraparietal (LIP) area has strong anatomical connections with the anterior intraparietal area, which is known to control the shaping of the hand during grasping, and LIP neurons can respond selectively to simple two-dimensional shapes. Here we investigate the shape representation in area LIP of awake rhesus monkeys. Specifically, we determined to what extent LIP neurons are tuned to shape dimensions known to be relevant for grasping and assessed the invariance of their shape preferences with regard to changes in stimulus size and position in the receptive field. Most LIP neurons proved to be significantly tuned to multiple shape dimensions. The population of LIP neurons that were tested showed barely significant size invariance. Position invariance was present in a minority of the neurons tested. Many LIP neurons displayed spurious shape selectivity arising from accidental interactions between the stimulus and the receptive field. We observed pronounced differences in the receptive field profiles determined by presenting two different shapes. Almost all LIP neurons showed spatially selective saccadic activity, but the receptive field for saccades did not always correspond to the receptive field as determined using shapes. Our results demonstrate that a subpopulation of LIP neurons encodes stimulus shape. Furthermore, the shape representation in the dorsal visual stream appears to differ radically from the known representation of shape in the ventral visual stream.
The macaque anterior intraparietal area (AIP) is crucial for visually guided grasping. AIP neurons respond during the visual presentation of real-world objects and encode the depth profile of disparity-defined curved surfaces. We investigated the neural representation of curved surfaces in AIP using a stimulus-reduction approach. The stimuli consisted of three-dimensional (3-D) shapes curved along the horizontal axis, the vertical axis, or both the horizontal and the vertical axes of the shape. The depth profile was defined solely by binocular disparity that varied along either the boundary or the surface of the shape or along both the boundary and the surface of the shape. The majority of AIP neurons were selective for curved boundaries along the horizontal or the vertical axis, and neural selectivity emerged at short latencies. Stimuli in which disparity varied only along the surface of the shape (with zero disparity on the boundaries) evoked selectivity in a smaller proportion of AIP neurons and at considerably longer latencies. AIP neurons were not selective for 3-D surfaces composed of anticorrelated disparities. Thus the neural selectivity for object depth profile in AIP is present when only the boundary is curved in depth, but not for disparity in anticorrelated stereograms.
Users of AI systems may rely upon them to produce plans for achieving desired objectives. Such AI systems should be able to compute obfuscated plans whose execution in adversarial situations protects privacy, as well as legible plans that are easy for team members to understand in cooperative situations. We develop a unified framework that addresses these dual problems by computing plans with a desired level of comprehensibility from the point of view of a partially informed observer. For adversarial settings, our approach produces obfuscated plans with observations that are consistent with at least k goals from a set of decoy goals. By slightly varying our framework, we present an approach for goal legibility in cooperative settings that produces plans that achieve a goal while being consistent with at most j goals from a set of confounding goals. In addition, we show how the observability of the observer can be controlled to either obfuscate or clarify the next actions in a plan when the goal is known to the observer. We present theoretical results on the complexity analysis of our problems. We demonstrate the execution of obfuscated and legible plans in a cooking domain using a physical Fetch robot. We also provide an empirical evaluation to show the feasibility and usefulness of our approaches using IPC domains.
We present the first platform-independent evaluation method for Task and Motion Planning (TAMP). Previously, various problems have been used to test individual planners on specific aspects of TAMP. However, no common set of metrics, formats, and problems has been accepted by the community. We propose a set of benchmark problems covering the challenging aspects of TAMP and a planner-independent specification format for these problems. Our objective is to better evaluate and compare TAMP planners, foster communication and progress within the field, and lay a foundation to better understand this class of planning problems.