A major goal of humanoid robotics is to enable safe and reliable human-robot collaboration in real-world scenarios. In this article, we present ARMAR-6, a new high-performance humanoid robot for various tasks, including but not limited to grasping, mobile manipulation, integrated perception, bimanual collaboration, compliant-motion execution, and natural language understanding. We describe how the requirements arising from these tasks influenced our major design decisions, resulting in vertical integration during the joint hardware and software development phases. In particular, the entire hardware, including its structure, sensor-actuator units, and low-level controllers, as well as its perception, grasping and manipulation skills, task coordination, and the entire software architecture were all developed by one team of engineers. Component interaction is facilitated by our software framework ArmarX.
Reliable execution of robot manipulation actions in cluttered environments requires that the robot is able to understand relations between objects and reason about the consequences of actions applied to these objects. We present an approach for extracting physically plausible support relations between objects based on visual information, without requiring any prior knowledge of physical object properties such as mass distribution or friction coefficients. Based on a scene representation enriched by such physically plausible support relations, we derive predictions about action effects. These predictions take the uncertainty of support relations into account and allow the robot to apply strategies for safe bimanual object manipulation when needed. The extraction of physically plausible support relations is evaluated both in simulation and in real-world experiments using real data from a depth camera, whereas the handling of support relation uncertainties is validated on the humanoid robot ARMAR-III.
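To make the role of uncertain support relations concrete, here is a minimal sketch of how a scene annotated with support probabilities could drive the choice between a single-arm grasp and a safe bimanual strategy. All class and function names, and the probability threshold, are illustrative assumptions and not taken from the paper.

```python
# Minimal sketch (not the authors' implementation): choosing between a
# single-arm grasp and a safe bimanual strategy from uncertain support
# relations. The threshold of 0.1 is an illustrative assumption.

from dataclasses import dataclass

@dataclass
class SupportRelation:
    supporter: str      # object carrying the load
    supported: str      # object resting on the supporter
    probability: float  # plausibility of the relation, in [0, 1]

def objects_at_risk(relations: list[SupportRelation], target: str,
                    threshold: float = 0.1) -> list[str]:
    """Objects whose support plausibly depends on `target`."""
    return [r.supported for r in relations
            if r.supporter == target and r.probability >= threshold]

def choose_strategy(relations: list[SupportRelation], target: str):
    """Fall back to a bimanual strategy if removing `target` may topple others."""
    risky = objects_at_risk(relations, target)
    return ("bimanual_support" if risky else "single_arm_grasp"), risky

scene = [
    SupportRelation("table", "box", 0.98),
    SupportRelation("box", "cup", 0.72),   # uncertain: the cup may rest on the box
]
print(choose_strategy(scene, "box"))  # ('bimanual_support', ['cup'])
```

The point of the sketch is that even low-probability support relations are kept as potential risks, so the robot errs on the side of the safe bimanual strategy when the scene interpretation is ambiguous.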
Capturing scene dynamics and predicting the future scene state is challenging but essential for robotic manipulation tasks, especially when the scene contains both rigid and deformable objects. In this work, we contribute a simulation environment and generate a novel dataset for task-specific manipulation involving interactions between rigid objects and a deformable bag. The dataset covers a rich variety of scenarios with different object sizes, numbers of objects, and manipulation actions. We approach dynamics learning by proposing an object-centric graph representation and two modules, an Active Prediction Module (APM) and a Position Prediction Module (PPM), based on graph neural networks with an encode-process-decode architecture. At the inference stage, we build a two-stage model from the learned modules for single-time-step prediction. We further combine modules with different prediction horizons into a mixed-horizon model that addresses long-term prediction. In an ablation study, we show the benefits of the two-stage model for single-time-step prediction and the effectiveness of the mixed-horizon model for long-term prediction tasks. Supplementary material is available at https://github.com/wengzehang/deformable_rigid_interaction_prediction
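As a rough illustration of the encode-process-decode pattern the abstract refers to, the following NumPy sketch runs one message-passing step over a toy object-centric graph. The layer sizes, the mean aggregation, and all names are assumptions made for illustration; the actual models are in the linked repository.

```python
# Minimal sketch of one encode-process-decode message-passing step on an
# object-centric graph, in plain NumPy. This is an illustrative toy, not
# the paper's implementation.

import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w, b):
    """One-layer MLP with ReLU, standing in for a learned encoder/processor."""
    return np.maximum(x @ w + b, 0.0)

# Toy scene: 4 object nodes (e.g., bag keypoints and rigid objects), each
# with a 3-D position feature; edges connect interacting (sender, receiver) pairs.
nodes = rng.normal(size=(4, 3))
edges = np.array([[0, 1], [1, 2], [2, 3], [3, 0]])

d = 8  # latent size (assumption)
w_enc, b_enc = rng.normal(size=(3, d)), np.zeros(d)
w_msg, b_msg = rng.normal(size=(2 * d, d)), np.zeros(d)
w_dec, b_dec = rng.normal(size=(d, 3)), np.zeros(3)

# Encode: lift raw node features into a latent space.
h = mlp(nodes, w_enc, b_enc)

# Process: compute edge messages from sender/receiver latents, then
# aggregate incoming messages per node (mean aggregation here).
msgs = mlp(np.concatenate([h[edges[:, 0]], h[edges[:, 1]]], axis=1), w_msg, b_msg)
agg = np.zeros_like(h)
counts = np.zeros(len(h))
for (s, r), m in zip(edges, msgs):
    agg[r] += m
    counts[r] += 1
h = h + agg / np.maximum(counts[:, None], 1)  # residual node update

# Decode: predict per-node position deltas for the next time step (linear head).
delta = h @ w_dec + b_dec
next_positions = nodes + delta
print(next_positions.shape)  # (4, 3)
```

At inference time, such a single-step predictor can be rolled out autoregressively; the mixed-horizon model described in the abstract instead combines modules trained with different prediction horizons to handle long-term prediction.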