Programming robots to perform complex tasks is expensive. Traditional path planning and control can generate point-to-point collision-free trajectories, but they scale poorly as the tasks to be performed grow more complex. This study focuses on robotic operations in logistics, specifically on picking objects in unstructured areas with a mobile manipulator. The mobile manipulator must place its base in a suitable position so that the arm can plan a trajectory to an object on a table. A deep reinforcement learning (DRL) approach was selected to solve this type of complex control task. Using the arm planner's feedback, a controller for the robot base is learned, which guides the platform to a place from which the arm can plan a trajectory to the object. In addition, the performance of two DRL algorithms, Deep Deterministic Policy Gradient (DDPG) and Proximal Policy Optimisation (PPO), is compared within the context of a concrete robotic task.
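The core idea above, rewarding the base controller with the arm planner's feedback, can be sketched as a simple shaped reward. This is a minimal illustration only: the function name, the specific penalty/bonus values, and the distance-shaping term are assumptions, not the paper's exact reward design.

```python
import numpy as np

def base_positioning_reward(plan_found: bool, base_pos: np.ndarray,
                            target_pos: np.ndarray, collided: bool) -> float:
    """Hypothetical shaped reward for the base-positioning policy.

    The arm planner acts as an oracle: if it finds a trajectory to the
    object from the current base pose, the episode is a success.
    """
    if collided:
        return -100.0  # strongly penalise collisions of the platform
    if plan_found:
        return 100.0   # arm planner reached the object: success
    # Otherwise, encourage the base to approach the target region.
    return -float(np.linalg.norm(base_pos - target_pos))
```

Both DDPG and PPO can be trained against a reward of this shape; only the policy-update rule differs.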
Grasping point detection has traditionally been a core robotics and computer vision problem. In recent years, deep learning based methods have been widely used to predict grasping points, and have shown strong generalization capabilities under uncertainty. In particular, approaches that aim to predict object affordances without relying on object identity have obtained promising results in random bin-picking applications. However, most of them rely on RGB/RGB-D images, and it is not clear to what extent 3D spatial information is used. Graph Convolutional Networks (GCNs) have been successfully used for object classification and scene segmentation in point clouds, and also to predict grasping points in simple laboratory experiments. In the present proposal, we adapted the Deep Graph Convolutional Network model with the intuition that learning from n-dimensional point clouds would boost performance in predicting object affordances. To the best of our knowledge, this is the first time that GCNs have been applied to predict affordances for suction and gripper end effectors in an industrial bin-picking environment. Additionally, we designed a bin-picking oriented data preprocessing pipeline that eases the learning process and yields a flexible solution for any bin-picking application. To train our models, we created a highly accurate RGB-D/3D dataset which is openly available on demand. Finally, we benchmarked our method against a 2D Fully Convolutional Network based method, improving the top-1 precision score by 1.8% and 1.7% for suction and gripper, respectively.
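Graph convolutions over a point cloud require an explicit neighbourhood graph; a k-nearest-neighbour edge index is the usual choice. The sketch below illustrates that step only (the function name and output layout are assumptions, not the paper's pipeline):

```python
import numpy as np

def knn_graph(points: np.ndarray, k: int = 8) -> np.ndarray:
    """Build a k-nearest-neighbour edge index over an (N, d) point cloud.

    Returns a (2, N*k) array of [source, neighbour] index pairs, the
    kind of graph a GCN layer aggregates features over.
    """
    # Pairwise squared distances between all points.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)            # exclude self-loops
    nbrs = np.argsort(d2, axis=1)[:, :k]    # (N, k) neighbour indices
    src = np.repeat(np.arange(len(points)), k)
    return np.stack([src, nbrs.ravel()])
```

In practice libraries such as PyTorch Geometric provide this operation, but the construction itself is as simple as above.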
Surface defect identification based on computer vision algorithms often suffers from inadequate generalization ability due to large intraclass variation. Diversity in lighting conditions, noise components, and defect size, shape, and position makes the problem challenging. To address this, this paper develops a pixel-level image augmentation method based on image-to-image translation with generative adversarial networks (GANs) conditioned on fine-grained labels. The GAN model proposed in this work, referred to as Magna-Defect-GAN, is capable of controlling the image generation process and producing image samples that are highly realistic in terms of variations. First, a surface defect dataset based on the magnetic particle inspection (MPI) method is acquired in a controlled environment. Then, the Magna-Defect-GAN model is trained, and new synthetic image samples with large intraclass variations are generated. These synthetic samples artificially inflate the training dataset in terms of intraclass diversity. Finally, the enlarged dataset is used to train a defect identification model. Experimental results demonstrate that the Magna-Defect-GAN model can generate realistic surface defect images at resolutions up to 512 × 512 in a controlled manner. We also show that this augmentation method can boost accuracy and can be easily adapted to other surface defect identification models.
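Conditioning a generator on fine-grained labels is commonly done by concatenating a one-hot label vector to the latent noise before it enters the generator. The sketch below shows that mechanism in isolation; the function name and dimensions are illustrative assumptions, not Magna-Defect-GAN's actual architecture.

```python
import numpy as np

def conditioned_latent(z_dim: int, label: int, n_classes: int,
                       rng: np.random.Generator) -> np.ndarray:
    """Concatenate Gaussian noise with a one-hot fine-grained defect
    label, giving the generator explicit control over which defect
    class it synthesises (illustrative conditional-GAN input)."""
    z = rng.standard_normal(z_dim)
    onehot = np.zeros(n_classes)
    onehot[label] = 1.0
    return np.concatenate([z, onehot])
```

Sampling the same label with different noise vectors is what produces the intraclass variation the augmentation relies on.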
Due to the decarbonization commitments made by steelmaking companies, the steel industry is undergoing a technological transition from the blast furnace (BF)–basic oxygen furnace (BOF) route to the direct reduced iron (DRI)–electric arc furnace (EAF) route. Under this scenario, ferrous scrap becomes a critical factor for meeting the CO2 reduction challenge. However, ferrous scrap is one of the most complex industrial raw materials, exhibiting huge heterogeneity in both physical and chemical characteristics, while producing high-quality steel products requires certainty about scrap composition. Herein, an artificial intelligence model based on spectral information for the segmentation of the different materials contained in ferrous scrap is proposed. The developed solution offers a processing pipeline built on a 2D–3D convolutional neural network and a dataset of more than 428 million pixels acquired with hyperspectral cameras in the 400–1700 nm range. With this model, the detection of the ferric fraction, stainless steel, aluminum, zinc, copper, sterile, and rubber and plastic materials is assessed. This work aims at increasing the reliability of the steelmaking process by lowering the number of steel quality noncompliance rejections due to lack of knowledge of, and uncertainty about, these raw material compositions.
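Pixel-wise classification of a hyperspectral cube with a 2D–3D CNN typically operates on small spatial–spectral blocks cut around each pixel. The sketch below shows that patch-extraction step; the function name, patch size, and padding mode are assumptions for illustration, not the paper's exact preprocessing.

```python
import numpy as np

def spectral_patches(cube: np.ndarray, patch: int = 5) -> np.ndarray:
    """Cut (patch, patch, bands) spatial-spectral blocks around every
    pixel of an (H, W, bands) hyperspectral cube.

    Each block is the per-pixel input a 2D-3D CNN classifies.
    """
    h, w, b = cube.shape
    r = patch // 2
    # Reflect-pad the borders so edge pixels get full-size patches.
    padded = np.pad(cube, ((r, r), (r, r), (0, 0)), mode="reflect")
    out = np.empty((h * w, patch, patch, b), dtype=cube.dtype)
    i = 0
    for y in range(h):
        for x in range(w):
            out[i] = padded[y:y + patch, x:x + patch, :]
            i += 1
    return out
```

At 428 million labelled pixels, a real pipeline would extract such patches lazily in batches rather than materialising them all at once.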
This work focuses on the operation of picking an object from a table with a mobile manipulator. We use deep reinforcement learning (DRL) to learn a positioning policy for the robot's base by considering the reachability constraints of the arm. This work extends our first proof of concept with the ultimate goal of validating the method on a real robot. The Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm is used to model the base controller, which is optimised using feedback from the MoveIt!-based arm planner. The idea is to encourage the base controller to position itself in areas from which the arm reaches the object. Following a simulation-to-reality approach, we first create a realistic simulation of the robotic environment in Unity and integrate it into the Robot Operating System (ROS). The drivers for both the base and the arm are also implemented. The DRL-based agent is trained in simulation, and both the robot and target poses are randomised to make the learnt base controller robust to uncertainties. We propose a task-specific setup for TD3, which includes the state/action spaces, reward function and neural architectures. We compare the proposed method with the baseline work and show that the combination of TD3 and the proposed setup leads to an 11% higher success rate than the baseline, with an overall success rate of 97%. Finally, the learnt agent is deployed and validated on the real robotic system, where we obtain a promising success rate of 75%.
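TD3's defining trait, and the reason it improves over DDPG on tasks like this, is that it maintains two critics and bootstraps from the minimum of their target estimates, which bounds value overestimation. That standard target rule can be written in one line (network and replay-buffer details omitted):

```python
def td3_target(r: float, done: float, q1_next: float, q2_next: float,
               gamma: float = 0.99) -> float:
    """Standard TD3 bootstrap target: take the minimum of the two
    target critics' estimates to counter Q-value overestimation."""
    return r + gamma * (1.0 - done) * min(q1_next, q2_next)
```

The full algorithm adds delayed policy updates and target-policy smoothing noise on top of this clipped double-Q target.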
This paper describes the dynamic mosaic planning method developed in the context of the PICKPLACE European project. The dynamic planner has allowed the development of a robotic system capable of packing a wide variety of objects without having to be adjusted to each reference. The mosaic planning system consists of three modules. First, the picked-item monitoring module inspects the grasped item to determine how the robot has picked it. At the same time, the destination container is monitored online to obtain the actual status of the packaging. To this end, we present a novel heuristic algorithm that, based on the point cloud of the scene, estimates the empty volume inside the container as empty maximal spaces (EMSs). Finally, we present the dynamic IK-PAL mosaic planner, which dynamically estimates the optimal packing pose considering both the status of the picked part and the estimated EMSs. The developed method has been successfully integrated in a real robotic picking and packing system and validated with seven tests of increasing complexity. In these tests, we demonstrate the flexibility of the presented system in handling a wide range of objects in a real dynamic packaging environment. To our knowledge, this is the first time that a complete online picking and packing system has been deployed in a real robotic scenario, creating mosaics with arbitrary objects while accounting for the dynamics of a real robotic packing system.
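Once the empty maximal spaces are estimated, the planner must test whether the picked item, as actually grasped, fits inside a candidate EMS. A heavily simplified version of that feasibility test, assuming cuboid items and axis-aligned yaw rotations only (the real planner also reasons about inverse kinematics and the grasp pose), looks like this:

```python
def fits(item_dims: tuple, ems_dims: tuple) -> bool:
    """Check whether a cuboid item of (l, w, h) fits inside an empty
    maximal space of (L, W, H), allowing a 90-degree yaw rotation.

    Simplified placement-feasibility test; the full planner also
    accounts for the grasp pose and robot reachability.
    """
    l, w, h = item_dims
    L, W, H = ems_dims
    if h > H:
        return False  # too tall for this space
    # Try both axis-aligned yaw orientations of the footprint.
    return (l <= L and w <= W) or (w <= L and l <= W)
```

The planner would score every (EMS, orientation) pair that passes such a test and pick the pose that best compacts the mosaic.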