Industrial robots and their associated control methods are continuously developing. With recent progress in the field of artificial intelligence, new perspectives on industrial robot control strategies have emerged, and prospects for cognitive robots have arisen. AI-based robotic systems are rapidly becoming one of the main areas of focus, as flexibility and a deep understanding of complex manufacturing processes are becoming the key advantages for raising competitiveness. This review first establishes the significance of smart industrial robot control in manufacturing for future factories by listing the needs and requirements and introducing the envisioned concept of the smart industrial robot. Secondly, current trends based on different learning strategies and methods are explored: robot control approaches based on computer vision, deep reinforcement learning and imitation learning, and their possible applications in manufacturing, are investigated. Gaps, challenges, limitations and open issues are identified along the way.
Smart manufacturing and smart factories depend on automation and robotics, and human–robot collaboration (HRC) contributes to increasing the effectiveness and productivity of today's and future factories. Industrial robots, especially in HRC settings, can be hazardous if safety is not addressed properly. In this review, we examine the collaboration levels of HRC and the safety actions that have been used to address safety. One hundred and ninety-three articles were identified, of which, after the screening and eligibility stages, 46 were used for the extraction stage. Predefined parameters, such as devices, algorithms, collaboration level, safety action, and standards used for HRC, were extracted. Despite close human and robot collaboration, 25% of all reviewed studies did not use any safety actions, and more than 50% did not use any standard to address safety issues. This review shows current HRC trends and the kinds of functionality lacking in today's HRC systems. Developing an HRC system can be a tremendously complex process; therefore, proper safety mechanisms must be addressed at an early stage of development.
This paper presents the development of a bin-picking solution based on low-cost vision systems for the manipulation of automotive electrical connectors using machine learning techniques. The automotive sector has always been in a state of constant growth and change, which also implies constant challenges for the wire-harness sector; the emerging growth of electric cars is proof of this and represents a challenge for the industry. Traditionally, this sector relies heavily on manual labour, and the need arises to make the digital transition, supported in the context of Industry 4.0, allowing the automation of processes and freeing operators for other activities with more added value. Depending on the car model and its feature packs, a connector can interface with a different number of wires, but the connector holes remain the same. Holes not connected to wires need to be sealed, mainly to guarantee the tightness of the cable. Seals are inserted manually or, more recently, by robotic stations. Due to the huge variety of references and connector configurations, layout errors sometimes occur during seal insertion, caused by changed references or problems with the seal-insertion machine. Consequently, faulty connectors are dumped into boxes, piling up different types of references. These connectors are not trash and need to be reused. This article proposes a bin-picking solution for the classification, selection and separation of these connectors, using a two-finger gripper, so that they can be reused in a new operation of seal removal and insertion. Connectors are identified through a 3D vision system, consisting of an Intel RealSense camera for object depth information and the YOLOv5 algorithm for object classification. The advantage of this approach over other solutions is its ability to accurately detect and grasp small objects with a low-cost 3D camera, even when the image resolution is low, benefiting from the power of machine learning algorithms.
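The core geometric step behind such a detect-then-grasp pipeline can be sketched as follows: once a detector such as YOLOv5 yields a bounding box and the depth camera yields a range reading at its centre, the pixel is back-projected into 3D camera coordinates with the pinhole model. All numeric values below (pixel coordinates, depth, intrinsics) are hypothetical placeholders, not values from the paper; in practice the intrinsics would come from the camera's calibration (e.g. the RealSense SDK).

```python
# Minimal sketch: turning a 2D detection plus a depth reading into a 3D
# grasp point via the pinhole camera model. fx, fy are focal lengths in
# pixels; (cx, cy) is the principal point.
def pixel_to_camera_xyz(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth in metres into camera coordinates."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Hypothetical detection: bounding-box centre at pixel (412, 305),
# measured depth 0.42 m, intrinsics for a 640x480 stream.
grasp = pixel_to_camera_xyz(412, 305, 0.42, fx=615.0, fy=615.0, cx=320.0, cy=240.0)
```

The resulting point is expressed in the camera frame; a real system would still transform it into the robot's base frame using the hand-eye calibration before planning the grasp.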
Imitation learning is a discipline of machine learning primarily concerned with replicating the observed behavior of agents known to perform well on a given task, collected in demonstration data sets. In this paper, we introduce a pipeline for collecting demonstrations and training models that can produce motion plans for industrial robots. Object throwing is defined as the motivating use case. Multiple input data modalities are surveyed, and motion capture is selected as the most practicable. Two model architectures operating autoregressively are examined: feedforward and recurrent neural networks. The trained models successfully execute throws on a real robot, and a battery of quantitative evaluation metrics is proposed. Recurrent neural networks outperform feedforward ones in most respects, but this advantage is neither universal nor conclusive. The data collection, pre-processing and model training aspects of our proposed approach show promise, but further work is required on Cartesian motion-planning tools before the approach is applicable in production.
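The autoregressive operation described above can be illustrated with a short sketch: a trained model maps the current joint configuration to the next one, and a trajectory is produced by repeatedly feeding each prediction back as the next input. The linear "model" below is a deliberately trivial stand-in for the feedforward or recurrent networks in the paper, and the joint increments are hypothetical.

```python
# Hedged sketch of autoregressive trajectory generation: predictions are
# fed back as inputs, step by step, to unroll a joint-space motion plan.
import numpy as np

def rollout(model, q0, steps):
    """Autoregressively generate a joint-space trajectory of `steps` transitions."""
    traj = [np.asarray(q0, dtype=float)]
    for _ in range(steps):
        traj.append(model(traj[-1]))      # next state depends only on the last one
    return np.stack(traj)                  # shape: (steps + 1, n_joints)

# Stand-in "model": a constant joint-space increment per step (hypothetical).
delta = np.array([0.01, -0.02, 0.005, 0.0, 0.01, 0.0])
traj = rollout(lambda q: q + delta, np.zeros(6), steps=100)
```

A recurrent network would additionally carry a hidden state across steps, which is one plausible reason for the performance difference the paper reports, since the hidden state can encode phase information that a single joint configuration cannot.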
Robots require a certain set of skills to perceive and analyse the environment and act accordingly. For tracked mobile robots, obtaining good odometry data from sensory information is a challenging key prerequisite for operating in an unstructured, dynamic environment, and thus an essential issue in the tracked-mobile-robotics domain. In this article, we construct a ROS-based tracked mobile robot system using the Jaguar V4 mobile robot as the base platform, integrate several visual odometry solutions based on different cameras and methods (Intel RealSense T265, ZED camera, RTAB-Map RGB-D), and perform a benchmark comparison. We analyse the new challenges each method faces when applied to a tracked vehicle and present recommendations and conclusions. The Intel RealSense T265 solution proved to perform well under uncertain conditions involving bounded vibrations and low lighting, with low latency, resulting in good map generation. Further evaluations combining a path-planning algorithm with the Intel RealSense T265 were conducted to test the effect of the robot's motion profiles on odometry accuracy.
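A common way to score such a benchmark comparison is the absolute trajectory error (ATE): the root-mean-square distance between time-aligned estimated and ground-truth positions. The sketch below shows this metric in its simplest form, omitting the rigid alignment step that full ATE evaluation normally applies first; the trajectories are synthetic placeholders, not data from the article.

```python
# Sketch of a trajectory-error metric for comparing visual-odometry outputs
# against ground truth: RMSE over time-aligned 2D positions.
import numpy as np

def ate_rmse(estimated, ground_truth):
    """Root-mean-square absolute trajectory error over aligned positions."""
    diff = np.asarray(estimated) - np.asarray(ground_truth)
    return float(np.sqrt(np.mean(np.sum(diff**2, axis=1))))

# Hypothetical data: a 1 m straight-line ground-truth path sampled at 50
# points, and an estimate with a constant 3 cm lateral drift.
gt = np.stack([np.linspace(0.0, 1.0, 50), np.zeros(50)], axis=1)
est = gt + np.array([0.0, 0.03])
error = ate_rmse(est, gt)  # approximately 0.03 m
```

In a ROS setup, the two trajectories would typically be recorded from the odometry and ground-truth topics into a bag file and time-synchronised before this computation.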
Parcel sorting is becoming a significant challenge for delivery distribution centers and is mostly automated using high-throughput sorting machinery, but manual work is still required to feed these machines by placing the parcels on the conveyor belt. In this paper, an AI-based robotic solution that automates the parcel placement task is developed. The architecture of the proposed system, along with methods for implementing it with currently available hardware and software components, is described. The described choices lead to a well-functioning system, and the insights gained will facilitate building similar systems for parcel delivery automation.