In this study, an Artificial Intelligence of Things (AIoT)-based automated picking system is proposed for online shops and automated shipping services. Speed and convenience are two key requirements in Industry 4.0 and Society 5.0. In online shopping, they can be provided by integrating e-commerce platforms with AIoT systems and robots that respond to consumers' needs. In the proposed system, consumer orders are routed through the AIoT platform, while robotic manipulators replace human picking tasks. To validate this idea, we implemented a modified YOLO (You Only Look Once) algorithm to detect and localize the items purchased by consumers; specifically, a modified YOLOv2 in data-driven mode was used to pick goods from unstructured shop shelves. Experiments confirm that the system meets expectations for efficiency, speed, and convenience in the context of Society 5.0.
This paper proposes object localization and depth estimation to select and set goals for robots via machine vision. An algorithm based on a deep region-based convolutional neural network (R-CNN) recognizes targets and non-targets. After the targets are recognized, both the k-nearest neighbors (kNN) algorithm and a fuzzy inference system (FIS) are employed to localize their two-dimensional (2D) positions. Moreover, based on the field of view (FoV) and a disparity map, depth is estimated by a mono camera mounted on the end-effector in an eye-in-hand manipulator structure. Although only a single mono camera is used, the system can easily establish a camera baseline by shifting the end-effector a few millimeters along the x-axis. Thus, the depth of the layered environment can be obtained as 3D points, which form a dataset for recognizing the junction box covers on the table. Experimental tests confirmed that the algorithm could accurately distinguish junction box covers from non-targets and could estimate whether the targets lie within the grasping depth of the three-finger gripper. Furthermore, the proposed method achieved an optimized depth error of −0.0005%, and the localization method could precisely position the junction box cover, with recognition and picking rates of 0.993 and 98.529%, respectively.
INDEX TERMS: Region-based convolutional neural network, eye-in-hand manipulator, machine vision, robotics, automation.
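The single-camera baseline trick described in the abstract reduces to standard stereo triangulation once the end-effector has been shifted: the shift distance plays the role of the baseline, and depth follows from the disparity of a matched feature. A minimal sketch (the focal length and shift values below are hypothetical, not the paper's calibration) is:

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Triangulate depth from two views of the same scene.

    Z = f * B / d, where f is the focal length in pixels, B the
    baseline between the two camera positions (here, the distance
    the end-effector was shifted along the x-axis), and d the
    disparity of a matched feature in pixels.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px


# Hypothetical values: 800 px focal length, a 5 mm end-effector
# shift, and a feature observed with 10 px of disparity.
z = depth_from_disparity(800.0, 5.0, 10.0)  # depth in mm
```

With these illustrative numbers the feature lies 400 mm from the camera; in the paper, such depths are compared against the gripper's reachable grasping range.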
Purpose
– Extensive efforts have been devoted to eliminating position sensors in servomotor control. The purpose of this paper is to estimate the servomotor speed, without using position sensors or knowledge of the motor parameters, by means of artificial neural networks (ANNs).
Design/methodology/approach
– A neural speed observer based on the Elman neural network (NN) structure takes only motor voltages and currents as inputs.
Findings
– After offline NN training, the observer is incorporated into a DSP-based drive and sensorless control is achieved.
Research limitations/implications
– Future work will consider reducing the computation time for NN training and adaptively tuning the parameters online.
Practical implications
– Experimental results are presented to demonstrate the effectiveness of the proposed method.
Originality/value
– This paper achieves sensorless servomotor control using ANNs, an approach that has seldom been studied.
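The defining feature of the Elman structure used by this observer is a set of context units that copy back the previous hidden-layer state, giving the network memory of past voltage and current samples. The sketch below is illustrative only: the weights are random placeholders (in the paper they come from offline training), and the layer sizes are assumptions, not the authors' configuration.

```python
import numpy as np


class ElmanSpeedObserver:
    """Minimal Elman recurrent network sketch.

    Inputs are motor voltage and current samples; the output is an
    estimated speed. Weights here are random placeholders; in the
    paper they are obtained by offline training.
    """

    def __init__(self, n_in=2, n_hidden=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.1, (n_hidden, n_in))
        # Recurrent weights from the context (copy-back) units.
        self.W_ctx = rng.normal(0.0, 0.1, (n_hidden, n_hidden))
        self.W_out = rng.normal(0.0, 0.1, (1, n_hidden))
        self.context = np.zeros(n_hidden)  # previous hidden state

    def step(self, u):
        """Process one sample u = [voltage, current]; return speed estimate."""
        h = np.tanh(self.W_in @ u + self.W_ctx @ self.context)
        self.context = h  # Elman: context units store the last hidden state
        return (self.W_out @ h).item()


obs = ElmanSpeedObserver()
speed_est = obs.step(np.array([1.0, 0.5]))  # one voltage/current sample
```

Because the context vector carries over between calls, repeated `step` calls on the same input generally produce different outputs, which is what lets the observer track the motor's dynamics rather than a static input-output map.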