This study presents a robot that tracks 2D objects. An input image is transformed into another image using image processing techniques, which allow the robot to detect hexagon-shaped objects. In addition to shape, the image processing also detects color, so that only hexagonal 2D objects of a preset magenta color are detected. The robot follows the object's horizontal motion: when the object shifts to the right, the robot moves to the right, and when the object shifts to the left, the robot moves to the left. Robot movement is controlled by fuzzy logic, with five input membership functions dividing the object's position area and five output membership functions adjusting the speed and direction of the robot's motion.
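A controller of this kind can be sketched as a small Mamdani/Sugeno-style fuzzy system. The abstract does not give the membership-function shapes or parameters, so the triangular functions, labels, output values, and rule base below are illustrative assumptions, not the paper's actual design:

```python
# Sketch of a 5-rule fuzzy controller mapping the object's horizontal
# position to a signed robot speed. All parameters are assumed.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Input: object x-position in the camera frame, normalized to [-1, 1].
INPUT_MFS = {
    "far_left":  (-1.5, -1.0, -0.5),
    "left":      (-1.0, -0.5,  0.0),
    "center":    (-0.5,  0.0,  0.5),
    "right":     ( 0.0,  0.5,  1.0),
    "far_right": ( 0.5,  1.0,  1.5),
}

# Output singletons: signed speed, negative = move left, positive = move right.
OUTPUT = {
    "fast_left":  -1.0,
    "slow_left":  -0.5,
    "stop":        0.0,
    "slow_right":  0.5,
    "fast_right":  1.0,
}

# One rule per input label: mirror the object's position with the robot's motion.
RULES = {
    "far_left":  "fast_left",
    "left":      "slow_left",
    "center":    "stop",
    "right":     "slow_right",
    "far_right": "fast_right",
}

def fuzzy_speed(x):
    """Weighted-average defuzzification over the five rules."""
    num = den = 0.0
    for label, (a, b, c) in INPUT_MFS.items():
        mu = tri(x, a, b, c)
        num += mu * OUTPUT[RULES[label]]
        den += mu
    return num / den if den else 0.0
```

With these assumed parameters, an object at the image center yields zero speed, and intermediate positions blend two adjacent rules smoothly.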
The development of image processing is needed to solve problems often faced by humans, especially in the field of computer vision. One application is a package delivery mission during the Covid-19 pandemic: drones deliver packages by detecting a QR Code that marks the delivery point. This study tests the maximum distance (vertical and horizontal) at which the QR Code detection system works and the time it takes to detect the QR Code. The tests show that the greater the distance (vertical or horizontal), the longer the system takes to detect the QR Code. The maximum horizontal distance at which the QR Code can be detected is 155 cm, while the vertical distance is 115 cm. The vertical detection distance is smaller than the horizontal one because it is limited by the camera's field of view (FoV).
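The link between FoV and maximum detection distance can be illustrated with a simple pinhole-camera model: the farther the code, the fewer pixels it spans, and decoding fails below some minimum pixel size. The QR size, FoV angle, image width, and minimum pixel count below are hypothetical values, not measurements from the study:

```python
import math

def qr_pixel_width(qr_cm, dist_cm, fov_deg, image_px):
    """Apparent width in pixels of a QR code at a given distance.
    Pinhole model: the camera sees 2 * d * tan(FoV/2) cm across the frame."""
    scene_cm = 2 * dist_cm * math.tan(math.radians(fov_deg) / 2)
    return qr_cm / scene_cm * image_px

def max_distance(qr_cm, fov_deg, image_px, min_px):
    """Largest distance at which the code still spans min_px pixels,
    obtained by solving qr_pixel_width(...) == min_px for the distance."""
    return qr_cm * image_px / (2 * min_px * math.tan(math.radians(fov_deg) / 2))
```

A wider FoV spreads the same pixels over a larger scene, shrinking the maximum detection distance, which is consistent with the study's observation that the vertical (FoV-limited) range is shorter.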
In the last few decades, digital image degradation, such as blur and noise from the scanning process, or spots, underwriting, overwriting, and bleed-through/show-through effects on the image's background, has been a popular research field. Many background removal methods based on local or adaptive filters have been introduced in the literature to deal with the low-contrast issue. This paper focuses on bleed-through/show-through effects, which the literature addresses through an analogy between the foreground and the background of the image, that is to say, two images are required. To address this problem, we propose a new restoration method using blind source separation based on copula theory, which models the dependency structure, with the aim of improving text readability and OCR efficiency.
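The show-through problem is commonly cast as a linear mixing model: each observed side of the page is a mixture of the clean front text and the clean back text. The toy sketch below only illustrates that mixing model and ideal unmixing when the mixing matrix is known; it does not reproduce the paper's copula-based method, which estimates the separation blindly from the dependency structure of the two sides. The mixing coefficients are made up for illustration:

```python
import numpy as np

# Toy linear show-through model: observed recto/verso = mixtures of
# the clean front and back intensities.
rng = np.random.default_rng(0)
front = rng.random(1000)           # stand-in for clean recto intensities
back = rng.random(1000)            # stand-in for clean verso intensities
S = np.vstack([front, back])       # 2 x N matrix of true sources

A = np.array([[1.0, 0.3],          # 30% of the verso bleeds through the front
              [0.2, 1.0]])         # 20% of the recto shows on the back
X = A @ S                          # observed recto/verso scans

S_hat = np.linalg.inv(A) @ X       # ideal separation with A known
```

In a real blind setting, A is unknown and must be estimated from the observations alone, which is where the dependence modeling (here, via copulas) comes in.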
One practical line of humanoid robot research is the use of humanoid robots to play soccer, a field encouraged by various humanoid robot soccer competitions. An important aspect of a soccer-playing humanoid robot is its ability to detect the ball, goal, field boundaries, and other players, both teammates and opponents. This study focuses on the ball detection system, a basic capability that humanoid robots need to have. The ball detection system developed in this study uses the YOLOv3 method. Test results show that the system, built and trained with 3000 image samples, can detect balls at distances of 50 to 900 cm, taking about 0.033 seconds per detection.
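YOLOv3 emits many overlapping candidate boxes per frame, and a standard post-processing step is to filter them by confidence and non-maximum suppression (NMS) before reporting a detection. The sketch below shows that generic step, not the paper's exact pipeline; the score and IoU thresholds are common defaults, assumed here:

```python
# Generic confidence filtering + non-maximum suppression, as used after
# YOLO-style detectors. Thresholds are illustrative assumptions.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, score_thr=0.5, iou_thr=0.45):
    """Return indices of kept boxes: highest scores first, heavy overlaps dropped."""
    order = sorted(
        (i for i, s in enumerate(scores) if s >= score_thr),
        key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(i)
    return keep
```

For a single-ball detector, NMS typically reduces a cluster of near-duplicate boxes around the ball to one final detection per frame.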