This paper extends the topic of monocular pose estimation of objects using ArUco tags imaged by RGB cameras. The accuracy of the OpenCV camera-calibration and ArUco pose-estimation pipelines is tested in detail through standardized tests with multiple Intel RealSense D435 cameras. Analysis of the results led to a way to significantly improve ArUco tag localization: designing a 3D ArUco board, a set of ArUco tags placed at an angle to one another, and developing a library that combines the pose data from the individual tags for both higher accuracy and stability.
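To illustrate the fusion idea, here is a minimal sketch (not the authors' library) of combining per-tag poses into a single pose, assuming each tag's pose has already been expressed in a common board frame: translations are averaged directly, and rotations are fused by quaternion averaging via the dominant eigenvector of the summed quaternion outer products. All function names and data are illustrative.

```python
import numpy as np

def rot_to_quat(R):
    # Convert a 3x3 rotation matrix to a unit quaternion (w, x, y, z).
    w = np.sqrt(max(0.0, 1.0 + R[0, 0] + R[1, 1] + R[2, 2])) / 2.0
    x = np.copysign(np.sqrt(max(0.0, 1.0 + R[0, 0] - R[1, 1] - R[2, 2])) / 2.0,
                    R[2, 1] - R[1, 2])
    y = np.copysign(np.sqrt(max(0.0, 1.0 - R[0, 0] + R[1, 1] - R[2, 2])) / 2.0,
                    R[0, 2] - R[2, 0])
    z = np.copysign(np.sqrt(max(0.0, 1.0 - R[0, 0] - R[1, 1] + R[2, 2])) / 2.0,
                    R[1, 0] - R[0, 1])
    return np.array([w, x, y, z])

def fuse_poses(quats, translations):
    # Translations: plain mean. Rotations: eigenvector of the sum of
    # quaternion outer products belonging to the largest eigenvalue,
    # which is the standard quaternion-averaging construction.
    t = np.mean(translations, axis=0)
    M = sum(np.outer(q, q) for q in quats)
    eigvals, eigvecs = np.linalg.eigh(M)   # eigenvalues in ascending order
    q = eigvecs[:, -1]                     # eigenvector of the largest one
    return q / np.linalg.norm(q), t
```

In practice each per-tag pose would come from a marker detection plus a PnP solve; weighting tags by their reprojection error is a natural refinement.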
This paper presents an approach to compensate for the effect of thermal expansion on the structure of an industrial robot and thus to reduce the difference in the robot's repeatability between cold and warm conditions. In contrast to previous research in this area, which deals with absolute accuracy, this article focuses on determining achievable repeatability. To unify and improve the robot's repeatability, measurements with highly accurate sensors were performed under different conditions on an ABB IRB1200 industrial robot equipped with thermal sensors mounted at pre-defined positions around the joints. These measurements allowed a temperature-based prediction model of the end-effector positioning error to be implemented. Subsequent tests showed that the model used for error compensation is highly effective: using the methodology presented in this article, the impact of drift can be reduced by up to 89.9%. A robot upgraded with the compensation principle described here does not have to be warmed up, as it works with the same low repeatability error across the entire range of achievable temperatures.
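The abstract does not spell out the form of the prediction model; as a minimal illustration of the general idea, the sketch below fits a first-order drift-versus-temperature model to hypothetical calibration data and uses it to offset commanded positions. The temperatures, drift values, and function names are all invented for illustration.

```python
import numpy as np

# Hypothetical calibration data: joint-temperature readings (deg C) and the
# measured end-effector drift (mm) observed at each temperature.
temps = np.array([22.0, 26.0, 30.0, 34.0, 38.0])
drift = np.array([0.000, 0.021, 0.043, 0.066, 0.088])

# Fit a first-order (linear) drift-vs-temperature model.
slope, intercept = np.polyfit(temps, drift, 1)

def predicted_drift(temperature):
    # Predicted positioning error (mm) at the given temperature.
    return slope * temperature + intercept

def compensate(target_mm, temperature):
    # Offset the commanded coordinate by the predicted thermal drift.
    return target_mm - predicted_drift(temperature)
```

A real implementation would use several temperature channels and a per-axis error model, but the compensation step, subtracting the predicted drift from the commanded pose, has the same shape.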
Soft gripping, in which the gripper adapts to differently shaped objects, is in great demand for use in unknown or dynamically changing environments and is one of the main research subjects in soft robotics. Several systems have already been created, one of which is a passive shape-adaptable finger based on the FinRay effect. The geometric shape of this finger ensures that it wraps around the object it grips. FinRay fingers have been examined in several studies, which modified the internal structure and examined how the dependence of gripping force on finger deformation changes. So far, however, no specific method has been established to evaluate a proposed finger with regard to its ability to wrap around the object. This work introduces a new and simple method to evaluate the finger's wrapping around the object mathematically. Based on this evaluation method, several different patterns of the internal structure of FinRay fingers were tested. The fingers were first tested in a simulation program, which simulated the indentation of a 20 mm diameter steel roller into the middle of the finger's contact surface. Based on the simulation results, selected structure types were produced from a flexible filament by the Fused Filament Fabrication method and tested on a real test rig to verify the simulation results and compare them with the real behaviour. According to the methodology used, the results show that, from the point of view of wrapping around the object, the most suitable of the tested structures is one without internal filling. Designers can simply use the new evaluation method to compare their designed finger variants and select the most suitable one according to its ability to wrap around the gripped object. They can also use the graphs from this work's results to determine the dimensions of a finger without internal filling according to the required forces and deflection.
A depth camera outputs an image in which each pixel encodes the distance between the camera plane and the corresponding point in the scene. Low-cost depth cameras are becoming commonplace, and given their applications in machine vision, one must carefully select the right device for the environment in which it will be used, since the accuracy of these cameras is affected by factors such as distance from the target, luminosity of the environment, etc. This paper compares three depth cameras currently available on the market: the Intel RealSense D435, which uses stereo vision to compute per-pixel depth, and the ASUS Xtion and Microsoft Kinect 2, which represent time-of-flight-based depth cameras. The comparison is based on how the cameras perform at different distances from a flat surface, and we check whether the colour of the surface affects depth-image quality.
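For context, each depth pixel can be back-projected into a 3D camera-frame point with the standard pinhole model; the intrinsic parameters below are illustrative values, not those of any of the compared cameras.

```python
def deproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with measured depth (metres) into a 3D
    point in the camera frame using the pinhole model:
    x = (u - cx) * depth / fx, y = (v - cy) * depth / fy, z = depth."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Illustrative intrinsics: focal lengths and principal point, in pixels.
fx, fy, cx, cy = 615.0, 615.0, 320.0, 240.0

# A pixel at the principal point lies on the optical axis.
point = deproject(320.0, 240.0, 1.5, fx, fy, cx, cy)
```

Because x and y scale with depth, any per-pixel depth error grows into a proportionally larger 3D position error at range, which is one reason the distance-dependent tests in this comparison matter.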