This paper presents a method for high-precision visual pose estimation along with a simple setup procedure. Industrial robotics is a rapidly growing field, and these robots require very precise position information to perform manipulations. This is usually provided by fixtures or feeders, both of which are expensive hardware solutions. Enabling fast production changeovers requires more flexible solutions, one possibility being visual pose estimation. Although many current pose estimation algorithms show improved recognition rates on public datasets, they do not focus on actual applications, neither in setup complexity nor in localization accuracy. In contrast, our method solves a number of specific pose estimation problems in a seamless manner with a simple setup procedure. It relies on a number of workcell constraints and employs a novel method for automatically finding stable object poses. In addition, we use an active rendering method to refine the estimated object poses, yielding a localization fine enough for robotic manipulation. Experiments comparing current state-of-the-art 2D algorithms with our method show an average reduction in uncertainty from 9 mm to 0.95 mm. The method was also used by the winning team at the 2018 World Robot Summit Assembly Challenge.
The gripper design process is one of the central challenges of industrial grasping. Robotic grasping platforms typically use simple parallel-finger grippers, which are easy to install and maintain. Context switches in these platforms require frequent exchange of gripper fingers to accommodate new products, subject to numerous constraints such as workcell uncertainties introduced by the vision systems used. Designing these fingers consumes the hours of experienced engineers and involves extensive trial-and-error testing. In our previous work, we presented a method to automatically compute optimal finger shapes for defined task contexts in simulation. In this paper, we evaluate the performance of our method in an industrial grasping scenario. We first analyze the uncertainties of the vision system used, which are the major source of grasping error. We then perform experiments both in simulation and in a real setting. The experiments confirm the validity of our approach, and the computed finger design was employed in a real industrial assembly scenario.
This paper introduces a novel network architecture for the classification of large-scale point clouds. The network is used to classify metadata from cuneiform tablets; as more than half a million tablets remain unprocessed, this can help create an overview of the collection. The network is tested on a comparison dataset and obtains state-of-the-art performance. We also introduce new metadata classification tasks on which the network shows promising results. Finally, we introduce the novel Maximum Attention visualization, demonstrating that the trained network focuses on the intended features. Code available at https://github.com/fhagelskjaer/dlc-cuneiform