Motion capture setups are used in numerous fields. Studies based on motion capture data can be found in biomechanics, sport science, and animal science. Clinical studies include gait analysis as well as balance, posture, and motor control. Robotic applications encompass object tracking, and everyday applications include entertainment and augmented reality. Still, few studies investigate the positioning performance of motion capture setups. In this paper, we study the positioning performance of one major player in optoelectronic marker-based motion capture: the Vicon system. Our protocol includes evaluations of both static and dynamic performance. Mean error as well as positioning variability are studied with calibrated ground-truth setups that do not rely on other motion capture modalities. We introduce a new setup that enables directly estimating the absolute positioning accuracy in dynamic experiments, in contrast to state-of-the-art works that rely on inter-marker distances. The system performs well in static experiments, with a mean absolute error of 0.15 mm and a variability lower than 0.025 mm. Our dynamic experiments were carried out at speeds found in real applications. Our work suggests that the system error is less than 2 mm. We also found that marker size and Vicon sampling rate must be carefully chosen with respect to the speeds encountered in the application in order to reach optimal positioning performance, which reached 0.3 mm in our dynamic study.
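As a minimal sketch of the kind of static-accuracy evaluation the abstract describes, the mean absolute error and variability of measured marker positions can be computed against calibrated ground-truth coordinates. All numbers and variable names below are illustrative, not taken from the paper:

```python
import numpy as np

# Hypothetical data: repeated measurements of one static marker by the
# motion capture system, versus its calibrated ground-truth position (mm).
measured = np.array([[100.12, 50.03, 10.01],
                     [100.15, 50.01, 10.02],
                     [100.13, 50.02, 10.00]])
ground_truth = np.array([100.0, 50.0, 10.0])

# Per-sample Euclidean distance between measurement and ground truth.
errors = np.linalg.norm(measured - ground_truth, axis=1)

mean_abs_error = errors.mean()      # mean absolute positioning error
variability = errors.std(ddof=1)    # spread of the error across samples

print(f"mean error: {mean_abs_error:.3f} mm, variability: {variability:.3f} mm")
```

The same per-sample error computation extends to the dynamic case once a time-synchronized ground-truth trajectory is available, which is what the paper's proposed setup provides.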
Real-time vision-based navigation is a difficult task, largely due to the limited optical properties of the single cameras usually mounted on robots. Multiple-camera systems such as polydioptric sensors provide more efficient and precise solutions for autonomous navigation. They are particularly suitable for motion estimation because they allow the problem to be formulated as a linear optimization. These sensors capture visual information in a more complete form, the plenoptic function, which encodes the spatial and temporal light radiance of the scene. Polydioptric sensors are rarely used in robotics because they are usually thought to increase the amount of data produced and to require more computational power. This paper shows that, if designed properly, these cameras provide more accurate estimation results in mobile robot navigation. It also shows that a plenoptic vision sensor with per-camera resolutions ranging from 3 × 3 to 40 × 30 pixels provides higher accuracy than a mono-SLAM system running on a 320 × 240 pixel camera. The paper also gives a complete scheme for designing usable real-time plenoptic cameras for mobile robotics applications by establishing the link between velocity, resolution, and motion estimation accuracy. Finally, experiments on a mobile robot are presented, allowing a comparison between optimal plenoptic visual sensors and single high-resolution cameras. Estimation with the plenoptic sensor is more accurate than with a monocular high-definition camera, with a processing time 100 times lower.
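The linear formulation mentioned above can be illustrated with a toy least-squares motion estimate: each sub-camera contributes a linear constraint on the robot's velocity, and stacking constraints from many viewpoints gives an overdetermined linear system. The planar-motion assumption, the measurement model, and all numbers here are illustrative, not the paper's actual formulation:

```python
import numpy as np

# Toy model: sub-camera i observes a flow value b[i] that is a known
# linear function a_i . v of the planar robot velocity v = (vx, vy).
# Stacking all constraints yields A v = b, solvable by least squares --
# the kind of linear optimization a multi-view sensor enables.
rng = np.random.default_rng(0)
true_v = np.array([0.5, -0.2])              # ground-truth velocity (m/s)

A = rng.normal(size=(12, 2))                # one row per sub-camera constraint
b = A @ true_v + rng.normal(scale=1e-3, size=12)  # noisy flow measurements

v_est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(v_est)
```

More constraints from more sub-cameras average out the per-measurement noise, which is one intuition for why a low-resolution multi-view sensor can outperform a single high-resolution camera on this task.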