We propose a novel method for robust, non-contact, six-degrees-of-freedom (6-DOF) motion sensing of an arbitrary rigid body using multi-view laser Doppler measurements. The proposed method reconstructs the 6-DOF motion from fragmentary velocities measured on the surface of the target. Unlike conventional contactless motion-sensing methods, it is robust against feature-poor objects and environments. From the formulation of motion reconstruction from fragmentary velocities, we show that at least three viewpoints are essential for 6-DOF motion reconstruction. Further, we show that the condition number of the measurement matrix can serve as a measure of system accuracy, and we perform numerical simulations to find an appropriate system configuration. The proposed method was implemented using a laser Doppler velocimeter, a galvanometer scanner, and mirrors. We describe the calibration procedure, the choice of coordinate system, and the calculation pipeline, all of which contribute to the accuracy of the proposed system. For evaluation, the proposed system is compared with an offline chessboard-tracking scheme using a 500 fps camera. Experiments measuring six different motion patterns demonstrate the robustness of the proposed method against different kinds of motion. We also evaluate the system at different distances and velocities. The mean error is less than 1.3 deg/s in rotation and 3.2 mm/s in translation, and it remains robust against changes in distance and velocity. In terms of speed, the proposed method achieves a throughput of approximately 250 Hz with a latency of approximately 20 ms.
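To make the measurement model concrete, below is a minimal NumPy sketch of this kind of reconstruction under the standard rigid-body assumption that a beam along unit direction d_i hitting surface point r_i reports the line-of-sight speed s_i = d_i · (v + ω × r_i). The function names, point/direction inputs, and plain least-squares solver are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def build_measurement_matrix(points, directions):
    """Stack rows [d_i, r_i x d_i]: for a rigid body with translational
    velocity v and angular velocity w, the beam along unit direction d_i
    hitting surface point r_i measures
        s_i = d_i . (v + w x r_i) = d_i . v + (r_i x d_i) . w
    (the second form follows from the scalar triple product)."""
    rows = [np.concatenate([d, np.cross(r, d)])
            for r, d in zip(points, directions)]
    return np.asarray(rows)

def reconstruct_twist(points, directions, speeds):
    """Least-squares 6-DOF estimate (v, w) from >= 6 Doppler readings,
    plus the condition number the abstract uses as an accuracy measure."""
    A = build_measurement_matrix(points, directions)
    twist, *_ = np.linalg.lstsq(A, np.asarray(speeds), rcond=None)
    return twist[:3], twist[3:], np.linalg.cond(A)

# Synthetic check: a known twist should be recovered exactly (noise-free).
rng = np.random.default_rng(0)
pts = rng.normal(size=(8, 3))
dirs = rng.normal(size=(8, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
v_true, w_true = np.array([3.0, -1.0, 2.0]), np.array([0.1, 0.4, -0.2])
s = np.array([d @ (v_true + np.cross(w_true, r)) for r, d in zip(pts, dirs)])
v_est, w_est, kappa = reconstruct_twist(pts, dirs, s)
```

Stacking the rows [d_i, r_i × d_i] also clarifies the viewpoint requirement: with too few distinct beam directions the 6-column matrix becomes rank-deficient, and the condition number of A quantifies how close a given configuration is to that degeneracy.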
The visual appearance of an object can be disguised by projecting virtual shading onto it, as if overwriting its material. However, conventional projection-mapping methods depend on markers on the target or a model of the target's shape, which limits the types of targets and the visual quality. In this paper, we exploit the fact that the shading of a virtual material in a virtual scene is characterized mainly by the surface normals of the target, and we attempt to realize markerless and modelless projection mapping for material representation. To handle diverse targets, including static, dynamic, rigid, soft, and fluid objects, without any interference with visible light, we measure surface normals in the infrared region in real time and project material shading using a novel high-speed screen-space texturing algorithm. Our system achieved 500-fps high-speed projection mapping of a uniform material and a tileable-textured material with millisecond-order latency, realizing dynamic and flexible material representation for unknown objects. We also demonstrated advanced applications that show the expressive shading performance of our technique.
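As a rough illustration of normal-driven shading (not the paper's actual 500-fps texturing algorithm), the sketch below computes per-pixel Lambertian shading in screen space from a measured normal map; `light_dir` and `albedo` are assumed inputs describing the virtual material and lighting.

```python
import numpy as np

def shade_lambert(normals, light_dir, albedo):
    """Per-pixel Lambertian shading in screen space.
    normals: (H, W, 3) unit surface normals (e.g. from the IR measurement);
    light_dir: (3,) direction toward the virtual light; albedo: (3,) RGB."""
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    ndotl = np.clip(np.einsum('hwc,c->hw', normals, l), 0.0, 1.0)
    return np.asarray(albedo) * ndotl[..., None]  # (H, W, 3) projector image
```

Because each output pixel depends only on the normal measured at that pixel, shading of this form needs neither markers nor a shape model, which is the property the paper builds on.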
We have developed a high-speed vision chip using 3D stacking technology to address the increasing demand for high-speed vision chips in diverse applications. The chip is a 1/3.2-inch, 1.27-Mpixel, 500-fps sensor (0.31 Mpixel at 1000 fps with 2 × 2 binning) with 3D-stacked column-parallel analog-to-digital converters (ADCs) and 140-giga-operations-per-second (GOPS) programmable single-instruction-multiple-data (SIMD) column-parallel processing elements (PEs) for new sensing applications. The 3D-stacked structure and column-parallel processing architecture achieve high sensitivity, high resolution, and high-accuracy object positioning.
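For illustration only, a 2 × 2 binning step like the one below quarters the pixel count (1.27 Mpixel to roughly 0.31 Mpixel), which is the trade that lets the chip double its frame rate to 1000 fps; whether the hardware sums charge in the pixel or combines values digitally is not stated in the abstract, so simple averaging is assumed here.

```python
import numpy as np

def bin_2x2(frame):
    """Average each 2x2 pixel block, quartering the pixel count
    (e.g. 1.27 Mpixel -> ~0.31 Mpixel) so readout can run at twice the rate.
    frame: (H, W) grayscale image; odd trailing rows/columns are dropped."""
    h, w = frame.shape[0] // 2 * 2, frame.shape[1] // 2 * 2
    return frame[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```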