Accurately estimating the 3D position of underwater objects is of great interest in research on marine animals. An inherent problem of 3D reconstruction of underwater positions is the presence of refraction, which invalidates the assumption of a single viewpoint. Three ways of performing 3D reconstruction of underwater objects are compared in this work: an approach relying solely on in-air camera calibration, an approach with the camera calibration performed under water, and an approach based on ray tracing with Snell's law. As expected, the in-air camera calibration proved to be the most inaccurate, as it does not take refraction into account. The precision of the estimated 3D positions based on the underwater camera calibration and the ray-tracing-based approach were, on the other hand, almost identical. However, the ray-tracing-based approach is found to be advantageous, as it is far more flexible in terms of the calibration procedure due to the decoupling of the intrinsic and extrinsic camera parameters.
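The core of the ray-tracing approach is refracting each camera ray at the air–water interface. A minimal 2D sketch of that step, using the standard vector form of Snell's law (illustrative only; the function name and the flat-interface setup are assumptions, not the authors' implementation):

```python
import math

def refract_2d(d, n, n1, n2):
    """Refract a unit 2D direction d at an interface with unit normal n
    (pointing into the incident medium), per Snell's law.
    n1, n2 are the refractive indices of the incident and transmitting
    media. Returns the refracted unit direction, or None on total
    internal reflection."""
    cos_i = -(d[0] * n[0] + d[1] * n[1])        # cosine of incidence angle
    r = n1 / n2                                 # index ratio
    k = 1.0 - r * r * (1.0 - cos_i * cos_i)
    if k < 0.0:                                 # total internal reflection
        return None
    s = r * cos_i - math.sqrt(k)
    return (r * d[0] + s * n[0], r * d[1] + s * n[1])

# A ray entering water (n2 ≈ 1.33) from air (n1 = 1.0) at 30° from the normal:
theta_i = math.radians(30.0)
d = (math.sin(theta_i), -math.cos(theta_i))     # downward-travelling ray
t = refract_2d(d, (0.0, 1.0), 1.0, 1.33)
# The refracted direction bends toward the normal: sin(theta_t) = sin(theta_i) / 1.33
```

Tracing such refracted rays from two or more cameras and intersecting them yields the underwater 3D position without assuming a single viewpoint.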
We propose a novel multi-pose loss function to train a neural network for 6D pose estimation, using synthetic data and evaluating it on real images. Our loss is inspired by the VSD (Visible Surface Discrepancy) metric and relies on a differentiable renderer and CAD models. This novel multi-pose approach produces multiple weighted pose estimates to avoid getting stuck in local minima. Our method resolves pose ambiguities without using predefined symmetries and is trained only on synthetic data. We test on real-world RGB images from the T-LESS dataset, which contains highly symmetric objects common in industrial settings. We show that our solution can replace the codebook in a state-of-the-art approach, which so far has had the shortest inference time in the field. Our approach reduces inference time further while a) avoiding discretization, b) requiring a much smaller memory footprint and c) improving pose recall.
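The multi-hypothesis weighting idea can be sketched in isolation. Below, a soft-min over per-hypothesis errors keeps several plausible poses alive for ambiguous (symmetric) views instead of committing to one; this is only an illustration of the weighting principle, with scalar errors standing in for the paper's renderer-based VSD-style discrepancy:

```python
import math

def multi_hypothesis_loss(errors, beta=1.0):
    """Soft-min-weighted loss over pose hypotheses: hypotheses with
    lower error receive exponentially higher weight, so the network is
    not penalized for keeping several symmetric poses plausible.
    (Illustrative sketch; names and the scalar-error setup are
    assumptions, not the paper's actual loss.)"""
    weights = [math.exp(-beta * e) for e in errors]
    z = sum(weights)
    weights = [w / z for w in weights]
    loss = sum(w * e for w, e in zip(weights, errors))
    return loss, weights

# Three hypotheses for a symmetric object: two near-correct, one wrong.
loss, w = multi_hypothesis_loss([0.1, 0.12, 2.5], beta=5.0)
# The two good hypotheses share nearly all the weight; the bad one is ignored.
```

With hard (argmin) selection instead of soft weights, gradients would flow through only one hypothesis per step, which is exactly the local-minimum behaviour the weighted formulation avoids.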
Purpose: Advances in artificial intelligence have reached a level where autonomous systems are becoming increasingly popular as a way to aid people in their everyday life. Such intelligent systems may be especially beneficial for people struggling to complete common everyday tasks, such as individuals with movement-related disabilities. The focus of this paper is hence to review recent work in using computer vision for semi-autonomous control of assistive robotic manipulators (ARMs). Methods: Four databases were searched using a block search, yielding 257 papers, which were reduced to 14 papers after applying various filtering criteria. Each paper was reviewed with a focus on the hardware used, the autonomous behaviour achieved using computer vision and the scheme for semi-autonomous control of the system. Each of the reviewed systems was also characterized by grading its level of autonomy on a pre-defined scale. Conclusions: A recurring issue in the reviewed systems was the inability to handle arbitrary objects. This makes the systems unlikely to perform well outside a controlled environment, such as a lab. This issue could be addressed by having the systems recognize good grasping points or primitive shapes instead of specific pre-defined objects. Most of the reviewed systems also used a rather simple strategy for semi-autonomous control, switching between full manual and full automatic control. An alternative could be a control scheme relying on adaptive blending, which could provide a more seamless experience for the user.
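The adaptive blending mentioned in the conclusion is commonly realized as a convex combination of the user's command and the autonomous controller's command. A minimal sketch (the function name, linear blend, and confidence signal are assumptions for illustration, not taken from any reviewed system):

```python
def blend_command(u_user, u_auto, confidence):
    """Adaptive blending for semi-autonomous control: the executed
    command is a convex combination of the user's input and the
    autonomous controller's output, weighted by the system's confidence
    in its goal prediction (0 = full manual, 1 = full autonomous)."""
    alpha = max(0.0, min(1.0, confidence))      # clamp to [0, 1]
    return [(1.0 - alpha) * u + alpha * a for u, a in zip(u_user, u_auto)]

# With 50% confidence, the robot nudges the user's 3-axis velocity toward its plan:
cmd = blend_command([0.2, 0.0, 0.1], [0.0, 0.4, 0.1], 0.5)
```

Unlike hard switching between manual and automatic modes, the blend degrades gracefully: as confidence drops, authority returns smoothly to the user.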
Spinal cord injury can leave the affected individual severely disabled, with a low level of independence and quality of life. Assistive upper-limb exoskeletons are one of the solutions that can enable an individual with tetraplegia (paralysis in both arms and legs) to perform simple activities of daily living by mobilizing the arm. Providing an efficient user interface that offers full continuous control of such a device—safely and intuitively—with multiple degrees of freedom (DOFs) still remains a challenge. In this study, a control interface for an assistive upper-limb exoskeleton with five DOFs based on an intraoral tongue-computer interface (ITCI) for individuals with tetraplegia was proposed. Furthermore, we evaluated eyes-free use of the ITCI for the first time and compared two tongue-operated control methods, one based on tongue gestures and the other based on dynamic virtual buttons and a joystick-like control. Ten able-bodied participants tongue-controlled the exoskeleton for a drinking task with and without visual feedback on a screen in three experimental sessions. As a baseline, the participants performed the drinking task with a standard gamepad. The results showed that it was possible to control the exoskeleton with the tongue even without visual feedback and to perform the drinking task at 65.1% of the speed of the gamepad. In a clinical case study, an individual with tetraplegia further succeeded in fully controlling the exoskeleton and performed the drinking task only 5.6% slower than the able-bodied group. This study demonstrated the first single-modal control interface that can enable individuals with complete tetraplegia to fully and continuously control a five-DOF upper-limb exoskeleton and perform a drinking task after only 2 h of training. The interface was used both with and without visual feedback.
Wheelchair-mounted upper-limb exoskeletons offer an alternative way to support disabled individuals in their activities of daily living (ADL). Key challenges in exoskeleton technology include innovative mechanical design and implementation of a control method that can assure a safe and comfortable interaction between the human upper limb and the exoskeleton. In this article, we present the mechanical design of a four-degrees-of-freedom (DOF) wheelchair-mounted upper-limb exoskeleton. The design takes advantage of a non-backdrivable mechanism that can hold the output position without energy consumption and provide assistance to completely paralyzed users. Moreover, PD-based trajectory-tracking control is implemented to enhance the performance of the human-exoskeleton system for two different tasks. Preliminary results are provided to show the effectiveness and reliability of the proposed design for physically disabled people.
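The PD trajectory-tracking idea can be written down in a few lines: the commanded joint torque is proportional to the position error plus the velocity error along a reference trajectory. A minimal per-step sketch (gains, units and any gravity or friction compensation for the actual 4-DOF exoskeleton are not specified in the abstract and are illustrative here):

```python
def pd_step(q, dq, q_des, dq_des, kp, kd):
    """One control step of PD trajectory tracking over a list of joints:
    tau_i = kp * (q_des_i - q_i) + kd * (dq_des_i - dq_i).
    q/dq are current joint positions/velocities; q_des/dq_des come from
    the reference trajectory; kp, kd are scalar gains (sketch only)."""
    return [kp * (qd - qi) + kd * (dqd - dqi)
            for qi, dqi, qd, dqd in zip(q, dq, q_des, dq_des)]

# A single joint lagging 0.1 rad behind the reference and moving too slowly:
tau = pd_step(q=[0.5], dq=[0.2], q_des=[0.6], dq_des=[0.4], kp=20.0, kd=2.0)
# tau[0] = 20 * 0.1 + 2 * 0.2 = 2.4
```

In a non-backdrivable design such as this one, the actuator holds position with zero torque when the error vanishes, so the controller only needs to spend energy while tracking motion.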