Abstract—Thinking about intelligent robots involves considering how such systems can be enabled to perceive, interpret and act in arbitrary and dynamic environments. While sensor perception and model interpretation concern the robot's internal representation of the world rather passively, robot grasping capabilities are needed to actively execute tasks, modify scenarios and thereby reach versatile goals. These capabilities should also include the generation of stable grasps to safely handle even objects unknown to the robot. We believe that the key to this ability is to select a good grasp depending not on the identification of an object (e.g. as a cup), but on its shape (e.g. as a composition of shape primitives). In this paper, we envelop given 3D data points in primitive box shapes using a fit-and-split algorithm based on an efficient Minimum Volume Bounding Box implementation. Though box shapes cannot approximate arbitrary data precisely, they provide efficient cues for planning grasps on arbitrary objects. We present the algorithm and experiments using the 3D grasping simulator GraspIt! [1].
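The fit-and-split idea can be sketched as follows: fit a bounding box to the point set and keep splitting only while the child boxes enclose the points with noticeably less volume. The sketch below is a minimal, hypothetical illustration that substitutes axis-aligned boxes and a median split plane along the longest axis for the paper's Minimum Volume Bounding Box computation and its best-split search:

```python
def aabb(points):
    """Axis-aligned bounding box of a 3D point set: (lo, hi) corner tuples."""
    lo = tuple(min(p[i] for p in points) for i in range(3))
    hi = tuple(max(p[i] for p in points) for i in range(3))
    return lo, hi

def volume(box):
    """Volume of an axis-aligned box (zero for degenerate, planar boxes)."""
    lo, hi = box
    v = 1.0
    for i in range(3):
        v *= hi[i] - lo[i]
    return v

def fit_and_split(points, depth=0, max_depth=4, min_gain=0.1):
    """Recursively decompose a point set into boxes.

    A split (median plane along the longest axis) is kept only if the two
    child boxes together shrink the enclosed volume by at least min_gain.
    """
    box = aabb(points)
    if depth >= max_depth or len(points) < 4:
        return [box]
    lo, hi = box
    axis = max(range(3), key=lambda i: hi[i] - lo[i])
    coords = sorted(p[axis] for p in points)
    cut = coords[len(coords) // 2]          # median split plane
    left = [p for p in points if p[axis] <= cut]
    right = [p for p in points if p[axis] > cut]
    if not left or not right:
        return [box]
    gain_ok = (volume(aabb(left)) + volume(aabb(right))
               <= (1 - min_gain) * volume(box))
    if gain_ok:
        return (fit_and_split(left, depth + 1, max_depth, min_gain)
                + fit_and_split(right, depth + 1, max_depth, min_gain))
    return [box]
```

On two well-separated clusters this recovers one tight box per cluster; the actual MVBB-based method instead evaluates oriented boxes and several candidate split planes, which is what makes the approximation usable for grasp planning.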
In the field of minimally invasive surgery, one barrier in clinical practice is the limited field of view provided by endoscopic cameras. We propose an image mosaicking approach that extends the field of view for real-time visualization by stitching several video frames. The approach is based on feature tracking and a robust estimation of the image-to-image transformations. We compare its performance to that of a state-of-the-art approach. Our method shows superior accuracy at frame rates of 6.8 to 8.1 frames per second, which allows for real-time visualization of the extended field of view.
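Stitching several frames into one mosaic requires mapping each frame into a common reference frame, which is done by composing the estimated image-to-image transformations along the sequence. A minimal sketch of that accumulation step, assuming 3×3 homographies as the transformation model (the feature tracking and robust estimation that produce the pairwise matrices are taken as given):

```python
def matmul3(A, B):
    """Multiply two 3x3 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply_h(H, pt):
    """Apply a homography to a 2D point (with homogeneous divide)."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

IDENTITY = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

def accumulate(pairwise):
    """Compose frame-to-frame homographies H[k] (frame k+1 -> frame k)
    into frame-to-reference transforms (frame k -> frame 0)."""
    acc = [IDENTITY]
    for H in pairwise:
        acc.append(matmul3(acc[-1], H))
    return acc
```

Each incoming frame is then warped by its accumulated transform and blended into the mosaic; drift in this chained composition is one reason robust pairwise estimation matters.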
Computer assistance in Minimally Invasive Surgery is a very active field of research. Many systems designed for Computer Assisted Surgery require information about the instruments' positions and orientations. Our main focus lies on tracking a laparoscopic ultrasound probe to generate 3D ultrasound volumes. State-of-the-art tracking methods such as optical or electromagnetic tracking systems measure pose with respect to a fixed extra-body coordinate system. This causes inaccuracies in the reconstructed ultrasound volume in the case of patient motion, e.g. due to respiration. We propose attaching an endoscopic camera to the ultrasound probe and calculating the camera motion from the video sequence with respect to the organ surface. We adapt algorithms developed for solving the relative pose problem to recreate the camera path during the ultrasound sweep over the organ. With this image-based motion estimation, camera motion can only be determined up to an unknown scale factor, known as the depth-speed ambiguity. We show how this problem can be overcome in the given scenario by exploiting the fact that the distance of the camera to the organ surface is fixed and known. Preprocessing steps are applied to compensate for endoscopic image quality deficiencies.
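The depth-speed ambiguity means the relative-pose estimate recovers translation and structure only up to a global scale. Because the camera is rigidly mounted on the ultrasound probe, its distance to the organ surface is fixed and known, which pins that scale down. A hedged sketch of the idea (the function names and the mean-depth heuristic are illustrative, not the paper's exact formulation):

```python
def resolve_scale(known_distance, reconstructed_depths):
    """Scale factor as the ratio of the known camera-to-surface distance
    to the mean up-to-scale depth of triangulated surface points."""
    mean_depth = sum(reconstructed_depths) / len(reconstructed_depths)
    return known_distance / mean_depth

def rescale_translation(t, scale):
    """Apply the recovered metric scale to an up-to-scale translation."""
    return tuple(scale * c for c in t)
```

For example, surface points reconstructed at a mean depth of 0.5 scene units with a known 40 mm stand-off give a scale of 80, turning the unit-scale translation of each relative pose into millimetres along the sweep.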