In retinal surgery, surgeons face difficulties such as indirect visualization of surgical targets, physiological tremor, and lack of tactile feedback, which increase the risk of retinal damage caused by incorrect surgical gestures. In this context, intra-ocular proximity sensing has the potential to overcome current technical limitations and increase surgical safety. In this paper we present a system for detecting unintentional collisions between surgical tools and the retina using the visual feedback provided by the ophthalmic stereo microscope. Using stereo images, proximity between surgical tools and the retinal surface can be detected when their relative stereo disparity is small. For this purpose, we developed a system composed of two modules. The first is a module for tracking the surgical tool position in both stereo images. The second is a disparity tracking module for estimating a stereo disparity map of the retinal surface. Both modules were specially tailored to cope with the challenging visualization conditions in retinal surgery. The potential clinical value of the proposed method is demonstrated by extensive testing using a silicone phantom eye and recorded in vivo rabbit data.
Many robotics tasks require a robot to share the same workspace with humans. In such settings, it is important that the robot behaves in a way that does not cause distress to humans in the workspace. In this paper, we address the problem of designing robot controllers that minimize the stress caused by the robot while performing a given task. We present a novel, data-driven algorithm which computes human-friendly trajectories. The algorithm utilizes biofeedback measurements and combines a set of geometric controllers to achieve human friendliness. We evaluate the comfort level of the human using a Galvanic Skin Response (GSR) sensor. We present results from a human tracking task, in which the robot is required to stay within a specified distance of the human without inducing high stress levels.
Our pilot study suggests that QE can be used to generate precise 3D reconstructions of airways. This technique is atraumatic, does not require ionizing radiation, and integrates easily into standard airway assessment protocols. We conjecture that this technology will be useful for staging airway disease and assessing surgical outcomes.
We address the problem of propagating a piece of information among robots scattered in an environment. Initially, a single robot has the information. This robot searches for other robots to pass it along. When a robot is discovered, it can participate in the process by searching for other robots. Since our motivation for studying this problem is to form an ad-hoc network, we call it the Network Formation Problem. In this paper, we study the case where the environment is a rectangle and the robots' locations are unknown but chosen uniformly at random. We present an efficient network formation algorithm, Stripes, and show that its expected performance is within a logarithmic factor of the optimal performance. We also compare Stripes with an intuitive network formation algorithm in simulations. The feasibility of Stripes is demonstrated with a proof-of-concept implementation.
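A stripe-style exhaustive sweep of a rectangle can be sketched as a boustrophedon (back-and-forth) path whose stripe spacing matches the searcher's sensing diameter, so every point of the rectangle is passed within sensing range. This is only an illustration of the general stripe-sweep idea under an assumed disk sensing model; it is not the paper's Stripes algorithm.

```python
def stripe_waypoints(width, height, sensing_radius):
    """Waypoints of a back-and-forth stripe sweep covering a width x height
    rectangle, assuming any robot within sensing_radius of the path is found.
    """
    waypoints = []
    y = sensing_radius          # first stripe covers the band [0, 2r]
    left_to_right = True
    while y < height + sensing_radius:
        row = min(y, height)    # clamp the last stripe to the boundary
        if left_to_right:
            waypoints += [(0.0, row), (width, row)]
        else:
            waypoints += [(width, row), (0.0, row)]
        y += 2 * sensing_radius # adjacent stripes just touch, no gaps
        left_to_right = not left_to_right
    return waypoints
```

The path length grows roughly as `width * height / (2 * sensing_radius)`, which is the usual cost of exhaustive coverage of a rectangle.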
Humans rely on a finely tuned ability to recognize and adapt to socially relevant patterns in their everyday face-to-face interactions. This allows them to anticipate the actions of others, coordinate their behaviors, and create shared meaning to communicate. Social robots must likewise be able to recognize and perform relevant social patterns, including interactional synchrony, imitation, and particular sequences of behaviors. We use existing empirical work in the social sciences and observations of human interaction to develop nonverbal interactive capabilities for a robot in the context of shadow puppet play, where people interact through shadows of hands cast against a wall. We show how information theoretic quantities can be used to model interaction between humans and to generate interactive controllers for a robot. Finally, we evaluate the resulting model in an embodied human-robot interaction study. We show the benefit of modeling interaction as a joint process rather than modeling individual agents.
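One common information theoretic quantity for capturing interaction as a joint process is the mutual information between two agents' behavior streams: it is zero when the agents act independently and large when their behaviors are coupled. A minimal plug-in estimator over discretized behavior labels could look as follows (the specific quantities and estimators used in the paper are not given here, so this is an assumed illustration).

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits between two aligned streams of
    discrete behavior labels (e.g. quantized gesture categories).
    """
    n = len(xs)
    px = Counter(xs)            # marginal counts of X
    py = Counter(ys)            # marginal counts of Y
    pxy = Counter(zip(xs, ys))  # joint counts of (X, Y)
    return sum(
        (c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )
```

For perfectly imitative streams the estimate equals the entropy of one stream; for independent streams it tends toward zero (up to finite-sample bias of the plug-in estimator).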