SUMMARY: This paper presents the computation of the safe working zone (SWZ) of a parallel manipulator having three degrees of freedom. The SWZ is defined as a continuous subset of the workspace wherein the manipulator does not suffer any singularity and is also free from the issues of link interference and physical limits on its joints. The proposed theory is illustrated via application to two parallel manipulators: a planar 3-R̲RR manipulator and a spatial manipulator, namely, MaPaMan-I. It is also shown how the analyses can be applied to any parallel manipulator having three degrees of freedom, planar or spatial.
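The core of an SWZ computation is checking, over a sampled workspace, where the manipulator stays away from singularities. As a minimal sketch of that idea only: the snippet below scans a joint-space grid and keeps points where the Jacobian determinant stays above a threshold. The determinant used here is the textbook expression for a planar 2R arm with unit links, a stand-in for the 3-DoF manipulators of the paper; the full SWZ method additionally requires joint limits, interference checks, and extracting a connected subset, which this sketch omits.

```python
import math

# Stand-in Jacobian determinant: for a planar 2R arm with unit link lengths,
# det(J) = sin(q2); singularities occur where det(J) = 0.
def jacobian_det(q1, q2):
    return math.sin(q2)

def safe_points(grid, eps=0.1):
    """Keep grid points where |det(J)| stays above eps (away from singularity)."""
    return [(q1, q2) for (q1, q2) in grid if abs(jacobian_det(q1, q2)) > eps]

# Scan a coarse joint-space grid.
grid = [(i * 0.1, j * 0.1) for i in range(32) for j in range(32)]
safe = safe_points(grid)
```

In the actual method, a flood-fill (or similar connectivity pass) over `safe` would then isolate the largest continuous singularity-free region around the home pose.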
The use of robots in medical applications is ever increasing, with many diagnostic and therapeutic robots being developed and implemented in hospitals. Research in bio-robotics has focused on solving complicated problems and automating tasks that require precision and accuracy. There are also many tedious but simple tasks in hospital settings that can be automated easily, benefiting both patients and hospital staff. Wound cleaning is one such task, and designing a system to automate it is the main aim of this paper. The proposed solution consists mainly of an image processing component, which identifies the location and critical parameters of the wound, and a robotic arm, which performs the cleaning. The designed system has been theoretically evaluated and simulated, and is currently being developed and tested. It could form the basis of a completely autonomous wound therapy system, with additional features integrated in the future.
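The abstract does not specify the segmentation method, so the following is only an illustrative sketch of the wound-localization step, assuming the wound shows up as a red-dominant region in an RGB image; the function name and threshold are invented for illustration, and a real system would use calibrated segmentation.

```python
# Hypothetical wound localization: find the bounding box of red-dominant pixels.
# image: list of rows, each row a list of (r, g, b) tuples.
def find_wound_bbox(image, red_thresh=150):
    """Return (top, left, bottom, right) of the red-dominant region, or None."""
    hits = [(y, x) for y, row in enumerate(image)
            for x, (r, g, b) in enumerate(row)
            if r > red_thresh and r > g and r > b]
    if not hits:
        return None
    ys = [y for y, _ in hits]
    xs = [x for _, x in hits]
    return (min(ys), min(xs), max(ys), max(xs))
```

From such a bounding box, critical parameters like wound extent and centre could be derived to drive the arm's cleaning path.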
Redundant robots allow multiple joint configurations for the same end-effector pose by moving only in the null space. Null-space motions are in general not intuitive to predict, in particular for medical personnel. In this work, we present a control concept that allows the operator to focus on the correct end-effector pose during time-critical tasks, e.g. changing the endoscope pose during a surgical intervention, while the shape of the redundant robotic structure is handled autonomously based on previously learnt preferred shapes close to the actual end-effector pose. We investigated the benefit of the proposed learned task space control over naive task space control, which required an operator to manually control a virtual robot in task space and null space independently. In a first user study, we found that learned task space control significantly reduced the effort for operators, as measured by task duration and task load, compared to naive task space control.
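The separation between task-space tracking and autonomous shape handling can be sketched with the standard resolved-rate formulation for redundant robots: the end-effector velocity is tracked exactly, while a secondary joint velocity (here, pulling toward a preferred shape) is projected into the Jacobian's null space so it cannot disturb the end-effector. This is the textbook mechanism, not the paper's specific implementation.

```python
import numpy as np

def redundant_velocity(J, xdot, qdot0):
    """Joint velocity that tracks task velocity xdot exactly, while the
    secondary velocity qdot0 acts only through the null-space projector."""
    J_pinv = np.linalg.pinv(J)
    N = np.eye(J.shape[1]) - J_pinv @ J   # null-space projector of J
    return J_pinv @ xdot + N @ qdot0

# Toy example: 1 task DoF, 2 joints -> a 1-dimensional null space.
J = np.array([[1.0, 1.0]])
qdot = redundant_velocity(J, np.array([0.5]), np.array([1.0, -1.0]))
```

Whatever `qdot0` is (learnt preferred shapes would enter here), the resulting `J @ qdot` equals the commanded task velocity, which is exactly why the operator can ignore the null-space motion.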
Purpose Understanding the properties and aspects of the robotic system is essential to a successful medical intervention, as each system is characterized by different capabilities and limits. Robot positioning is a crucial step in the surgical setup that ensures proper reachability to the desired port locations and facilitates docking procedures. This very demanding task requires much experience to master, especially with multiple trocars, increasing the barrier of entry for surgeons in training. Methods Previously, we demonstrated an Augmented Reality-based system to visualize the rotational workspace of the robotic system and showed that it helps the surgical staff to optimize patient positioning for single-port interventions. In this work, we implemented a new algorithm that allows automatic, real-time robotic arm positioning for multiple ports. Results Our system, based on the rotational workspace data of the robotic arm and the set of trocar locations, can calculate the optimal position of the robotic arm in milliseconds for the positional workspace and in seconds for the rotational workspace, in both virtual and augmented reality setups. Conclusions Following the previous work, we extended our system to support multiple ports to cover a broader range of surgical procedures and introduced the automatic positioning component. Our solution can decrease the surgical setup time and eliminate the need to reposition the robot mid-procedure, and it is suitable both for the preoperative planning step using VR and in the operating room, running on an AR headset.
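The positional-workspace part of such a search can be illustrated with a deliberately simplified model: approximate the arm's reachable volume as a sphere of radius `reach` around the base and grid-search for the base position from which every trocar is reachable with maximal margin. The radius, the candidate grid, and the function name are placeholders; the actual system uses measured rotational-workspace data rather than a sphere.

```python
import math

def best_base_position(trocars, candidates, reach=1.0):
    """Pick the candidate base position maximizing the worst-case reach margin
    over all trocar locations (margin < 0 means some trocar is unreachable)."""
    best, best_margin = None, -math.inf
    for base in candidates:
        margin = min(reach - math.dist(base, t) for t in trocars)
        if margin > best_margin:
            best, best_margin = base, margin
    return best, best_margin
```

Because each candidate is scored with a handful of distance evaluations, even a dense grid is cheap, which is consistent with the millisecond-scale timing reported for the positional workspace.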
We are developing a robotic system for future application in minimally invasive laser osteotomy. This paper presents the mechanical system concept as a macro-milli-micro system and focuses on designing and evaluating the milli-system. The milli-system consists of an articulated tendon-driven robotic endoscope with seven rigid links with an outer diameter of 8 mm, connected by six discrete rotational joints (±30°). These joints can be controlled individually; however, controlling one joint's motion influences all joints located more distally, making joint control an interesting challenge. Controlling each joint as desired will allow positioning the micro-system mounted at the endoscope's tip. The micro-system is itself a robot that will accurately position the laser. The robotic endoscope incorporates a hollow core with a diameter of 4.8 mm that holds a supply channel for the micro-system with the necessary means for actuation and surgical intervention. We demonstrated the functionality of the robotic endoscope in tracking experiments. Despite the joints' mutual influence, the articulated robotic endoscope could be handled successfully and achieved an angular settling error of less than 1° in the individual joints. The overall robotic system's functionality was successfully demonstrated with a time-synchronized joint movement of the macro-system (serial manipulator) and the robotic endoscope.
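The coupling described above, where actuating one joint drags every more-distal joint, invites a simple feed-forward compensation: subtract the predicted drag from each distal command. The sketch below uses a single made-up linear coupling factor `k` purely to illustrate the principle; the real tendon coupling is not claimed to be this simple.

```python
def compensate(commands, k=0.2):
    """commands: desired joint angles, proximal to distal. Returns actuator
    set-points that pre-subtract the drag accumulated from proximal joints."""
    setpoints, drag = [], 0.0
    for q in commands:
        setpoints.append(q - drag)       # cancel drag accumulated so far
        drag += k * setpoints[-1]        # this actuation drags distal joints
    return setpoints

def simulate(setpoints, k=0.2):
    """Toy forward model: each joint sees its set-point plus proximal drag."""
    angles, drag = [], 0.0
    for s in setpoints:
        angles.append(s + drag)
        drag += k * s
    return angles
```

Under this toy model, `simulate(compensate(q))` reproduces the desired angles `q` exactly, which is the behaviour a compensation scheme aims for.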
In robot-assisted surgeries, the surgeon focuses on the surgical tool and its pose, and not on the complete robot’s shape. However, the joints of redundant robots (robots that have more degrees of freedom (DoF) than needed for the positioning of surgical tools) might move in unexpected or undesired ways. Joint motions that lead to collisions with the patient or the environment are safety critical. We assume that the medical personnel in the operating room can best decide whether a planned robot motion comes too close to the patient or not. Therefore, we propose an augmented reality-based solution to interact with the robot during surgical planning and intervention. The tool can be used to command a robot by drawing a trajectory in augmented reality (AR), visualizing the robot movement to check whether it is safe before execution. The proposed solution allows surgeons to plan safe robot motion paths beforehand and adapt them when necessary in situ. As a proof of concept, we implemented and demonstrated the proposed solution on a 7-DoF redundant robot by commanding different trajectories. The control architecture to plan and execute motion for a surgical robot using AR is a key result of this work.
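A pre-execution safety check of the kind implied above can be reduced, in the simplest possible terms, to testing whether any waypoint of the drawn trajectory violates a clearance margin around the patient. Reducing the patient to a sphere and using a fixed margin are illustrative assumptions only; the paper's system relies on human judgement aided by AR visualization rather than this geometric test.

```python
import math

def trajectory_is_safe(waypoints, patient_center, patient_radius, clearance=0.05):
    """True if every waypoint keeps at least `clearance` distance outside a
    sphere approximating the patient (all coordinates in metres)."""
    return all(math.dist(p, patient_center) > patient_radius + clearance
               for p in waypoints)
```

In an AR workflow, a trajectory failing such a check (or judged unsafe by the staff) would be redrawn before any motion is sent to the robot.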