The OntoSPM Collaborative Action has been in operation for 24 months, with a growing dedicated membership. Its main result is a modular ontology, undergoing constant updates and extensions, based on the experts' suggestions. It remains an open collaborative action, which always welcomes new contributors and applications.
Automation of surgical processes (SPs) is a highly complex yet strongly demanded capability among medical experts. Currently, only surgical tools with advanced sensory and diagnostic capabilities are available. A major criticism of newly developed instruments is that they do not fit into the existing medical workflow, often creating more annoyance than benefit for the surgeon. The first step toward streamlined integration of computer technologies is gaining a better understanding of the SP. Surgical ontologies provide a generic platform for describing the elements of surgical procedures. Surgical Process Models (SPMs) built on top of these ontologies have the potential to accurately represent the surgical workflow. SPMs offer the opportunity to use ontological terms as the basis of automation, allowing a developed algorithm to integrate easily into the surgical workflow and to be applied wherever the linked ontological term appears in the workflow. In this work, as an example of this concept, the subtask-level ontological term “blunt dissection” was targeted for automation. We implemented a computer-vision-driven approach to demonstrate that automation at this task level is feasible. The algorithm was tested on an experimental silicone phantom as well as in several ex vivo environments. The implementation used the da Vinci surgical robot, controlled via the da Vinci Research Kit (DVRK), relying on a code base shared among the DVRK institutions. It is believed that developing and linking further building blocks of lower-level surgical subtasks could lead to the introduction of automated soft-tissue surgery. In the future, the building blocks could be individually unit tested, leading to incremental automation of the domain. This framework could potentially standardize surgical performance, eventually improving patient outcomes.
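The core idea of attaching automated routines to ontological terms, so they run wherever the term appears in a workflow, can be illustrated with a small dispatch registry. This is a minimal sketch under assumed names (`REGISTRY`, `automates`, `run_workflow` are all hypothetical), not the authors' actual DVRK implementation:

```python
from typing import Callable, Dict, List

# Registry mapping ontological subtask terms to automation routines.
REGISTRY: Dict[str, Callable[[], str]] = {}

def automates(term: str):
    """Register a routine as the automation linked to an ontological term."""
    def wrap(fn: Callable[[], str]) -> Callable[[], str]:
        REGISTRY[term] = fn
        return fn
    return wrap

@automates("blunt dissection")
def blunt_dissection() -> str:
    # A real implementation would drive the robot with vision feedback;
    # here we only return a status string for illustration.
    return "executing blunt dissection"

def run_workflow(steps: List[str]) -> List[str]:
    """Execute the linked routine wherever an automated term appears;
    all other steps remain manual."""
    return [REGISTRY[s]() if s in REGISTRY else f"manual: {s}"
            for s in steps]
```

In this scheme, adding a new automated building block is just registering another routine; the workflow description itself stays purely ontological.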
With the introduction of telerobotic systems, it has become possible for surgeons to perform medical operations at greater physical distances from their patients. Whether in an adjacent room or on another continent, these systems enable greater flexibility in mitigating adverse surgical conditions. These ideas originated in space research, where further needs emerged to develop robots that could resolve surgical cases previously not treatable. The concept of providing surgical aid to astronauts in outer space led to telerobotic surgical care on Earth, benefiting around 1 million patients per year. As the field continues to develop and becomes more prevalent, it is worth looking back at the origins of the technology and the early days of robotic telesurgery. While many of the early prototypes and technologies never reached patients, their engineering components and innovative concepts directly led to the birth of modern surgical robots.
Fiducial localization in volumetric images is a common task performed by image-guided navigation and augmented reality systems. These systems often rely on fiducials for image-space to physical-space registration, or as easily identifiable structures for registration validation purposes. Automated methods for fiducial localization in volumetric images are available. Unfortunately, these methods are not generalizable, as they explicitly utilize strong a priori knowledge such as fiducial intensity values in CT, or known spatial configurations, as part of the algorithm. Thus, manual localization has remained the most general approach, readily applicable across fiducial types and imaging modalities. The main drawbacks of manual localization are the variability and accuracy errors associated with visual localization. We describe a semi-automatic fiducial localization approach that combines the strengths of the human operator and an underlying computational system. The operator identifies the rough location of the fiducial, and the computational system accurately localizes it via intensity-based registration using the mutual information similarity measure. This approach is generic, implicitly accommodating all fiducial types and imaging modalities. The framework was evaluated using five fiducial types and three imaging modalities. We obtained a maximal localization accuracy error of 0.35 mm, with a maximal precision variability of 0.5 mm.
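The operator-plus-refinement scheme can be sketched in 2D: the operator supplies a rough pixel location, and the system searches a small window around it for the offset whose image patch maximizes mutual information with a fiducial template. This is an illustrative exhaustive-search sketch (the function names and the discrete window search are assumptions, not the paper's actual registration pipeline, which operates on volumetric data):

```python
import numpy as np

def mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 16) -> float:
    """Mutual information between two equally shaped intensity arrays,
    estimated from a joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                    # joint distribution
    px = pxy.sum(axis=1, keepdims=True)        # marginal of a
    py = pxy.sum(axis=0, keepdims=True)        # marginal of b
    nz = pxy > 0                               # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def refine_fiducial(image, template, rough, radius=3):
    """Search a small window around the operator's rough pick for the
    patch position maximizing mutual information with the template."""
    th, tw = template.shape
    best, best_mi = rough, -np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = rough[0] + dy, rough[1] + dx
            patch = image[y:y + th, x:x + tw]
            if patch.shape != template.shape:
                continue                       # window fell off the image
            mi = mutual_information(patch, template)
            if mi > best_mi:
                best_mi, best = mi, (y, x)
    return best
```

Because mutual information only measures the statistical dependence of the two intensity distributions, the same refinement works across modalities without assuming particular fiducial intensity values.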
Medical imaging introduced the greatest paradigm change in the history of modern medicine, and ultrasound (US) in particular is becoming the most widespread imaging modality. The integration of digital imaging into the surgical domain opens new frontiers in diagnostics and intervention, and the combination with robotics leads to improved accuracy and targeting capabilities. This paper reviews the state of the art in US-based robotic platforms, identifying the main research and clinical trends, and reviewing current capabilities and limitations. The focus of the study includes non-autonomous US-based systems, US-based automated robotic navigation systems, and US-guided autonomous tools. These areas outline future development, projecting a swarm of new applications in the computer-assisted surgical domain.
No abstract
Surgical Process Modeling is a growing field of biomedical data science, aiming to create and support context-aware surgical systems. As part of this field, novel research intends to provide standardized, formal descriptions of surgical processes. Surgical workflow recordings based on ontologies can provide objective measurements of surgical skill, thus standardizing surgical performance. Comparing the operational phase to the calculated optimal process could allow for new, context-aware surgical training, evaluation, and assistance systems. In this paper, we present a new software tool, named OntoFlow, developed to record ontology-based surgical workflows during clinical practice, with post-event editing and reviewing capabilities. OntoFlow directly accesses the background ontology, and can therefore speed up the process of ontology development. As a surgical workflow reviewing tool, it can also be used for training surgical residents.
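The recording model described above, timestamped workflow phases labeled with terms drawn from a background ontology, plus post-event relabeling, can be sketched as follows. All class and method names here (`WorkflowRecorder`, `begin`, `finish`, `edit`) are hypothetical illustrations, not OntoFlow's actual API:

```python
from dataclasses import dataclass
from typing import List, Optional, Set

@dataclass
class WorkflowEvent:
    term: str                    # ontology term, e.g. "blunt dissection"
    start: float                 # phase start time (seconds)
    end: Optional[float] = None  # phase end time, None while ongoing

class WorkflowRecorder:
    """Records workflow phases, constraining labels to the vocabulary
    of the background ontology, with post-event editing."""

    def __init__(self, vocabulary: Set[str]):
        self.vocabulary = set(vocabulary)
        self.events: List[WorkflowEvent] = []

    def begin(self, term: str, t: float) -> None:
        if term not in self.vocabulary:
            raise ValueError(f"unknown ontology term: {term}")
        self.events.append(WorkflowEvent(term, start=t))

    def finish(self, t: float) -> None:
        self.events[-1].end = t

    def edit(self, index: int, term: str) -> None:
        """Post-event review: relabel a previously recorded phase."""
        if term not in self.vocabulary:
            raise ValueError(f"unknown ontology term: {term}")
        self.events[index].term = term
```

Validating every label against the ontology's vocabulary at record time is what makes the resulting workflow logs directly comparable across operators and institutions.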