Abstract. The impressive development of medical imaging technology over the last decades has provided physicians with an increasing amount of patient-specific anatomical and functional data. In addition, the increasing use of non-ionizing real-time imaging, in particular ultrasound and optical imaging, during surgical procedures has created the need for the design and development of new visualization and display technology that allows physicians to take full advantage of rich sources of heterogeneous preoperative and intraoperative data. During the 1990s, medical augmented reality was proposed as a paradigm bringing new visualization and interaction solutions into perspective. This paper not only reviews the related literature but also establishes the relationships between subsets of this body of work in medical augmented reality. It finally discusses the remaining challenges for this young and active multidisciplinary research community.
Abstract. The idea of in-situ visualization for surgical procedures has been widely discussed in the community [1,2,3,4]. While tracking technology nowadays offers sufficient accuracy, and visualization devices have been developed that fit seamlessly into the operational workflow [1,3], one crucial problem remains, which was discussed already in the first paper on medical augmented reality [4]. Even though the data is presented at the correct place, the physician often perceives the spatial position of the visualization to be closer or farther than it actually is because of the virtual/real overlay. This paper describes and evaluates novel visualization techniques that are designed to overcome the misleading depth perception of virtual images trivially superimposed on the real view. We invited 20 surgeons to evaluate seven different visualization techniques using a head-mounted display (HMD). The evaluation was divided into two parts. In the first part, the depth perception of each kind of visualization is evaluated quantitatively. In the second part, the visualizations are evaluated qualitatively with regard to user-friendliness and intuitiveness. This evaluation, with a relevant number of surgeons using a state-of-the-art system, is meant to guide future research and development on medical augmented reality.
Abstract. Workflow recovery is crucial for designing context-sensitive service systems in future operating rooms. Abstract knowledge about the actions being performed is particularly valuable in the OR. This knowledge can be used for many applications, such as optimizing the workflow, recovering average workflows for guiding and evaluating training surgeons, automatic report generation, and ultimately for monitoring in a context-aware operating room. This paper describes a novel way of automatically recovering the surgical workflow. Our algorithms perform this task without an implicit or explicit model of the surgery. This is achieved by synchronizing multidimensional state vectors of signals recorded in different operations of the same type. We use an enhanced version of the dynamic time warping algorithm to calculate the temporal registration. The algorithms have been tested on 17 signals from six different surgeries of the same type. The results on this dataset are very promising because the algorithms register the steps of the surgery correctly to within seconds, which is our sampling rate. Our software visualizes the temporal registration by displaying the videos of different surgeries of the same type, with varying durations, precisely synchronized to each other. The synchronized videos of one surgery are either slowed down or sped up in order to show the same steps as the ones presented in the videos of the other surgery.
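The core of the temporal registration described above is dynamic time warping, which aligns two sequences of different lengths by finding a minimum-cost monotonic correspondence between their samples. The following is a minimal sketch of plain DTW on 1-D signals, not the enhanced multidimensional version used in the paper; all names and the absolute-difference cost are illustrative assumptions.

```python
def dtw(a, b):
    """Return (total cost, warping path) aligning sequences a and b.

    The path is a list of index pairs (i, j) meaning a[i] corresponds
    to b[j]; repeated indices model one recording being slower than
    the other, which is how two surgeries of differing duration can be
    played back in sync.
    """
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = minimal cumulative cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])  # local distance (illustrative)
            cost[i][j] = d + min(cost[i - 1][j],      # a advances alone
                                 cost[i][j - 1],      # b advances alone
                                 cost[i - 1][j - 1])  # both advance
    # Backtrack from (n, m) to recover the temporal registration.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        _, i, j = min((cost[i - 1][j - 1], i - 1, j - 1),
                      (cost[i - 1][j], i - 1, j),
                      (cost[i][j - 1], i, j - 1))
    return cost[n][m], path[::-1]

# Two recordings of the "same" progression at different speeds:
total, path = dtw([0, 1, 2, 3, 2, 1], [0, 1, 1, 2, 3, 3, 2, 1])
```

Here the second sequence is a slowed-down copy of the first, so the optimal alignment has zero total cost and the path duplicates indices of the shorter sequence where the longer one lingers.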
Fig. 6. The virtual mirror, penetrating the 3D virtual space, can reflect (a) surfaces or (c) rendered volumes, providing desired views of the 3D object from any viewpoint. In augmented laparoscopic surgery, the virtual mirror can provide additional views, resolving (b) the 3D ambiguities of 2D projections. It also reflects the virtual models of tracked surgical instruments, further improving hand-eye coordination.
Abstract. Several visualization methods for intraoperative navigation systems have been proposed in the past. In standard slice-based navigation, three-dimensional imaging data is visualized on a two-dimensional user interface in the operating room. Another technology is in-situ visualization, i.e., the superimposition of imaging data directly into the view of the surgeon, spatially registered with the patient. Thus, the three-dimensional information is represented on a three-dimensional interface. We created a hybrid navigation interface combining an augmented reality visualization system, which is based on a stereoscopic head-mounted display, with a standard two-dimensional navigation interface. Using an experimental setup, trauma surgeons performed a drilling task using the standard slice-based navigation system, different visualization modes of the augmented reality system, and the combination of both. The integration of a standard slice-based navigation interface into an augmented reality visualization overcomes the shortcomings of both systems.