Objective: The focal length of available optical see-through (OST) head-mounted displays (HMDs) is at least 2 m; therefore, during manual tasks, the user's eye cannot keep both the virtual and the real content in focus at the same time. Another perceptual limitation is the vergence-accommodation conflict (VAC), which arises in binocular vision only. This paper investigates the effect of incorrect focus cues on user performance, visual comfort, and workload during the execution of an augmented reality (AR)-guided manual task with one of the most advanced OST HMDs, the Microsoft HoloLens. Methods: An experimental study was designed to investigate the performance of 20 subjects in a connect-the-dots task, with and without the use of AR. The following tests were planned: AR-guided monocular and binocular; naked-eye monocular and binocular. Each trial was analyzed to evaluate accuracy in connecting the dots; NASA Task Load Index and Likert questionnaires were used to assess workload and visual comfort. Results: No statistically significant differences were found in workload or perceived comfort between the AR-guided binocular and monocular tests. User performance was significantly better during the naked-eye tests. No statistically significant differences in performance were found between the monocular and binocular tests. The maximum error in the AR tests was 5.9 mm. Conclusion: Even though there is growing interest in using commercial OST HMDs for guiding high-precision manual tasks, attention should be paid to the limitations of the available technology, which was not designed for the peripersonal space.
Orthopaedic simulators are popular in innovative surgical training programs, where trainees gain procedural experience in a safe and controlled environment. Recent studies suggest that an ideal simulator should combine haptic, visual, and audio technology to create an immersive training environment. This article explores the potential of mixed reality, using the HoloLens, to develop a hybrid training system for orthopaedic open surgery. Hip arthroplasty, one of the most common orthopaedic procedures, was chosen as a benchmark to evaluate the proposed system. Patient-specific anatomical 3D models were extracted from a patient's computed tomography scan to implement the virtual content and to fabricate the physical components of the simulator. Rapid prototyping was used to create synthetic bones. The Vuforia SDK was used to register the virtual and physical content. The Unity3D game engine was employed to develop the software, allowing interaction with the virtual content through head movements, gestures, and voice commands. Quantitative tests were performed to estimate the accuracy of the system by evaluating the perceived position of augmented reality targets. Mean and maximum errors matched the requirements of the target application. Qualitative tests were carried out to evaluate the workload and usability of the HoloLens for our orthopaedic simulator, considering visual and audio perception, interaction, and ergonomics. The perceived overall workload was low, and the self-assessed performance was considered satisfactory. Visual and audio perception and gesture and voice interaction received positive feedback. Postural discomfort and visual fatigue received a non-negative evaluation for a simulation session of 40 minutes. These results encourage the use of mixed reality to implement a hybrid simulator for orthopaedic open surgery. An optimal design of the simulation tasks and equipment setup is required to minimize user discomfort.
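The accuracy assessment described above, which compares the perceived positions of augmented reality targets against their known physical positions, can be sketched as follows. This is a minimal illustration with hypothetical coordinates, not the authors' actual evaluation code.

```python
import numpy as np

# Hypothetical data: known physical target positions vs. positions
# perceived through the HMD (all coordinates in millimetres).
true_targets = np.array([
    [0.0,  0.0, 0.0],
    [10.0, 5.0, 2.0],
    [20.0, -3.0, 7.0],
])
perceived = np.array([
    [0.4,  -0.2, 0.3],
    [10.5,  5.3, 1.6],
    [19.2, -2.5, 7.9],
])

# Per-target Euclidean error, then the mean and maximum errors
# reported as the system's accuracy figures.
errors = np.linalg.norm(perceived - true_targets, axis=1)
mean_error = errors.mean()
max_error = errors.max()
print(f"mean error: {mean_error:.2f} mm, max error: {max_error:.2f} mm")
```

In a real evaluation, the perceived positions would come from users pointing at the AR targets with a tracked stylus rather than from hard-coded values.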
Future work will include Face Validity, Content Validity, and Construct Validity studies to complete the assessment of the hip arthroplasty simulator.
This simulator can be used in the field of abdominal surgery to train students and as a testing environment to assess and validate innovative surgical technologies.
The alignment accuracy obtained demonstrates the feasibility of the approach, which can be adopted in advanced AR simulations, in particular as an aid to the identification and isolation of tubular structures.
Hybrid surgical simulators based on augmented reality (AR) combine the advantages of box trainers and virtual reality simulators. This paper reports the results of a long development effort on a hybrid simulator for laparoscopic cholecystectomy that integrates real and virtual components. We first outline the specifications of the AR simulator and then explain the implementation strategy, which is based on a careful selection of the simulated anatomical components and characterized by real-time tracking of both the target anatomy and the laparoscope. The former is tracked by means of an electromagnetic field generator, while the latter requires an additional camera for video tracking. The new system was evaluated in terms of AR visualization accuracy, realism, and hardware robustness. The results show that the accuracy of the AR visualization is adequate for training purposes. The qualitative evaluation confirms the robustness and realism of the simulator. In conclusion, the proposed AR simulator satisfies all the initial specifications in terms of anatomical appearance, modularity, reusability, minimization of spare-part costs, and the ability to record surgical errors and to track Calot's triangle and the laparoscope in real time. Thus, the proposed system could be an effective training tool for learning the task of identifying and isolating Calot's triangle in laparoscopic cholecystectomy. Moreover, the presented strategy could be applied to simulate other surgical procedures involving the identification and isolation of generic tubular structures, such as blood vessels, the biliary tree, and nerves, which are not directly visible.
The strategy proposed to sensorize endovascular instruments paves the way for the development of surgical strategies with reduced radiation dose and contrast medium injection. Further in vitro, animal and clinical experiments are necessary for complete surgical validation.
Background: In the context of guided surgery, augmented reality (AR) represents a groundbreaking improvement. The Video and Optical See-Through Augmented Reality Surgical System (VOSTARS) is a new AR wearable head-mounted display (HMD), recently developed as an advanced navigation tool for maxillofacial and plastic surgery and other non-endoscopic surgeries. In this study, we report the results of phantom tests with VOSTARS aimed at evaluating its feasibility and accuracy in performing maxillofacial surgical tasks. Methods: An early prototype of VOSTARS was used. Le Fort 1 osteotomy was selected as the experimental task to be performed under VOSTARS guidance. A dedicated set-up was prepared, including the design of a maxillofacial phantom, an ad hoc tracker anchored to the occlusal splint, and cutting templates for accuracy assessment. Both qualitative and quantitative assessments were carried out. Results: VOSTARS, used in combination with the designed maxilla tracker, showed excellent tracking robustness under operating room lighting. Accuracy tests showed that 100% of Le Fort 1 trajectories were traced with an accuracy of ±1.0 mm, and on average, 88% of the trajectory's length was within ±0.5 mm accuracy. Conclusions: Our preliminary results suggest that the VOSTARS system can be a feasible and accurate solution for guiding maxillofacial surgical tasks, paving the way for its validation in clinical trials and for a wide spectrum of maxillofacial applications.
Augmented reality (AR) head-mounted displays (HMDs) are emerging as the most efficient output medium to support manual tasks performed under direct vision. Nevertheless, technological and human-factor limitations still hinder their routine use for aiding high-precision manual tasks in the peripersonal space. To overcome such limitations, in this work we present the results of a user study aimed at validating, qualitatively and quantitatively, a recently developed AR platform specifically conceived for guiding complex 3D trajectory-tracing tasks. The AR platform comprises a new-concept AR video see-through (VST) HMD and a dedicated software framework for the effective deployment of the AR application. In the experiments, the subjects were asked to perform 3D trajectory-tracing tasks on 3D-printed replicas of planar structures or more elaborate bony anatomies. The accuracy of the trajectories traced by the subjects was evaluated using templates designed ad hoc to match the surface of the phantoms. The quantitative results suggest that the AR platform can be used to guide high-precision tasks: on average, more than 94% of the traced trajectories stayed within an error margin of less than 1 mm. The results confirm that the proposed AR platform can boost the profitable adoption of AR HMDs to guide high-precision manual tasks in the peripersonal space.
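The headline metric above, the fraction of a traced trajectory that stays within a given error margin, can be sketched as follows. This is a simplified illustration with hypothetical, paired trajectory samples; the study itself measured deviation physically, using templates matched to the phantom surfaces.

```python
import numpy as np

def fraction_within_margin(traced, reference, margin_mm=1.0):
    """Fraction of traced points whose distance to the corresponding
    reference point is below the given margin (in millimetres)."""
    distances = np.linalg.norm(traced - reference, axis=1)
    return float(np.mean(distances < margin_mm))

# Hypothetical trajectory: 10 reference samples along a line (mm),
# with one traced point deviating beyond the 1 mm margin.
reference = np.array([[i, 0.0, 0.0] for i in range(10)], dtype=float)
traced = reference + np.array([[0.0, 0.3, 0.0]] * 9 + [[0.0, 1.5, 0.0]])

print(f"{fraction_within_margin(traced, reference) * 100:.0f}% within 1 mm")
```

With unpaired samples, one would instead compute, for each traced point, the distance to the closest point on the reference curve before thresholding.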