The authors present a method to interconnect the Visualisation Toolkit (VTK) and Unity. This integration combines the visualisation capabilities of VTK with Unity's widespread support for virtual, augmented, and mixed reality displays and for interaction and manipulation devices, enabling the development of medical imaging applications for virtual environments. The proposed method utilises OpenGL context sharing between Unity and VTK to render VTK objects into the Unity scene via a Unity native plugin. The method is demonstrated in a simple Unity application that performs VTK volume rendering to display thoracic computed tomography and cardiac magnetic resonance images. Quantitative measurements of the achieved frame rates show that this approach provides over 90 fps on standard hardware, which is suitable for current augmented reality/virtual reality display devices.
Objective: Advances in artificial intelligence (AI) have demonstrated potential to improve medical diagnosis. We piloted the end-to-end automation of the midtrimester screening ultrasound scan using AI-enabled tools. Methods: A prospective method comparison study was conducted. Participants had both standard and AI-assisted ultrasound scans performed. The AI tools automated image acquisition, biometric measurement, and report production. A feedback survey captured the sonographers' perceptions of scanning. Results: Twenty-three subjects were studied. The average time saving per scan was 7.62 min (34.7%) with the AI-assisted method (p < 0.0001). There was no difference in reporting time. There were no clinically significant differences in biometric measurements between the two methods. The AI tools saved a satisfactory view in 93% of cases for the four core views only, and in 73% for the full 13 views, compared with 98% for both using the manual scan. Survey responses suggest that the AI tools helped sonographers concentrate on image interpretation by removing disruptive tasks. Conclusion: Separating freehand scanning from image capture and measurement resulted in a faster scan and an altered workflow. Removing repetitive tasks may allow more attention to be directed at identifying fetal malformation. Further work is required to improve the image plane detection algorithm for use in real time.
Objectives: To investigate how virtual reality (VR) imaging impacts decision-making in atrioventricular valve surgery. Methods: This was a single-center retrospective study involving 15 children and adolescents, median age 6 years (range, 0.33-16), requiring surgical repair of the atrioventricular valves between 2016 and 2019. The patients' preoperative 3-dimensional (3D) echocardiographic data were used to create 3D visualizations in a VR application. Five pediatric cardiothoracic surgeons completed a questionnaire formulated to compare their surgical decisions regarding the cases after reviewing conventionally presented 2-dimensional and 3D echocardiographic images and again after visualization of 3D echocardiograms using the VR platform. Finally, intraoperative findings were shared with the surgeons to confirm assessment of the pathology. Results: In 67% of cases presented with VR, surgeons reported having "more" or "much more" confidence in their understanding of each patient's pathology and their surgical approach. In all but one case, surgeons were at least as confident after reviewing the VR as after standard imaging. The case in which surgeons reported being least confident on VR had the worst technical quality of data. After viewing patient cases on VR, surgeons reported that they would have made minor modifications to the surgical approach in 53% of cases and major modifications in 7%. Conclusions: The main impact of viewing imaging on VR is the improved clarity of the anatomical structures. Surgeons reported that this would have impacted the surgical approach in the majority of cases. Poor-quality 3D echocardiographic data were associated with a negative impact of VR visualization; thus, quality assessment of imaging is necessary before projecting in a VR format. (JTCVS Techniques 2021;7:269-77)
CENTRAL MESSAGE: Virtual reality dynamic 3-dimensional echocardiographic imaging improves surgical insight for atrioventricular valve repair planning in congenital heart disease for clinical use. PERSPECTIVE: This study demonstrates the potential clinical benefits and value of virtual reality in surgical planning for congenital heart disease and other structural heart defects. The observed benefits are improved user interaction and visualization of the valve apparatus in a beating heart compared with image visualization using standard techniques. See Commentary on page 278.
We present a novel divergence-free mixture model for multiphase flows and the related fluid-solid coupling. The new mixture model is built upon a volume-weighted mixture velocity so that the divergence-free condition is satisfied for both miscible and immiscible multiphase fluids. The proposed mixture velocity can be solved efficiently by adapted single-phase incompressible solvers, allowing for larger time steps and smaller volume deviations. In addition, the drift-velocity formulation is corrected to ensure mass conservation during the simulation. The new approach increases the accuracy of multiphase fluid simulation by several orders of magnitude. The capability of the new divergence-free mixture model is demonstrated by simulating different multiphase flow phenomena, including the mixing and unmixing of multiple fluids, and fluid-solid coupling involving deformable solids and granular materials.
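The volume-weighted construction described above can be written out as follows. This is a sketch reconstructed from the abstract's description; the symbols $\alpha_k$ (phase volume fractions) and $\mathbf{u}_k$ (phase velocities) are standard mixture-model notation, not taken from the paper itself.

```latex
% Volume-weighted mixture velocity over phases k = 1..n:
\mathbf{u}_m = \sum_{k=1}^{n} \alpha_k \,\mathbf{u}_k ,
\qquad \sum_{k=1}^{n} \alpha_k = 1 .

% If each phase is incompressible, the volume-weighted field is divergence
% free, so standard single-phase pressure projection solvers apply:
\nabla \cdot \mathbf{u}_m = 0 .

% Each phase velocity is recovered from the mixture velocity plus a drift
% velocity u_{mk}, whose formulation is corrected for mass conservation:
\mathbf{u}_k = \mathbf{u}_m + \mathbf{u}_{mk} .
```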
The goal of this review is to illustrate the emerging use of multimodal virtual reality to benefit learning-based games. The review begins with an introduction to multimodal virtual reality in serious games and a brief discussion of why cognitive processes involved in learning and training are enhanced in immersive virtual environments. We first outline studies that have used eye tracking and haptic feedback independently in serious games, and then review some innovative applications that have combined eye tracking and haptic devices to provide applicable multimodal frameworks for learning-based games. Finally, some general conclusions are identified and clarified in order to advance current understanding of multimodal serious game production and to explore possible areas for new applications.
Designing and creating complex, interactive animation remains a challenge in virtual reality, which must handle diverse functional requirements (e.g., graphics, physics, AI, multimodal inputs and outputs, and the management of massive data assets). In this paper, a semantic framework is proposed to model the construction of interactive animation and to promote animation asset reuse in a systematic and standardized way. As its ontological implementation, two domain-specific ontologies, for hand-gesture-based interaction and for an animation data repository, have been developed in the context of the traditional Chinese shadow play art. Finally, a prototype of an interactive Chinese shadow play performance system using a depth motion-sensing device is presented as a usage example.
Understanding the impact of multimodal interaction using gaze-informed mid-air gesture control in 3D virtual objects manipulation
The intricate nature of congenital heart disease requires an understanding of the complex, patient-specific, three-dimensional dynamic anatomy of the heart, derived from imaging data such as three-dimensional echocardiography, for successful outcomes from surgical and interventional procedures. Conventional clinical systems use flat screens, so the display remains two-dimensional, which undermines full understanding of the three-dimensional dynamic data. Additionally, controlling three-dimensional visualisation with two-dimensional tools is often difficult, so it is used only by imaging specialists. In this paper, we describe a virtual reality system for immersive surgery planning using dynamic three-dimensional echocardiography, which enables fast prototyping of visualisation techniques such as volume rendering, multiplanar reformatting and flow visualisation, and of advanced interaction such as three-dimensional cropping, windowing, measurement, haptic feedback, automatic image orientation and multiuser interactions. The available features were evaluated by imaging and nonimaging clinicians, showing that the virtual reality system can help improve the understanding and communication of three-dimensional echocardiography imaging and potentially benefit congenital heart disease treatment.