Introduction: Liver surgery is widely used as a treatment modality for various liver pathologies. Despite significant improvements in clinical care, operative strategies and technology over the last few decades, liver surgery remains risky, and optimal preoperative planning and anatomical assessment are necessary to minimize the risk of serious complications. 3D printing technology is expanding rapidly and its applications in medicine are growing, but its applications in liver surgery are still limited. This article describes the development of models of hepatic structures specific to a patient diagnosed with an operable hepatic malignancy.

Methods: Anatomical data were segmented and extracted from CT and MRI scans of the liver of a single patient with a resectable liver tumour. The digital data of the extracted anatomical surfaces were then edited and smoothed, resulting in a set of digital 3D models of the hepatic veins, portal vein with tumour, biliary tree with gallbladder, and hepatic artery. These were then 3D printed.

Results: The final models of the liver structures and tumour provide good anatomical detail and a clear representation of the spatial relationships between the liver tumour and adjacent hepatic structures. They can be easily manipulated and explored from different angles.

Conclusions: Graspable, patient-specific, 3D printed models of liver structures could provide an improved understanding of complex liver anatomy, better navigation in difficult areas, and allow surgeons to anticipate anatomical issues that might arise during the operation. Further research into adequate imaging, liver-specific volumetric software, and segmentation algorithms is worth considering to optimize this application.
Methods, Data Extraction and Segmentation: Retrospectively collected radiology image data from a patient with an operable malignant hepatic tumour consisted of a standard CT angiogram of the abdomen and pelvis and an MRI of the liver performed using a standard hepatic imaging protocol with gadolinium contrast. The CT slices were 3 mm thick and the MRI slices were 8.99 mm thick, both yielding anisotropic voxels when reconstructed in 3D. The data from both scans were stored in Digital Imaging and Communications in Medicine (DICOM) files. Amira 4.5.4 visualisation software (FEI, Hillsboro, USA) was used to view and segment the data. All scans were interrogated in three planes, and pixels containing image data for the hepatic and portal veins, hepatic artery, biliary structures and tumour were manually selected (Figure 1). Owing to varying image quality between the two radiology modalities, the MRI data were used to segment the biliary tree, portal vein, hepatic veins and tumour, whilst the CT was used to collect data for the hepatic artery. Segmentation was completed with a combination of manual and "region growing" techniques, the latter being used for large regions of similar density signal. (Surgical Innovation: Madurska, Poyade, Eason, Rea, Watson, "Development of patient specific 3D printed liver model for preoperative planning.") Surface extraction and model pr...
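The "region growing" technique mentioned above can be sketched in a few lines. The following is a minimal, illustrative NumPy implementation only; the authors used Amira's built-in segmentation tools, so the function name, the tolerance value, and the toy volume below are assumptions for demonstration, not their method.

```python
import numpy as np
from collections import deque

def region_grow(volume, seed, tolerance):
    """Flood-fill style region growing on a 3D volume: starting from a
    seed voxel, collect all 6-connected voxels whose intensity lies
    within `tolerance` of the seed intensity."""
    seed_val = float(volume[seed])
    mask = np.zeros(volume.shape, dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    neighbours = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                  (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in neighbours:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0]
                    and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2]
                    and not mask[nz, ny, nx]
                    and abs(float(volume[nz, ny, nx]) - seed_val) <= tolerance):
                mask[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return mask

# Toy example: a bright 3x3x3 "vessel" region inside a dark background.
vol = np.zeros((10, 10, 10), dtype=np.int16)
vol[3:6, 3:6, 3:6] = 200
seg = region_grow(vol, seed=(4, 4, 4), tolerance=50)
print(seg.sum())  # 27 voxels selected
```

In practice, this is why the technique suits "large regions of similar density signal": one seed click floods the whole structure, while heterogeneous regions (such as the tumour boundary) still require manual selection.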
Current methods used to communicate and present the complex arrangement of vasculature related to the brain and spinal cord are limited in undergraduate veterinary neuroanatomy training. Traditionally, it is taught with 2-dimensional (2D) diagrams, photographs and medical imaging scans, which show a fixed viewpoint. 2D representations of 3-dimensional (3D) objects, however, lead to a loss of spatial information, which can present problems when translating this to the patient. Computer-assisted learning packages with interactive 3D anatomical models have become established in medical training, yet equivalent resources are scarce in veterinary education. For this reason, we set out to develop a workflow methodology for creating an interactive model depicting the vasculature of the canine brain that could be used in undergraduate education. Using MR images of a dog and several commonly available software programs, we show how combining image editing, segmentation and surface generation, 3D modeling and texturing can result in the creation of a fully interactive application for veterinary training. In addition to clearly identifying a workflow methodology for the creation of this dataset, we have also demonstrated how an interactive tutorial and self-assessment tool can be incorporated into it. In conclusion, we present a workflow which has been successful in developing a 3D reconstruction of the canine brain and associated vasculature through segmentation, surface generation and post-processing of readily available medical imaging data. The reconstructed model was implemented into an interactive application for veterinary education that has been designed to target the problems associated with learning neuroanatomy, primarily the inability to visualise complex spatial arrangements from 2D resources. The lack of similar resources in this field suggests this workflow is original within a veterinary context.
There is great potential to explore this method, and introduce a new dimension into veterinary education and training.
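The segmentation-to-modeling hand-off described in the workflow above ultimately produces a polygon mesh that downstream 3D modeling and texturing tools consume. As a hedged illustration (the abstract does not specify an export format; Wavefront OBJ is assumed here purely because it is widely supported by such tools), a minimal mesh exporter could look like:

```python
def write_obj(path, vertices, faces):
    """Write a triangle mesh in Wavefront OBJ format.
    `vertices` is a list of (x, y, z) tuples; `faces` is a list of
    0-indexed vertex-index triples (OBJ itself is 1-indexed)."""
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        for a, b, c in faces:
            f.write(f"f {a + 1} {b + 1} {c + 1}\n")

# Toy example: a single triangle.
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
faces = [(0, 1, 2)]
write_obj("triangle.obj", verts, faces)
```

A surface extracted from segmented imaging data is just a much larger list of vertices and faces; exporting it in a plain interchange format like this is what lets it be cleaned, decimated and textured in separate 3D modeling software before being packaged into an interactive application.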
Neuroanatomy can be challenging to both teach and learn within the undergraduate veterinary medicine and surgery curriculum. Traditional techniques have been used for many years, but there has now been a progression towards alternative digital models and interactive 3D models to engage the learner. However, digital innovations in the curriculum have typically involved the medical curriculum rather than the veterinary curriculum. Therefore, we aimed to create a simple workflow methodology to highlight how straightforward it is to create a mobile augmented reality application of basic canine head anatomy. Using canine CT and MRI scans and widely available software programs, we demonstrate how to create an interactive model of head anatomy. This was applied to augmented reality on a popular Android mobile device to demonstrate the user-friendly interface. Here we present the processes, challenges and resolutions involved in the creation of a highly accurate, data-based anatomical model that could potentially be used in the veterinary curriculum. This proof-of-concept study provides an excellent framework for the creation of augmented reality training products for veterinary education. The lack of similar resources within this field provides the ideal platform to extend this into other areas of veterinary education and beyond.
High-profile accidents in the chemical sector, across research and manufacturing scales, have provided strong drivers to develop a new benchmark in safety training and compliance. Herein, we describe the design, implementation, and standardized psychological evaluation of virtual reality (VR) applied to process safety training. Through a specific industrial case study, we show that testable learning of complex safety-specific tasks in VR is statistically equivalent to traditional slide-based video training. However, VR training presents a measurable positive improvement in trainees' perception of overall learning and their feeling of presence in the task during training. It has also been shown that knowledge retention from video lectures can be overestimated if not controlled. Through these results, and our transferable blueprint for robustly assessing any new VR training platform, we envisage a range of technologically enabled efforts to enhance safety performance in both laboratory- and plant-based activities. Implications for physical resource-saving projects are also described.