Purpose: Dual-energy computed tomography (DECT) has shown great potential in many clinical applications. By incorporating information from two different energy spectra, DECT provides higher contrast and reveals more material differences between tissues than conventional single-energy CT (SECT). Recent research shows that automatic multi-organ segmentation of DECT data can improve DECT clinical applications. However, most segmentation methods are designed for SECT, and DECT has received considerably less attention in research. A novel approach is therefore required that takes full advantage of the extra information provided by DECT.

Methods: We propose four three-dimensional (3D) fully convolutional neural network algorithms for the automatic segmentation of DECT data. We incorporated the extra energy information differently in each architecture and embedded the fusion of information directly in the network.

Results: A quantitative evaluation was performed on 45 thorax/abdomen DECT datasets acquired with a clinical dual-source CT system. The segmentation of six thoracic and abdominal organs (left and right lungs, liver, spleen, and left and right kidneys) was evaluated using a fivefold cross-validation strategy. Across all tests, the best average Dice coefficients were 98% for the right lung, 98% for the left lung, 96% for the liver, 92% for the spleen, 95% for the right kidney, and 93% for the left kidney. The network architectures exploit the dual-energy spectra and outperform deep learning approaches designed for SECT.

Conclusions: The cross-validation results show that our methods are feasible and promising. Successful tests on special clinical cases further demonstrate high adaptability in practical applications.
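The Dice coefficients reported above measure the voxel overlap between a predicted organ mask and its ground-truth label. A minimal sketch of the metric (the function name and the toy volumes are illustrative, not the paper's implementation):

```python
import numpy as np

def dice(pred, gt):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

# toy 3D volumes standing in for an organ label and a network prediction
gt = np.zeros((4, 4, 4), dtype=bool)
gt[1:3, 1:3, 1:3] = True          # 8 voxels of "organ"
pred = gt.copy()
pred[1, 1, 1] = False             # prediction misses one voxel
print(round(dice(pred, gt), 3))   # 2*7 / (7+8) = 0.933
```

A perfect prediction yields 1.0, so the 92–98% values above indicate near-complete overlap.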
In brain tumor surgery, the quality and safety of the procedure can be impacted by intra-operative tissue deformation, called brain shift. Brain shift can move the surgical targets and other vital structures such as blood vessels, invalidating the pre-surgical plan. Intra-operative ultrasound (iUS) is a convenient and cost-effective imaging tool to track brain shift and tumor resection. Accurate image registration techniques that update the pre-surgical MRI based on iUS are crucial but challenging. The MICCAI Challenge 2018 for Correction of Brain shift with Intra-Operative UltraSound (CuRIOUS 2018) provided a public platform to benchmark MRI-iUS registration algorithms on newly released clinical datasets. In this work, we present the data, setup, evaluation, and results of CuRIOUS 2018, which received submissions of 6 fully automated algorithms from leading academic and industrial research groups. All algorithms were first trained on the public RESECT database and then ranked on a test dataset of 10 additional cases curated and annotated with the same protocols as RESECT. The article compares the results of all participating teams and discusses the insights gained from the challenge, as well as future work.
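Registration accuracy in landmark-annotated datasets such as RESECT is commonly scored by the mean target registration error (mTRE): the average Euclidean distance between corresponding landmarks after registration. A minimal sketch (the landmark coordinates below are made up for illustration):

```python
import numpy as np

def mean_tre(moving_landmarks, fixed_landmarks):
    """Mean target registration error: average Euclidean distance (mm)
    between corresponding landmark pairs in the two images."""
    d = np.linalg.norm(moving_landmarks - fixed_landmarks, axis=1)
    return d.mean()

# hypothetical MRI landmarks after registration vs. matching iUS landmarks
mri = np.array([[10.0, 20.0, 30.0], [5.0, 5.0, 5.0]])
ius = np.array([[10.0, 20.0, 33.0], [5.0, 9.0, 5.0]])
print(mean_tre(mri, ius))  # (3 + 4) / 2 = 3.5 mm
```

A successful registration drives this value down toward the landmark annotation uncertainty.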
Purpose: Radiation doses accumulated during complicated image-guided x-ray procedures have the potential to cause not only stochastic but also deterministic effects, such as skin rashes or even hair loss. To monitor and reduce radiation-related risks to patients' skin, x-ray imaging devices are equipped with online air kerma monitoring components. Traditionally, such measurements have been used to estimate skin entrance dose by (a) estimating the air kerma at the interventional reference point (IRP), (b) forward projecting the dose distribution, and (c) applying a backscatter factor among other correction factors. Unfortunately, the complicated interaction between incident x-ray photons, secondary electrons, and skin tissue cannot be properly accounted for by assuming a linear relationship between the forward projected air kerma and a backscatter factor. Gold-standard skin dose models are therefore determined using Monte Carlo (MC) techniques. However, MC simulations are in general computationally expensive, and possible acceleration mainly depends on the employed hardware and variance reduction techniques. To obtain reliable and fast dose estimates, we propose to combine MC-based simulations with learning-based methods.

Methods: The basic idea of our method is to approximate the radiation physics to quickly calculate a first-order exposure estimate. This initial estimate is then refined using prior knowledge derived from MC simulations. To this end, the propagation of primary photons inside a voxelized patient model is estimated using a less accurate but fast photon ray casting (RC) simulation based on the Beer–Lambert law. The results of the RC simulation are then fed into a convolutional neural network (CNN), which maps the propagation of primary photons to the dose deposition inside the patient model. Additionally, the patient model itself, including anatomy and material properties such as mass density and mass energy-absorption coefficients, is fed into the CNN as well. The CNN is trained using smoothed results of MC simulations as output and RC simulations of identical imaging settings and patient models as input.

Results: In total, 163 MC and associated RC simulations were carried out for the head, thorax, abdomen, and pelvis in three different voxel phantoms. We used 10⁸ or 10⁹ primarily emitted photons sampled from a 125 kV peak voltage spectrum. Edge-preserving smoothing (EPS) is applied to reduce (a) general stochastic uncertainties and (b) the stochastic uncertainty of MC simulations with fewer primary photons. The CNN is trained using seven imaging settings of the abdomen in a single phantom. Tested on the remaining datasets, the CNN is capable of estimating skin dose with an error below 10% for the majority of test cases.

Conclusion: The combination of deep neural networks and MC simulation of particle physics has the potential to decrease the computational complexity of accurate skin dose estimation. The proposed approach can provide dose distributions in under one second when runni...
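The Beer–Lambert attenuation underlying the RC step can be sketched as follows; the attenuation coefficients and step size below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def ray_cast_primary(mu, step, n0=1.0):
    """Attenuate the primary photon fluence along one ray through a row
    of voxels via the Beer–Lambert law: N = N0 * exp(-sum(mu_i * dx))."""
    # cumulative optical depth after traversing each voxel
    optical_depth = np.cumsum(mu * step)
    return n0 * np.exp(-optical_depth)

# hypothetical linear attenuation coefficients (1/cm) along a single ray,
# e.g. soft tissue with one denser (bone-like) voxel in between
mu = np.array([0.2, 0.2, 0.5, 0.2])
fluence = ray_cast_primary(mu, step=1.0)
print(fluence.round(4))
```

This fast, purely exponential estimate ignores scatter and secondary electrons, which is exactly the gap the CNN is trained to bridge using MC ground truth.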
Purpose: With X-ray radiation protection and dose management constantly gaining interest in interventional radiology, novel procedures often undergo prospective dose studies using anthropomorphic phantoms to determine expected reference organ-equivalent dose values. Due to inherent uncertainties, such as the impact of exact patient positioning, the generalized geometry of the phantoms, limited dosimeter positioning options, and the composition of tissue-equivalent materials, these dose values might not allow for patient-specific risk assessment. Therefore, the first aim of this study is to quantify the influence of these parameters on local X-ray dose to evaluate their relevance for the assessment of patient-specific organ doses. Second, this knowledge enables the validation of a simulation approach, which allows employing physiological material models and patient-specific geometries.

Methods: Phantom dosimetry experiments using MOSFET dosimeters were conducted, reproducing imaging scenarios in prostatic arterial embolization (PAE). The associated organ-equivalent doses of prostate, bladder, colon, and skin were determined. Dose deviations induced by possible small displacements of the patient were reproduced by moving the X-ray source. Dose deviations induced by geometric and material differences were investigated by analyzing two different commonly used phantoms. We reconstructed the experiments using Monte Carlo (MC) simulations, a reference male geometry, and different material properties to validate simulations and experiments against each other.

Results: Overall, the MC-simulated organ dose values are in accordance with the measured ones for the majority of cases. Marginal displacements of the X-ray source relative to the phantoms lead to deviations of 6% to 135% in organ dose values, while skin dose remains relatively constant. Regarding the impact of phantom material composition, underestimation of internal organ dose values by 12% to 20% is prevalent in all simulated phantoms. Skin dose, however, can be estimated with a low deviation of 1% to 8% for at least two materials.

Conclusions: Prospective reference dose studies might not extend to precise patient-specific dose assessment. Therefore, online organ dose assessment tools based on advanced patient modeling and MC methods are desirable.
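The percentage deviations quoted above compare simulated organ doses against the measured MOSFET reference values. A minimal sketch of that comparison (all dose values below are hypothetical, not the study's measurements):

```python
def relative_deviation(simulated, measured):
    """Relative deviation (%) of a simulated organ dose from the
    measured reference value."""
    return 100.0 * (simulated - measured) / measured

# hypothetical organ-equivalent doses in mGy
measured = {"prostate": 2.5, "bladder": 3.1, "skin": 12.0}
simulated = {"prostate": 2.1, "bladder": 2.7, "skin": 11.8}
deviations = {organ: round(relative_deviation(simulated[organ], m), 1)
              for organ, m in measured.items()}
print(deviations)  # {'prostate': -16.0, 'bladder': -12.9, 'skin': -1.7}
```

Negative values correspond to the underestimation of internal organ doses noted in the results.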
We study the dependence on statistics of the probability distributions and the means of measured moments of conserved quantities. The statistics required for all moments of interest and their products are estimated based on a simple simulation. We also explain why the measured moments are underestimated when the statistics are insufficient. With the statistics available at RHIC/BES, the second- and third-order moments can be reliably obtained using the method of centrality bin width correction (CBWC), which cannot be applied to the fourth-order moments at low energies. With the planned statistics at RHIC/BES II and an improved CBWC method, κσ² should become measurable on a finer centrality bin scale. This will help us to understand the currently observed energy and centrality dependence of high-order moments.
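The moments discussed here are cumulants of event-by-event distributions of conserved quantities, and κσ² is the cumulant ratio C₄/C₂. A minimal sketch estimating these from a synthetic Gaussian sample, for which κσ² tends to zero (the sample is illustrative, not experimental data):

```python
import numpy as np

def cumulants(x):
    """First four cumulants of an event-by-event distribution."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    c1 = x.mean()
    c2 = np.mean(d**2)                 # variance, sigma^2
    c3 = np.mean(d**3)                 # ~ skewness * sigma^3
    c4 = np.mean(d**4) - 3.0 * c2**2   # ~ kurtosis * sigma^4
    return c1, c2, c3, c4

rng = np.random.default_rng(0)
sample = rng.normal(size=100_000)      # stand-in for net-particle numbers
c1, c2, c3, c4 = cumulants(sample)
kappa_sigma2 = c4 / c2                 # the kappa*sigma^2 observable
```

Because C₄ involves fourth powers of fluctuations, its estimator variance grows rapidly, which is why far more events are needed for κσ² than for the second- and third-order moments.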
In this study, we propose a novel point cloud based 3D registration and segmentation framework using reinforcement learning. An artificial agent, implemented as distinct actor and value networks, is trained to predict the optimal piece-wise linear transformation of a point cloud for the joint tasks of registration and segmentation. The actor network estimates a set of plausible actions, and the value network selects the optimal action for the current observation. Point-wise features comprising spatial positions (and surface normal vectors in the case of structured meshes), together with their corresponding image features, are used to encode the observation and represent the underlying 3D volume. The actor and value networks are applied iteratively to estimate a sequence of transformations that enable accurate delineation of object boundaries. The proposed approach was extensively evaluated in both segmentation and registration tasks using a variety of challenging clinical datasets. Our method has fewer trainable parameters and lower computational complexity compared to the 3D U-Net, and it is independent of the volume resolution. We show that the proposed method is applicable to mono- and multi-modal segmentation tasks, achieving significant improvements over the state of the art for the latter. The flexibility of the proposed framework is further demonstrated in a multi-modal registration application. As we learn to predict actions rather than a target, the proposed method is more robust than the 3D U-Net when dealing with previously unseen datasets acquired using different protocols or modalities. As a result, the proposed method provides a promising multi-purpose segmentation and registration framework, particularly in the context of image-guided interventions.
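The iterative propose-then-select loop of the actor and value networks can be caricatured with stand-in functions; everything below (random proposals, a distance-based value, the 2D "point cloud" of one point) is an illustrative toy, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

def actor(observation, n_actions=4):
    """Stand-in for the actor network: propose a set of plausible
    actions (here, small random translations of the observation)."""
    return rng.normal(scale=0.5, size=(n_actions, observation.shape[-1]))

def value(observation, actions, target):
    """Stand-in for the value network: score each candidate action;
    here, higher score = closer to the (normally unknown) target."""
    return -np.linalg.norm(observation + actions - target, axis=1)

# iteratively apply the best-scored action, moving the point to the target
point, target = np.array([5.0, 5.0]), np.array([0.0, 0.0])
for _ in range(50):
    candidates = actor(point)
    scores = value(point, candidates, target)
    point = point + candidates[np.argmax(scores)]
```

In the actual framework the value network learns its scores from data rather than from the target position, and the actions are piece-wise linear transformations of the full point cloud; the control flow, however, follows this propose/score/apply pattern.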