Background: Patient suitability for magnetic resonance-guided high intensity focused ultrasound (MRgHIFU) ablation of pelvic tumors is initially evaluated clinically for treatment feasibility using referral images, acquired using standard supine diagnostic imaging, followed by MR screening of potential patients lying on the MRgHIFU couch in a 'best-guess' treatment position. Existing evaluation methods result in ~40% of referred patients being screened out because of tumor non-targetability. We hypothesize that this process could be improved by development of a novel algorithm for predicting tumor coverage from referral imaging. Methods: The algorithm was developed from volunteer images and tested with patient data. MR images were acquired for five healthy volunteers and five patients with recurrent gynaecological cancer. Subjects were MR imaged supine and in oblique-supine-decubitus MRgHIFU treatment positions. Body outline and bones were segmented for all subjects, with organs-at-risk and tumors also segmented for patients. Supine images were aligned with treatment images to simulate a treatment dataset. Target coverage (of patient tumors and volunteer intra-pelvic soft tissue), i.e. the volume reachable by the MRgHIFU focus, was quantified. Target coverage predicted from supine imaging was compared to that from treatment imaging. Results: Mean (±standard deviation) absolute difference between supine-predicted and treatment-predicted coverage for 5 volunteers was 9 ± 6% (range: 2-22%) and for 4 patients, was 12 ± 7% (range: 4-21%), excluding a patient with poor acoustic coupling (coverage difference was 53%). Conclusion: Prediction of MRgHIFU target coverage from referral imaging appears feasible, facilitating further development of automated evaluation of patient suitability for MRgHIFU.
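The coverage metric described above — the fraction of the target volume reachable by the MRgHIFU focus — can be sketched as a simple voxel-overlap computation. This is an illustrative assumption about the quantification step, not the authors' implementation; the voxel coordinates below are invented toy data.

```python
# Sketch: target coverage as the percentage of segmented target voxels
# that fall inside the volume reachable by the MRgHIFU focus.

def target_coverage(target_voxels, reachable_voxels):
    """Return coverage as a percentage of target voxels reachable by the focus."""
    target = set(target_voxels)
    if not target:
        return 0.0
    reachable = set(reachable_voxels)
    return 100.0 * len(target & reachable) / len(target)

# Toy example: a 4-voxel target, 3 voxels of which lie in the reachable volume.
tumor = [(10, 12, 5), (10, 13, 5), (11, 12, 5), (11, 13, 5)]
focus_volume = [(10, 12, 5), (10, 13, 5), (11, 12, 5), (20, 20, 9)]
coverage = target_coverage(tumor, focus_volume)  # 75.0
```

Comparing this value computed from aligned supine imaging against the same value computed from treatment-position imaging gives the per-subject coverage differences reported in the Results.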
Abstract: Convolutional neural networks (CNNs) have found applications in many image processing tasks, such as feature extraction, image classification, and object recognition. It has also been shown that the inverse of CNNs, so-called deconvolutional neural networks, can be used for inverse problems such as plasma tomography. In essence, plasma tomography consists of reconstructing the 2D plasma profile on a poloidal cross-section of a fusion device, based on line-integrated measurements from multiple radiation detectors. Since the reconstruction process is computationally intensive, a deconvolutional neural network trained to produce the same results will yield a significant computational speedup, at the expense of a small error which can be assessed using different metrics. In this work, we discuss the design principles behind such networks, including the use of multiple layers, how they can be stacked, and how their dimensions can be tuned according to the number of detectors and the desired tomographic resolution for a given fusion device. We describe the application of such networks at JET, where we use the bolometer system, and at COMPASS, where we use the soft X-ray diagnostic based on photodiode arrays. Keywords: Computerized Tomography (CT) and Computed Radiography (CR); Plasma diagnostics - interferometry, spectroscopy and imaging. ¹Corresponding author. ²See the author list of "Overview of the JET preparation for Deuterium-Tritium Operation" by E. Joffrin et al. in Nucl.
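The abstract notes that network dimensions are tuned to the desired tomographic resolution by stacking transposed-convolution ("deconvolution") layers. The output-size arithmetic behind that tuning can be sketched as follows; the layer counts and kernel parameters are illustrative assumptions, not the actual JET or COMPASS network architectures.

```python
# Sketch: how stacked transposed-convolution layers upsample a small latent
# grid to the desired tomographic reconstruction resolution.

def transposed_conv_size(size_in, kernel, stride, padding):
    """Spatial output size (per dimension) of a 2D transposed convolution."""
    return (size_in - 1) * stride - 2 * padding + kernel

def output_resolution(start, layers):
    """Apply a stack of transposed-conv layers to a square starting grid."""
    size = start
    for kernel, stride, padding in layers:
        size = transposed_conv_size(size, kernel, stride, padding)
    return size

# Hypothetical design: a dense layer reshapes the detector measurements into
# an 8x8 latent grid; four upsampling layers then reach a 128x128 profile.
layers = [(4, 2, 1)] * 4  # each (kernel=4, stride=2, padding=1) layer doubles the size
resolution = output_resolution(8, layers)  # 8 -> 16 -> 32 -> 64 -> 128
```

Adjusting the number of such layers (or the initial dense reshaping) is how the same design can be retargeted to a different number of detectors or a different reconstruction grid.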
Background: Coronavirus disease 2019 (COVID-19) is a pandemic disease. Fast and accurate diagnosis of COVID-19 from chest radiography may enable more efficient allocation of scarce medical resources and hence improved patient outcomes. Deep learning classification of chest radiographs may be a plausible step towards this. We hypothesize that bone suppression of chest radiographs may improve the performance of deep learning classification of COVID-19 phenomena in chest radiographs. Methods: Two bone suppression methods (Gusarev et al. and Rajaraman et al.) were implemented. The Gusarev and Rajaraman methods were trained on 217 pairs of normal and bone-suppressed chest radiographs from the X-ray Bone Shadow Suppression dataset (https://www.kaggle.com/hmchuong/xray-bone-shadowsupression). Two classifier methods with different network architectures were implemented. Binary classifier models were trained on the public RICORD-1c and RSNA Pneumonia Challenge datasets. An external test dataset was created retrospectively from a set of 320 COVID-19 positive patients from Queen Elizabeth Hospital (Hong Kong, China) and a set of 518 non-COVID-19 patients from Pamela Youde Nethersole Eastern Hospital (Hong Kong, China), and used to evaluate the effect of bone suppression on classifier performance. Classification performance, quantified by sensitivity, specificity, negative predictive value (NPV), accuracy and area under the receiver operating characteristic curve (AUC), for non-suppressed radiographs was compared to that for bone-suppressed radiographs. Some of the pre-trained models used in this study are published at https://github.com/danielnflam. Results: Bone suppression of external test data was found to significantly (P<0.05) improve AUC for one classifier architecture [from 0.698 (non-suppressed) to 0.732 (Rajaraman-suppressed)]. For the other classifier architecture, suppression did not significantly (P>0.05) improve or worsen classifier performance.
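The per-class metrics listed above (sensitivity, specificity, NPV, accuracy) all derive from the binary confusion matrix. A minimal sketch of that computation follows; it is a generic illustration, not the study's published evaluation code, and the toy labels are invented.

```python
# Sketch: confusion-matrix-derived metrics for a binary classifier
# (1 = COVID-19 positive, 0 = non-COVID-19).

def binary_metrics(y_true, y_pred):
    """Return sensitivity, specificity, NPV and accuracy from binary labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return {
        "sensitivity": tp / (tp + fn),            # true positive rate
        "specificity": tn / (tn + fp),            # true negative rate
        "npv": tn / (tn + fn),                    # negative predictive value
        "accuracy": (tp + tn) / len(y_true),
    }

# Toy example: 3 positives and 3 negatives, with one miss and one false alarm.
metrics = binary_metrics([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1])
```

Comparing such metrics (and AUC) between non-suppressed and bone-suppressed inputs, with an appropriate significance test, is the study's evaluation strategy.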
Background: Accurate assessment of coronavirus disease 2019 (COVID-19) lung involvement through chest radiograph plays an important role in effective management of the infection. This study aims to develop a two-step feature merging method to integrate image features from deep learning and radiomics to differentiate COVID-19, non-COVID-19 pneumonia and normal chest radiographs (CXR). Methods: In this study, a deformable convolutional neural network (deformable CNN) was developed and used as a feature extractor to obtain 1,024-dimensional deep learning latent representation (DLR) features. Then 1,069-dimensional radiomics features were extracted from the region of interest (ROI) guided by the deformable CNN's attention. The two feature sets were concatenated to generate a merged feature set for classification. For comparative experiments, the same process was applied to the DLR-only feature set to verify the effectiveness of feature concatenation. Results: Using the merged feature set resulted in an overall average accuracy of 91.0% for three-class classification, representing a statistically significant improvement of 0.6% compared to the DLR-only classification. The recall and precision of classification into the COVID-19 class were 0.926 and 0.976, respectively. The feature merging method was shown to significantly improve the classification performance as compared to using only deep learning features, regardless of choice of classifier (P value <0.0001). The F1-scores for the normal, non-COVID-19 pneumonia, and COVID-19 classes were 0.892, 0.890, and 0.950, respectively. Conclusions: A two-step COVID-19 classification framework integrating information from both DLR and radiomics features (guided by a deep learning attention mechanism) has been developed. The proposed feature merging method has been shown to improve the performance of chest radiograph classification as compared to the case of using only deep learning features.
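The feature merging step itself is a straightforward concatenation of the two per-image vectors; the dimensions below come from the abstract, while the placeholder vectors stand in for the actual deformable-CNN and radiomics extraction pipelines, which are not detailed here.

```python
# Sketch: merging a 1,024-dimensional DLR vector with a 1,069-dimensional
# radiomics vector into a single 2,093-dimensional feature set per image.

def merge_features(dlr, radiomics):
    """Concatenate deep-learning latent and radiomics features for one image."""
    return list(dlr) + list(radiomics)

dlr = [0.0] * 1024        # placeholder for deformable-CNN latent features
radiomics = [0.0] * 1069  # placeholder for ROI-guided radiomics features
merged = merge_features(dlr, radiomics)  # 2,093-dimensional merged feature set
```

The merged vector is then fed to a downstream classifier, and the same classifier trained on the DLR-only vector serves as the comparison baseline.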
Background: Patient suitability for magnetic resonance-guided high intensity focused ultrasound (MRgHIFU) therapy of pelvic tumors is currently assessed by visual estimation of the proportion of tumor that can be reached by the device's focus (coverage). Since it is important to assess whether enough energy reaches the tumor to achieve ablation, a methodology for estimating the proportion of the tumor that can be ablated (treatability) was developed. Predicted treatability was compared against clinically achieved thermal ablation. Methods: MR Dixon sequence images of five patients with recurrent gynecological tumors were acquired during their treatment. Acousto-thermal simulations were performed using k-Wave for three exposure points (the deepest and shallowest reachable focal points within the tumor, identified from tumor coverage analysis, and a point halfway in-between) per patient. Interpolation between the resulting simulated ablated tissue volumes was used to estimate the maximum treatable depth and hence, tumor treatability. Predicted treatability was compared both to predicted tumor coverage and to the clinically treated tumor volume. The intended and simulated volumes and positions of ablated tissues were compared. Results: Predicted treatability was less than coverage by 52% (range: 31-78%) of the tumor volume. Predicted and clinical treatability differed by 9% (range: 1-25%) of tumor volume. Ablated tissue volume and position varied with beam path length through tissue. Conclusion: Tumor coverage overestimated patient suitability for MRgHIFU therapy. Employing patient-specific simulations improved treatability assessment. Patient treatability assessment using simulations is feasible.
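The interpolation step described above — estimating the maximum treatable depth from the three simulated exposure points (shallowest, midway, deepest reachable focal depths) — can be sketched with simple linear interpolation of ablated volume against depth. The function name, threshold parameter, and all depth/volume values are illustrative assumptions, not the authors' k-Wave post-processing.

```python
# Sketch: linearly interpolating simulated ablated-tissue volume versus focal
# depth to estimate the deepest depth at which ablation is still achieved.

def max_treatable_depth(depths, ablated_volumes, threshold=0.0):
    """Return the deepest depth where the interpolated ablated volume
    still exceeds the given threshold."""
    pairs = sorted(zip(depths, ablated_volumes))
    for (d0, v0), (d1, v1) in zip(pairs, pairs[1:]):
        # Find where the volume falls to the threshold between two samples.
        if v0 > threshold >= v1:
            frac = (v0 - threshold) / (v0 - v1)
            return d0 + frac * (d1 - d0)
    # Volume never drops to the threshold: the deepest sampled point (or the
    # shallowest, if even that is below threshold) bounds the treatable depth.
    return pairs[-1][0] if pairs[-1][1] > threshold else pairs[0][0]

# Toy data: simulated ablated volume shrinks with depth, vanishing near 70 mm.
depths_mm = [30.0, 50.0, 70.0]
volumes_cc = [4.0, 2.0, 0.0]
limit = max_treatable_depth(depths_mm, volumes_cc)  # 70.0 mm
```

Restricting the coverage volume to depths shallower than this limit converts coverage into the treatability estimate that the Results compare against the clinically ablated volume.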