In radiotherapy (RT), organ motion caused by breathing hampers accurate patient positioning and the determination of radiation dose and target volume. Most motion-compensation techniques under trial require patient cooperation and expensive equipment. Estimating the motion between two three-dimensional computed tomography (CT) scans acquired at the extremes of the breathing cycle, and including this information in RT planning, has received little attention, mainly because it is a tedious manual task. This paper proposes a method to compute, in a fully automatic fashion, the spatial correspondence between those sets of volumetric CT data. Given the large ambiguity present in this problem, the method gradually reduces this uncertainty through two main phases: a similarity-parametrization data analysis phase and a projection-regularization phase. Results on a real study show high accuracy in establishing the spatial correspondence between both sets. Embedding this method in RT planning tools is foreseen, after making some suggested improvements and proving the validity of the two-scan approach.
Glioblastoma is the most frequent aggressive primary brain tumor among adults. Its standard treatment involves chemotherapy, for which the drug temozolomide is a common choice. These heterogeneous and variable tumors might benefit from personalized, data-based therapy strategies, and there is room for improvement in therapy-response follow-up, investigated here with preclinical models. This study addresses a preclinical question: distinguishing between treated and control (untreated) mice bearing glioblastoma, using machine learning techniques, from magnetic resonance-based data in two modalities: magnetic resonance imaging (MRI) and magnetic resonance spectroscopic imaging (MRSI). It aims to go beyond a comparison of methods for such discrimination and to provide an analytical pipeline that could be used in subsequent human studies. This pipeline is meant to be a usable and interpretable tool for the radiology expert, in the hope that such interpretation helps reveal new insights about the problem itself. To that end, we propose coupling source extraction-based and radiomics-based data transformations with feature selection. Special attention is paid to the generation of radiologist-friendly visual nosological representations of the analyzed tumors.
Purpose: To compare tumor motion amplitudes measured with 2D fluoroscopic images (FI) and with an inhale/exhale CT (IECT) technique. Materials and methods: Tumor motion of 52 patients (39 lung and 13 liver patients) was obtained with both FI and IECT. For FI, tumor detection and tracking were performed with software developed by the authors. Motion amplitude, and thus internal target volume (ITV), was defined to cover the positions where the tumor spends 95% of the time. The algorithm was validated against two different respiratory motion phantoms. Motion amplitude in IECT was defined as the difference in the position of the centroid of the gross tumor volume between the two image sets. Results: Important differences exist when defining ITVs with FI and IECT. Overall, differences larger than 5 mm were obtained for 49%, 31%, and 9.6% of the patients in the superior-inferior (SI), anterior-posterior (AP), and lateral (LAT) directions, respectively. Regarding tumor location, larger differences were found for tumors in the liver (73.6% SI, 27.3% AP, and 6.7% LAT had differences larger than 5 mm), while tumors in the upper lobe benefitted less from FI (differences larger than 5 mm were present in only 27.6% (SI), 36.7% (AP), and 0% (LAT) of the patients). Conclusions: Use of FI with the linac's built-in CBCT system is feasible for ITV definition. Large differences between motion amplitudes detected with the FI and IECT methods were found. The FI-based method presented in this work could represent an improvement in ITV definition over the IECT-based method, because FI permits tumor motion acquisition in a more realistic situation than IECT.
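As a minimal sketch of the IECT amplitude definition above (not the authors' code), the per-axis centroid shift of the gross tumor volume between the two CT scans can be computed as follows; the mask shapes, voxel spacing, and function names are assumptions made for the example:

```python
import numpy as np

def centroid_mm(mask, spacing):
    """Centroid of a binary volume, in mm (mean voxel index scaled by spacing)."""
    return np.argwhere(mask).mean(axis=0) * np.asarray(spacing)

def iect_amplitude(gtv_inhale, gtv_exhale, spacing):
    """Per-axis motion amplitude: absolute centroid shift between the two CTs."""
    return np.abs(centroid_mm(gtv_inhale, spacing) - centroid_mm(gtv_exhale, spacing))

# Toy example: a small cubic GTV shifted 3 voxels along the first (SI) axis.
inhale = np.zeros((20, 20, 20), dtype=bool)
exhale = np.zeros((20, 20, 20), dtype=bool)
inhale[5:7, 5:7, 5:7] = True
exhale[8:10, 5:7, 5:7] = True
amp = iect_amplitude(inhale, exhale, spacing=(2.0, 1.0, 1.0))  # mm per voxel
# → amp is [6.0, 0.0, 0.0]: a 3-voxel SI shift at 2.0 mm slice spacing
```

In a real workflow the masks would come from the contoured gross tumor volume in each scan; here they are synthetic.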
Dynamic speckle is an interferometric phenomenon that has been considered a sensitive way to monitor weak changes in biological samples; it is therefore a reliable tool that can be applied in many areas, from medicine to farming. Its use has driven the development of a series of methods for illumination, image processing, and analysis. Its implementation therefore requires image acquisition systems and algorithms to detect or separate the biological material from the rest of the image. This work proposes an algorithm for the acquisition and segmentation of biological samples of plant origin. The algorithm requires the CMOS camera of a cell phone for the acquisition and transmission of 720x480-pixel images, a computer for their management, reception, and processing, a wireless local area network, a 633 nm He-Ne laser with 10 mW of power as a coherent light source, an optical diffuser, and an aluminum surface for the placement of biological samples. The study showed satisfactory results in acquiring and storing images, allowing their subsequent segmentation.
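To illustrate the kind of segmentation such an acquisition system could feed (a generic sketch, not the paper's algorithm), one common dynamic speckle cue is the per-pixel temporal standard deviation over a stack of frames; the threshold rule, array shapes, and synthetic data below are illustrative assumptions:

```python
import numpy as np

def speckle_activity_mask(stack, thresh_ratio=0.5):
    """Segment 'active' pixels from a temporal speckle stack of shape (T, H, W).

    Living tissue decorrelates the speckle over time, so its per-pixel
    temporal standard deviation is higher than that of a static background
    such as the aluminum sample holder.
    """
    activity = stack.astype(float).std(axis=0)        # per-pixel temporal std
    return activity > thresh_ratio * activity.max()   # simple global threshold

# Toy stack: a constant background with one temporally fluctuating patch.
rng = np.random.default_rng(1)
frames = np.full((30, 64, 64), 100.0)
frames[:, 20:40, 20:40] += rng.normal(0.0, 20.0, size=(30, 20, 20))
mask = speckle_activity_mask(frames)  # True over the fluctuating patch only
```

A real pipeline would apply this to grayscale frames streamed from the phone camera; the fixed global threshold is a simplification.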
Machine learning (ML) methods have shown great potential for the analysis of data involved in medical decisions. However, for these methods to be incorporated in the medical pipeline, they must be made interpretable not only to the data analyst but also to the medical expert. In this work, we have applied a combination of feature transformation, selection, and classification using ML and statistical methods to differentiate between control (untreated) and temozolomide (TMZ)-treated tumour tissue from a glioblastoma (brain tumour) murine model. As input, we have used T2-weighted magnetic resonance images (MRI) and spectroscopic imaging (MRSI). Radiomics features were extracted from the MRI dataset, while convex nonnegative matrix factorization (Convex-NMF) was used to extract sources from the MRSI dataset. Exhaustive feature selection has revealed parsimonious feature subsets that facilitate expert interpretation of the results while retaining high discriminatory ability.
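The exhaustive feature selection step can be sketched generically (this is not the authors' implementation): every feature subset up to a size cap is scored, here with a simple leave-one-out nearest-centroid classifier as a stand-in scorer, on synthetic data with one informative feature:

```python
from itertools import combinations
import numpy as np

def loo_centroid_accuracy(X, y):
    """Leave-one-out accuracy of a nearest-class-centroid classifier."""
    n, correct = len(y), 0
    for i in range(n):
        keep = np.arange(n) != i
        centroids = {c: X[keep & (y == c)].mean(axis=0) for c in np.unique(y)}
        pred = min(centroids, key=lambda c: np.linalg.norm(X[i] - centroids[c]))
        correct += int(pred == y[i])
    return correct / n

def exhaustive_selection(X, y, names, max_size=2):
    """Score every feature subset up to max_size; return (best accuracy, subset)."""
    best = (0.0, ())
    for k in range(1, max_size + 1):
        for cols in combinations(range(X.shape[1]), k):
            acc = loo_centroid_accuracy(X[:, list(cols)], y)
            if acc > best[0]:
                best = (acc, tuple(names[i] for i in cols))
    return best

# Synthetic demo: only feature "f0" carries class information.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 80)
X = rng.normal(size=(80, 5))
X[:, 0] += 2.0 * y                     # shift class 1 along the first feature
score, subset = exhaustive_selection(X, y, [f"f{i}" for i in range(5)])
```

Exhaustive search is only tractable for small feature pools, which matches its use after aggressive transformation (radiomics, Convex-NMF sources) has reduced dimensionality.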