Background: The purpose of this study was to investigate the value of wavelet-transformed radiomic MRI in predicting pathological complete response (pCR) to neoadjuvant chemotherapy (NAC) in patients with locally advanced breast cancer (LABC). Methods: Fifty-five female patients with LABC who underwent contrast-enhanced MRI (CE-MRI) prior to NAC were enrolled in this retrospective study. According to the pathological assessment after NAC, patient responses were categorized as pCR or non-pCR. Three groups of radiomic textures were calculated from the segmented lesions: (1) volumetric textures, (2) peripheral textures, and (3) wavelet-transformed textures. Six models for predicting pCR were built: Model I: group (1); Model II: groups (1) + (2); Model III: group (3); Model IV: groups (1) + (3); Model V: groups (2) + (3); and Model VI: groups (1) + (2) + (3). The performance of the prediction models was compared using the area under the receiver operating characteristic (ROC) curve (AUC). Results: The AUCs of the six models were 0.816 ± 0.033 (Model I), 0.823 ± 0.020 (Model II), 0.888 ± 0.025 (Model III), 0.876 ± 0.015 (Model IV), 0.885 ± 0.030 (Model V), and 0.874 ± 0.019 (Model VI). The four models that included wavelet-transformed textures (Models III, IV, V, and VI) performed significantly better than the two that did not (Models I and II). In addition, adding volumetric textures, peripheral textures, or both did not improve performance. Conclusions: Wavelet-transformed textures outperformed volumetric and/or peripheral textures in the radiomic MRI prediction of pCR to NAC in patients with LABC and can potentially serve as a surrogate biomarker for predicting the response of LABC to NAC.
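To make the idea of wavelet-transformed texture features concrete, the following is a minimal numpy sketch (not the study's actual pipeline, which would typically use a dedicated radiomics toolkit): a one-level 2D Haar decomposition of an image patch into approximation and detail sub-bands, followed by a simple first-order "energy" texture computed per sub-band.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar wavelet transform (illustrative sketch).

    Returns the approximation (LL) and detail (LH, HL, HH) sub-bands.
    Assumes even image dimensions.
    """
    # Row transform: average and difference of adjacent column pairs
    lo = (img[:, ::2] + img[:, 1::2]) / 2.0
    hi = (img[:, ::2] - img[:, 1::2]) / 2.0
    # Column transform applied to both row outputs
    ll = (lo[::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

def subband_energy(band):
    """First-order 'energy' texture of a sub-band."""
    return float(np.mean(band ** 2))

# Toy 4x4 "lesion" patch with a coarse intensity pattern
patch = np.array([[1, 1, 2, 2],
                  [1, 1, 2, 2],
                  [3, 3, 4, 4],
                  [3, 3, 4, 4]], dtype=float)
ll, lh, hl, hh = haar_dwt2(patch)
features = {name: subband_energy(b)
            for name, b in zip(("LL", "LH", "HL", "HH"), (ll, lh, hl, hh))}
```

In a real radiomic pipeline, texture statistics (e.g., GLCM features) would be computed on each sub-band of the segmented lesion, which is what separates wavelet-transformed textures from textures on the raw image.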
This paper presents an efficient inverse kinematics (IK) approach with fast computation for a PUMA560-structured robot manipulator. Using properties of orthogonal and block matrices, the complex IK matrix equations are transformed into eight purely algebraic equations in the six unknown joint angles, which makes the solution compact and avoids computing the inverses of the 4×4 homogeneous transformation matrices. Moreover, an appropriate combination of the related equations ensures that the solutions are free of extraneous roots, and the wrist singularity problem of the robot is also addressed. Finally, a case study demonstrates the effectiveness of the proposed algorithm.
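The key property that lets such approaches avoid numerical inversion is that the rotation block of a homogeneous transform is orthogonal, so its inverse is its transpose. A minimal sketch of that identity (with a hypothetical z-axis rotation, not the paper's actual DH frames):

```python
import numpy as np

def rot_z(theta):
    """Rotation about z (a hypothetical example frame)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def htm(R, p):
    """Assemble a 4x4 homogeneous transform from rotation R and translation p."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

def htm_inverse(T):
    """Closed-form inverse via orthogonality: R^-1 = R^T.

    Uses only a transpose and one matrix-vector product,
    no general 4x4 matrix inversion.
    """
    R = T[:3, :3]
    p = T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ p
    return Ti

T = htm(rot_z(0.7), np.array([0.3, -0.1, 0.5]))
# A joint angle can then be recovered algebraically, e.g. via atan2
# on rotation-matrix entries, rather than by matrix inversion:
theta = float(np.arctan2(T[1, 0], T[0, 0]))
```

This is the same mechanism that turns matrix equations into pure algebraic equations in the joint angles: rearranging both sides so that only transposes and known entries appear.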
Background: Glioma is the most common malignant brain tumor, with a high morbidity rate and a mortality rate of more than three percent, and it seriously endangers human health. MRI is the main imaging method for brain tumors in the clinic. Segmenting tumor regions from multi-modal MRI scans supports treatment inspection, post-diagnosis monitoring, and evaluation of treatment effect. However, clinical brain tumor segmentation is still commonly performed manually, which is time-consuming and produces large differences between operators, so a consistent and accurate automatic segmentation method is urgently needed. With the continuous development of deep learning, researchers have designed many automatic segmentation algorithms, but problems remain: (1) much segmentation research stays on the 2D plane, which reduces the accuracy of 3D image feature extraction to some extent; and (2) MRI images have gray-scale offset fields that make it difficult to delineate contours accurately. Methods: To meet these challenges, we propose an automatic brain tumor MRI segmentation framework called AGSE-VNet. In our study, a Squeeze-and-Excite (SE) module is added to each encoder and an Attention Guide Filter (AG) module to each decoder: channel relationships are used to automatically enhance useful information and suppress useless information in each channel, and the attention mechanism guides the edge information while removing the influence of irrelevant information such as noise. Results: We evaluated our approach with the BraTS2020 challenge online validation tool. The Dice scores for the whole tumor, tumor core, and enhancing tumor are 0.68, 0.85, and 0.70, respectively.
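The channel-recalibration idea behind the SE module can be sketched in a few lines of numpy (an illustrative stand-in for the learned PyTorch module; the weights below are random placeholders, not trained parameters):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def squeeze_excite(feat, w1, w2):
    """Channel squeeze-and-excitation (illustrative numpy sketch).

    feat: (C, D, H, W) feature map; w1: (C//r, C), w2: (C, C//r),
    with r the reduction ratio. w1 and w2 stand in for learned weights.
    """
    c = feat.shape[0]
    # Squeeze: global average pooling over the spatial dimensions
    z = feat.reshape(c, -1).mean(axis=1)            # (C,)
    # Excite: bottleneck FC -> ReLU -> FC -> sigmoid
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))       # (C,) channel weights
    # Recalibrate: scale each channel by its learned importance
    return feat * s[:, None, None, None]

rng = np.random.default_rng(0)
C, r = 8, 2
feat = rng.standard_normal((C, 4, 4, 4))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
out = squeeze_excite(feat, w1, w2)
```

Each channel is multiplied by a single weight in (0, 1), which is how the network "uses the channel relationship" to amplify informative feature maps and suppress the rest.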
Conclusion: Although MRI images vary in intensity, AGSE-VNet is not affected by tumor size and can extract the features of the three tumor regions more accurately. It achieves impressive results and has the potential to support the clinical diagnosis and treatment of brain tumor patients.
As a highly malignant tumor, glioma has a concerning incidence and mortality. Predicting the survival time of glioma patients from features extracted from the tumor helps doctors develop more targeted treatments. Magnetic resonance imaging (MRI) captures the details of brain tissue quickly and clearly. However, manually segmenting brain tumors from MRI costs doctors considerable effort, and doctors can only roughly estimate the survival time of glioma patients, which is not conducive to formulating treatment plans. Therefore, automatically segmenting brain tumors and accurately predicting survival time are of great significance. In this article, we first propose the NLSE-VNet model, which integrates the Non-Local module and the Squeeze-and-Excitation module into V-Net to segment three brain tumor sub-regions in multimodal MRI. We then extract intensity, texture, wavelet, shape, and other radiological features from the tumor area and use a CNN to extract deep features. Factor analysis is used to reduce the dimensionality of the features, and finally the dimensionality-reduced features and clinical features such as age and tumor grade are fed into a random forest regression model to predict survival. We evaluate the method on the BraTS 2019 and BraTS 2020 datasets. The average Dice of the brain tumor segmentation task reaches 79%, and the average RMSE of the survival prediction task is as low as 311.5. The results indicate that the proposed method has great advantages in the segmentation and survival prediction of gliomas.
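A few of the intensity and shape features such a pipeline might extract from the segmented tumor can be sketched in numpy (an illustrative subset only; the paper's full feature set also includes texture and wavelet features, and the feature names here are generic, not the paper's exact definitions):

```python
import numpy as np

def shannon_entropy(vals, bins=16):
    """Histogram-based Shannon entropy of the intensity values."""
    hist, _ = np.histogram(vals, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def first_order_features(image, mask):
    """A handful of first-order intensity and shape features
    computed inside the tumor mask (illustrative sketch)."""
    vals = image[mask > 0]
    return {
        "mean": float(vals.mean()),
        "std": float(vals.std()),
        "energy": float(np.sum(vals ** 2)),
        "entropy": shannon_entropy(vals),
        "volume_voxels": int(mask.sum()),   # simple shape feature
    }

# Toy 3D "MRI" volume with a 3x3x3 cubic "tumor" of uniform intensity
image = np.zeros((8, 8, 8))
image[2:5, 2:5, 2:5] = 100.0
mask = (image > 0).astype(int)
feats = first_order_features(image, mask)
```

Vectors like this, concatenated with CNN-derived deep features and reduced by factor analysis, would then be the input to the downstream survival regressor.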
Glioma is the most common primary central nervous system tumor, accounting for about half of all primary intracranial tumors. As a non-invasive examination method, MRI plays an extremely important guiding role in the clinical intervention of tumors. However, manually segmenting brain tumors from MRI requires a great deal of doctors' time and effort, which delays subsequent diagnosis and treatment plans. With the development of deep learning, medical image segmentation is gradually being automated. However, brain tumors are easily confused with strokes, and severe class imbalance makes brain tumor segmentation one of the most difficult tasks in MRI segmentation. To address these problems, we propose a deep multi-task learning framework that integrates a multi-depth fusion module to accurately segment brain tumors. In this framework, we add a distance-transform decoder to the V-Net, which makes the segmentation contour generated by the mask decoder more accurate and reduces rough boundaries. To combine the different tasks of the two decoders, we take a weighted sum of their loss functions, where the distance-map prediction regularizes the mask prediction. At the same time, the multi-depth fusion module in the encoder enhances the network's ability to extract features. The accuracy of the model was evaluated online using the multimodal MRI records of the BraTS 2018, BraTS 2019, and BraTS 2020 datasets. The method obtains high-quality segmentation results, with an average Dice as high as 78%. The experimental results show that this model has great potential for segmenting brain tumors automatically and accurately.
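The weighted combination of the two decoder losses can be sketched as follows (a minimal numpy illustration; the loss weights and the soft-Dice formulation are generic assumptions, not the paper's tuned values):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss on probability maps (1 - Dice overlap)."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def mse(pred, target):
    """Mean squared error for the distance-map regression head."""
    return float(np.mean((pred - target) ** 2))

def multitask_loss(mask_pred, mask_gt, dist_pred, dist_gt,
                   w_mask=1.0, w_dist=0.5):
    """Weighted sum of the mask loss and the distance-map loss;
    the distance term regularizes the mask prediction."""
    return (w_mask * dice_loss(mask_pred, mask_gt)
            + w_dist * mse(dist_pred, dist_gt))

# Toy 2D example: a 2x2 "tumor" in a 4x4 slice
mask_gt = np.zeros((4, 4))
mask_gt[1:3, 1:3] = 1.0
# Perfect predictions on both heads give (near-)zero total loss
perfect = multitask_loss(mask_gt, mask_gt, mask_gt, mask_gt)
```

In training, the ground-truth distance map would come from a distance transform of the segmentation mask, so boundary errors are penalized even when the overlap term barely changes.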