Background: Accurate lymph node (LN) assessment is important for rectal cancer (RC) staging on multiparametric magnetic resonance imaging (mpMRI). However, identifying all LNs in the scan region is extremely time-consuming. This study aims to develop and validate a deep-learning-based, fully automated lymph node detection and segmentation (auto-LNDS) model based on mpMRI. Methods: In total, 5789 annotated LNs (diameter ≥ 3 mm) on mpMRI from 293 patients with RC at a single center were enrolled. Fused T2-weighted images (T2WI) and diffusion-weighted images (DWI) provided the input for the deep learning framework Mask R-CNN, trained through transfer learning to generate the auto-LNDS model. The model was then validated on internal and external datasets consisting of 935 and 1198 LNs, respectively. Detection performance was evaluated using sensitivity, positive predictive value (PPV), and false-positive rate per case (FP/vol); segmentation performance was evaluated using the Dice similarity coefficient (DSC). Findings: For LN detection, auto-LNDS achieved sensitivity, PPV, and FP/vol of 80.0%, 73.5%, and 8.6 in internal testing and 62.6%, 64.5%, and 8.2 in external testing, respectively, significantly better than the performance of junior radiologists. Detection and segmentation took 1.3 s/case for the model, compared with 200 s/case for the radiologists. For LN segmentation, the DSC of the model was in the range of 0.81–0.82. Interpretation: This deep-learning-based auto-LNDS model can detect and segment pelvic LNs effectively on mpMRI for RC and holds great potential for facilitating N-staging in clinical practice.
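The detection metrics reported in this abstract can be sketched as a short computation; the counts in the usage line below are hypothetical placeholders, not figures from the study:

```python
def detection_metrics(tp, fp, fn, n_cases):
    """Per-study LN detection metrics: sensitivity = TP/(TP+FN),
    PPV = TP/(TP+FP), and false positives per case (FP/vol)."""
    sensitivity = tp / (tp + fn)
    ppv = tp / (tp + fp)
    fp_per_case = fp / n_cases
    return sensitivity, ppv, fp_per_case

# Hypothetical counts for illustration only:
sens, ppv, fp_vol = detection_metrics(tp=80, fp=29, fn=20, n_cases=10)
```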
Background: Preoperative differentiation of borderline from malignant epithelial ovarian tumors (BEOT from MEOT) can impact surgical management. MRI has improved this assessment, but subjective interpretation by radiologists may lead to inconsistent results. Purpose: To develop and validate an objective MRI-based machine-learning (ML) assessment model for differentiating BEOT from MEOT, and to compare its performance against radiologists' interpretation. Study Type: Retrospective study of eight clinical centers. Population: In all, 501 women with histopathologically confirmed BEOT (n = 165) or MEOT (n = 336) from 2010 to 2018 were enrolled. Three cohorts were constructed: a training cohort (n = 250), an internal validation cohort (n = 92), and an external validation cohort (n = 159). Field Strength/Sequence: Preoperative MRI within 2 weeks of surgery. Single- and multiparameter (MP) machine-learning assessment models were built utilizing the following four MRI sequences: T2-weighted imaging (T2WI) with fat saturation (FS), diffusion-weighted imaging (DWI), apparent diffusion coefficient (ADC), and contrast-enhanced (CE)-T1WI. Assessment: Diagnostic performance of the models was assessed for both whole-tumor (WT) and solid-tumor (ST) components. The models' ability to discriminate BEOT from early-stage MEOT was also assessed. Six radiologists of varying experience also interpreted the MR images. Statistical Tests: Mann–Whitney U-test for significance of clinical characteristics; chi-square test for differences in labels; DeLong test for differences between receiver operating characteristic (ROC) curves. Results: The MP-ST model performed better than the MP-WT model in both the internal validation cohort (area under the curve [AUC] = 0.932 vs. 0.917) and the external validation cohort (AUC = 0.902 vs. 0.767). The model showed capability in discriminating BEOT from early-stage MEOT, with AUCs of 0.909 and 0.920 in the internal and external validation cohorts, respectively.
Radiologists' performance in both the internal (mean AUC = 0.792; range, 0.679–0.924) and external (mean AUC = 0.797; range, 0.744–0.867) validation cohorts was considerably poorer than that of the model. Data Conclusion: Performance of the MRI-based ML model was robust and superior to subjective assessment by radiologists. If this approach can be implemented in clinical practice, improved preoperative prediction could potentially lead to preserved ovarian function and fertility for some women. Level of Evidence: 4. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2020;52:897–904.
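The AUC comparisons above rest on the ROC analysis named in the abstract. The AUC itself can be computed directly via the Mann–Whitney identity: the probability that a randomly chosen positive case scores above a randomly chosen negative one, with ties counting one half. A minimal sketch with made-up scores:

```python
def auc_mann_whitney(scores, labels):
    """ROC-AUC via the Mann-Whitney U identity: fraction of
    positive/negative pairs ranked correctly (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative scores only: 5 of the 6 positive/negative pairs are
# ranked correctly, so AUC = 5/6.
auc = auc_mann_whitney([0.9, 0.4, 0.3, 0.7, 0.1], [1, 1, 0, 0, 0])
```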
Segmentation of colorectal tumors is the basis of preoperative prediction, staging, and therapeutic response evaluation. Because the boundary between lesions and normal colorectal tissue is blurred, accurate segmentation is difficult. Routine manual or semi-manual segmentation methods are extremely tedious, time-consuming, and highly operator-dependent. Within the framework of fully convolutional networks (FCNs), a segmentation method for colorectal tumors is presented. Normalization was applied to reduce differences among images. Drawing on transfer learning, VGG-16 was employed to extract features from the normalized images. Five side-output blocks were attached to the last convolutional layer of each stage of VGG-16; these side-output blocks mine multiscale features and produce corresponding predictions. Finally, the predictions from all side-output blocks were fused to determine the final boundaries of the tumors. A quantitative comparison with 2772 manual segmentations of colorectal tumors on T2-weighted magnetic resonance images shows that the average Dice similarity coefficient, positive predictive value, specificity, and sensitivity were 83.56%, 82.67%, 96.75%, and 87.85%, respectively, with a Hammoude distance of 0.2694 and a Hausdorff distance of 8.20. The proposed method is superior to U-net for colorectal tumor segmentation (P < 0.05). There is no difference between cross-entropy loss and Dice-based loss for colorectal tumor segmentation (P > 0.05). The results indicate that the introduction of FCNs contributed to accurate segmentation of colorectal tumors. This method has the potential to replace the present time-consuming and nonreproducible manual segmentation process.
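The Dice similarity coefficient used to score these segmentations is defined as DSC = 2|A∩B| / (|A| + |B|); a minimal sketch over flat binary masks (the masks in the usage line are illustrative only):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks given as
    flat 0/1 sequences: DSC = 2|A ∩ B| / (|A| + |B|)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    # Convention: two empty masks are a perfect match.
    return 2.0 * inter / size if size else 1.0

# Overlap of 2 pixels, 3 pixels in each mask: DSC = 4/6 ≈ 0.667
dsc = dice_coefficient([1, 1, 1, 0], [0, 1, 1, 1])
```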
Objectives: To establish a radiomic algorithm based on grayscale ultrasound images and to make preoperative predictions of microvascular invasion (MVI) in hepatocellular carcinoma (HCC) patients. Methods: In this retrospective study, 322 cases of histopathologically confirmed HCC lesions were included. Classification based on preoperative grayscale ultrasound images was performed in two stages: (1) classifier #1 separated MVI-negative from MVI-positive cases; (2) classifier #2 further classified MVI-positive cases as M1 or M2. The gross-tumoral region (GTR) and peri-tumoral region (PTR) signatures were combined to generate gross- and peri-tumoral region (GPTR) radiomic signatures. The optimal radiomic signatures were further incorporated with vital clinical information. Multivariable logistic regression was used to build the radiomic models. Results: In total, 1,595 radiomic features were extracted from each HCC lesion. At the classifier #1 stage, the radiomic signatures based on GTR, PTR, and GPTR features showed area under the curve (AUC) values of 0.708 (95% CI, 0.603-0.812), 0.710 (95% CI, 0.609-0.811), and 0.726 (95% CI, 0.625-0.827), respectively. Upon incorporation of vital clinical information, the AUC of the GPTR radiomic algorithm was 0.744 (95% CI, 0.646-0.841). At the classifier #2 stage, the AUC of the GTR radiomic signature was 0.806 (95% CI, 0.667-0.944). Conclusions: Our radiomic algorithm based on grayscale ultrasound images has potential value for facilitating preoperative prediction of MVI in HCC patients. The GTR radiomic signature may be helpful for further discriminating between M1 and M2 levels among MVI-positive patients.
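The final modeling step described here, a multivariable logistic regression combining a radiomic signature score with clinical information, can be sketched as follows; the function name and coefficients are hypothetical placeholders, not fitted values from the study:

```python
import math

def logistic_risk(radiomic_score, clinical_feature, b0, b1, b2):
    """Multivariable logistic model: predicted probability of
    MVI-positive status from a radiomic signature score and one
    clinical covariate. Coefficients b0..b2 are placeholders."""
    z = b0 + b1 * radiomic_score + b2 * clinical_feature
    return 1.0 / (1.0 + math.exp(-z))

# With all inputs at zero the model returns the baseline 0.5;
# a higher radiomic score raises the predicted risk.
baseline = logistic_risk(0.0, 0.0, b0=0.0, b1=1.0, b2=1.0)
```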
Objectives: To develop and validate a deep learning-based overall survival (OS) prediction model in patients with hepatocellular carcinoma (HCC) treated with transarterial chemoembolization (TACE) plus sorafenib. Methods: This retrospective multicenter study consisted of 201 patients with treatment-naïve, unresectable HCC who were treated with TACE plus sorafenib. Data from 120 patients were used as the training set for model development. A deep learning signature was constructed using deep image features from preoperative contrast-enhanced computed tomography images. An integrated nomogram was built with Cox regression by combining the deep learning signature and clinical features. The deep learning signature and nomograms were also externally validated in an independent validation set of 81 patients. The C-index was used to evaluate the performance of OS prediction. Results: The median OS of the entire set was 19.2 months, with no significant difference between the training and validation cohorts (18.6 vs. 19.5 months, P = 0.45). The deep learning signature achieved good prediction performance, with a C-index of 0.717 in the training set and 0.714 in the validation set. The integrated nomogram showed significantly better prediction performance than the clinical nomogram in both the training set (0.739 vs. 0.664, P = 0.002) and the validation set (0.730 vs. 0.679, P = 0.023). Conclusion: The deep learning signature provided significant added value to clinical features in the development of an integrated nomogram, which may serve as a tool for individualized prognosis prediction and for identifying HCC patients who may benefit from the combination of TACE plus sorafenib.
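The C-index used to evaluate OS prediction here is Harrell's concordance index. A simplified sketch, where right-censoring is handled by counting only pairs whose earlier time is an observed event; the survival data in the usage lines are invented for illustration:

```python
def c_index(times, events, risks):
    """Harrell's concordance index: among comparable pairs (the
    earlier time is an observed event, event flag 1), the fraction
    where the higher predicted risk has the shorter survival time;
    ties in predicted risk count 0.5."""
    concordant, pairs = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] == 1 and times[i] < times[j]:
                pairs += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / pairs

# Perfectly ordered predictions (highest risk dies first): C = 1.0
c = c_index([2, 4, 6], [1, 1, 1], [0.9, 0.5, 0.1])
```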