Objective: This study aimed to evaluate and validate the performance of deep convolutional neural networks in discriminating different histologic types of ovarian tumors in ultrasound (US) images.
Material and methods: This retrospective study included 1142 US images from 328 patients collected between January 2019 and June 2021. Two tasks were proposed based on the US images. Task 1 was to classify benign tumors and high-grade serous carcinoma in the original ovarian-tumor US images, in which the benign tumors were divided into six classes: mature cystic teratoma, endometriotic cyst, serous cystadenoma, granulosa-theca cell tumor, mucinous cystadenoma, and simple cyst. The US images in task 2 were segmented. Deep convolutional neural networks (DCNNs) were applied to classify the different types of ovarian tumors in detail. We used transfer learning on six pre-trained DCNNs: VGG16, GoogLeNet, ResNet34, ResNeXt50, DenseNet121, and DenseNet201. Several metrics were adopted to assess model performance: accuracy, sensitivity, specificity, F1-score, and the area under the receiver operating characteristic curve (AUC).
Results: The DCNNs performed better on labeled US images than on the original US images. The best predictive performance came from the ResNeXt50 model, which had an overall accuracy of 0.952 in directly classifying the seven histologic types of ovarian tumors. It achieved a sensitivity of 90% and a specificity of 99.2% for high-grade serous carcinoma, and a sensitivity of over 90% and a specificity of over 95% for most benign pathological categories.
Conclusion: DCNNs are a promising technique for classifying different histologic types of ovarian tumors in US images and can provide valuable computer-aided diagnostic information.
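The transfer-learning setup described above can be illustrated with a short sketch. The study's actual training code, hyperparameters, and data layout are not published, so the dataset path, preprocessing, and optimizer settings below are assumptions; only the general idea (an ImageNet-pretrained ResNeXt50 with its classifier head replaced for seven tumor classes) follows the abstract.

```python
# Minimal transfer-learning sketch in PyTorch; paths and hyperparameters are illustrative.
import torch
import torch.nn as nn
from torchvision import models, transforms, datasets

NUM_CLASSES = 7  # six benign classes + high-grade serous carcinoma

# Load an ImageNet-pretrained ResNeXt-50 and replace its classification head.
model = models.resnext50_32x4d(weights=models.ResNeXt50_32X4D_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Standard ImageNet preprocessing; the study's exact augmentation is not reported.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder-per-class layout of the ultrasound images.
train_set = datasets.ImageFolder("data/ovarian_us/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for images, labels in loader:  # one pass shown; real training runs many epochs
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

The same loop applies to the other five backbones by swapping the constructor and classifier attribute; evaluation with accuracy, sensitivity, specificity, F1-score, and AUC would follow on a held-out split.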
Diagnostic results can be radically influenced by the quality of 2D ovarian-tumor ultrasound images. However, clinically processed 2D ovarian-tumor ultrasound images contain many manually added symbols, such as fingers, crosses, dashed lines, and letters, which hinder artificial intelligence (AI)-based image recognition. These symbols are widely distributed within the lesion's boundary, interfering with the networks' extraction of useful features and thus decreasing the accuracy of lesion classification and segmentation. Image inpainting techniques are used to remove noise and unwanted objects from images. To solve this problem, we examined the MMOTU dataset and built a 2D ovarian-tumor ultrasound image inpainting dataset by finely annotating the various symbols in the images. This paper presents a novel framework, the mask-guided generative adversarial network (MGGAN), for removing these symbols from 2D ovarian-tumor ultrasound images. MGGAN performs well in corrupted regions by using an attention mechanism in the generator that emphasizes valid information and ignores symbol information, making the restored lesion boundaries more realistic. Moreover, fast Fourier convolutions (FFCs) and residual networks are used to enlarge the global receptive field, so the model can be applied to high-resolution ultrasound images. The greatest benefit of this algorithm is that it achieves pixel-level inpainting of distorted regions without requiring clean reference images. Compared with other models, ours achieved better results with only a single stage in both objective and subjective evaluations, obtaining the best scores at both 256 × 256 and 512 × 512 resolutions. At 256 × 256, our model achieved an SSIM of 0.9246, an FID of 22.66, and an LPIPS of 0.07806; at 512 × 512, it achieved an SSIM of 0.9208, an FID of 25.52, and an LPIPS of 0.08300. Our method can considerably improve the accuracy of computerized ovarian tumor diagnosis: on the cleaned images, segmentation accuracy improved from 71.51% to 76.06% for the U-Net model and from 61.13% to 66.65% for the PSPNet model.
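The receptive-field argument above can be made concrete with a small sketch of the spectral branch of a fast Fourier convolution. The MGGAN generator itself is not reproduced here, and the channel sizes, normalization, and residual wiring below are assumptions; the sketch only shows why a frequency-domain convolution mixes information across the entire image, which is what lets such a generator scale to high-resolution inputs.

```python
# Sketch of an FFC-style spectral block in PyTorch (illustrative, not the MGGAN code).
import torch
import torch.nn as nn

class SpectralBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Real and imaginary parts are stacked along the channel axis,
        # so the frequency-domain conv sees 2 * channels input channels.
        self.freq_conv = nn.Sequential(
            nn.Conv2d(channels * 2, channels * 2, kernel_size=1),
            nn.BatchNorm2d(channels * 2),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # 2D real FFT over the spatial dimensions: every output frequency
        # depends on every input pixel, giving an image-wide receptive field.
        freq = torch.fft.rfft2(x, norm="ortho")            # complex, (b, c, h, w//2 + 1)
        freq = torch.cat([freq.real, freq.imag], dim=1)    # (b, 2c, h, w//2 + 1)
        freq = self.freq_conv(freq)
        real, imag = torch.chunk(freq, 2, dim=1)
        freq = torch.complex(real, imag)
        # Back to the spatial domain; a residual connection keeps local detail.
        out = torch.fft.irfft2(freq, s=(h, w), norm="ortho")
        return x + out

# Usage on a dummy high-resolution feature map.
feat = torch.randn(1, 64, 512, 512)
print(SpectralBlock(64)(feat).shape)  # torch.Size([1, 64, 512, 512])
```

In an inpainting generator, blocks like this would sit alongside ordinary (local) convolutions and be guided by the symbol mask, so that global context fills the corrupted regions while the residual path preserves fine lesion detail.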