In this study, the super-resolution convolutional neural network (SRCNN), an emerging deep-learning-based method for enhancing image resolution, was applied to chest CT images and evaluated as a post-processing approach. For evaluation, 89 chest CT cases were sampled from The Cancer Imaging Archive and divided randomly into 45 training cases and 44 external test cases. The SRCNN was trained using the training dataset. With the trained SRCNN, a high-resolution image was reconstructed from a low-resolution image that had been down-sampled from an original test image. For quantitative evaluation, two image-quality metrics were measured and compared with those of conventional linear interpolation methods. The image restoration quality of the SRCNN scheme was significantly higher than that of the linear interpolation methods (p < 0.001 or p < 0.05). The high-resolution image reconstructed by the SRCNN scheme was highly restored and comparable to the original reference image, particularly at ×2 magnification. These results indicate that the SRCNN scheme significantly outperforms linear interpolation for enhancing image resolution in chest CT images, and suggest that SRCNN may become a practical means of generating high-resolution CT images from standard CT images.
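The evaluation protocol described above (down-sample a reference image, restore it, and score the restoration against the original) can be sketched in a few lines. This is an illustrative NumPy mock-up assuming a nearest-neighbour baseline and PSNR as the quality metric; it is not the study's code, and a synthetic random image stands in for real CT data.

```python
import numpy as np

def nearest_upscale(img, factor=2):
    # Nearest-neighbour upscaling: one of the simple interpolation
    # baselines that the SRCNN is compared against.
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def psnr(ref, restored, data_range=255.0):
    # Peak signal-to-noise ratio (dB) between reference and restored images.
    mse = np.mean((ref.astype(np.float64) - restored.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

# Toy demonstration on a synthetic image standing in for a CT slice.
rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
low_res = reference[::2, ::2]               # crude ×2 down-sampling
restored = nearest_upscale(low_res, 2)      # baseline ×2 restoration
print(round(psnr(reference, restored), 1))  # restoration quality in dB
```

A learned super-resolution model would replace `nearest_upscale` here; the comparison metric stays the same.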
Electronic cleansing (EC) is used for computational removal of residual feces and fluid tagged with an orally administered contrast agent on CT colonographic images to improve the visibility of polyps during virtual endoscopic "fly-through" reading. A recent trend in CT colonography is to perform a low-dose CT scanning protocol with the patient having undergone reduced- or noncathartic bowel preparation. Although several EC schemes exist, they were developed for use with cathartic bowel preparation and high-radiation-dose CT; thus, at a low dose with noncathartic bowel preparation, they tend to generate cleansing artifacts that distract and mislead readers. Deep learning can be used to improve the image quality of EC at CT colonography. Deep learning EC can produce substantially fewer cleansing artifacts at dual-energy than at single-energy CT colonography, because the dual-energy information can be used to identify relevant materials in the colon more precisely than is possible with a single x-ray attenuation value. Because the number of annotated training images is limited at CT colonography, transfer learning can be used for appropriate training of deep learning algorithms. The purposes of this article are to review the causes of cleansing artifacts that distract and mislead readers in conventional EC schemes, to describe the applications of deep learning and dual-energy CT colonography to EC of the colon, and to demonstrate the improvements in image quality with EC and deep learning at single-energy and dual-energy CT colonography with noncathartic bowel preparation. © RSNA, 2018 (RG • Volume 38, Number 7 • Tachibana et al)
Abbreviations: DCNN = deep convolutional neural network, EC = electronic cleansing, MFI = multimaterial feature image, 3D = three-dimensional
SA-CME Learning Objectives: after completing this journal-based SA-CME activity, participants will be able to:
■ Describe the fundamentals of EC methods and the cleansing artifacts that the current EC methods generate.
■ Discuss an effective application of deep learning to virtual bowel cleansing.
■ Explain how the combined use of deep learning and dual-energy CT colonography can improve the image quality with EC. See rsna.org/learning-center-rg.
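The dual-energy advantage described above can be illustrated with a toy material-decomposition calculation: two attenuation measurements at different x-ray energies determine two material fractions, which a single attenuation value cannot. This is a hypothetical numerical sketch, not the article's EC algorithm, and the attenuation coefficients below are invented for illustration.

```python
import numpy as np

# Illustrative two-material decomposition. Each row maps material fractions
# (e.g. tagged fluid, soft tissue) to the attenuation observed at one energy.
# Coefficients are made-up illustration values, not calibrated CT numbers.
A = np.array([[1.00, 0.30],    # low-energy attenuation per unit fraction
              [0.60, 0.25]])   # high-energy attenuation per unit fraction

measured = np.array([0.72, 0.46])         # attenuation at the two energies
fractions = np.linalg.solve(A, measured)  # recovered material fractions
print(fractions)  # → [0.6 0.4]
```

With a single energy, one measurement admits infinitely many fraction pairs; the second measurement makes the system uniquely solvable, which is the intuition behind fewer cleansing artifacts at dual-energy CT colonography.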
Background: One of the dental health goals of Health Japan 21, in which the Japanese government clarified its health policy, was to ensure the use of fluoride toothpaste by 90% or more of schoolchildren. This goal was not achieved. The aim of this cross-sectional questionnaire study was to evaluate the characteristics of parents whose children use non-fluoride toothpaste.
Methods: In December 2010, questionnaire forms were sent to 18 elementary schools or school dentists. Students (6-12 years old) were asked to take the forms home for their parents to fill in and to bring the completed questionnaires back to school. The collected questionnaires were mailed from the schools to the author's institution by the end of March 2011. The relationship between fluoride in toothpaste and the reasons for choice of toothpaste, the child's toothbrushing habits, and attitude toward child caries prevention was examined in the 6,069 respondents who answered all the questions required for the analyses and indicated that their children use toothpaste.
Results: Non-fluoride toothpaste users accounted for 5.1% of all toothpaste users. Among the children using non-fluoride toothpaste, significantly greater numbers gave 'anti-gingivitis', 'halitosis prevention', or 'tartar control' as reasons for choice of toothpaste; did not give 'has fluoride', 'is cheaper', or 'tastes good' as reasons for choice of toothpaste; used toothpaste only sometimes; or were in 4th-6th grades. There was no significant relationship between use of non-fluoride toothpaste and measures taken for caries prevention in children. Multilevel (first level: individual; second level: school) logistic regression analysis indicated that use of non-fluoride toothpaste was significantly related to giving 'anti-gingivitis' (odds ratio: 1.44) as a reason for choice of toothpaste; not giving 'has fluoride' (0.40), 'tastes good' (0.49), or 'is cheaper' (0.50) as reasons for choice of toothpaste; toothbrushing less often (twice a day: 1.34; once a day or less: 1.46); and using toothpaste less often (sometimes: 1.39).
Conclusions: It is necessary to teach parents that dental caries is the dental health issue with the highest priority for children, and that fluoride toothpaste should therefore be used.
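The odds ratios reported above come from logistic regression, where each predictor's odds ratio is the exponential of its fitted coefficient. A minimal sketch of that relationship, back-computing the coefficient implied by the reported OR of 1.44 (for illustration only; this is not the study's analysis code):

```python
import math

def odds_ratio(coefficient):
    # In logistic regression, a predictor's odds ratio is exp(coefficient).
    return math.exp(coefficient)

# Coefficient implied by the reported OR of 1.44 for 'anti-gingivitis'
# as a reason for toothpaste choice (illustration only).
beta = math.log(1.44)
print(round(beta, 3))              # → 0.365
print(round(odds_ratio(beta), 2))  # → 1.44
```

An OR above 1 (coefficient above 0) means the predictor raises the odds of non-fluoride toothpaste use; ORs below 1, such as 0.40 for 'has fluoride', lower them.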
Purpose: To apply and evaluate a super-resolution scheme based on the super-resolution convolutional neural network (SRCNN) for enhancing image resolution in digital mammograms. Materials and Methods: A total of 711 mediolateral oblique (MLO) images including breast lesions were sampled from the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM). We first trained the SRCNN, a deep-learning-based super-resolution method, and used the trained network to reconstruct high-resolution images from low-resolution images. We compared the image quality of this super-resolution method with that obtained using linear interpolation methods (nearest-neighbor and bilinear interpolation). To investigate the relationship between the image quality of the SRCNN-processed images and the clinical features of the mammographic lesions, we compared the image quality yielded by the SRCNN in terms of breast density, the Breast Imaging-Reporting and Data System (BI-RADS) assessment, and the verified pathology information. For quantitative evaluation, peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) were measured to assess the image restoration quality and the perceived image quality. Results: The super-resolution image quality yielded by the SRCNN was significantly higher than that obtained using linear interpolation methods (p < 0.001). The SRCNN-processed image quality in dense breasts, high-risk BI-RADS assessment groups, and pathology-verified malignant cases was significantly higher than that in low-density breasts, low-risk BI-RADS assessment groups, and benign cases, respectively (p < 0.01). Conclusion: SRCNN can significantly outperform conventional interpolation methods for enhancing image resolution in digital mammography.
SRCNN can significantly improve the image quality of magnified mammograms, especially in dense breasts, high-risk BI-RADS assessment groups, and pathology-verified malignant cases.
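The SSIM metric used in the evaluation above can be sketched in NumPy. For compactness this computes a single-window (global) SSIM over the whole image, whereas SSIM is usually averaged over local sliding windows; the constants follow the common choice C1 = (0.01L)² and C2 = (0.03L)² for dynamic range L.

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    # Structural similarity computed over the whole image as one window
    # (a simplification of the usual sliding-window mean SSIM).
    x, y = x.astype(np.float64), y.astype(np.float64)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

SSIM is 1.0 for identical images and decreases as luminance, contrast, or structure diverge, which is why it is reported alongside PSNR as a perceptual quality measure.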
We developed a deep learning-based approach to improve the image quality of single-shot turbo spin-echo (SSTSE) images of the female pelvis, and aimed to compare the resulting deep learning-based SSTSE (DL-SSTSE) images with turbo spin-echo (TSE) and conventional SSTSE images in terms of image quality. One hundred five subjects were used as the training set and 21 as the test set, with 6-fold cross-validation. In the training process, low-quality images generated from TSE images served as input, and the TSE images themselves served as ground truth. In the test process, the trained convolutional neural network was applied to SSTSE images; the output images were denoted DL-SSTSE images. For comparison, classical filtering methods were also applied to SSTSE images, yielding F-SSTSE images. The contrast ratio (CR) between gluteal fat and myometrium and the signal-to-noise ratio (SNR) of gluteal fat were measured for all images. Two radiologists graded the images on a 5-point scale with regard to overall image quality, contrast, noise, motion artifact, boundary sharpness of layers in the uterus, and conspicuity of the ovaries. CRs, SNRs, and image-quality scores were compared using Steel-Dwass multiple-comparison tests. CRs and SNRs were significantly higher in DL-SSTSE, F-SSTSE, and TSE images than in SSTSE images. Scores for overall image quality, contrast, noise, and boundary sharpness of layers in the uterus were significantly higher for DL-SSTSE and TSE images than for SSTSE images, with no significant differences in CRs, SNRs, or the respective scores between DL-SSTSE and TSE images. The motion-artifact score was significantly higher for DL-SSTSE, F-SSTSE, and SSTSE images than for TSE images, and the ovary-conspicuity score was significantly higher for DL-SSTSE images than for F-SSTSE, SSTSE, and TSE images (P < .001).
DL-SSTSE images thus showed higher image quality than SSTSE images. Compared with conventional TSE images, DL-SSTSE images had acceptable image quality while retaining the motion-artifact robustness and acquisition-time efficiency of SSTSE imaging.
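The CR and SNR measurements above are ROI statistics. Exact definitions vary between papers, so the formulas below are one common convention assumed for illustration, not necessarily the study's:

```python
import numpy as np

def contrast_ratio(roi_a, roi_b):
    # One common MRI contrast-ratio convention: (Sa - Sb) / (Sa + Sb),
    # computed from mean signals of two ROIs (e.g. gluteal fat, myometrium).
    sa, sb = np.mean(roi_a), np.mean(roi_b)
    return (sa - sb) / (sa + sb)

def snr(roi_signal, roi_noise):
    # Signal-to-noise ratio: mean ROI signal over the standard deviation
    # of a noise (background) ROI.
    return np.mean(roi_signal) / np.std(roi_noise)
```

In practice the ROIs would be pixel arrays drawn on the images; higher CR between fat and myometrium and higher SNR of fat indicate the cleaner, better-contrasted images the readers scored highly.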