Background Medical datasets, especially medical images, are often imbalanced due to the different incidences of various diseases. To address this problem, many methods have been proposed that synthesize medical images using generative adversarial networks (GANs) to enlarge training datasets and facilitate medical image analysis. For instance, image-to-image translation techniques have been used to synthesize fundus images from their respective vessel trees in the field of fundus imaging. Methods To improve the quality and detail of the synthetic images, we focus on three key aspects of the pipeline: the input mask, the GAN architecture, and the resolution of the paired images. We propose a new preprocessing pipeline named multiple-channels-multiple-landmarks (MCML), which synthesizes color fundus images from a combination of vessel tree, optic disc, and optic cup images. We compared single vessel mask input with MCML mask input on two public fundus image datasets (DRIVE and DRISHTI-GS) using several Pix2pix and CycleGAN architectures. We also designed a new Pix2pix structure with a ResU-net generator and compared it with the other models. Results and conclusion The proposed MCML method outperforms the single-vessel-based methods for every GAN architecture. Furthermore, our Pix2pix model with the ResU-net generator achieves higher PSNR and SSIM than the other GANs, and high-resolution paired images improve the performance of every GAN in this work. Finally, a Pix2pix network with a ResU-net generator using MCML and high-resolution paired images is able to generate realistic fundus images, indicating that MCML has great potential for glaucoma computer-aided diagnosis based on fundus images.
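The MCML idea of conditioning the generator on several landmarks at once can be illustrated with a minimal sketch. The function name and the toy masks below are hypothetical, assuming each landmark (vessel tree, optic disc, optic cup) is available as a binary mask of the same size; the real pipeline would feed such a stacked tensor to a Pix2pix-style generator.

```python
import numpy as np

def build_mcml_mask(vessel, disc, cup):
    """Stack vessel tree, optic disc, and optic cup binary masks into a
    multi-channel input for an image-to-image GAN.

    Each argument is a 2D array with values in {0, 1} and the same shape.
    """
    assert vessel.shape == disc.shape == cup.shape
    # Each landmark occupies its own channel, so the generator can
    # condition on all three structures independently.
    return np.stack([vessel, disc, cup], axis=-1).astype(np.float32)

# Toy 4x4 example: a vertical "vessel", a central "disc", one "cup" pixel.
vessel = np.zeros((4, 4), np.uint8); vessel[:, 1] = 1
disc = np.zeros((4, 4), np.uint8); disc[1:3, 1:3] = 1
cup = np.zeros((4, 4), np.uint8); cup[2, 2] = 1
mcml = build_mcml_mask(vessel, disc, cup)
print(mcml.shape)  # (4, 4, 3)
```

A single-vessel baseline would use only the first channel; the extra disc and cup channels are what give the generator explicit optic-nerve-head structure to reproduce.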
Background Chest CT is used to assess the severity of disease in patients infected with the 2019 novel coronavirus (COVID-19). We collected chest CT scans of 202 patients diagnosed with COVID-19 and aimed to develop a rapid, accurate, and automatic tool for severity screening to guide follow-up therapeutic treatment. Methods A total of 729 2D axial slices, comprising 246 severe and 483 non-severe cases, were employed in this study. Taking advantage of pre-trained deep neural networks, four off-the-shelf models (Inception-V3, ResNet-50, ResNet-101, DenseNet-201) were used to extract features from these CT scans. These features were then fed to multiple classifiers (linear discriminant, linear SVM, cubic SVM, KNN, and AdaBoost decision tree) to distinguish severe from non-severe COVID-19 cases. Three validation strategies (holdout validation, tenfold cross-validation, and leave-one-out) were employed to assess the feasibility of the proposed pipelines. Results and conclusion The experimental results demonstrate that classifying features from pre-trained deep models is a promising approach to COVID-19 severity screening, with the DenseNet-201 + cubic SVM combination achieving the best performance. Specifically, it reached the highest severity classification accuracies of 95.20% and 95.34% for tenfold cross-validation and leave-one-out, respectively. The established pipeline enables rapid and accurate identification of COVID-19 severity and may help physicians make more efficient and reliable decisions.
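The feature-classification stage of such a pipeline can be sketched as follows. This is a minimal illustration, not the paper's implementation: the synthetic arrays below stand in for real DenseNet-201 embeddings (which are 1920-dimensional), "cubic SVM" is taken to mean an SVM with a degree-3 polynomial kernel, and the class sizes mirror the 246 severe / 483 non-severe split.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-ins for deep features: 246 "severe" and 483 "non-severe" slices,
# each a 1920-dim DenseNet-201-style embedding (synthetic here).
X = np.vstack([rng.normal(0.5, 1.0, (246, 1920)),
               rng.normal(-0.5, 1.0, (483, 1920))])
y = np.array([1] * 246 + [0] * 483)

# "Cubic SVM" modeled as an SVC with a degree-3 polynomial kernel,
# evaluated with stratified tenfold cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"tenfold CV accuracy: {scores.mean():.3f}")
```

Swapping the classifier (linear discriminant, KNN, AdaBoost) or the cross-validation strategy (leave-one-out) changes only the `clf` and `cv` objects, which is what makes this kind of off-the-shelf-features pipeline easy to benchmark.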
Background Differential diagnosis of primary central nervous system lymphoma (PCNSL) and glioblastoma (GBM) is useful to guide treatment strategies. Purpose To investigate the use of a convolutional neural network (CNN) model for differentiating PCNSL and GBM without tumor delineation. Study Type Retrospective. Population A total of 289 patients with PCNSL (136) or GBM (153) were included; the average age of the cohort was 54 years, and there were 173 men and 116 women. Field Strength/Sequence 3.0 T; axial contrast-enhanced T1-weighted spin-echo inversion recovery (CE-T1WI), T2-weighted fluid-attenuated inversion recovery (FLAIR), and diffusion-weighted imaging (DWI, b = 0 and 1000 s/mm2). Assessment A single-parametric CNN model was built using CE-T1WI, FLAIR, and the apparent diffusion coefficient (ADC) map derived from DWI, respectively. A decision-level fusion based multi-parametric CNN model (DF-CNN) was built by combining the predictions of the single-parametric CNN models through logistic regression. An image-level fusion based multi-parametric CNN model (IF-CNN) was built using the integrated multi-parametric MR images. Radiomics models were also developed for comparison. Diagnoses by three radiologists with 6 years (junior radiologist Y.Y.), 11 years (intermediate-level radiologist Y.T.), and 21 years (senior radiologist Y.L.) of experience were obtained. Statistical Analysis Five-fold cross-validation was used for model evaluation. Pearson's chi-squared test was used to compare accuracies; the U-test and Fisher's exact test were used to compare clinical characteristics. Results The CE-T1WI, FLAIR, and ADC based single-parametric CNN models had accuracies of 0.884, 0.782, and 0.700, respectively.
The DF-CNN model had an accuracy of 0.899, which was higher than that of the IF-CNN model (0.830, P = 0.021) but not significantly different from that of the radiomics model (0.865, P = 0.255) or the senior radiologist (0.906, P = 0.886). Data Conclusion A CNN model can differentiate PCNSL from GBM without tumor delineation, with performance comparable to that of radiomics models and radiologists. Level of Evidence 4 Technical Efficacy Stage 2
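The decision-level fusion step, as described, trains a logistic regression over the probabilities produced by the three single-sequence CNNs. The sketch below illustrates that step with synthetic probabilities (the `fake_probs` helper is hypothetical, with noise levels chosen so CE-T1WI is the most informative input, matching the reported single-model accuracies); the actual study evaluated the fused model under 5-fold cross-validation rather than in-sample.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
y = np.array([1] * 136 + [0] * 153)  # 1 = PCNSL, 0 = GBM
rng.shuffle(y)

# Stand-ins for the three single-parametric CNN outputs: each column is a
# predicted PCNSL probability from CE-T1WI, FLAIR, or ADC.
def fake_probs(labels, noise):
    return np.clip(labels + rng.normal(0, noise, labels.shape), 0, 1)

P = np.column_stack([fake_probs(y, 0.30),   # CE-T1WI (strongest)
                     fake_probs(y, 0.45),   # FLAIR
                     fake_probs(y, 0.60)])  # ADC (weakest)

# Decision-level fusion: logistic regression over the stacked probabilities
# learns how much weight to give each sequence's prediction.
fusion = LogisticRegression().fit(P, y)
acc = fusion.score(P, y)
print(f"fused training accuracy: {acc:.3f}")
```

In contrast, image-level fusion (IF-CNN) would concatenate the MR sequences as input channels to a single network; here the fusion happens only at the prediction stage, which is why each per-sequence model can be trained independently.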
Optical coherence tomography angiography (OCTA) is a relatively new imaging modality that generates microvasculature maps. Meanwhile, deep learning has recently attracted considerable attention in image-to-image translation tasks such as image denoising, super-resolution, and prediction. In this paper, we propose a deep learning based pipeline for OCTA. This pipeline consists of three parts: training data preparation, model learning, and OCTA prediction using the trained model. Notably, the datasets used in this work were generated automatically by a conventional system setup without any expert labeling. Promising results have been validated by in-vivo animal experiments, demonstrating that deep learning can outperform traditional OCTA methods. Image quality is improved not only in signal-to-noise ratio but also in vasculature connectivity through laser speckle elimination, showing potential for clinical use. (Figure: schematic description of the deep learning based optical coherence tomography angiography pipeline.)
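The three-part structure of such a pipeline can be sketched end to end. Everything below is a toy stand-in, not the paper's method: `conventional_octa` is a hypothetical decorrelation-style algorithm that provides training targets without expert labeling, and a per-pixel linear least-squares fit stands in for the learned deep model.

```python
import numpy as np

# Part 1: data preparation. Repeated B-scans of the same location become
# training pairs: the raw frames as input, and a target angiogram computed
# by a conventional OCTA algorithm (so no manual labeling is required).
def conventional_octa(frames):
    # Inter-frame variance highlights decorrelating (flowing) pixels.
    return frames.var(axis=0)

rng = np.random.default_rng(0)
frames = rng.normal(0.0, 1.0, (4, 8, 8))              # 4 repeated B-scans
frames[:, 2:4, 2:4] += rng.normal(0, 3.0, (4, 2, 2))  # a "vessel" region
target = conventional_octa(frames)

# Part 2: model learning. A real pipeline would train a CNN; a single
# linear map over the repeated frames stands in for the learned model.
X = frames.reshape(4, -1).T                 # pixels x repeats
w, *_ = np.linalg.lstsq(X, target.ravel(), rcond=None)

# Part 3: prediction with the trained model on (here, the same) frames.
pred = (frames.reshape(4, -1).T @ w).reshape(8, 8)
print(pred.shape)
```

The appeal of this design is that the supervision signal comes for free from the conventional algorithm, while the learned model can go on to suppress speckle noise that the conventional estimator cannot.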