Background Medical datasets, especially medical image datasets, are often imbalanced because different diseases have different incidences. To address this problem, many methods have been proposed to synthesize medical images using generative adversarial networks (GANs) and thereby enlarge training datasets for medical image analysis. In the field of fundus imaging, for instance, image-to-image translation techniques have been used to synthesize fundus images from their respective vessel trees. Methods To improve the quality and detail of the synthetic images, we focus on three key aspects of the pipeline: the input mask, the GAN architecture, and the resolution of the paired images. We propose a new preprocessing pipeline, named multiple-channels-multiple-landmarks (MCML), which synthesizes color fundus images from a combination of vessel tree, optic disc, and optic cup images. We compared single vessel mask input with MCML mask input on two public fundus image datasets (DRIVE and DRISHTI-GS) using different Pix2pix and Cycle-GAN architectures. A new Pix2pix structure with a ResU-net generator was also designed and compared with the other models. Results and conclusion The results show that the proposed MCML method outperforms the single-vessel-based methods for every GAN architecture tested. Furthermore, our Pix2pix model with a ResU-net generator achieves higher PSNR and SSIM than the other GANs, and high-resolution paired images further improve the performance of each GAN. Finally, a Pix2pix network with a ResU-net generator using MCML masks and high-resolution paired images is able to generate realistic fundus images, indicating that the MCML method has great potential for glaucoma computer-aided diagnosis based on fundus images.
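The central preprocessing idea is simple to illustrate: the three landmark masks are stacked into one multi-channel conditioning input for the generator. Below is a minimal sketch (not the authors' code) of assembling such an MCML input; the file names, the 512x512 size, and the PyTorch tensor layout are illustrative assumptions.

```python
# Sketch: stack vessel tree, optic disc, and optic cup masks into one
# multi-channel tensor that conditions a Pix2pix-style generator.
import numpy as np
import torch
from PIL import Image

def load_mask(path, size=(512, 512)):
    """Load a binary landmark mask and scale it to [0, 1]."""
    mask = Image.open(path).convert("L").resize(size, Image.NEAREST)
    return np.asarray(mask, dtype=np.float32) / 255.0

def build_mcml_input(vessel_path, disc_path, cup_path, size=(512, 512)):
    """Stack the three landmark masks into a (1, 3, H, W) tensor."""
    channels = [load_mask(p, size) for p in (vessel_path, disc_path, cup_path)]
    mcml = np.stack(channels, axis=0)           # (3, H, W)
    return torch.from_numpy(mcml).unsqueeze(0)  # add batch dimension

# Usage: the tensor would be fed to a Pix2pix-style generator G, e.g.
#   fake_fundus = G(build_mcml_input("vessel.png", "disc.png", "cup.png"))
```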
Background Chest CT is used to assess the severity of disease in patients infected with novel coronavirus 2019 (COVID-19). We collected chest CT scans of 202 patients diagnosed with COVID-19 and aimed to develop a rapid, accurate, and automatic tool for severity screening to guide follow-up therapeutic treatment. Methods A total of 729 2D axial slices, comprising 246 severe cases and 483 non-severe cases, were employed in this study. Taking advantage of pre-trained deep neural networks, four off-the-shelf deep models (Inception-V3, ResNet-50, ResNet-101, DenseNet-201) were used to extract features from these CT slices. These features were then fed to multiple classifiers (linear discriminant, linear SVM, cubic SVM, KNN, and Adaboost decision tree) to distinguish severe from non-severe COVID-19 cases. Three validation strategies (holdout validation, tenfold cross-validation, and leave-one-out) were employed to validate the feasibility of the proposed pipelines. Results and conclusion The experimental results demonstrate that classifying features from pre-trained deep models is a promising approach to COVID-19 severity screening, with DenseNet-201 features combined with a cubic SVM achieving the best performance. Specifically, this combination achieved the highest severity classification accuracies of 95.20% and 95.34% for tenfold cross-validation and leave-one-out, respectively. The established pipeline enabled rapid and accurate identification of COVID-19 severity and may assist physicians in making more efficient and reliable decisions.
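The best-performing pipeline pairs deep features with a classical classifier. The following is a minimal sketch under stated assumptions: torchvision's ImageNet DenseNet-201 and scikit-learn's SVC stand in for the models named in the abstract, and `slices` and `labels` are placeholders for the preprocessed 2D CT slices and their severe/non-severe labels.

```python
# Sketch: extract deep features with a pre-trained DenseNet-201,
# then classify them with a cubic SVM under 10-fold cross-validation.
import torch
import torchvision.models as models
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

densenet = models.densenet201(weights="IMAGENET1K_V1")
densenet.classifier = torch.nn.Identity()   # keep the 1920-d pooled features
densenet.eval()

@torch.no_grad()
def extract_features(slices):
    """slices: float tensor of shape (N, 3, 224, 224), ImageNet-normalized."""
    return densenet(slices).cpu().numpy()

def evaluate(slices, labels):
    features = extract_features(slices)
    clf = SVC(kernel="poly", degree=3)      # "cubic SVM"
    return cross_val_score(clf, features, labels, cv=10).mean()
```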
Optical coherence tomography angiography (OCTA) is a relatively new imaging modality that generates microvasculature maps. Meanwhile, deep learning has recently attracted considerable attention in image-to-image translation tasks such as image denoising, super-resolution, and prediction. In this paper, we propose a deep learning based pipeline for OCTA. The pipeline consists of three parts: training data preparation, model learning, and OCTA prediction using the trained model. Notably, the datasets used in this work were generated automatically by a conventional system setup without any expert labeling. Promising results have been validated by in-vivo animal experiments, which demonstrate that deep learning is able to outperform traditional OCTA methods. Image quality is improved not only in signal-to-noise ratio but also in vasculature connectivity, owing to the elimination of laser speckle, showing potential for clinical use. Schematic description of the deep learning based optical coherence tomography angiography pipeline.
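Because the abstract leaves the architecture unspecified, the sketch below only illustrates the learning step it describes: a convolutional network is trained to map repeated structural OCT B-scans to the OCTA image produced by a conventional algorithm, which serves as the training target without expert labeling. The tiny network, tensor shapes, and L1 loss are illustrative assumptions, not the paper's design.

```python
# Sketch: learn an image-to-image mapping from repeated OCT B-scans
# to an angiogram, supervised by a conventionally computed OCTA target.
import torch
import torch.nn as nn

class SimpleOCTAMapper(nn.Module):
    def __init__(self, n_repeats=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_repeats, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, bscans):            # (B, n_repeats, H, W) OCT intensities
        return self.net(bscans)           # (B, 1, H, W) predicted angiogram

model = SimpleOCTAMapper()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

def train_step(bscans, target_octa):
    """One optimization step on a (repeated B-scans, conventional OCTA) pair."""
    optimizer.zero_grad()
    loss = loss_fn(model(bscans), target_octa)
    loss.backward()
    optimizer.step()
    return loss.item()
```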
Background Axial myopia is the most common type of myopia. However, owing to the high incidence of myopia in Chinese children, few studies have estimated the physiological elongation of the ocular axial length (AL), which does not cause myopia progression and differs from the non-physiological elongation of AL. The purpose of our study was to construct a machine learning (ML) based model for estimating the physiological elongation of AL in a sample of Chinese school-aged myopic children. Methods In total, 1011 myopic children aged 6 to 18 years participated in this study. Cross-sectional datasets were used to optimize the ML algorithms. The input variables were age, sex, central corneal thickness (CCT), spherical equivalent refractive error (SER), mean K reading (K-mean), and white-to-white corneal diameter (WTW); the output variable was AL. A 5-fold cross-validation scheme randomly divided the data into 5 groups, with 4 groups used for training and one group used for validation. Six types of ML algorithms were implemented in our models. The best-performing algorithm was used to predict AL, and estimates of the physiological elongation of AL were obtained as the partial derivatives of the predicted-AL-versus-age curves at a fixed SER value with increasing age. Results Among the six algorithms, the robust linear regression model was the best model for predicting AL, with an R2 value of 0.87 and relatively small average errors between the predicted and true AL. Based on the partial derivatives of the predicted-AL-versus-age curves, the estimated physiological AL elongation varied from 0.010 to 0.116 mm/year in male subjects and 0.003 to 0.110 mm/year in female subjects and was influenced by age, SER, and K-mean. According to the model, the physiological elongation of AL decreased linearly with increasing age and was negatively correlated with SER and K-mean. Conclusions The physiological elongation of AL is rarely recorded in clinical data in China. When such data are unavailable, an ML algorithm can provide practitioners with a reasonable model for estimating the physiological elongation of AL, which is especially useful when monitoring myopia progression in orthokeratology lens wearers.
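To make the modeling step concrete, here is a minimal sketch (not the study's code): scikit-learn's HuberRegressor stands in for the "robust linear regression" model, the DataFrame columns mirror the abstract's input variables, and the physiological elongation is read off as the partial derivative of predicted AL with respect to age at a fixed SER, approximated by a finite difference.

```python
# Sketch: fit a robust linear model AL ~ (age, sex, CCT, SER, K-mean, WTW),
# score it with 5-fold cross-validation, and estimate dAL/d(age) at fixed SER.
import pandas as pd
from sklearn.linear_model import HuberRegressor
from sklearn.model_selection import cross_val_score

FEATURES = ["age", "sex", "CCT", "SER", "K_mean", "WTW"]

def fit_al_model(df: pd.DataFrame):
    X, y = df[FEATURES].to_numpy(), df["AL"].to_numpy()
    model = HuberRegressor().fit(X, y)
    r2 = cross_val_score(HuberRegressor(), X, y, cv=5, scoring="r2").mean()
    return model, r2

def physiological_elongation(model, x, eps=0.5):
    """Finite-difference dAL/d(age) at fixed SER, in mm/year; x is one feature row."""
    x_plus, x_minus = x.copy(), x.copy()
    x_plus[FEATURES.index("age")] += eps
    x_minus[FEATURES.index("age")] -= eps
    return (model.predict([x_plus])[0] - model.predict([x_minus])[0]) / (2 * eps)
```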
Background Differential diagnosis of primary central nervous system lymphoma (PCNSL) and glioblastoma (GBM) is useful to guide treatment strategies. Purpose To investigate the use of a convolutional neural network (CNN) model for differentiation of PCNSL and GBM without tumor delineation. Study Type Retrospective. Population A total of 289 patients with PCNSL (136) or GBM (153) were included; the average age of the cohort was 54 years, and there were 173 men and 116 women. Field Strength/Sequence 3.0 T; axial contrast-enhanced T1-weighted spin-echo inversion recovery sequence (CE-T1WI), T2-weighted fluid-attenuated inversion recovery sequence (FLAIR), and diffusion-weighted imaging (DWI, b = 0 and 1000 s/mm2). Assessment Single-parametric CNN models were built using CE-T1WI, FLAIR, and the apparent diffusion coefficient (ADC) map derived from DWI, respectively. A decision-level fusion based multi-parametric CNN model (DF-CNN) was built by combining the predictions of the single-parametric CNN models through logistic regression. An image-level fusion based multi-parametric CNN model (IF-CNN) was built using the integrated multi-parametric MR images. Radiomics models were also developed for comparison. Diagnoses by three radiologists with 6 years (junior radiologist Y.Y.), 11 years (intermediate-level radiologist Y.T.), and 21 years (senior radiologist Y.L.) of experience were obtained. Statistical Analysis Five-fold cross-validation was used for model evaluation. Pearson's chi-squared test was used to compare accuracies; the U-test and Fisher's exact test were used to compare clinical characteristics. Results The CE-T1WI, FLAIR, and ADC based single-parametric CNN models had accuracies of 0.884, 0.782, and 0.700, respectively. The DF-CNN model had an accuracy of 0.899, which was higher than that of the IF-CNN model (0.830, P = 0.021) but not significantly different from the radiomics model (0.865, P = 0.255) or the senior radiologist (0.906, P = 0.886). Data Conclusion A CNN model can differentiate PCNSL from GBM without tumor delineation and is comparable to radiomics models and radiologists. Level of Evidence 4. Technical Efficacy Stage 2.
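The decision-level fusion step is the part of this design that is fully specified in the abstract, so a brief sketch may help: each single-parametric CNN outputs a PCNSL-versus-GBM probability for its own sequence, and a logistic regression combines the three probabilities into the final prediction. The probability arrays below are placeholders; the CNN backbones themselves are not shown.

```python
# Sketch: decision-level fusion (DF-CNN idea) of three per-sequence
# CNN probabilities via logistic regression, evaluated with 5-fold CV.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def decision_level_fusion(p_ce_t1wi, p_flair, p_adc, labels):
    """Each p_* is an (N,) array of per-patient probabilities from one CNN."""
    stacked = np.column_stack([p_ce_t1wi, p_flair, p_adc])  # (N, 3)
    fusion = LogisticRegression().fit(stacked, labels)
    acc = cross_val_score(LogisticRegression(), stacked, labels, cv=5).mean()
    return fusion, acc
```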
Microaneurysms (MAs) play an important role in the clinical diagnosis of early-stage diabetic retinopathy. Manual annotation of MAs by experts is laborious, so it is essential to develop automatic segmentation methods. Automatic MA segmentation remains challenging, mainly because of the low local contrast of the image and the small size of MAs. The deep learning based U-Net has become one of the most popular architectures for medical image segmentation. We propose a U-Net variant, named deep recurrent U-Net (DRU-Net), obtained by incorporating a deep residual model and recurrent convolutional operations into U-Net. In the MA segmentation task, DRU-Net accumulates effective features much better than the typical U-Net. The proposed method is evaluated on two publicly available datasets, E-Ophtha and IDRiD. Our results show that DRU-Net achieves the best performance on the E-Ophtha dataset, with an accuracy of 0.9999 and an area under the curve (AUC) of 0.9943. On the IDRiD dataset, it achieves an AUC of 0.987 (to our knowledge, the first reported result for MA segmentation on this dataset).
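A minimal sketch (not the authors' implementation) of the building block implied by the abstract is shown below: a recurrent convolution applied for a few time steps, wrapped in a residual connection, in the style of R2U-Net-like variants. The block widths and the number of recurrent steps are assumptions.

```python
# Sketch: recurrent-residual convolutional unit of the kind that could
# replace the plain double-convolution blocks of a standard U-Net.
import torch.nn as nn

class RecurrentConv(nn.Module):
    def __init__(self, channels, steps=2):
        super().__init__()
        self.steps = steps
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        out = self.conv(x)
        for _ in range(self.steps):          # re-feed the sum of input and state
            out = self.conv(x + out)
        return out

class RecurrentResidualBlock(nn.Module):
    """Residual wrapper around two recurrent convolutions (a DRU-Net-style unit)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.project = nn.Conv2d(in_ch, out_ch, 1)
        self.body = nn.Sequential(RecurrentConv(out_ch), RecurrentConv(out_ch))

    def forward(self, x):
        x = self.project(x)
        return x + self.body(x)              # residual connection
```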
Purpose The objective of this study is to construct a computer-aided diagnosis system that distinguishes pneumoconiosis patients from normal subjects using chest X-rays and deep learning algorithms. Materials and methods A total of 1760 anonymized digital X-ray images of real patients, collected between January 2017 and June 2020, were used in this experiment. To focus the model's feature extraction on the lung region and suppress the influence of external background factors, a two-stage, coarse-to-fine pipeline was established. First, a U-Net model was used to extract the lung region on each side of the collected images. Second, a ResNet-34 model with a transfer learning strategy was trained on features from the extracted lung regions to classify pneumoconiosis patients and normal subjects. Results Among the 1760 cases collected, the accuracy and the area under the curve of the classification model were 92.46% and 89%, respectively. Conclusion The successful application of deep learning to the diagnosis of pneumoconiosis further demonstrates the potential of medical artificial intelligence and the effectiveness of the proposed algorithm. However, when we further classified pneumoconiosis patients and normal subjects into four categories, the overall accuracy decreased to 70.1%. In future studies we will use the CT modality to provide more detail of the lung regions.
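Below is a minimal sketch of the coarse-to-fine pipeline described above: a lung mask (assumed to come from a separately trained U-Net) zeroes out non-lung pixels in the chest X-ray, and an ImageNet-pretrained ResNet-34 is fine-tuned on the masked image for pneumoconiosis-versus-normal classification. The `unet` model and preprocessing details are placeholders, not the paper's exact setup.

```python
# Sketch: two-stage pipeline, stage 1 lung masking (U-Net),
# stage 2 ResNet-34 transfer-learning classifier on masked images.
import torch
import torch.nn as nn
import torchvision.models as models

def stage1_mask_lungs(unet, xray):
    """xray: (1, 1, H, W) tensor; returns a 3-channel image with non-lung pixels zeroed."""
    with torch.no_grad():
        lung_mask = (torch.sigmoid(unet(xray)) > 0.5).float()
    return (xray * lung_mask).repeat(1, 3, 1, 1)   # replicate to RGB for ResNet

def build_stage2_classifier(num_classes=2):
    """ResNet-34 with transfer learning: pretrained backbone, new classification head."""
    resnet = models.resnet34(weights="IMAGENET1K_V1")
    resnet.fc = nn.Linear(resnet.fc.in_features, num_classes)
    return resnet

# Training would fine-tune build_stage2_classifier() on the masked images
# produced by stage1_mask_lungs(), e.g. with a cross-entropy loss.
```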