Purpose: This study aimed to evaluate the accuracy and diagnostic test performance of a U-net-based segmentation method for neuromelanin magnetic resonance imaging (NM-MRI), compared with the established manual segmentation method, in the diagnosis of Parkinson's disease (PD).
Methods: NM-MRI datasets from two different 3T scanners were used: a "principal dataset" of 122 participants and an "external validation dataset" of 24 participants, including 62 and 12 PD patients, respectively. Two radiologists performed manual segmentation of the substantia nigra pars compacta (SNpc). Inter-reader precision was determined using Dice coefficients. The U-net was trained with the manual segmentations as ground truth, and Dice coefficients were used to measure its accuracy. Training and validation were performed on the principal dataset using 4-fold cross-validation; the U-net was then tested on the external validation dataset. SNpc hyperintense areas were estimated from the U-net and manual segmentation masks, replicating a previously validated thresholding method, and their diagnostic test performances for PD were determined.
Results: For SNpc segmentation, U-net accuracy was comparable to inter-reader precision in the principal dataset (Dice coefficient: U-net, 0.83 ± 0.04; inter-reader, 0.83 ± 0.04) but lower in the external validation dataset (Dice coefficient: U-net, 0.79 ± 0.04; inter-reader, 0.85 ± 0.03). Diagnostic test performances for PD were comparable between the U-net and manual segmentation methods in both the principal (area under the receiver operating characteristic curve: U-net, 0.950; manual, 0.948) and external (U-net, 0.944; manual, 0.931) datasets.
Conclusion: U-net segmentation provided relatively high accuracy in the evaluation of the SNpc on NM-MRI and yielded diagnostic performance comparable to that of the established manual method.
Electronic supplementary material: The online version of this article (10.1007/s00234-019-02279-w) contains supplementary material, which is available to authorized users.
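The Dice coefficient used above to score both inter-reader precision and U-net accuracy is the standard overlap measure for binary segmentation masks: twice the intersection over the sum of the two mask volumes. A minimal sketch (the function name and toy masks are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity between two binary segmentation masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * intersection / total

# Toy example: two overlapping 4x4 masks
m1 = np.zeros((4, 4), dtype=bool); m1[1:3, 1:3] = True  # 4 voxels
m2 = np.zeros((4, 4), dtype=bool); m2[1:3, 1:4] = True  # 6 voxels
score = dice_coefficient(m1, m2)  # 2*4 / (4+6) = 0.8
```

A Dice of 0.83, as reported for the principal dataset, therefore means the two masks share roughly 83% of their combined volume.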
BACKGROUND AND PURPOSE: Synthetic FLAIR images are of lower quality than conventional FLAIR images. Here, we aimed to improve synthetic FLAIR image quality using deep learning with pixel-by-pixel translation through conditional generative adversarial network training.
MATERIALS AND METHODS: Forty patients with MS were prospectively included and scanned (3T) to acquire synthetic MR imaging and conventional FLAIR images. Synthetic FLAIR images were created with the SyMRI software. Acquired data were divided into 30 training and 10 test datasets. A conditional generative adversarial network was trained to generate improved FLAIR images from raw synthetic MR imaging data, using conventional FLAIR images as targets. The peak signal-to-noise ratio, normalized root mean square error, and Dice index of MS lesion maps were calculated for synthetic and deep learning FLAIR images against conventional FLAIR images. Lesion conspicuity and the presence of artifacts were visually assessed.
RESULTS: The peak signal-to-noise ratio and normalized root mean square error were significantly higher and lower, respectively, in generated-versus-synthetic FLAIR images in aggregate intracranial tissues and all tissue segments (all P < .001). The Dice index of lesion maps and visual lesion conspicuity were comparable between generated and synthetic FLAIR images (P = 1 and P = .59, respectively). Generated FLAIR images showed fewer granular artifacts (P = .003) and swelling artifacts (in all cases) than synthetic FLAIR images.
CONCLUSIONS: Using deep learning, we improved synthetic FLAIR image quality by generating FLAIR images with contrast closer to that of conventional FLAIR images and fewer granular and swelling artifacts, while preserving lesion contrast.
ABBREVIATIONS: cGAN = conditional generative adversarial network; DL = deep learning; GAN = generative adversarial network; NRMSE = normalized root mean square error; PSNR = peak signal-to-noise ratio
Indicates open access to non-subscribers at www.ajnr.org
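The two fidelity metrics in this abstract, PSNR and NRMSE, both reduce to functions of the voxel-wise mean squared error between a generated image and its conventional-FLAIR target. A minimal sketch of common definitions (the exact normalization the authors used is not stated in the abstract; here PSNR uses the target's intensity range and NRMSE normalizes by the target's RMS intensity, which are frequent conventions):

```python
import numpy as np

def psnr(target, generated, data_range=None):
    """Peak signal-to-noise ratio in dB (higher = closer to target)."""
    target = np.asarray(target, dtype=float)
    generated = np.asarray(generated, dtype=float)
    if data_range is None:
        data_range = target.max() - target.min()  # dynamic range of the target
    mse = np.mean((target - generated) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def nrmse(target, generated):
    """Root mean square error normalized by the target's RMS intensity (lower = better)."""
    target = np.asarray(target, dtype=float)
    generated = np.asarray(generated, dtype=float)
    rmse = np.sqrt(np.mean((target - generated) ** 2))
    return rmse / np.sqrt(np.mean(target ** 2))
```

Under these definitions, "higher PSNR and lower NRMSE" for the cGAN-generated images both say the same thing: smaller voxel-wise error against conventional FLAIR.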
Objectives: Quantitative synthetic magnetic resonance imaging (MRI) enables synthesis of various contrast-weighted images as well as simultaneous quantification of T1 and T2 relaxation times and proton density. However, to date, it has been challenging to generate magnetic resonance angiography (MRA) images with synthetic MRI. The purpose of this study was to develop a deep learning algorithm to generate MRA images from 3D synthetic MRI raw data.
Materials and Methods: Eleven healthy volunteers and 4 patients with intracranial aneurysms were included in this study. All participants underwent a time-of-flight (TOF) MRA sequence and a 3D-QALAS synthetic MRI sequence. The 3D-QALAS sequence acquires 5 raw images, which were used as the input for a deep learning network. The input was converted to its corresponding MRA images by a combination of a single-convolution and a U-net model with 5-fold cross-validation, and the results were compared with a simple linear combination model. Image quality was evaluated by calculating the peak signal-to-noise ratio (PSNR), structural similarity index measurements (SSIMs), and high frequency error norm (HFEN). These were computed for deep learning MRA (DL-MRA) and linear combination MRA (linear-MRA) relative to TOF-MRA, and compared with each other using a nonparametric Wilcoxon signed-rank test. Overall image quality and branch visualization, each scored on a 5-point Likert scale, were blindly and independently rated by 2 board-certified radiologists.
Results: Deep learning MRA was successfully obtained in all subjects. The mean PSNR and SSIM of DL-MRA were significantly higher, and its HFEN significantly lower, than those of linear-MRA (PSNR, 35.3 ± 0.5 vs 34.0 ± 0.5, P < 0.001; SSIM, 0.93 ± 0.02 vs 0.82 ± 0.02, P < 0.001; HFEN, 0.61 ± 0.08 vs 0.86 ± 0.05, P < 0.001).
The overall image quality of DL-MRA was comparable to that of TOF-MRA (4.2 ± 0.7 vs 4.4 ± 0.7, P = 0.99), and both were superior to linear-MRA (1.5 ± 0.6, both P < 0.001). No significant differences were identified between DL-MRA and TOF-MRA in the branch visibility of intracranial arteries, except for the ophthalmic artery (1.2 ± 0.5 vs 2.3 ± 1.2, P < 0.001).
Conclusions: Magnetic resonance angiography generated by deep learning from 3D synthetic MRI data visualized major intracranial arteries as effectively as TOF-MRA, with inherently aligned quantitative maps and multiple contrast-weighted images. Our proposed algorithm may be useful as a screening tool for intracranial aneurysms without requiring additional scanning time.
Objective: To assist policymakers as they reflect on treatment protocols and approaches for the efficient delivery of medical care for multiple sclerosis (MS) patients in Japan.
Methods: We analyzed data from a large Japanese health insurance claims database. Using an algorithm based on diagnosis codes, all patients with a diagnosis of MS were identified; patients with a non-MS demyelinating disease were excluded. MS patient data were used for a cross-sectional analysis of data collected during a fixed period. We identified a total of 1808 MS patients and analyzed data for the 1133 patients with an observation period of ≥6 months from October 2013 to September 2014. Newly diagnosed MS patients were identified within this population, and their data were used for a longitudinal analysis tracking each patient over time.
Results: The total per-patient per-month cost for MS was ¥93,542 (US$781, €695 as of October 2015). Disease-modifying therapy drug costs constituted half of the overall medical costs. For newly diagnosed MS patients, hospitalization costs were the largest component in the initial month, while drug costs became the largest component several months after the initial visit. There was a positive correlation between relapse frequency and medical cost.
Conclusions: These results provide up-to-date information on the demographics, medical treatment, and cost status of MS in almost real time by using a claims database. They suggest that claims data analysis can effectively support medical policymaking.
Idiopathic normal pressure hydrocephalus (iNPH) and Alzheimer's disease (AD) are geriatric diseases and common causes of dementia. Recently, many studies on segmentation, disease detection, or classification of MRI using deep learning have been conducted. The aim of this study was to differentiate iNPH and AD using a residual extraction approach in a deep learning method. Methods: Twenty-three patients with iNPH, 23 patients with AD, and 23 healthy controls were included in this study. All patients and volunteers underwent brain MRI with a 3T unit, and we used only whole-brain three-dimensional (3D) T1-weighted images. We designed a fully automated, end-to-end 3D deep learning classifier to differentiate iNPH, AD, and control. We evaluated the performance of our model using a leave-one-out cross-validation test. We also evaluated the validity of the results by visualizing the areas important to the differentiation of AD and iNPH on the original input image using the Gradient-weighted Class Activation Mapping (Grad-CAM) technique. Results: Twenty-one of 23 iNPH cases, 19 of 23 AD cases, and 22 of 23 controls were correctly diagnosed. The accuracy was 0.90. In the Grad-CAM heat map, brain parenchyma surrounding the lateral ventricle was highlighted in about half of the successfully diagnosed iNPH cases. The medial temporal lobe or inferior horn of the lateral ventricle was highlighted in many successfully diagnosed AD cases. About half of the successful cases showed nonspecific heat maps. Conclusions: A residual extraction approach in a deep learning method achieved high accuracy for the differential diagnosis of iNPH, AD, and healthy controls when trained with a small number of cases.
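The Grad-CAM heat maps used above to check plausibility weight a convolutional layer's feature maps by the global-average-pooled gradients of the class score, then pass the weighted sum through a ReLU. A minimal sketch of that core computation, assuming the activations and gradients have already been extracted from the network (function and variable names are illustrative):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heat map for one convolutional layer.

    feature_maps: (K, H, W) activations A_k of the chosen layer
    gradients:    (K, H, W) gradients of the class score w.r.t. A_k
    """
    # Channel weights alpha_k: global-average-pooled gradients
    alpha = gradients.mean(axis=(1, 2))               # shape (K,)
    # Weighted sum of feature maps over channels
    cam = np.tensordot(alpha, feature_maps, axes=1)   # shape (H, W)
    cam = np.maximum(cam, 0)                          # ReLU keeps positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize to [0, 1] for overlay
    return cam
```

Upsampled to the input resolution and overlaid on the T1-weighted volume, this map is what highlighted the periventricular parenchyma in iNPH and the medial temporal lobe in AD.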
Hypertension requires strict treatment because it causes diseases that can lead to death. Although various classes of antihypertensive drugs are available, the actual status of antihypertensive drug selection and the transition in prescription patterns over time have not been fully examined. Therefore, we conducted a claims-based study using two claims databases (2008-16) to determine this status in Japan. We examined the prescription rate for each class of antihypertensive drugs in hypertensive patients and compared the patients' ages and the sizes of the medical institutions treating these patients. Among the 1 560 865 and 302 433 hypertensive patients in each database, calcium channel blockers (CCBs) (>60%) and angiotensin II receptor blockers (ARBs) (>55%) were the most frequently prescribed classes. The prescription rate of CCBs increased and ARBs decreased with the patients' ages. Although the Japanese guidelines for management of hypertension in 2014 changed the recommendation and indicated that β-blockers should not be used as first-line drugs, their prescription status did not change during this study period up to 2016. Use of CCBs and ARBs as first-line drugs differed by the types of patient comorbidities. Although ARBs or angiotensin-converting enzyme inhibitors were recommended for patients with some comorbidities, CCBs were used relatively frequently. In conclusion, the patients' ages and comorbidities and the sizes of the medical institutions affect the selection of antihypertensive drugs. Selection and use of drugs may not always follow the guidelines.
Rationale and Objectives: A more accurate lung nodule detection algorithm is needed. We developed a modified three-dimensional (3D) U-net deep-learning model for the automated detection of lung nodules on chest CT images. The purpose of this study was to evaluate the accuracy of the developed modified 3D U-net deep-learning model. Materials and Methods: In this Health Insurance Portability and Accountability Act-compliant, Institutional Review Board-approved retrospective study, the 3D U-net based deep-learning model was trained using the Lung Image Database Consortium and Image Database Resource Initiative dataset. For internal model validation, we used 89 chest CT scans that were not used for model training. For external model validation, we used 450 chest CT scans taken at an urban university hospital in Japan. Each case included at least one nodule of >5 mm identified by an experienced radiologist. We evaluated model accuracy using the competition performance metric (CPM) (average sensitivity at 1/8, 1/4, 1/2, 1, 2, 4, and 8 false-positives per scan). The 95% confidence interval (CI) was computed by bootstrapping 1000 times. Results: In the internal validation, the CPM was 94.7% (95% CI: 89.1%–98.6%). In the external validation, the CPM was 83.3% (95% CI: 79.4%–86.1%). Conclusion: The modified 3D U-net deep-learning model showed high performance in both internal and external validation.
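The CPM defined above is simply the mean sensitivity of the FROC curve read off at seven fixed false-positive rates per scan. A minimal sketch, assuming the FROC curve is available as paired arrays of false-positives per scan and sensitivity (the interpolation scheme is an assumption; implementations vary):

```python
import numpy as np

def competition_performance_metric(fp_per_scan, sensitivity):
    """CPM: mean sensitivity at 1/8, 1/4, 1/2, 1, 2, 4, and 8 FPs per scan.

    fp_per_scan, sensitivity: points of the FROC curve, with
    fp_per_scan in ascending order.
    """
    operating_points = [1/8, 1/4, 1/2, 1, 2, 4, 8]
    # Linearly interpolate the FROC curve at the 7 operating points
    sens = np.interp(operating_points, fp_per_scan, sensitivity)
    return float(sens.mean())
```

A CPM of 94.7% therefore means that, averaged over these seven tolerated false-positive rates, the detector found about 95% of the annotated nodules.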