Image-to-image translation is considered a new frontier in the field of medical image analysis, with numerous potential applications. However, many recent approaches offer individualized solutions based on specialized, task-specific architectures or require refinement through non-end-to-end training. In this paper, we propose a new framework, named MedGAN, for medical image-to-image translation that operates on the image level in an end-to-end manner. MedGAN builds upon recent advances in generative adversarial networks (GANs) by merging the adversarial framework with a new combination of non-adversarial losses. We utilize the discriminator network as a trainable feature extractor that penalizes the discrepancy between the translated medical images and the desired target modalities. Moreover, style-transfer losses are utilized to match the textures and fine structures of the desired target images in the translated images. Additionally, we present a new generator architecture, named CasNet, which enhances the sharpness of the translated medical outputs through progressive refinement via chained encoder-decoder pairs. Without any application-specific modifications, we apply MedGAN to three different tasks: PET-CT translation, correction of MR motion artefacts, and PET image denoising. Perceptual analysis by radiologists and quantitative evaluations illustrate that MedGAN outperforms other existing translation approaches.
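To make the loss design concrete, below is a minimal PyTorch-style sketch of the two non-adversarial terms described in the abstract: a perceptual loss computed on discriminator feature maps and a Gram-matrix style loss. The function names, layer weights, and the assumption that the discriminator exposes its intermediate feature maps are illustrative, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def perceptual_loss(feats_fake, feats_real, layer_weights):
    """Penalize discrepancies between discriminator feature maps of the
    translated image and the target modality, one term per layer."""
    loss = 0.0
    for w, f_fake, f_real in zip(layer_weights, feats_fake, feats_real):
        loss = loss + w * F.l1_loss(f_fake, f_real)
    return loss

def style_loss(feats_fake, feats_real):
    """Match textures and fine structures via Gram matrices of feature
    maps, as in classic style transfer."""
    loss = 0.0
    for f_fake, f_real in zip(feats_fake, feats_real):
        b, c, h, w = f_fake.shape
        gram_fake = torch.einsum('bchw,bdhw->bcd', f_fake, f_fake) / (c * h * w)
        gram_real = torch.einsum('bchw,bdhw->bcd', f_real, f_real) / (c * h * w)
        loss = loss + F.mse_loss(gram_fake, gram_real)
    return loss

# The generator objective would combine these with the adversarial term,
# e.g. L_G = L_adv + lambda_p * perceptual + lambda_s * style; the
# weighting coefficients here are placeholders, not the published values.
```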
The aims of this study were to train and evaluate deep learning models for automated segmentation of abdominal organs in whole-body magnetic resonance (MR) images from the UK Biobank (UKBB) and German National Cohort (GNC) MR imaging studies, and to make these models available to the scientific community for analysis of these data sets. Methods: A total of 200 T1-weighted MR image data sets of healthy volunteers each from UKBB and GNC (400 data sets in total) were available in this study. The liver, spleen, left and right kidneys, and pancreas were segmented manually on all 400 data sets, providing labeled ground truth for training a previously described U-Net-based deep learning framework for automated medical image segmentation (nnU-Net). The trained models were tested on all data sets using a 4-fold cross-validation scheme. Automated segmentation results were analyzed qualitatively by visual inspection; performance metrics between automated and manual segmentations were computed for quantitative analysis. In addition, interobserver segmentation variability between 2 human readers was assessed on a subset of the data. Results: Automated abdominal organ segmentation was performed with high qualitative and quantitative accuracy on UKBB and GNC data. In more than 90% of data sets, no or only minor visually detectable segmentation errors occurred. Mean Dice scores of automated segmentations compared with manual reference segmentations were well above 0.9 for the liver, spleen, and kidneys on UKBB and GNC data, and around 0.82 and 0.89 for the pancreas on UKBB and GNC data, respectively. Mean average symmetric surface distance was between 0.3 and 1.5 mm for the liver, spleen, and kidneys and between 2 and 2.2 mm for pancreas segmentation. The quantitative accuracy of automated segmentation was comparable to the agreement between the 2 human readers for all organs on UKBB and GNC data. Conclusion: Automated segmentation of abdominal organs is possible with high qualitative and quantitative accuracy on whole-body MR imaging data acquired as part of UKBB and GNC. The results and deep learning models obtained in this study can serve as a foundation for automated analysis of thousands of MR data sets from UKBB and GNC and thus contribute to answering topical scientific questions.
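For reference, the Dice score reported above compares two binary masks A and B as 2|A ∩ B| / (|A| + |B|). The following is a minimal NumPy sketch of this metric, not the study's evaluation code:

```python
import numpy as np

def dice_score(pred, ref):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|). Two empty masks score 1.0 by convention."""
    pred = np.asarray(pred, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, ref).sum() / denom
```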
Fast, fully automated assessment of visceral and subcutaneous adipose tissue compartments in whole-body MRI is feasible by means of a deep learning network. A robust and generalizable architecture was investigated that enables objective segmentation and rapid phenotypic profiling.
Developing high-performance and energy-efficient algorithms for maximum matchings is becoming increasingly important in social network analysis, computational sciences, scheduling, and other domains. In this work, we propose the first maximum matching algorithm designed for FPGAs; it is energy-efficient and has provable guarantees on accuracy, performance, and storage utilization. To achieve this, we forego popular graph processing paradigms, such as vertex-centric programming, that often entail large communication costs. Instead, we propose a substream-centric approach, in which the input stream of data is divided into substreams that are processed independently to enable more parallelism while lowering communication costs. We base our work on the theory of streaming graph algorithms, analyzing 14 models and 28 algorithms, and use this analysis to provide a theoretical underpinning that matches the physical constraints of FPGA platforms. Our algorithm delivers high performance (more than a 4× speedup over tuned parallel CPU variants), low memory usage, high accuracy, and effective use of FPGA resources. The substream-centric approach can easily be extended to other algorithms to offer low-power, high-performance graph processing on FPGAs.
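To illustrate the substream-centric idea (this is not the paper's FPGA design or its provable-guarantee streaming algorithm), here is a minimal Python sketch: the edge stream is partitioned round-robin into substreams, each substream is matched independently (the parallelizable step), and the partial matchings are merged with one final greedy pass to resolve conflicts.

```python
def greedy_maximal_matching(edges):
    """One-pass greedy matching: accept an edge iff both endpoints are
    still unmatched (a classic 2-approximation of maximum matching)."""
    matched = set()
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matched.update((u, v))
            matching.append((u, v))
    return matching

def substream_matching(edge_stream, k):
    """Substream-centric sketch: split the stream into k substreams,
    process each independently, then merge the candidate edges."""
    substreams = [[] for _ in range(k)]
    for i, edge in enumerate(edge_stream):
        substreams[i % k].append(edge)        # round-robin partition
    partials = [greedy_maximal_matching(s) for s in substreams]  # independent passes
    candidates = [e for m in partials for e in m]
    return greedy_maximal_matching(candidates)  # resolve cross-substream conflicts
```

The independent per-substream passes are the part that maps naturally onto parallel FPGA processing elements, which is what reduces communication relative to vertex-centric schemes.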