Delineation of the left ventricular cavity, myocardium, and right ventricle from cardiac magnetic resonance images (multi-slice 2-D cine MRI) is a common clinical task for establishing a diagnosis. The automation of these tasks has thus been the subject of intense research over the past decades. In this paper, we introduce the "Automatic Cardiac Diagnosis Challenge" (ACDC) dataset, the largest publicly available and fully annotated dataset for cardiac MRI (CMR) assessment. The dataset contains 150 multi-equipment CMRI recordings with reference measurements and classifications from two medical experts. The overarching objective of this paper is to measure how far state-of-the-art deep learning methods can go at assessing CMRI, i.e., segmenting the myocardium and the two ventricles as well as classifying pathologies. In the wake of the 2017 MICCAI-ACDC challenge, we report results from deep learning methods provided by nine research groups for the segmentation task and four groups for the classification task. Results show that the best methods faithfully reproduce the expert analysis, leading to a mean correlation of 0.97 for the automatic extraction of clinical indices and an accuracy of 0.96 for automatic diagnosis. These results clearly open the door to highly accurate and fully automatic analysis of CMRI. We also identify scenarios in which deep learning methods are still failing. Both the dataset and detailed results are publicly available online, and the platform will remain open for new submissions.
In conjunction with the ISBI 2015 conference, we organized a longitudinal lesion segmentation challenge providing training and test data to registered participants. The training data consisted of five subjects with a mean of 4.4 time-points, and the test data of fourteen subjects with a mean of 4.4 time-points. All 82 data sets had the white matter lesions associated with multiple sclerosis delineated by two human expert raters. Eleven teams submitted results using state-of-the-art lesion segmentation algorithms to the challenge, with ten teams presenting their results at the conference. We present a quantitative evaluation comparing the consistency of the two raters, and explore the performance of the eleven submitted results alongside three other lesion segmentation algorithms. The challenge presented three unique opportunities: 1) the sharing of a rich data set; 2) collaboration and comparison of the various avenues of research being pursued in the community; and 3) a review and refinement of the evaluation metrics currently in use. We report on the performance of the challenge participants, as well as the construction and evaluation of a consensus delineation. The image data and manual delineations will continue to be available for download through an evaluation website, as a resource for future researchers in the area. This data resource provides a platform to compare existing methods in a fair and consistent manner, both to each other and to multiple manual raters.
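A standard way to quantify inter-rater consistency and algorithm performance in lesion segmentation is the Dice similarity coefficient between two binary masks. The sketch below is illustrative only; the challenge's full evaluation used several metrics, and the edge-case convention for two empty masks here is an assumption, not the challenge's definition.

```python
import numpy as np

def dice_overlap(mask_a, mask_b):
    """Dice similarity coefficient between two binary lesion masks.

    Returns 2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1
    (identical masks).
    """
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # convention: both raters marked no lesion voxels
    return 2.0 * np.logical_and(a, b).sum() / denom
```

Comparing each rater's delineation of the same scan with this metric gives a scalar per subject; averaging over subjects summarizes the raters' agreement.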
Deep fully convolutional neural network (FCN) based architectures have shown great potential in medical image segmentation. However, such architectures usually have millions of parameters, and an inadequate number of training samples leads to over-fitting and poor generalization. In this paper, we present a novel, highly parameter- and memory-efficient FCN-based architecture for medical image analysis. We propose a novel up-sampling path which incorporates long skip and short-cut connections to overcome the feature-map explosion in FCN-like architectures. To process the input images at multiple scales and viewpoints simultaneously, we propose to incorporate the parallel structure of Inception modules. We also propose a novel dual loss function whose weighting scheme allows us to combine the advantages of cross-entropy and Dice loss. We have validated our proposed network architecture on two publicly available datasets,
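A dual loss of this kind is typically a weighted sum of a pixel-wise cross-entropy term and a soft Dice term. The following minimal NumPy sketch illustrates the general idea for binary segmentation; the equal default weighting and the epsilon smoothing are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def dice_loss(probs, target, eps=1e-6):
    """Soft Dice loss for a binary mask; probs and target lie in [0, 1]."""
    intersection = np.sum(probs * target)
    return 1.0 - (2.0 * intersection + eps) / (np.sum(probs) + np.sum(target) + eps)

def cross_entropy_loss(probs, target, eps=1e-6):
    """Pixel-wise binary cross-entropy, averaged over all pixels."""
    probs = np.clip(probs, eps, 1.0 - eps)
    return -np.mean(target * np.log(probs) + (1 - target) * np.log(1 - probs))

def dual_loss(probs, target, weight=0.5):
    """Weighted combination: weight * cross-entropy + (1 - weight) * Dice."""
    return weight * cross_entropy_loss(probs, target) + (1 - weight) * dice_loss(probs, target)
```

Cross-entropy gives smooth per-pixel gradients, while the Dice term directly targets region overlap and is more robust to class imbalance; weighting the two trades off these properties.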
Emanating from the base of the Sun's corona, the solar wind fills the interplanetary medium with a magnetized stream of charged particles whose interaction with the Earth's magnetosphere has space weather consequences such as geomagnetic storms. Accurately predicting the solar wind from measurements of the spatiotemporally evolving conditions in the solar atmosphere is important but remains an unsolved problem in heliophysics and space weather research. In this work, we use deep learning for prediction of solar wind (SW) properties. We use extreme ultraviolet images of the solar corona from space-based observations to predict the SW speed from the National Aeronautics and Space Administration (NASA) OMNIWEB data set, measured at Lagrangian Point 1. We evaluate our model against autoregressive and naive models and find that our model outperforms the benchmark models, obtaining a best-fit correlation of 0.55 ± 0.03 with the observed data. Upon visualizing and investigating how the model uses data to make predictions, we find higher activation at the coronal holes for fast wind prediction (≈3 to 4 days prior to prediction), and at the active regions for slow wind prediction. These trends bear an uncanny similarity to the influence of the regions potentially being the sources of fast and slow wind, as reported in the literature. This suggests that our model was able to learn some of the salient associations between coronal and solar wind structure without built-in physics knowledge. Such an approach may help us discover hitherto unknown relationships in heliophysics data sets.

Plain Language Summary: The solar wind is a stream of particles coming from the Sun. The interaction of the solar wind with the Earth's magnetosphere gives rise to space weather effects, including geomagnetic storms, aurorae, and disruptions to electrical distribution grids. Accurate prediction of the solar wind is of interest to government agencies and private industry.
In this work, we explore the use of machine learning models to predict the solar wind speed as measured at the Lagrangian Point 1 (L1) between the Sun and Earth. The best-performing method is a deep neural network that uses extreme ultraviolet (EUV) imagery data from National Aeronautics and Space Administration's (NASA's) Solar Dynamics Observatory (SDO) as input. Without physical relationships explicitly built into the model, it is able to outperform a number of baseline models. We find the model pays attention to regions on the Sun that are in agreement with heuristics used in the literature (e.g., coronal holes for the fast solar wind). Such an approach may, in the future, help us discover new relationships in heliophysics.
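The naive baselines mentioned above are typically persistence-style forecasts that simply repeat an earlier observation, scored against the data with a correlation coefficient. The sketch below shows one such baseline; the forecast horizon and the use of a simple shift are illustrative assumptions, not the paper's exact baseline configuration.

```python
import numpy as np

def persistence_baseline(series, horizon):
    """Naive forecast: predict the value observed `horizon` steps earlier.

    For solar wind speed, a recurrence baseline near one solar rotation
    (~27 days) is a common choice; `horizon` here is purely illustrative.
    """
    return series[:-horizon]

def correlation(pred, obs):
    """Pearson correlation between forecast and observation."""
    return float(np.corrcoef(pred, obs)[0, 1])
```

To score the baseline, align the shifted series against the observations it is meant to predict, e.g. `correlation(persistence_baseline(speed, h), speed[h:])`; a learned model must beat this number to demonstrate skill.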
Histopathology tissue analysis is considered the gold standard in cancer diagnosis and prognosis. Whole-slide imaging (WSI), i.e., the scanning and digitization of entire histology slides, is now being adopted across the world in pathology labs. Trained histopathologists can provide an accurate diagnosis of biopsy specimens based on WSI data. Given the dimensionality of WSIs and the increase in the number of potential cancer cases, analyzing these images is a time-consuming process. Automated segmentation of tumorous tissue helps improve the precision, speed, and reproducibility of research. In the recent past, deep learning-based techniques have provided state-of-the-art results in a wide variety of image analysis tasks, including the analysis of digitized slides. However, deep learning-based solutions pose many technical challenges, including the large size of WSI data, heterogeneity in images, and complexity of features. In this study, we propose a generalized deep learning-based framework for histopathology tissue analysis to address these challenges. Our framework is, in essence, a sequence of individual techniques in the preprocessing-training-inference pipeline which, in conjunction, improve the efficiency and the generalizability of the analysis. The combination of techniques we have introduced includes an ensemble segmentation model, division of the WSI into smaller overlapping patches while addressing class imbalances, efficient techniques for inference, and an efficient, patch-based uncertainty estimation framework. Our ensemble consists of DenseNet-121, Inception-ResNet-V2, and DeepLabV3Plus, where all the networks were trained end to end for every task. We demonstrate the efficacy and improved generalizability of our framework by evaluating it on a variety of histopathology tasks, including breast cancer metastases (CAMELYON), colon cancer (DigestPath), and liver cancer (PAIP).
Our proposed framework has state-of-the-art performance across all these tasks and currently ranks within the top 5 for the challenges based on these datasets. The entire framework, along with the trained models and the related documentation, is made freely available on GitHub and PyPI. Our framework is expected to aid histopathologists in accurate and efficient initial diagnosis. Moreover, the estimated uncertainty maps will help clinicians make informed decisions about further treatment planning or analysis.
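Dividing a gigapixel WSI into smaller overlapping patches is the step that makes such pipelines tractable. The sketch below shows the general tiling idea; the patch size of 256 and stride of 192 are illustrative values, not the framework's actual settings, and real pipelines additionally filter out background tiles and blend overlapping predictions.

```python
import numpy as np

def grid_1d(length, patch, stride):
    """Window start offsets along one axis; the last window is flushed
    to the border so the whole extent is covered."""
    starts = list(range(0, max(length - patch, 0) + 1, stride))
    if starts[-1] != length - patch:
        starts.append(length - patch)
    return starts

def extract_patches(image, patch=256, stride=192):
    """Tile a 2-D slide image into overlapping patches (stride < patch
    gives overlap), returning (y, x, patch) triples."""
    h, w = image.shape[:2]
    return [(y, x, image[y:y + patch, x:x + patch])
            for y in grid_1d(h, patch, stride)
            for x in grid_1d(w, patch, stride)]
```

The recorded (y, x) offsets let patch-level predictions, and per-patch uncertainty estimates, be stitched back into a full-slide map after inference.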