We propose a novel attention gate (AG) model for medical image analysis that automatically learns to focus on target structures of varying shapes and sizes. Models trained with AGs implicitly learn to suppress irrelevant regions in an input image while highlighting salient features useful for a specific task. This eliminates the need for explicit external tissue/organ localisation modules when using convolutional neural networks (CNNs). AGs can be easily integrated into standard CNN architectures such as VGG or U-Net with minimal computational overhead while increasing model sensitivity and prediction accuracy. The proposed AG models are evaluated on a variety of tasks, including medical image classification and segmentation. For classification, we demonstrate the use of AGs in scan plane detection for fetal ultrasound screening, and show that the proposed attention mechanism provides efficient object localisation while improving overall prediction performance by reducing false positives. For segmentation, the proposed architecture is evaluated on two large 3D CT abdominal datasets with manual annotations for multiple organs. Experimental results show that AG models consistently improve the prediction performance of the base architectures across different datasets and training sizes while preserving computational efficiency. Moreover, AGs guide the model activations to focus on salient regions, which provides better insight into how model predictions are made. The source code for the proposed AG models is publicly available.
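The additive attention gate described above can be sketched in a few lines. The sketch below is illustrative only: the matrices `W_x`, `W_g`, and `w_psi` stand in for learnt 1×1 convolutions, and the inputs are toy 1-D feature arrays rather than the paper's 3-D feature maps.

```python
import numpy as np

def attention_gate(x, g, W_x, W_g, w_psi):
    """Additive attention gate (minimal sketch).

    x : skip-connection features, shape (c_x, n)
    g : gating features from a coarser scale, shape (c_g, n)
    W_x, W_g, w_psi : placeholder weights for the learnt projections.
    """
    q = np.maximum(W_x @ x + W_g @ g, 0.0)       # ReLU of summed projections
    alpha = 1.0 / (1.0 + np.exp(-(w_psi @ q)))   # sigmoid coefficients in (0, 1)
    return x * alpha                             # attenuate irrelevant positions

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))      # toy skip features
g = rng.standard_normal((2, 8))      # toy gating signal
W_x = rng.standard_normal((3, 4))
W_g = rng.standard_normal((3, 2))
w_psi = rng.standard_normal((1, 3))
out = attention_gate(x, g, W_x, W_g, w_psi)
```

Because the attention coefficients lie strictly in (0, 1), the gated output never amplifies the skip features, only suppresses them position by position.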
Incorporating prior knowledge about organ shape and location is key to improving the performance of image analysis approaches. In particular, priors can be useful where images are corrupted or contain artefacts due to limitations in image acquisition. The highly constrained nature of anatomical objects can be well captured with learning-based techniques. However, in the most recent and promising techniques, such as CNN-based segmentation, it is not obvious how to incorporate such prior knowledge: state-of-the-art methods operate as pixel-wise classifiers whose training objectives do not incorporate the structure and inter-dependencies of the output. To overcome this limitation, we propose a generic training strategy that incorporates anatomical prior knowledge into CNNs through a new regularisation model, which is trained end-to-end. The new framework encourages models to follow the global properties of the underlying anatomy (e.g. shape, label structure) via learnt non-linear representations of shape. We show that the proposed approach can be easily adapted to different analysis tasks (e.g. image enhancement, segmentation) and improves the prediction accuracy of state-of-the-art models. The applicability of our approach is shown on multi-modal cardiac datasets and public benchmarks. In addition, we demonstrate how the learnt deep models of 3-D shapes can be interpreted and used as biomarkers for the classification of cardiac pathologies.
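One common way to realise such a learnt shape regulariser is to penalise the distance between a prediction and its ground truth in the latent space of a shape encoder, alongside the usual pixel-wise loss. The sketch below is an assumption-laden illustration: a fixed random projection plays the role of a trained non-linear encoder, and the label maps are toy 8×8 masks.

```python
import numpy as np

def shape_prior_loss(pred, target, encode):
    """Latent-space regularisation term (illustrative sketch).

    `encode` stands in for the encoder half of a network trained on
    anatomical label maps; the loss penalises predictions whose global
    shape code deviates from the ground truth's.
    """
    z_pred, z_gt = encode(pred), encode(target)
    return float(np.mean((z_pred - z_gt) ** 2))

# toy "encoder": a fixed random projection in place of learnt weights
rng = np.random.default_rng(1)
W = rng.standard_normal((16, 64))
encode = lambda s: W @ s.ravel()

gt = np.zeros((8, 8)); gt[2:6, 2:6] = 1.0     # ground-truth "organ"
good = gt.copy()                              # perfect prediction
bad = np.zeros((8, 8)); bad[0:2, 0:2] = 1.0   # wrong shape and location
loss_good = shape_prior_loss(good, gt, encode)
loss_bad = shape_prior_loss(bad, gt, encode)
```

A perfect prediction incurs zero shape penalty, while an anatomically implausible one is pushed back towards the learnt shape manifold.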
EMPIRE10 (Evaluation of Methods for Pulmonary Image REgistration 2010) is a public platform for fair and meaningful comparison of registration algorithms which are applied to a database of intrapatient thoracic CT image pairs. Evaluation of nonrigid registration techniques is a nontrivial task. This is compounded by the fact that researchers typically test only on their own data, which varies widely. For this reason, reliable assessment and comparison of different registration algorithms has been virtually impossible in the past. In this work we present the results of the launch phase of EMPIRE10, which comprised the comprehensive evaluation and comparison of 20 individual algorithms from leading academic and industrial research groups. All algorithms are applied to the same set of 30 thoracic CT pairs. Algorithm settings and parameters are chosen by researchers expert in the configuration of their own method and the evaluation is independent, using the same criteria for all participants. All results are published on the EMPIRE10 website (http://empire10.isi.uu.nl). The challenge remains ongoing and open to new participants. Full results from 24 algorithms have been published at the time of writing. This paper details the organization of the challenge, the data and evaluation methods and the outcome of the initial launch with 20 algorithms. The gain in knowledge and future work are discussed.
Ischemic stroke is the most common cerebrovascular disease, and its diagnosis, treatment, and study rely on non-invasive imaging. Algorithms for stroke lesion segmentation from magnetic resonance imaging (MRI) volumes are intensely researched, but the reported results are largely incomparable due to different datasets and evaluation schemes. We approached this urgent problem of comparability with the Ischemic Stroke Lesion Segmentation (ISLES) challenge organized in conjunction with the MICCAI 2015 conference. In this paper we propose a common evaluation framework, describe the publicly available datasets, and present the results of the two sub-challenges: Sub-Acute Stroke Lesion Segmentation (SISS) and Stroke Perfusion Estimation (SPES). A total of 16 research groups participated with a wide range of state-of-the-art automatic segmentation algorithms. A thorough analysis of the obtained data enables a critical evaluation of the current state-of-the-art, recommendations for further developments, and the identification of remaining challenges. The segmentation of acute perfusion lesions addressed in SPES was found to be feasible. However, algorithms applied to sub-acute lesion segmentation in SISS still lack accuracy. Overall, no algorithmic characteristic of any method was found to perform superior to the others. Instead, the characteristics of stroke lesion appearances, their evolution, and the observed challenges should be studied in detail. The annotated ISLES image datasets continue to be publicly available through an online evaluation system to serve as an ongoing benchmarking resource (www.isles-challenge.org).
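Segmentation challenges of this kind typically rank submissions with overlap metrics, the Dice similarity coefficient being the most common. The sketch below shows a standard Dice implementation on two invented toy lesion masks; it is illustrative, not the challenge's official evaluation code.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks (1 = perfect overlap)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * inter / denom

# toy 10x10 masks: a manual lesion delineation and a shifted automatic one
manual = np.zeros((10, 10), dtype=int); manual[2:6, 2:6] = 1
auto = np.zeros((10, 10), dtype=int); auto[3:7, 3:7] = 1
score = dice(manual, auto)   # 2*9 / (16 + 16) = 0.5625
```

The empty-mask guard matters in practice: scans without any lesion would otherwise divide by zero, and challenges must define that case explicitly.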
Highlights
- This work presents the methodologies and evaluation results for the WHS algorithms selected from the submissions to the Multi-Modality Whole Heart Segmentation (MM-WHS) challenge, held in conjunction with MICCAI 2017.
- This work introduces the background of the challenge, discusses the results from conventional methods and deep learning-based algorithms, and provides insights for future research.
- The challenge provides a fair and intuitive comparison framework for methods developed, and being developed, for WHS.
- The challenge provides training datasets with manually delineated ground truths and an evaluation service for the ongoing development of MM-WHS algorithms.
Diffusion MRI is an exquisitely sensitive probe of tissue microstructure, and is currently the only non-invasive measure of the brain's fibre architecture. As this technique becomes more sophisticated and microstructurally informative, there is increasing value in comparing diffusion MRI with microscopic imaging in the same tissue samples. This study compared estimates of fibre orientation dispersion in white matter derived from diffusion MRI to reference measures of dispersion obtained from polarized light imaging and histology.

Three post-mortem brain specimens were scanned with diffusion MRI and analyzed with a two-compartment dispersion model. The specimens were then sectioned for microscopy, including polarized light imaging estimates of fibre orientation and histological quantitative estimates of myelin and astrocytes. Dispersion estimates were correlated at region- and voxel-wise levels in the corpus callosum, the centrum semiovale and the corticospinal tract.

The region-wise analysis yielded correlation coefficients of r = 0.79 for the comparison of diffusion MRI with histology and r = 0.60 for the comparison with polarized light imaging. In the corpus callosum, we observed a pattern of higher dispersion at the midline compared to its lateral aspects. This pattern was present in all modalities, and the dispersion profiles from microscopy and diffusion MRI were highly correlated. The astrocytes appeared to make only a minor contribution to the dispersion observed with diffusion MRI.

These results demonstrate that fibre orientation dispersion estimates from diffusion MRI represent the tissue architecture well. Dispersion models might be improved by incorporating a more faithful, informed mapping based on microscopy data.
Deformable image registration is an important tool in medical image analysis. In the case of lung computed tomography (CT) registration there are three major challenges: large motion of small features, sliding motions between organs, and changing image contrast due to compression. Recently, Markov random field (MRF)-based discrete optimization strategies have been proposed to overcome problems involved with continuous optimization for registration, in particular its susceptibility to local minima. However, to date the simplifications made to obtain tractable computational complexity reduced the registration accuracy. We address these challenges and preserve the potentially higher quality of discrete approaches with three novel contributions. First, we use an image-derived minimum spanning tree as a simplified graph structure, which copes well with the complex sliding motion and allows us to find the global optimum very efficiently. Second, a stochastic sampling approach for the similarity cost between images is introduced within a symmetric, diffeomorphic B-spline transformation model with diffusion regularization. The complexity is reduced by orders of magnitude and enables the minimization of much larger label spaces. In addition to the geometric transform labels, hyper-labels are introduced, which represent local intensity variations in this task, and allow for the direct estimation of lung ventilation. We validate the improvements in accuracy and performance on exhale-inhale CT volume pairs using a large number of expert landmarks.
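The image-derived minimum spanning tree mentioned above can be illustrated with Prim's algorithm on a pixel graph whose edge weights are intensity differences between 4-neighbours, so that the tree preferentially follows homogeneous regions and crosses strong boundaries (such as sliding organ interfaces) as rarely as possible. This is a minimal 2-D sketch of the idea, not the authors' implementation.

```python
import heapq
import numpy as np

def image_mst(img):
    """Prim's algorithm on a 4-connected pixel graph.

    Edge weight = absolute intensity difference between neighbours.
    Returns a list of (parent, child, weight) MST edges.
    """
    h, w = img.shape

    def nbrs(p):
        i, j = p
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                yield (ni, nj)

    root = (0, 0)
    seen = {root}
    edges = []
    heap = [(abs(float(img[root]) - float(img[v])), root, v) for v in nbrs(root)]
    heapq.heapify(heap)
    while heap:
        wgt, u, v = heapq.heappop(heap)
        if v in seen:
            continue
        seen.add(v)
        edges.append((u, v, wgt))
        for nv in nbrs(v):
            if nv not in seen:
                heapq.heappush(heap, (abs(float(img[v]) - float(img[nv])), v, nv))
    return edges

# toy image with two homogeneous regions separated by a strong boundary
img = np.array([[0., 0., 5.],
                [0., 0., 5.],
                [5., 5., 5.]])
edges = image_mst(img)
total_weight = sum(w for _, _, w in edges)
```

On this toy image the tree spans all nine pixels with eight edges, crossing the intensity boundary exactly once; efficient exact optimisation on such a tree is what makes the discrete approach tractable.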