We propose a deep learning method that performs cardiac segmentation on short-axis magnetic resonance imaging (MRI) stacks iteratively, from the top slice (around the base) to the bottom slice (around the apex). At each iteration, a novel variant of the U-net propagates the segmentation of a slice to the adjacent slice below it; in other words, the predicted segmentation of a slice is conditioned on the already existing segmentation of the adjacent slice above. Three-dimensional consistency is hence explicitly enforced. The method is trained on a large database of 3078 cases from the UK Biobank. It is then tested on 756 different cases from the UK Biobank and on three other cohorts (ACDC with 100 cases, Sunnybrook with 30 cases, and RVSC with 16 cases). Results comparable to or better than the state of the art in terms of distance measures are achieved. They also highlight the assets of our method, namely enhanced spatial consistency (currently neither considered nor achieved by the state of the art) and the ability to generalize to unseen cases, even from other databases.
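The iterative propagation scheme can be sketched as follows. This is a minimal illustration only: the U-net variant is replaced by a trivial thresholding placeholder, and all names (`propagate`, `segment_stack`) are hypothetical, not the authors' code.

```python
import numpy as np

def propagate(slice_img, prev_mask):
    """Placeholder for the U-net variant: predicts the mask of the
    current slice from the slice itself plus the mask of the slice
    above it (here, a trivial intensity-threshold rule)."""
    return ((slice_img > 0.5) & (prev_mask > 0)).astype(np.uint8)

def segment_stack(stack, base_mask):
    """Segment a short-axis stack iteratively from base to apex,
    each prediction conditioned on the previous slice's mask."""
    masks = [base_mask]
    for z in range(1, stack.shape[0]):
        masks.append(propagate(stack[z], masks[-1]))
    return np.stack(masks)

rng = np.random.default_rng(0)
stack = rng.random((10, 64, 64))            # 10 slices, base to apex
base_mask = (stack[0] > 0.5).astype(np.uint8)
all_masks = segment_stack(stack, base_mask)
```

Because each slice's prediction consumes the mask of the slice above, segmentations cannot drift independently across slices, which is how the 3-D consistency is enforced in this scheme.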
We tested the hypothesis that a machine learning (ML) algorithm utilizing both complex echocardiographic data and clinical parameters could be used to phenogroup a heart failure (HF) cohort and identify patients with beneficial response to cardiac resynchronization therapy (CRT).
This paper presents a new registration algorithm, called Temporal Diffeomorphic Free Form Deformation (TDFFD), and its application to motion and strain quantification from a sequence of 3D ultrasound (US) images. The originality of our approach resides in enforcing time consistency by representing the 4D velocity field as a sum of continuous spatiotemporal B-spline kernels. The spatiotemporal displacement field is then recovered through forward Eulerian integration of the non-stationary velocity field. The strain tensor is computed locally from the spatial derivatives of the reconstructed displacement field. The energy functional considered in this paper weights two terms: an image similarity term and a regularization term. The image similarity metric is the sum of squared differences between the intensities of each frame and a reference frame; any frame in the sequence can be chosen as the reference. The regularization term is based on the incompressibility of myocardial tissue. TDFFD was compared to pairwise 3D FFD and 3D+t FFD, on both displacement and velocity fields, using a set of synthetic 3D US images with different noise levels. TDFFD showed increased robustness to noise compared to these two state-of-the-art algorithms. TDFFD also proved more resistant to reduced temporal resolution when the synthetic sequence was decimated. Finally, this synthetic dataset was used to determine optimal settings of the TDFFD algorithm. Subsequently, TDFFD was applied to a database of cardiac 3D US images of the left ventricle acquired from 9 healthy volunteers and 13 patients treated by Cardiac Resynchronization Therapy (CRT). In healthy cases, uniform strain patterns were observed over all myocardial segments, as physiologically expected.
Across all CRT patients, the improvement in the synchrony of regional longitudinal strain correlated with clinical outcome of CRT, as quantified by the reduction of end-systolic left ventricular volume at follow-up (6 and 12 months), demonstrating the potential of the proposed algorithm for the assessment of CRT.
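The two core numerical steps, forward Euler accumulation of a non-stationary velocity field into displacements, and strain from the displacement gradient, can be sketched as below. This is a small-displacement, fixed-grid simplification: the full diffeomorphic scheme evaluates the velocity along each material trajectory and parameterizes it with B-spline kernels, neither of which is reproduced here, and all names are illustrative.

```python
import numpy as np

def euler_integrate(velocity, dt):
    """Forward Euler accumulation of a non-stationary velocity field
    v(x, t), shape (T, H, W, 2), into per-frame displacement fields.
    Simplification: v is sampled on the fixed grid rather than along
    each material trajectory as in the full diffeomorphic scheme."""
    u = np.zeros_like(velocity[0])
    path = []
    for v_t in velocity:
        u = u + dt * v_t
        path.append(u.copy())
    return np.stack(path)

def green_lagrange_strain(u, spacing=1.0):
    """Strain tensor E = 0.5 (F^T F - I) with F = I + grad(u),
    for a 2-D displacement field u of shape (H, W, 2)."""
    dux_dy, dux_dx = np.gradient(u[..., 0], spacing)
    duy_dy, duy_dx = np.gradient(u[..., 1], spacing)
    F = np.empty(u.shape[:2] + (2, 2))
    F[..., 0, 0] = 1.0 + dux_dx
    F[..., 0, 1] = dux_dy
    F[..., 1, 0] = duy_dx
    F[..., 1, 1] = 1.0 + duy_dy
    # E = 0.5 (F^T F - I), computed pointwise over the grid
    return 0.5 * (np.einsum('...ki,...kj->...ij', F, F) - np.eye(2))

velocity = np.zeros((5, 8, 8, 2))
velocity[..., 0] = 0.1                      # uniform rightward motion
displacements = euler_integrate(velocity, dt=1.0)
E = green_lagrange_strain(displacements[-1])
```

A uniform translation, as in the example, yields zero strain everywhere, a useful sanity check for any strain implementation.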
Machine learning (ML) is increasingly used within cardiology, particularly in the domain of cardiovascular imaging. Owing to the inherent complexity and flexibility of ML algorithms, inconsistencies in model performance and interpretation may occur. Several recently published review articles introduce the fundamental principles and clinical applications of ML for general cardiologists. The current document builds on these introductory principles and outlines a more comprehensive list of crucial steps that need to be completed when developing ML models. It thus aims to serve as a scientific foundation for investigators, data scientists, authors, editors, and reviewers involved in ML research, with the intent of uniform reporting of ML investigations. An independent multidisciplinary panel of ML experts, clinicians, and statisticians worked together to review the theoretical rationale underlying seven sets of requirements that may reduce algorithmic errors and biases. Finally, the document summarizes the reporting items as an itemized checklist that highlights steps for ensuring the correct application of ML models and the consistent reporting of model specifications and results. The rapid pace of research and development and the increased availability of real-world evidence may require periodic updates to the checklist.
In this paper, we present a new method for the automatic comparison of myocardial motion patterns and the characterization of their degree of abnormality, based on a statistical atlas of motion built from a reference healthy population. Our main contribution is the computation of atlas-based indexes that quantify the abnormality of a given subject's motion against the reference population, at every location in time and space. The critical computational cost inherent to the construction of an atlas is greatly reduced by defining myocardial velocities under a small-displacements hypothesis. The indexes we propose are of notable interest for the assessment of anomalies in cardiac mobility and synchronicity when applied, for instance, to candidate selection for cardiac resynchronization therapy (CRT). We built an atlas of normality using 2D ultrasound cardiac sequences from 21 healthy volunteers, against which we compared 14 CRT candidates with left ventricular dyssynchrony (LVDYS). We illustrate the potential of our approach in characterizing septal flash, a specific motion pattern related to LVDYS and recently introduced as a very good predictor of response to CRT.
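A per-location abnormality index of this kind can be illustrated with a univariate simplification: z-scores of a subject's velocities against the healthy atlas. The paper's actual index is derived from the full statistical atlas, so `abnormality_index` and its inputs should be read as a hypothetical stand-in, not the authors' formulation.

```python
import numpy as np

def abnormality_index(reference, subject, eps=1e-8):
    """Per-location abnormality of a subject's myocardial velocities
    against a healthy reference population.
    reference: (N subjects, L spatiotemporal locations)
    subject:   (L,)
    Returns a z-score per location (univariate simplification of the
    atlas-based index)."""
    mean = reference.mean(axis=0)
    std = reference.std(axis=0) + eps   # eps avoids division by zero
    return np.abs(subject - mean) / std

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, size=(21, 200))   # 21 volunteers
candidate = healthy.mean(axis=0)                 # atlas-typical motion
index = abnormality_index(healthy, candidate)
```

A subject whose motion matches the atlas mean scores zero everywhere; locations with large scores flag where and when the motion departs from normality, which is the spatiotemporal localization the abstract describes.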
Atlases and statistical models play important roles in the personalization and simulation of cardiac physiology. For the study of the heart, however, the construction of comprehensive atlases and spatio-temporal models faces a number of challenges, in particular the need to handle large and highly variable image datasets, the multi-region nature of the heart, and the presence of complex as well as small cardiovascular structures. In this paper, we present a detailed atlas and spatio-temporal statistical model of the human heart based on a large population of 3D+time multi-slice computed tomography sequences, together with the framework for its construction. The framework uses spatial normalization based on nonrigid image registration to synthesize a population mean image and establish the spatial relationships between the mean and the subjects in the population. Temporal image registration is then applied to resolve each subject-specific cardiac motion, and the resulting transformations are used to warp a surface mesh representation of the atlas to fit the images of the remaining cardiac phases in each subject. Subsequently, we demonstrate the construction of a spatio-temporal statistical model of shape in which the inter-subject and dynamic sources of variation are suitably separated. The framework is applied to a 3D+time dataset of 138 subjects. The data are drawn from a variety of pathologies, which benefits generalization to new subjects and physiological studies. The obtained level of detail and the extensibility of the atlas present an advantage over most previously published cardiac models.
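The separation of inter-subject and dynamic sources of variation can be sketched as a decomposition around per-subject temporal means, on which separate statistical models are then fit. This is a simplified illustration under stated assumptions (point-correspondent meshes after spatial normalization); the paper's model is more elaborate, and `separate_variation` is a hypothetical name.

```python
import numpy as np

def separate_variation(shapes):
    """shapes: (S subjects, P cardiac phases, K coordinates) of
    corresponding mesh points after spatial normalization.
    Splits variation into an inter-subject part (per-subject temporal
    mean vs. grand mean) and a dynamic part (per-phase deviation from
    each subject's own mean); separate PCA models can be fit to each."""
    subj_mean = shapes.mean(axis=1)               # (S, K) per-subject anatomy
    grand_mean = subj_mean.mean(axis=0)           # (K,)  population atlas
    inter = subj_mean - grand_mean                # inter-subject variation
    dynamic = shapes - subj_mean[:, None, :]      # cardiac-motion variation
    return grand_mean, inter, dynamic

rng = np.random.default_rng(2)
shapes = rng.normal(size=(138, 20, 300))    # 138 subjects, 20 phases
grand_mean, inter, dynamic = separate_variation(shapes)
```

By construction, the dynamic component averages to zero within each subject and the inter-subject component averages to zero across the population, so the two sources of variation do not contaminate each other's statistical model.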