Delineation of the cardiac structures from 2D echocardiographic images is a common clinical task to establish a diagnosis. Over the past decades, the automation of this task has been the subject of intense research. In this paper, we evaluate how far state-of-the-art encoder-decoder deep convolutional neural network methods can go at assessing 2D echocardiographic images, i.e. segmenting cardiac structures and estimating clinical indices, on a dataset specifically designed for this purpose. We therefore introduce the Cardiac Acquisitions for Multi-structure Ultrasound Segmentation (CAMUS) dataset, the largest publicly available and fully annotated dataset for echocardiographic assessment. The dataset contains two- and four-chamber acquisitions from 500 patients, with reference measurements from one cardiologist on the full dataset and from three cardiologists on a fold of 50 patients. Results show that encoder-decoder architectures outperform state-of-the-art non-deep-learning methods and faithfully reproduce the expert analysis for the end-diastolic and end-systolic left ventricular volumes, with a mean correlation of 0.95 and an absolute mean error of 9.5 ml. Concerning the ejection fraction of the left ventricle, results are more mixed, with a mean correlation coefficient of 0.80 and an absolute mean error of 5.6%. Although these results are below the inter-observer scores, they remain slightly worse than the intra-observer ones. Based on this observation, areas for improvement are identified, which open the door to accurate and fully automatic analysis of 2D echocardiographic images.

Index Terms—Cardiac segmentation and diagnosis, deep learning, ultrasound, left ventricle, myocardium, left atrium.
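The ejection fraction discussed above follows directly from the two segmented volumes via the standard clinical definition, EF = 100 × (EDV − ESV) / EDV. A minimal sketch of that arithmetic (the function name is an illustrative assumption, not from the paper):

```python
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """Left-ventricular ejection fraction (%) from the
    end-diastolic (EDV) and end-systolic (ESV) volumes in ml."""
    if edv_ml <= 0 or esv_ml < 0 or esv_ml > edv_ml:
        raise ValueError("volumes must satisfy 0 <= ESV <= EDV, EDV > 0")
    # Fraction of the diastolic volume ejected during systole.
    return 100.0 * (edv_ml - esv_ml) / edv_ml
```

Because EF is a ratio, correlated errors in EDV and ESV partially cancel, while independent errors compound, which is one reason EF agreement can lag behind volume agreement.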
Transthoracic echocardiography examinations are usually performed according to a protocol comprising different probe postures that provide standard views of the heart. These views serve as the basis for assessing cardiac function, and it is essential that the morphophysiological representations are correct. Clinical analysis is often initialized with the current view, and automatic view classification can therefore help improve today's workflow. In this article, convolutional neural networks (CNNs) are used to create classification models predicting up to seven different cardiac views. Data sets of 2-D ultrasound acquired from studies totaling more than 500 patients and 7000 videos were included. State-of-the-art accuracies of (98.3 ± 0.6)% and (98.9 ± 0.6)% on single frames and sequences, respectively, and real-time performance of (4.4 ± 0.3) ms per frame were achieved. Further, it was found that CNNs have the potential for use in automatic multiplanar reformatting and orientation guidance. Using 3-D data to train models applicable to 2-D classification, we achieved a median deviation of (4 ± 3)° from the optimal orientations.
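The abstract reports separate accuracies for single frames and for whole sequences, but does not say how frame-level predictions are fused. One common scheme is to average the per-frame class probabilities over time and take the argmax; the sketch below assumes that scheme (the function name and fusion rule are illustrative, not taken from the paper):

```python
import numpy as np

def sequence_prediction(frame_probs: np.ndarray) -> int:
    """Collapse per-frame softmax outputs of shape
    (n_frames, n_classes) into one view label for the whole
    sequence by averaging class probabilities over time."""
    if frame_probs.ndim != 2:
        raise ValueError("expected an array of shape (n_frames, n_classes)")
    # Temporal averaging smooths out isolated misclassified frames.
    return int(frame_probs.mean(axis=0).argmax())
```

Averaging probabilities (soft voting) is usually preferred over majority voting of hard labels because it retains each frame's confidence, which explains why sequence accuracy can exceed single-frame accuracy.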
Volume and ejection fraction (EF) measurements of the left ventricle (LV) in 2-D echocardiography are associated with a high uncertainty not only due to interobserver variability of the manual measurement, but also due to ultrasound acquisition errors such as apical foreshortening. In this work, a real-time and fully automated EF measurement and foreshortening detection method is proposed. The method uses several deep learning components, such as view classification, cardiac cycle timing, segmentation and landmark extraction, to measure the amount of foreshortening, LV volume, and EF. A data set of 500 patients from an outpatient clinic was used to train the deep neural networks, while a separate data set of 100 patients from another clinic was used for evaluation, where LV volume and EF were measured by an expert using clinical protocols and software. A quantitative analysis using 3-D ultrasound showed that EF is considerably affected by apical foreshortening, and that the proposed method can detect and quantify the amount of apical foreshortening. The bias
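The clinical protocol referenced for the expert volume measurements is typically the biplane method of disks (biplane Simpson): the LV is sliced into 20 elliptical disks whose orthogonal diameters come from the two- and four-chamber views. A minimal sketch under that assumption (the abstract does not name the protocol, and all identifiers are illustrative):

```python
import math

def biplane_simpson_volume(diam_a2c, diam_a4c, long_axis_cm, n_disks=20):
    """LV volume (ml) by the biplane method of disks.

    diam_a2c, diam_a4c: disk diameters (cm) at matched levels in the
    apical two- and four-chamber views; long_axis_cm: LV length (cm).
    """
    if len(diam_a2c) != n_disks or len(diam_a4c) != n_disks:
        raise ValueError("expected one diameter per disk in each view")
    h = long_axis_cm / n_disks  # thickness of each disk
    # Each disk is an ellipse with semi-axes a/2 and b/2: area = pi*a*b/4.
    return sum(math.pi * a * b / 4.0 * h
               for a, b in zip(diam_a2c, diam_a4c))  # 1 cm^3 = 1 ml
```

Because the disks are stacked along the long axis, apical foreshortening shortens `long_axis_cm` and every disk with it, which is consistent with the abstract's finding that foreshortening biases volume and EF.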