Efficiently obtaining a reliable coronary artery centerline from computed tomography angiography data is relevant in clinical practice. Whereas numerous methods have been presented for this purpose, up to now no standardized evaluation methodology has been published to reliably evaluate and compare the performance of the existing or newly developed coronary artery centerline extraction algorithms. This paper describes a standardized evaluation methodology and reference database for the quantitative evaluation of coronary artery centerline extraction algorithms. The contribution of this work is fourfold: 1) a method is described to create a consensus centerline with multiple observers, 2) well-defined measures are presented for the evaluation of coronary artery centerline extraction algorithms, 3) a database containing thirty-two cardiac CTA datasets with corresponding reference standard is described and made available, and 4) thirteen coronary artery centerline extraction algorithms, implemented by different research groups, are quantitatively evaluated and compared. The presented evaluation framework is made available to the medical imaging community for benchmarking existing or newly developed coronary centerline extraction algorithms.
Though conventional coronary angiography (CCA) has been the standard of reference for diagnosing coronary artery disease in the past decades, computed tomography angiography (CTA) has rapidly emerged and is nowadays widely used in clinical practice. Here, we introduce a standardized evaluation framework to reliably evaluate and compare the performance of algorithms devised to detect and quantify coronary artery stenoses, and to segment the coronary artery lumen, in CTA data. The objective of this evaluation framework is to demonstrate the feasibility of dedicated algorithms to: (1) (semi-)automatically detect and quantify stenoses on CTA, in comparison with quantitative coronary angiography (QCA) and CTA consensus reading, and (2) (semi-)automatically segment the coronary lumen on CTA, in comparison with experts' manual annotations. A database consisting of 48 multicenter, multivendor cardiac CTA datasets with corresponding reference standards is described and made available. The algorithms from 11 research groups were quantitatively evaluated and compared. The results show that (1) some of the current stenosis detection/quantification algorithms may be used for triage or as a second reader in clinical practice, and that (2) automatic lumen segmentation is possible with a precision similar to that obtained by experts. The framework is open for new submissions through the website, at http://coronary.bigr.nl/stenoses/.
The authors developed a fully automatic method that is capable of segmenting the pericardium and quantifying epicardial fat on non-enhanced cardiac CT scans. The authors demonstrated the feasibility of using this method to replace manual annotations by showing that the automatic method performs as well as manual annotation on a large dataset.
This paper describes an evaluation framework that allows a standardized and objective quantitative comparison of carotid artery lumen segmentation and stenosis grading algorithms. We describe the data repository comprising 56 multi-center, multi-vendor CTA datasets, their acquisition, the creation of the reference standard and the evaluation measures. This framework has been introduced at the MICCAI 2009 workshop 3D Segmentation in the Clinic: A Grand Challenge III, and we compare the results of eight teams that participated. These results show that automated segmentation of the vessel lumen is possible with a precision that is comparable to manual annotation. The framework is open for new submissions through the website http://cls2009.bigr.nl.
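Segmentation accuracy in evaluation frameworks like this one is typically reported with overlap measures such as the Dice similarity coefficient. A minimal sketch of its computation on binary masks follows; the toy masks are illustrative and not part of any challenge protocol:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks:
    2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (identical)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:                # both masks empty: define as perfect overlap
        return 1.0
    return 2.0 * np.logical_and(a, b).sum() / denom

# Two toy lumen masks on a small grid (hypothetical automatic vs. manual result)
auto   = np.array([[0, 1, 1, 0],
                   [0, 1, 1, 0]])
manual = np.array([[0, 1, 1, 1],
                   [0, 0, 1, 1]])
score = dice_coefficient(auto, manual)   # 2*3 / (4 + 5) ≈ 0.667
```

Because the measure normalizes the intersection by the mean mask size, it is insensitive to the large true-negative background that dominates vascular CTA volumes, which is why it is preferred over plain voxel accuracy in lumen segmentation benchmarks.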
Accurate detection and quantification of coronary artery stenoses is an essential requirement for treatment planning of patients with suspected coronary artery disease. We present a method to automatically detect and quantify coronary artery stenoses in computed tomography coronary angiography. First, centerlines are extracted using a two-point minimum cost path approach and a subsequent refinement step. The resulting centerlines are used as an initialization for lumen segmentation, performed using graph cuts. Then, the expected diameter of the healthy lumen is estimated by applying robust kernel regression to the coronary artery lumen diameter profile. Finally, stenoses are detected and quantified by computing the difference between the measured and expected diameter profiles. We evaluated our method using the data provided in the Coronary Artery Stenoses Detection and Quantification Evaluation Framework. Using 30 testing datasets, the method achieved a detection sensitivity of 29% and a positive predictive value (PPV) of 24% as compared to quantitative coronary angiography (QCA), and a sensitivity of 21% and a PPV of 23% as compared to manual assessment based on consensus reading of CTA by three observers. The degree of stenosis was estimated with an average absolute difference of 31% and a root mean square difference of 39.3% when compared to QCA, and a weighted kappa value of 0.29 when compared to CTA. Dice coefficients of 68% and 65% were obtained for lumen segmentation of healthy and diseased vessel segments, respectively. According to the ranking of the evaluation framework, our method finished fourth for stenosis detection, second for stenosis quantification and second for lumen segmentation.
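The quantification step above (robust kernel regression of the diameter profile, then comparing measured to expected diameters) can be sketched as follows. The Nadaraya-Watson estimator, Huber-style reweighting, bandwidth, and the synthetic vessel are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def kernel_regression(x, y, bandwidth=5.0, weights=None):
    """Nadaraya-Watson kernel regression with optional per-point weights."""
    if weights is None:
        weights = np.ones_like(y)
    diff = x[:, None] - x[None, :]                    # pairwise distances
    k = np.exp(-0.5 * (diff / bandwidth) ** 2) * weights[None, :]
    return (k @ y) / k.sum(axis=1)

def expected_healthy_diameter(positions, diameters, bandwidth=5.0, iters=3):
    """Robust estimate of the healthy-lumen diameter profile: iteratively
    down-weight points that deviate strongly from the smooth fit, so focal
    narrowings (stenoses) pull less and less on the regression."""
    w = np.ones_like(diameters)
    fit = kernel_regression(positions, diameters, bandwidth, w)
    for _ in range(iters):
        residual = diameters - fit
        scale = max(np.median(np.abs(residual)), 1e-6)
        # Huber-style weights: w = min(1, scale / |residual|)
        w = np.minimum(1.0, scale / np.maximum(np.abs(residual), 1e-12))
        fit = kernel_regression(positions, diameters, bandwidth, w)
    return fit

def percent_stenosis(measured, expected):
    """Degree of stenosis as relative diameter reduction, in percent."""
    return 100.0 * (expected - measured) / expected

# Synthetic vessel (assumed for illustration): tapering 4 -> 3 mm over 50 mm,
# with a focal ~50% narrowing centered at 25 mm
s = np.linspace(0.0, 50.0, 101)
d = 4.0 - 0.02 * s
d -= 1.9 * np.exp(-0.5 * ((s - 25.0) / 1.5) ** 2)
fit = expected_healthy_diameter(s, d)
degree = percent_stenosis(d, fit).max()              # roughly 50% here
```

The robust reweighting is the key design choice: an ordinary smoother would be dragged down by the lesion itself, underestimating the expected healthy diameter and therefore the stenosis degree.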
The presented results, in combination with minimal user interaction and low computation time, show that minimum cost path approaches can effectively be applied as a preprocessing step for subsequent analysis in clinical practice and biomedical research.
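As an illustration of the two-point idea, here is a minimal Dijkstra-based minimum cost path on a 2D cost image; the 4-connected grid and toy cost map are simplifying assumptions, whereas real centerline extraction operates on 3D costs derived from intensity or vesselness:

```python
import heapq
import numpy as np

def min_cost_path(cost, start, end):
    """Two-point minimum cost path (Dijkstra) on a 2D cost image.
    Returns the list of pixels from start to end minimizing summed cost."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = cost[start]
    pq = [(cost[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == end:
            break
        if d > dist[r, c]:          # stale queue entry
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], end            # walk predecessors back to the start
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

# Toy cost map with a cheap corridor along the middle row; the path should
# follow it between the two user-selected endpoints
cost = np.full((3, 5), 5.0)
cost[1, :] = 0.1
path = min_cost_path(cost, (1, 0), (1, 4))
```

The minimal interaction matches the claim above: the user supplies only the two endpoints, and the optimizer recovers the low-cost (vessel-like) trajectory between them.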
2D/3D registration of patient vasculature from preinterventional computed tomography angiography (CTA) to interventional X-ray angiography is of interest to improve guidance in percutaneous coronary interventions. In this paper we present a novel feature-based 2D/3D registration framework built on probabilistic point correspondences, and demonstrate its usefulness by aligning 3D coronary artery centerlines derived from CTA images with their 2D projections derived from interventional X-ray angiography. The registration framework is an extension of the Gaussian mixture model (GMM) based point-set registration to the 2D/3D setting, with a modified distance metric. We also propose a way to incorporate orientation in the registration, and show its added value for artery registration on patient datasets as well as in simulation experiments. The oriented GMM registration achieved a median accuracy of 1.06 mm, with a convergence rate of 81% for nonrigid vessel centerline registration on 12 patient datasets, using a statistical shape model. The method thereby outperformed the iterative closest point algorithm, the GMM registration without orientation, and two recently published methods on 2D/3D coronary artery registration.
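The soft-correspondence idea behind GMM point-set registration can be illustrated in a simplified 2D rigid setting. The EM-style updates, annealing schedule, and test curve below are illustrative assumptions; the paper's actual method additionally handles the 2D/3D projection, the orientation term, and nonrigid deformation, all omitted here:

```python
import numpy as np

def gmm_rigid_register(moving, fixed, sigma0=1.0, anneal=0.9, iters=40):
    """EM-style 2D rigid registration with Gaussian soft correspondences.
    E-step: each transformed moving point is softly assigned to every fixed
    point under an isotropic Gaussian mixture centred on the fixed set.
    M-step: a Procrustes problem on the resulting soft targets yields the
    updated rotation R and translation t. The kernel width sigma is annealed
    so correspondences sharpen from fuzzy to near one-to-one."""
    R, t = np.eye(2), np.zeros(2)
    for it in range(iters):
        sigma = sigma0 * anneal ** it
        m = moving @ R.T + t
        d2 = ((m[:, None, :] - fixed[None, :, :]) ** 2).sum(-1)
        p = np.exp(-0.5 * d2 / sigma ** 2)
        p /= p.sum(axis=1, keepdims=True)        # posterior correspondences
        targets = p @ fixed                      # expected match per point
        mu_m, mu_t = moving.mean(0), targets.mean(0)
        H = (moving - mu_m).T @ (targets - mu_t)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                       # closest proper rotation
        t = mu_t - mu_m @ R.T
    return R, t

# Synthetic check: recover a known rigid transform of a curve-like point set
theta_true = 0.1
R_true = np.array([[np.cos(theta_true), -np.sin(theta_true)],
                   [np.sin(theta_true),  np.cos(theta_true)]])
t_true = np.array([0.3, -0.2])
s = np.linspace(0.0, 5.0, 40)
moving = np.stack([s, np.sin(s)], axis=1)        # stand-in "centerline"
fixed = moving @ R_true.T + t_true
R, t = gmm_rigid_register(moving, fixed)
```

Compared with iterative closest point, the soft assignments average over many candidate matches early on, which is what gives GMM-based registration its robustness to the ambiguous, self-similar point sets that vessel centerlines produce.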