As decisions in cardiology increasingly rely on noninvasive methods, fast and precise image processing tools have become a crucial component of the analysis workflow. To the best of our knowledge, we propose the first automatic system for patient-specific modeling and quantification of the left heart valves, which operates on cardiac computed tomography (CT) and transesophageal echocardiogram (TEE) data. Robust algorithms, based on recent advances in discriminative learning, are used to estimate patient-specific parameters from sequences of volumes covering an entire cardiac cycle. A novel physiological model of the aortic and mitral valves is introduced, which captures complex morphologic, dynamic, and pathologic variations. This holistic representation is hierarchically defined on three abstraction levels: a global location and rigid motion model, a nonrigid landmark motion model, and a comprehensive aortic-mitral model. First, we compute the rough location and cardiac motion by applying marginal space learning. The rapid and complex motion of the valves, represented by anatomical landmarks, is estimated using a novel trajectory spectrum learning algorithm. The obtained landmark model guides the fitting of the full physiological valve model, which is locally refined through learned boundary detectors. Measurements efficiently computed from the aortic-mitral representation support an effective morphological and functional clinical evaluation. Extensive experiments on a heterogeneous data set, comprising 1516 TEE volumes from 65 4-D TEE sequences and 690 cardiac CT volumes from 69 4-D CT sequences, demonstrated a speed of 4.8 seconds per volume and an average accuracy of 1.45 mm with respect to expert-defined ground truth. Additional clinical validation shows the quantification precision to be within the range of inter-user variability.
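The core idea behind trajectory-based motion estimation can be illustrated with a minimal sketch: a periodic landmark trajectory over one cardiac cycle is compactly represented by a few low-frequency Fourier coefficients, which smooth and regularize the motion. The function names, the truncation level, and the synthetic trajectory below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def trajectory_spectrum(traj, num_coeffs=3):
    # traj: (T, 3) array of landmark positions over one cardiac cycle.
    # Keep only the lowest-frequency DFT coefficients per coordinate.
    spec = np.fft.rfft(traj, axis=0)
    spec[num_coeffs:] = 0
    return spec

def reconstruct(spec, T):
    # The inverse DFT yields a smooth, periodic approximation of the motion.
    return np.fft.irfft(spec, n=T, axis=0)

# Synthetic periodic trajectory sampled at 16 cardiac phases.
T = 16
t = np.linspace(0, 2 * np.pi, T, endpoint=False)
traj = np.stack([np.cos(t), np.sin(t), 0.1 * np.sin(2 * t)], axis=1)

approx = reconstruct(trajectory_spectrum(traj), T)
err = np.abs(approx - traj).max()
```

Because the synthetic trajectory contains only low frequencies, the truncated spectrum reconstructs it almost exactly; for real valve motion, the truncation acts as a temporal smoothness prior.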
To the best of our knowledge, this is the first time a patient-specific model of the aortic and mitral valves has been automatically estimated from volumetric sequences.
Computed tomographic (CT) angiography has been improved significantly with the introduction of four- to 64-section spiral CT scanners, which offer rapid acquisition of isotropic data sets. A variety of techniques have been proposed for postprocessing of the resulting images. The most widely used techniques are multiplanar reformation (MPR), thin-slab maximum intensity projection, and volume rendering. Sophisticated segmentation algorithms, vessel analysis tools based on a centerline approach, and automatic lumen boundary definition are emerging techniques; bone removal with thresholding or subtraction algorithms has been introduced. These techniques increasingly provide a quality of vessel analysis comparable to that achieved with intraarterial three-dimensional rotational angiography. Neurovascular applications for these various image postprocessing methods include steno-occlusive disease, dural sinus thrombosis, vascular malformations, and cerebral aneurysms. However, one should keep in mind the potential pitfalls of these techniques and always double-check the final results with source or MPR imaging.
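As a concrete illustration of one of these postprocessing techniques, a thin-slab maximum intensity projection simply takes the per-ray maximum over a limited slab of slices. The numpy sketch below uses a synthetic volume; the slab indices and intensity value are illustrative assumptions:

```python
import numpy as np

def thin_slab_mip(volume, start, thickness, axis=0):
    """Maximum intensity projection over a slab of `thickness` slices."""
    idx = np.arange(start, start + thickness)
    slab = np.take(volume, idx, axis=axis)
    return slab.max(axis=axis)

# Synthetic CT-like volume: one bright voxel in a dark background,
# standing in for a contrast-filled vessel lumen.
vol = np.zeros((40, 64, 64), dtype=np.float32)
vol[20, 32, 32] = 400.0

mip = thin_slab_mip(vol, start=15, thickness=10)
```

Restricting the projection to a thin slab (rather than the full volume) is what keeps overlapping structures such as bone from obscuring the vessel of interest.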
Automatic coronary centerline extraction and lumen segmentation facilitate the diagnosis of coronary artery disease (CAD), which is a leading cause of death in developed countries. Various coronary centerline extraction methods have been proposed, and most are based on shortest-path computation given one or two end points on the artery. The major variation among shortest-path-based approaches lies in the vesselness measurement used for the path cost. An empirically designed measurement (e.g., the widely used Hessian vesselness) is by no means optimal in its use of image context information. In this paper, a machine-learning-based vesselness is proposed by exploiting the rich domain-specific knowledge embedded in an expert-annotated dataset. For each voxel, we extract a set of geometric and image features. A probabilistic boosting tree (PBT) is then used to train a classifier, which assigns a high score to voxels inside the artery and a low score to those outside. The detection score can be treated as a vesselness measurement in the computation of the shortest path. Since the detection score measures the probability of a voxel being inside the vessel lumen, it can also be used for coronary lumen segmentation. To speed up the computation, we perform classification only for voxels around the heart surface, which is achieved by automatically segmenting the whole heart from the 3-D volume in a preprocessing step. An efficient voxel-wise classification strategy further improves the speed. Experiments demonstrate that the proposed learning-based vesselness outperforms the conventional Hessian vesselness in both speed and accuracy. On average, it takes only approximately 2.3 seconds to process a large volume with a typical size of 512×512×200 voxels.
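The shortest-path formulation above can be sketched as follows: a per-voxel classifier score plays the role of vesselness, and the path cost between end points is derived from it. Here a hand-made score map on a 2-D grid stands in for the trained PBT classifier, purely for illustration:

```python
import heapq
import numpy as np

def shortest_path(cost, start, goal):
    """Dijkstra on a 2-D grid; cost[y, x] is the price of entering a cell."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    dist[start] = 0.0
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, (y, x) = heapq.heappop(pq)
        if (y, x) == goal:
            break
        if d > dist[y, x]:
            continue  # stale queue entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + cost[ny, nx]
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    prev[(ny, nx)] = (y, x)
                    heapq.heappush(pq, (nd, (ny, nx)))
    # Backtrack from goal to start.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Stand-in "vesselness" score: high along row 2 (the vessel), low elsewhere.
score = np.full((5, 5), 0.1)
score[2, :] = 0.9
cost = -np.log(score)  # higher vesselness -> lower path cost

path = shortest_path(cost, (2, 0), (2, 4))
```

The negative-log transform is one common way to turn a probability-like score into an additive path cost; with it, the extracted path hugs the high-vesselness row.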
Recently conducted clinical studies prove the utility of Coronary Computed Tomography Angiography (CCTA) as a viable alternative to invasive angiography for the detection of Coronary Artery Disease (CAD). This has led to the development of several algorithms for automatic detection and grading of coronary stenoses. However, most of these methods focus on detecting calcified plaques only. The few methods that can also detect and grade non-calcified plaques require substantial user involvement. In this paper, we propose a fast and fully automatic system that is capable of detecting, grading, and classifying coronary stenoses in CCTA caused by all types of plaques. We propose a four-step approach, including a learning-based centerline verification step and a lumen cross-section estimation step using random regression forests. We show state-of-the-art performance of our method in experiments conducted on a set of 229 CCTA volumes. With an average processing time of 1.8 seconds per case after centerline extraction, our method is significantly faster than competing approaches.
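Once lumen cross-sections have been estimated along the centerline, grading reduces to comparing the minimal lumen area against a reference healthy area. The sketch below uses synthetic areas, and the grading thresholds are common clinical conventions assumed for illustration, not values taken from the paper:

```python
import numpy as np

def grade_stenosis(areas, reference):
    """Percent area stenosis from lumen cross-section areas along a vessel."""
    narrowing = 100.0 * (1.0 - areas.min() / reference)
    if narrowing >= 70.0:
        return narrowing, "severe"
    if narrowing >= 50.0:
        return narrowing, "moderate"
    return narrowing, "mild"

# Synthetic cross-section areas (mm^2) along a centerline with a narrowing.
areas = np.array([10.0, 9.5, 4.0, 9.0, 10.0])
pct, label = grade_stenosis(areas, reference=10.0)
```

In a full pipeline the reference area would itself be estimated (e.g., by the regression forest) rather than supplied by hand, since an absolute healthy diameter varies along the vessel.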