Efficiently obtaining a reliable coronary artery centerline from computed tomography angiography data is relevant in clinical practice. Whereas numerous methods have been presented for this purpose, up to now no standardized evaluation methodology has been published to reliably evaluate and compare the performance of existing or newly developed coronary artery centerline extraction algorithms. This paper describes a standardized evaluation methodology and reference database for the quantitative evaluation of coronary artery centerline extraction algorithms. The contribution of this work is fourfold: 1) a method is described to create a consensus centerline from multiple observers, 2) well-defined measures are presented for the evaluation of coronary artery centerline extraction algorithms, 3) a database containing thirty-two cardiac CTA datasets with corresponding reference standards is described and made available, and 4) thirteen coronary artery centerline extraction algorithms, implemented by different research groups, are quantitatively evaluated and compared. The presented evaluation framework is made available to the medical imaging community for benchmarking existing or newly developed coronary centerline extraction algorithms.
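The well-defined measures mentioned above score an extracted centerline against the observer consensus. A minimal sketch of one such overlap computation is given below; the function name, the fixed matching radius, and the symmetric point-matching rule are illustrative assumptions, not the published definitions (which use per-point annotated radii and clinically relevant sub-segments):

```python
import numpy as np

def overlap_score(extracted, reference, radius):
    """Simplified symmetric overlap between two centerlines.

    extracted, reference: (N, 3) arrays of centerline points (mm).
    A point counts as matched if it lies within `radius` of the
    other centerline. This is only a sketch of the idea behind the
    published overlap measures, not their exact definition.
    """
    def n_matched(points, targets):
        # Distance from each point to its nearest target point.
        d = np.linalg.norm(points[:, None, :] - targets[None, :, :], axis=2)
        return int((d.min(axis=1) <= radius).sum())

    tp_ref = n_matched(reference, extracted)   # reference points found
    tp_ext = n_matched(extracted, reference)   # extracted points correct
    return (tp_ref + tp_ext) / (len(reference) + len(extracted))
```

A perfect extraction scores 1.0; a centerline that never comes within the matching radius of the reference scores 0.0.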
The quality of segmentations obtained by 3D Active Appearance Models (AAMs) depends crucially on the underlying training data. MRI heart data, however, are often noisy, incomplete, affected by respiratory-induced motion, and do not fulfill the requirements for building an AAM. Moreover, AAMs are known to fail when attempting to model local variations. Inspired by recent work on split models [1], we propose an alternative to methods based on pure 3D AAM segmentation: we interconnect a set of 2D AAMs through a 3D shape model. We show that our approach is able to cope with imperfect data and improves segmentations by 11% on average compared to 3D AAMs.
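The coupling idea can be sketched as follows: landmark contours fitted independently per slice by the 2D models are stacked into one shape vector and projected onto a linear (PCA) 3D shape model, which ties the slices together and suppresses implausible configurations. This is only an illustration of the shape-coupling step under assumed data layout; the paper's full method interconnects complete 2D AAMs, not bare contours:

```python
import numpy as np

def fit_shape_model(training_shapes, n_modes):
    """Build a linear (PCA) 3D shape model from training shapes.

    training_shapes: (n_samples, d) array; each row concatenates the
    landmark coordinates of all 2D slice contours of one heart
    (an assumed layout for this sketch).
    """
    mean = training_shapes.mean(axis=0)
    centered = training_shapes - mean
    # SVD of the centered data yields the principal modes of variation.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_modes]

def regularize(shape, mean, modes):
    """Project independently fitted slice contours onto the 3D shape
    model, coupling the 2D AAMs through a global shape constraint."""
    b = modes @ (shape - mean)        # shape parameters
    return mean + modes.T @ b          # closest shape in the model
```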
The manufacturing of structural parts made from carbon fiber composite materials is a complex process that requires extended quality control. To facilitate better decisions about the mechanical properties of the part and, consequently, the need for rework, a manufacturing database is proposed that creates a digital twin of the part as manufactured. The main contribution of the paper is to show how to merge incoming sensor data into the database and how to use these data to determine the margin of safety for the part. This is demonstrated using the example of an ADMP (automated dry material placement) process during the manufacturing of a section of an aircraft wing lower cover.
Automated fiber placement (AFP) is an advanced manufacturing technology that increases the rate of production of composite materials. At the same time, the need for adaptable and fast inline control methods for such parts rises. Existing inspection systems make use of handcrafted filter chains and feature detectors, tuned by domain experts for a specific measurement method. These methods hardly scale to new defects or different measurement devices. In this paper, we propose to formulate AFP defect detection as an image segmentation problem that can be solved in an end-to-end fashion using artificially generated training data. We employ a probabilistic graphical model to generate training images and annotations. We then train a deep neural network based on recent architectures designed for image segmentation. This leads to an appealing method that scales well to new defect types and measurement devices and requires little real-world data for training.
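The training-data generation step can be illustrated with a toy stand-in for the paper's probabilistic graphical model: fiber tows rendered as a striped background and a defect (here, a gap) placed at random, with the per-pixel annotation produced alongside the image. All distributions, sizes, and the defect model below are illustrative assumptions, not the paper's generative model:

```python
import numpy as np

def sample_training_pair(rng, size=64):
    """Sample one synthetic image/annotation pair for segmentation.

    Toy generative model (assumptions for this sketch):
    - background: sinusoidal stripes mimicking fiber tows, plus noise
    - defect: a dark rectangle present with probability 0.8
    Returns (image, mask) where mask is the per-pixel defect label.
    """
    x = np.arange(size)
    stripes = 0.5 + 0.3 * np.sin(2 * np.pi * x / rng.integers(6, 12))
    image = np.tile(stripes, (size, 1)) + rng.normal(0, 0.05, (size, size))
    mask = np.zeros((size, size), dtype=np.uint8)
    if rng.random() < 0.8:                     # defect present?
        r, c = rng.integers(0, size - 8, size=2)
        h, w = rng.integers(4, 8, size=2)
        image[r:r + h, c:c + w] *= 0.2         # darker defect region
        mask[r:r + h, c:c + w] = 1             # segmentation label
    return image, mask
```

Pairs drawn this way can be fed directly to a segmentation network, so no manual pixel-level labeling of real parts is needed for training.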
This work considers robot keypoint estimation on color images as a supervised machine learning task. We propose the use of probabilistically created renderings to overcome the lack of labeled real images. Rather than sampling from stationary distributions, our approach introduces a feedback mechanism that constantly adapts probability distributions according to current training progress. Initial results show that our approach achieves near-human-level accuracy on real images. Additionally, we demonstrate that feedback reduces the number of required training steps while maintaining the same model quality on synthetic data sets.
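The feedback mechanism can be sketched as a discrete distribution over bins of scene parameters whose weights track the training loss observed in each bin, so that harder configurations are rendered more often. The class name, the exponential smoothing, and the binning are illustrative assumptions for this sketch, not the paper's exact scheme:

```python
import numpy as np

class FeedbackSampler:
    """Adapt a sampling distribution over scene-parameter bins from
    per-sample training loss (a sketch of the feedback idea)."""

    def __init__(self, n_bins, smoothing=0.9):
        self.loss = np.ones(n_bins)   # running loss estimate per bin
        self.smoothing = smoothing

    def probabilities(self):
        # Bins with higher running loss are sampled more often.
        return self.loss / self.loss.sum()

    def sample(self, rng):
        return rng.choice(len(self.loss), p=self.probabilities())

    def update(self, bin_idx, loss):
        # Exponential moving average of the observed loss per bin.
        s = self.smoothing
        self.loss[bin_idx] = s * self.loss[bin_idx] + (1 - s) * loss
```

After a few updates, bins where the model still performs poorly dominate the sampling, which is the intuition behind needing fewer training steps.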
Automatic tracking of coronary arteries in Computed Tomography Angiography (CTA) is a challenging task. To accomplish it, we propose a method consisting of two main steps: (1) A 3D model of the heart is matched to detect the approximate position of the heart. Based on this information, candidates for the origins of the coronary arteries are calculated. (2) Cylindrical sampling patterns are fitted to extract the vessel tree of the coronary arteries. Branching and termination are handled by depth-first search and noise-level estimation, respectively. Results show that, compared to human intra-observer variability, the presented method performs worse on accuracy measures (average score 39.4) but slightly better on overlap measures (average score 51.5).
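The depth-first traversal with score-based termination can be sketched as follows. The function names are hypothetical; `step_candidates` stands in for the candidate positions obtained by fitting cylindrical sampling patterns, and the `vessel_score` threshold stands in for the paper's noise-level-based termination criterion:

```python
def track_vessel_tree(seed, step_candidates, vessel_score, threshold):
    """Depth-first extraction of a vessel tree from a seed point.

    step_candidates(p): candidate next positions around p (in the
    paper, produced by fitting cylindrical sampling patterns).
    vessel_score(p): 'vesselness' of a position; a branch terminates
    when the score drops below `threshold` (standing in for the
    noise-level estimation described above).
    """
    tree, stack, visited = [], [seed], set()
    while stack:
        p = stack.pop()                   # depth-first: last in, first out
        if p in visited or vessel_score(p) < threshold:
            continue
        visited.add(p)
        tree.append(p)
        stack.extend(step_candidates(p))  # several candidates = branching
    return tree
```

Because every candidate below the termination threshold is simply skipped, each branch of the tree ends naturally where the vessel signal vanishes into noise.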