Automatic image processing methods are a prerequisite to efficiently analyzing the large amounts of image data produced by computed tomography (CT) scanners during cardiac exams. This paper introduces a model-based approach for the fully automatic segmentation of the whole heart (four chambers, myocardium, and great vessels) from 3-D CT images. Model adaptation is done by progressively increasing the degrees of freedom of the allowed deformations. This improves convergence as well as segmentation accuracy. The heart is first localized in the image using a 3-D implementation of the generalized Hough transform. Pose misalignment is corrected by matching the model to the image using a global similarity transformation. The complex initialization of the multicompartment mesh is then addressed by assigning an affine transformation to each anatomical region of the model. Finally, a deformable adaptation is performed to accurately match the boundaries of the patient's anatomy. A mean surface-to-surface error of 0.82 mm was measured in a leave-one-out quantitative validation carried out on 28 images. Moreover, the piecewise affine transformation introduced for mesh initialization and adaptation characterizes interphase and interpatient shape variability better than the commonly used principal component analysis.
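The coarse-to-fine adaptation described above (a global similarity transformation, then one affine transformation per anatomical region) can be illustrated with standard least-squares fits between corresponding model and image points. This is a minimal sketch, not the paper's implementation; the point correspondences are assumed to be given, and the function names are illustrative.

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping 3-D model points `src` onto target points `dst` (n x 3 arrays),
    following the standard Umeyama/Procrustes formulation."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A)          # cross-covariance SVD
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt))   # guard against reflections
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / (A ** 2).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

def fit_affine(src, dst):
    """Least-squares 12-DOF affine transform, as would be fitted
    independently for each anatomical region of the mesh."""
    X = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coordinates
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return M.T                                     # 3x4 matrix [A | t]
```

Fitting the similarity transform first fixes the gross pose with only 7 degrees of freedom; the per-region affine fits then add local degrees of freedom before the final free-form deformable adaptation.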
Abstract. We present a fully automatic segmentation algorithm for the whole heart (four chambers, left ventricular myocardium, and the trunks of the aorta, the pulmonary artery, and the pulmonary veins) in cardiac MR image volumes with nearly isotropic voxel resolution, based on shape-constrained deformable models. After automatic model initialization and reorientation to the cardiac axes, we apply a multi-stage adaptation scheme with progressively increasing degrees of freedom. Particular attention is paid to the calibration of the MR image intensities. Detailed evaluation results for the various anatomical heart regions are presented on a database of 42 patients. On calibrated images, we obtain an average segmentation error of 0.76 mm.
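A simple form of the intensity calibration mentioned above is a two-landmark linear standardization that maps percentile landmarks of each MR volume onto a fixed reference scale. The sketch below is illustrative only; the percentile and reference values are assumptions, and the paper's actual calibration procedure may differ.

```python
import numpy as np

def calibrate_intensities(img, ref_low=0.0, ref_high=1000.0, p=(10, 99)):
    """Map the p-th percentile range of an MR volume linearly onto a fixed
    reference scale, so that intensity-based boundary detectors see
    comparable values across scanners and patients."""
    lo, hi = np.percentile(img, p)
    scale = (ref_high - ref_low) / (hi - lo)
    return (img - lo) * scale + ref_low
```

Such standardization matters for MR in particular because, unlike CT Hounsfield units, raw MR intensities have no fixed physical scale.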
In this paper we present technology used in spoken dialog systems for a wide range of applications, including tasks from the travel domain, automatic switchboards, and large-scale directory assistance. The overall goal in developing spoken dialog systems is to allow for a natural and flexible dialog flow similar to human-human interaction. This poses the challenging task of recognizing and interpreting user input when the user may choose from an unrestricted vocabulary and an infinite set of possible formulations. We therefore put emphasis on strategies that make the system more robust while maintaining a high level of naturalness and flexibility. In view of this paradigm, we found that two fundamental principles characterize many of the proposed methods: 1) consider available sources of information as early as possible, and 2) keep alternative hypotheses and delay the decision for a single option as long as possible. We describe how our system architecture supports incorporating application-specific knowledge, such as database constraints, into the determination of the best sentence hypothesis for a user turn. On the next higher level, we use the dialog history to assess the plausibility of a sentence hypothesis by applying consistency checks against information items from previous user turns. In particular, we demonstrate how combined decisions over several turns can be exploited to boost the recognition performance of the system. The dialog manager can also use information on the dialog flow to dynamically modify and tune the system for specific dialog situations. An important means to increase the "intelligence" of a spoken dialog system is the use of confidence measures. We propose methods to obtain confidence measures for semantic items, whole sentences, and even full N-best lists, and give examples of the benefits obtained from their application.
Experiences from field tests with our systems, which have proven crucial for system acceptance, are also summarized.

Index Terms: Application-specific knowledge, combined decisions, confidence measures, dialog history, natural language understanding, spoken dialog systems.
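As an illustration of a confidence measure for semantic items, one simple scheme derives an item's confidence from the posterior mass of the N-best hypotheses that contain it, keeping alternatives alive until a decision is needed. The data format (`(log_score, set_of_items)` pairs) and the function name are hypothetical and not taken from the described system.

```python
import math

def item_confidence(nbest, item):
    """Confidence of a semantic item: the normalized score mass of the
    N-best hypotheses containing it. `nbest` is a list of
    (log_score, set_of_semantic_items) pairs."""
    m = max(score for score, _ in nbest)                  # for numerical stability
    weights = [math.exp(score - m) for score, _ in nbest]
    total = sum(weights)
    mass = sum(w for w, (_, items) in zip(weights, nbest) if item in items)
    return mass / total
```

A dialog manager could then confirm or reject a slot value depending on whether its confidence exceeds a threshold, instead of trusting the single best hypothesis.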
Bone age assessment (BAA) on hand radiographs is a frequent and time-consuming task in radiology. We present a method for (semi-)automatic BAA that proceeds in several steps: (i) extract 14 epiphyseal regions from the radiographs; (ii) for each region, obtain image features using the IRMA framework; (iii) use these features to build a classifier model (training phase); (iv) evaluate performance with cross-validation schemes (testing phase); (v) classify unknown hand images (application phase). In this paper, we combine a support vector machine (SVM) with cross-correlation to a prototype image for each class. These prototypes are obtained by choosing one random hand per class. A systematic evaluation is presented comparing nominal- and real-valued SVMs with k-nearest-neighbor (kNN) classification on 1,097 hand radiographs of 30 diagnostic classes (0–19 years). The mean error in age prediction is 1.0 and 0.83 years for 5-NN and SVM, respectively. The accuracy of nominal- and real-valued SVMs based on 6 prominent regions (prototypes) is 91.57% and 96.16%, respectively, when accepting an age range of about two years.
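The prototype cross-correlation component can be sketched as follows: each region is compared against one prototype image per age class via normalized cross-correlation, and the best-matching class wins. `ncc` and `classify_region` are illustrative stand-ins that assume equally sized, pre-extracted epiphyseal regions; they do not reproduce the paper's SVM stage or the IRMA features.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized image regions."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def classify_region(region, prototypes):
    """Assign a region to the age class whose prototype correlates best.
    `prototypes` maps class label -> prototype image (one random hand per
    class, as in the paper)."""
    return max(prototypes, key=lambda c: ncc(region, prototypes[c]))
```

In the full method, such correlation scores would be combined with the learned SVM model rather than used alone, and predictions from the 14 regions would be aggregated per hand.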