This paper presents a comparison study of ten automatic and six interactive methods for liver segmentation from contrast-enhanced CT images. It is based on results from the "MICCAI 2007 Grand Challenge" workshop, where 16 teams evaluated their algorithms on a common database. A collection of 20 clinical images with reference segmentations was provided to train and tune algorithms in advance. Participants were also allowed to use additional proprietary training data for that purpose. All teams then had to apply their methods to 10 test datasets and submit the obtained results. Employed algorithms include statistical shape models, atlas registration, level sets, graph cuts, and rule-based systems. All results were compared to reference segmentations using five error measures that highlight different aspects of segmentation accuracy. The measures were combined according to a specific scoring system that relates the obtained values to human expert variability. In general, interactive methods reached higher average scores than automatic approaches and featured better consistency of segmentation quality. However, the best automatic methods (mainly based on statistical shape models with some additional free deformation) could compete well on the majority of test images. The study provides insight into the performance of different segmentation approaches under real-world conditions and highlights achievements and limitations of current image analysis techniques.
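The scoring system mentioned above can be sketched as a simple mapping from each error measure to points relative to expert variability. The exact constants below (100 for a perfect result, 75 for an error matching inter-observer variability, a linear ramp down to 0) are an assumption based on the common convention for this type of challenge, not figures quoted verbatim from the workshop rules:

```python
def metric_score(error, expert_error):
    """Map one segmentation error measure to a 0-100 score relative to
    human expert variability. Assumed convention: a perfect result
    scores 100, an error equal to the expert's inter-observer error
    scores 75, and larger errors ramp linearly down to 0."""
    return max(100.0 - 25.0 * (error / expert_error), 0.0)

def total_score(errors, expert_errors):
    """Average the per-metric scores over all error measures
    (five in the challenge) to obtain the overall score."""
    scores = [metric_score(e, ref) for e, ref in zip(errors, expert_errors)]
    return sum(scores) / len(scores)
```

Under this convention, a method that matches expert variability on every measure would score 75 overall, which makes the scores of different teams directly comparable.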
Current case definitions of Myalgic Encephalomyelitis (ME) and chronic fatigue syndrome (CFS) have been based on consensus methods, but empirical methods could be used to identify core symptoms and thereby improve their reliability. In the present study, several methods (i.e., continuous symptom scores and theoretically and empirically derived symptom cutoff scores) were used to identify the core symptoms that best differentiate patients from controls. In addition, data mining with decision trees was conducted. Our study found a small number of core symptoms with good sensitivity and specificity: fatigue, post-exertional malaise, a neurocognitive symptom, and unrefreshing sleep. Outcomes from these analyses suggest that using empirically selected symptoms can help guide the creation of a more reliable case definition.
Background: Considerable controversy surrounds the core features of myalgic encephalomyelitis (ME) and chronic fatigue syndrome (CFS). Current case definitions differ in the number and types of symptoms required. This ambiguity impedes the search for biological markers and effective treatments.
Purpose: This study sought to empirically operationalize symptom criteria and identify which symptoms best characterize the illness.
Methods: Patients (n=236) and controls (n=86) completed the DePaul Symptom Questionnaire, rating the frequency and severity of 54 symptoms. Responses were compared to determine the frequency/severity rating thresholds that best distinguished patients from controls. A Classification and Regression Tree (CART) algorithm was used to identify the combination of symptoms that most accurately classified patients and controls.
Results: A third of controls met the symptom criteria of a common CFS case definition when only symptom presence was required; however, when frequency/severity requirements were raised, only 5% met the criteria. Employing these higher frequency/severity requirements, the CART algorithm identified three symptoms that accurately classified 95.4% of participants as patient or control: fatigue/extreme tiredness, inability to focus on multiple things simultaneously, and experiencing a dead/heavy feeling after starting to exercise.
Conclusions: Minimum frequency/severity thresholds should be specified in symptom criteria to reduce the likelihood of misclassification. Future research should continue to seek empirical support for the core symptoms of ME and CFS to further progress the search for biological markers and treatments.
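The CART approach used in the study can be illustrated with a minimal, stdlib-only sketch: recursively pick the feature/threshold split that minimises Gini impurity, and store the majority class at each leaf. The symptom features, rating values, and sample sizes below are invented toy data, not the study's dataset:

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(rows, labels):
    """Find the (feature, threshold) split minimising weighted Gini."""
    best = None  # (score, feature, threshold)
    for f in range(len(rows[0])):
        for t in sorted({r[f] for r in rows}):
            left = [y for r, y in zip(rows, labels) if r[f] <= t]
            right = [y for r, y in zip(rows, labels) if r[f] > t]
            if not left or not right:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
            if best is None or score < best[0]:
                best = (score, f, t)
    return best

def build_tree(rows, labels, depth=0, max_depth=3):
    """Grow a CART-style tree; leaves hold the majority class."""
    if gini(labels) == 0.0 or depth == max_depth:
        return Counter(labels).most_common(1)[0][0]
    split = best_split(rows, labels)
    if split is None:
        return Counter(labels).most_common(1)[0][0]
    _, f, t = split
    li = [i for i, r in enumerate(rows) if r[f] <= t]
    ri = [i for i, r in enumerate(rows) if r[f] > t]
    return (f, t,
            build_tree([rows[i] for i in li], [labels[i] for i in li], depth + 1, max_depth),
            build_tree([rows[i] for i in ri], [labels[i] for i in ri], depth + 1, max_depth))

def predict(tree, row):
    """Follow splits until a leaf (a class label) is reached."""
    while isinstance(tree, tuple):
        f, t, left, right = tree
        tree = left if row[f] <= t else right
    return tree

# Hypothetical 0-4 frequency/severity ratings for three symptoms:
# (fatigue, focus difficulty, post-exertional heaviness).
patients = [(4, 3, 3), (3, 4, 2), (4, 4, 4), (3, 2, 3)]
controls = [(1, 0, 1), (0, 1, 0), (2, 1, 1), (1, 1, 0)]
rows = patients + controls
labels = ["patient"] * 4 + ["control"] * 4
tree = build_tree(rows, labels)
```

On this toy data a single threshold on the fatigue rating already separates the two groups, which mirrors how CART surfaces a small set of discriminative symptoms with explicit frequency/severity cut points.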
Defining the support (or frequency) of a subgraph is trivial when a database of graphs is given: it is simply the number of graphs in the database that contain the subgraph. However, if the input is one large graph, an appropriate support definition is much more difficult to find. In this paper we study the core problem, namely overlapping embeddings of the subgraph, in detail and suggest a definition that relies on the non-existence of equivalent ancestor embeddings in order to guarantee that the resulting support is anti-monotone. We prove this property and describe a method to compute the support defined in this way.
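Why overlapping embeddings make the single-graph setting hard can be shown with a brute-force embedding counter: the naive count of embeddings is not anti-monotone, since a larger subgraph can occur more often than one of its own subgraphs. The graph encoding and the example host graph below are invented for illustration; the paper's actual support definition (based on excluding equivalent ancestor embeddings) is more involved:

```python
from itertools import permutations

def embeddings(pattern, host):
    """Count injective, label- and edge-preserving embeddings of a small
    labelled pattern graph into a host graph by brute force.
    A graph is a pair (labels, edges): labels maps node -> label,
    edges is a set of frozensets {u, v} (undirected)."""
    p_labels, p_edges = pattern
    h_labels, h_edges = host
    p_nodes = list(p_labels)
    count = 0
    for image in permutations(h_labels, len(p_nodes)):
        m = dict(zip(p_nodes, image))
        if all(p_labels[v] == h_labels[m[v]] for v in p_nodes) and \
           all(frozenset({m[u], m[v]}) in h_edges
               for u, v in (tuple(e) for e in p_edges)):
            count += 1
    return count

# Host: one node labelled 'A' connected to three nodes labelled 'B'.
host = ({0: 'A', 1: 'B', 2: 'B', 3: 'B'},
        {frozenset({0, 1}), frozenset({0, 2}), frozenset({0, 3})})

single = ({0: 'A'}, set())                          # pattern: lone 'A' node
extended = ({0: 'A', 1: 'B'}, {frozenset({0, 1})})  # pattern: 'A'-'B' edge

print(embeddings(single, host))    # 1
print(embeddings(extended, host))  # 3 -> larger pattern, larger count
```

The three embeddings of the edge pattern all overlap in the single 'A' node, so a support measure that simply counts embeddings would grow while extending the pattern, breaking the pruning used by frequent-subgraph miners; overlap-aware definitions restore anti-monotonicity.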
This paper uses an ensemble of classifiers and active learning strategies to predict radiologists’ assessment of the nodules of the Lung Image Database Consortium (LIDC). In particular, the paper presents machine learning classifiers that model agreement among ratings in seven semantic characteristics: spiculation, lobulation, texture, sphericity, margin, subtlety, and malignancy. The ensemble of classifiers (which can be considered as a computer panel of experts) uses 64 image features of the nodules across four categories (shape, intensity, texture, and size) to predict semantic characteristics. The active learning begins the training phase with nodules on which radiologists’ semantic ratings agree, and incrementally learns how to classify nodules on which the radiologists do not agree. Using our proposed approach, the classification accuracy of the ensemble of classifiers is higher than the accuracy of a single classifier. In the long run, our proposed approach can be used to increase consistency among radiological interpretations by providing physicians with a “second read”.
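Two ingredients of the approach, majority voting across ensemble members and an active-learning schedule that starts from high-agreement nodules, can be sketched as follows. The function names and toy ratings are hypothetical, and radiologist agreement is approximated here simply as the spread of the ratings for one semantic characteristic:

```python
def majority_vote(predictions):
    """Combine the member classifiers' predictions for one nodule by
    majority vote (the 'computer panel of experts' idea)."""
    return max(set(predictions), key=predictions.count)

def agreement_first_schedule(samples):
    """Order training samples so that nodules with high radiologist
    agreement (small rating spread) are learned first, and the most
    contentious nodules are added last. Each sample is
    (nodule_id, ratings), with ratings one score per radiologist."""
    def spread(sample):
        ratings = sample[1]
        return max(ratings) - min(ratings)
    return sorted(samples, key=spread)

# Toy malignancy ratings from three radiologists per nodule.
samples = [("n1", [3, 5, 1]), ("n2", [4, 4, 4]), ("n3", [2, 3, 2])]
schedule = agreement_first_schedule(samples)
print([s[0] for s in schedule])  # ['n2', 'n3', 'n1']
```

Training then proceeds through `schedule` in order, so the ensemble first fits the unambiguous cases before being exposed to nodules the radiologists themselves disagree on.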