Variations in the shape and appearance of anatomical structures in medical images are often relevant radiological signs of disease. Assessing these variations is typically a manual process, parts of which automatic tools can help automate. A cloud-based evaluation framework is presented in this paper, including results of benchmarking current state-of-the-art medical imaging algorithms for anatomical structure segmentation and landmark detection: the VISCERAL Anatomy benchmarks. Participants implement their algorithms in cloud-based virtual machines with access to the training data only; the benchmark administrators then run the algorithms privately on a common unseen test set to compare their performance objectively. Overall, 120 computed tomography and magnetic resonance patient volumes were manually annotated to create a standard Gold Corpus containing a total of 1295 structures and 1760 landmarks. Ten participants contributed automatic algorithms for the organ segmentation task, and three for the landmark localization task. Different algorithms obtained the best scores across the four available imaging modalities and across subsets of anatomical structures. The annotation framework, resulting data set, evaluation setup, results and performance analysis from the three VISCERAL Anatomy benchmarks are presented in this article. Both the VISCERAL data set and the Silver Corpus, generated by fusing the outputs of the participant algorithms on a larger set of non-manually-annotated medical images, are available to the research community.
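The two core operations of such a benchmark, scoring a submitted segmentation against the Gold Corpus and fusing several algorithms' outputs into a Silver Corpus label, can be sketched as follows. This is a minimal illustration, assuming binary per-organ masks, the standard Dice coefficient as the overlap metric, and simple per-voxel majority voting as the fusion rule; the actual benchmark evaluation may use additional metrics and more elaborate fusion strategies.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

def majority_vote_fusion(masks: list) -> np.ndarray:
    """Fuse binary masks from several algorithms by per-voxel majority vote."""
    stacked = np.stack([m.astype(bool) for m in masks])
    return stacked.sum(axis=0) > (len(masks) / 2)

# Toy example on a 4x4 "image": 4 predicted voxels, 6 ground-truth voxels,
# 4 of which overlap -> Dice = 2*4 / (4+6) = 0.8
pred = np.zeros((4, 4), dtype=bool)
truth = np.zeros((4, 4), dtype=bool)
pred[:2, :2] = True
truth[:2, :3] = True
print(dice_coefficient(pred, truth))  # 0.8

# Fusing three algorithms: a voxel is kept where at least 2 of 3 agree
fused = majority_vote_fusion([pred, truth, pred])
print(fused.sum())  # 4 voxels survive the vote
```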
Objectives: The aims of this study were to determine the stability of radiomics features against computed tomography (CT) parameter variations and to study their discriminative power for tissue classification, using a 3D-printed CT phantom based on real patient data. Materials and Methods: A radiopaque 3D phantom was developed from real patient data using a potassium iodide solution paper-printing technique. Normal liver tissue and 3 lesion types (benign cyst, hemangioma, and metastasis) were manually annotated in the phantom. The stability and discriminative power of 86 radiomics features were assessed in measurements taken from 240 CT series with 8 parameter variations of reconstruction algorithms, reconstruction kernels, slice thickness, and slice spacing. Pairwise parameter group and pairwise tissue class comparisons were performed using Wilcoxon signed rank tests. Results: In total, 19,264 feature stability tests and 8256 discriminative power tests were performed. On average, the 8 pairwise comparisons between CT parameter variation groups showed statistically significant differences in 78 of 86 radiomics features. On the other hand, 84% of the univariate radiomics feature tests successfully differentiated the 4 classes of liver tissue with statistical significance. The 86 radiomics features were ranked according to the cumulative sum of successful stability and discriminative power tests. Conclusions: The differences in radiomics feature values obtained from different types of liver tissue are generally greater than the intraclass differences resulting from CT parameter variations.
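A single stability test of the kind described above can be sketched with SciPy's Wilcoxon signed rank test. The feature values below are synthetic and purely illustrative, standing in for paired measurements of one hypothetical radiomics feature from the same annotated regions reconstructed with two different CT kernels; a feature is deemed unstable here when the paired test rejects equality at the 5% level.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(42)

# Synthetic paired measurements of one radiomics feature for 30 annotated
# regions: reconstruction kernel B adds a systematic shift to kernel A.
kernel_a = rng.normal(loc=5.0, scale=0.5, size=30)
kernel_b = kernel_a + rng.normal(loc=0.3, scale=0.1, size=30)

stat, p = wilcoxon(kernel_a, kernel_b)
stable = p >= 0.05  # no significant paired difference => feature deemed stable
print(f"p = {p:.4g}, stable = {stable}")
```

With a systematic inter-kernel shift as large as the one simulated here, the test rejects equality and the feature counts as unstable for this parameter pair.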
Background: The introduction of digital pathology into clinical practice has led to the development of clinical workflows with digital images, in connection with pathology reports. Still, most of the current work is time-consuming manual analysis of image areas at different scales. Links with data in the biomedical literature are rare, and there is a need for search based on visual similarity within whole slide images (WSIs). Objectives: The main objective of the work presented is to integrate content-based visual retrieval with a WSI viewer in a prototype. Another objective is to connect cases analyzed in the viewer with cases or images from the biomedical literature, including search by visual similarity and by text. Methods: An innovative retrieval system for digital pathology is integrated with a WSI viewer, allowing users to define regions of interest (ROIs) in images as queries for finding visually similar areas in the same or other images, and to zoom in and out to find structures at varying magnification levels. The algorithms are based on a multimodal approach, exploiting both text information and content-based image features. Results: The retrieval system allows viewing WSIs and searching for regions that are visually similar to manually defined ROIs in various data sources (proprietary and public datasets, e.g., the scientific literature). The system was tested by pathologists, highlighting its capabilities and suggesting ways to improve it and make it more usable in clinical practice. Conclusions: The developed system can enhance the practice of pathologists by enabling them to use their experience and knowledge to control artificial intelligence tools for navigating image repositories for clinical decision support and teaching, where comparison with visually similar cases can help avoid misinterpretations.
The system is available as open source, allowing the scientific community to test, ideate and develop similar systems for research and clinical practice.
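The core retrieval step, ranking indexed regions by visual similarity to a query ROI, can be sketched as a nearest-neighbour search over feature vectors. Everything here is a simplified assumption for illustration: the descriptor names, the 4-dimensional toy vectors, and the use of cosine similarity stand in for whatever content-based image features and distance measure the actual system employs.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve_similar(query_vec: np.ndarray, index: dict, top_k: int = 3) -> list:
    """Rank indexed ROI descriptors by similarity to the query ROI."""
    scored = [(name, cosine_similarity(query_vec, vec))
              for name, vec in index.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)[:top_k]

# Toy index: hypothetical descriptors for ROIs from proprietary slides and
# a figure extracted from the literature.
index = {
    "slide1_roi3": np.array([0.90, 0.10, 0.00, 0.20]),
    "slide2_roi1": np.array([0.10, 0.80, 0.30, 0.00]),
    "pubmed_fig7": np.array([0.85, 0.15, 0.05, 0.25]),
}
query = np.array([0.88, 0.12, 0.02, 0.22])

for name, score in retrieve_similar(query, index, top_k=2):
    print(name, round(score, 3))
```

A production system would replace the dictionary with an approximate-nearest-neighbour index so that searches scale to large WSI repositories, but the ranking logic is the same.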