Purpose: Radiological imaging and image interpretation for clinical decision making are largely specific to each body region, such as head and neck, thorax, abdomen, pelvis, and extremities. In this study, we present a new solution for automatically trimming a given axial image stack into image volumes that satisfy a given body-region definition.

Methods: The proposed approach consists of the following steps. First, a set of reference objects is selected and roughly segmented. Virtual landmarks (VLs) for the objects are then identified by principal component analysis (PCA) and recursive subdivision of each object via its principal axes system. The VLs can be defined from the binary objects alone or with gray values also taken into account. The VLs are tethered to the object but may lie anywhere with respect to it, inside or outside, and only rarely on its surface. Second, a classic neural network regressor is configured to learn the geometric relationship between the VLs and the boundary locations of each body region; the trained network is then used to predict the locations of the body-region boundaries. In this study, we focus on three body regions (thorax, abdomen, and pelvis) and predict their superior and inferior axial locations, denoted TS(I), TI(I), AS(I), AI(I), PS(I), and PI(I), respectively, for any given volume image I. Two kinds of reference objects, the skeleton and the lungs plus airways, are employed to test the localization performance of the proposed approach.

Results: Our method is tested on low-dose unenhanced computed tomography (CT) images from 180 near whole-body 18F-fluorodeoxyglucose positron emission tomography/computed tomography (FDG-PET/CT) scans (including 34 whole-body scans), which are randomly divided into training and testing sets at a ratio of 85%:15%.
This division is repeated six times for the lungs case and three times for the skeleton case, with different random splits of the entire data set at this proportion. With the skeleton as the reference object, the overall mean localization errors for the six locations, expressed as number of slices (nS) and distance (dS) in mm, are nS: 3.4, 4.7, 4.1, 5.2, 5.2, and 3.9 and dS: 13.4, 18.9, 16.5, 20.8, 20.8, and 15.5 mm for binary objects, and nS: 4.1, 5.7, 4.3, 5.9, 5.9, and 4.0 and dS: 16.2, 22.7, 17.2, 23.7, 23.7, and 16.1 mm for gray objects. With the lungs and airways as the reference object, the corresponding results are nS: 4.0, 5.3, 4.1, 6.9, 6.9, and 7.4 and dS: 15.0, 19.7, 15.3, 26.2, 26.2, and 27.9 mm for binary objects, and nS: 3.9, 5.4, 3.6, 7.2, 7.2, and 7.6 and dS: 14.6, 20.1, 13.7, 27.3, 27.3, and 28.6 mm for gray objects.

Conclusions: Precise and automatic identification of body regions in whole-body or body-region tomographic images is vital for numerous medical image analysis and analytics applications. Despite its importance, this issue has received very little attention in the literature. We present a solution to this problem using the concept of virtual landmarks.
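The virtual-landmark construction described in Methods (centroid, then recursive subdivision of the object along its principal axes) can be sketched as follows. This is a minimal numpy illustration of the idea, not the authors' implementation: the exact subdivision scheme, recursion depth, and gray-value weighting are assumptions, and a toy box-shaped binary object stands in for a segmented reference object.

```python
import numpy as np

def virtual_landmarks(points, depth=1):
    """Return virtual landmarks of a voxel point set: its centroid, plus
    (recursively) the landmarks of the two halves obtained by splitting
    the set at the centroid along the first principal axis."""
    landmarks = [points.mean(axis=0)]
    if depth > 0 and len(points) > 1:
        centered = points - landmarks[0]
        # Principal axes from the eigen-decomposition of the coordinate covariance.
        _, eigvecs = np.linalg.eigh(np.cov(centered.T))
        axis = eigvecs[:, -1]                 # axis of largest variance
        proj = centered @ axis
        for side in (proj <= 0, proj > 0):
            if side.any():
                landmarks.extend(virtual_landmarks(points[side], depth - 1))
    return landmarks

# Toy binary object: a 2x2x10 box of voxels inside a 4x4x12 volume.
mask = np.zeros((4, 4, 12), dtype=bool)
mask[1:3, 1:3, 1:11] = True
pts = np.argwhere(mask).astype(float)
lms = np.array(virtual_landmarks(pts, depth=1))  # centroid + 2 sub-centroids
```

Note that the sub-centroids need not lie on the object itself, consistent with the observation that VLs may fall anywhere inside or outside the object; a gray-value variant would use intensity-weighted centroids and covariances instead.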
Radiological imaging and image interpretation for clinical decision making are mostly specific to each body region, such as head and neck, thorax, abdomen, pelvis, and extremities. For automating image analysis and ensuring consistency of results, standardized definitions of body regions and of the various anatomic objects, tissue regions, and zones within them become essential. Assuming that a standardized definition of body regions is available, a fundamental early step in automated image and object analytics is to automatically trim the given image stack into image volumes exactly satisfying the body-region definition. This paper presents a solution to this problem based on the concept of virtual landmarks and evaluates it on whole-body positron emission tomography/computed tomography (PET/CT) scans. The method first selects a set of reference objects, segments them roughly, and identifies virtual landmarks for them. The geometric relationship between these landmarks and the boundary locations of body regions in the cranio-caudal direction is then learned by a neural network regressor, and the locations are predicted. Based on low-dose unenhanced CT images of 180 near whole-body PET/CT scans (which include 34 whole-body PET/CT scans), the mean localization error for the superior and inferior thorax boundaries (TS and TI), expressed as number of slices (slice spacing ≈ 4 mm), is found to be 3 and 2 slices using the skeleton and 3 and 5 slices using the pleural spaces as reference objects, or, in mm, 13 and 10 mm (skeleton) and 10.5 and 20 mm (pleural spaces), respectively. Improvements of this performance via optimal selection of objects and virtual landmarks, and other object analytics applications, are currently being pursued.
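The regression step maps the virtual-landmark coordinates of a scan to the six body-region boundary locations (TS, TI, AS, AI, PS, PI). The abstract does not specify the network architecture, so the following is only a stand-in: a one-hidden-layer numpy MLP trained by full-batch gradient descent on synthetic data, where 7 hypothetical landmarks flattened to 21 features and 6 synthetic targets play the roles of the real VL features and boundary slice indices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 100 "scans", 7 landmarks x 3 coords = 21 features,
# 6 targets standing in for the TS, TI, AS, AI, PS, PI slice locations.
X = rng.normal(size=(100, 21))
Y = X @ rng.normal(size=(21, 6)) + 0.1 * rng.normal(size=(100, 6))

# One-hidden-layer MLP regressor, trained with full-batch gradient descent.
H, lr = 32, 1e-3
W1 = 0.1 * rng.normal(size=(21, H)); b1 = np.zeros(H)
W2 = 0.1 * rng.normal(size=(H, 6)); b2 = np.zeros(6)

def forward(X):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    return h, h @ W2 + b2             # hidden layer, predicted boundaries

mse = lambda pred: float(np.mean((pred - Y) ** 2))
loss_before = mse(forward(X)[1])

for _ in range(500):
    h, pred = forward(X)
    g = 2.0 * (pred - Y) / len(X)     # dMSE/dpred
    gh = (g @ W2.T) * (1.0 - h ** 2)  # backprop through tanh
    W2 -= lr * (h.T @ g); b2 -= lr * g.sum(axis=0)
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(axis=0)

loss_after = mse(forward(X)[1])       # training error shrinks as the map is learned
```

At inference time, the predicted continuous locations would be rounded to the nearest axial slice index to trim the image stack; in practice a standard library regressor with proper train/validation splitting, as in the paper's 85%:15% protocol, would replace this hand-rolled loop.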