Digital tomosynthesis mammography (DTM) is a promising new modality for breast cancer detection. In DTM, projection-view images are acquired at a limited number of angles over a limited angular range and the imaged volume is reconstructed from the two-dimensional projections, thus providing three-dimensional structural information of the breast tissue. In this work, we investigated three representative reconstruction methods for this limited-angle cone-beam tomographic problem: the backprojection (BP) method, the simultaneous algebraic reconstruction technique (SART), and the maximum likelihood method with the convex algorithm (ML-convex). The SART and ML-convex methods were both initialized with BP results to achieve efficient reconstruction. A second-generation GE prototype tomosynthesis mammography system with a stationary digital detector was used for image acquisition. Projection-view images were acquired at 21 angles in 3° increments over a ±30° angular range. We used an American College of Radiology phantom and designed three additional phantoms to evaluate the image quality and reconstruction artifacts. In addition to visual comparison of the reconstructed images of different phantom sets, we employed the contrast-to-noise ratio (CNR), a line profile of features, an artifact spread function (ASF), a relative noise power spectrum (NPS), and a line object spread function (LOSF) to quantitatively evaluate the reconstruction results. It was found that for the phantoms with homogeneous background, the BP method resulted in less noisy tomosynthesized images and higher CNR values for masses than the SART and ML-convex methods. However, the two iterative methods provided greater contrast enhancement for both masses and calcifications, sharper LOSF, and reduced interplane blurring and artifacts with better ASF behaviors for masses.
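As a concrete reference for the CNR figure of merit used in this evaluation, the common definition (mean object signal minus mean background signal, divided by the background noise) can be sketched as below; the study's exact ROI choices, and any variant of the definition it may use, are not reproduced here:

```python
import numpy as np

def cnr(object_roi, background_roi):
    """Contrast-to-noise ratio under its common definition (the study may
    use a variant): difference of object and background mean pixel values,
    normalized by the background standard deviation."""
    return (object_roi.mean() - background_roi.mean()) / background_roi.std()
```

A higher CNR indicates that the mass stands out more clearly against the background noise in the reconstructed slice.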
For a contrast-detail phantom with a heterogeneous tissue-mimicking background, the BP method had strong blurring artifacts along the x-ray source motion direction that obscured the contrast-detail objects, while the other two methods removed the superimposed breast structures and significantly improved object conspicuity. With a properly selected relaxation parameter, the SART method with one iteration can provide tomosynthesized images comparable to those obtained from the ML-convex method with seven iterations, when BP results are used as initialization for both methods.
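A minimal, generic form of one SART iteration, including the relaxation parameter discussed above, can be sketched as follows; the system matrix `A`, projection data `b`, and relaxation value are illustrative placeholders, not the prototype system's actual cone-beam geometry or the study's tuned parameter:

```python
import numpy as np

def sart_update(x, A, b, relax=0.3):
    """One generic SART iteration: x is the current image estimate, A the
    system matrix (rays x voxels), b the measured projections, and relax
    the relaxation parameter. Residuals are normalized by ray weights on
    backprojection and by voxel weights on the update."""
    row_sums = A.sum(axis=1)  # total weight along each ray
    col_sums = A.sum(axis=0)  # total weight through each voxel
    residual = (b - A @ x) / np.where(row_sums > 0, row_sums, 1.0)
    correction = (A.T @ residual) / np.where(col_sums > 0, col_sums, 1.0)
    return x + relax * correction
```

In the study's setup, `x` would be initialized with the BP reconstruction rather than zeros, which is what allows a single SART iteration to be competitive.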
Deep learning is the state-of-the-art machine learning approach. The success of deep learning in many pattern recognition applications has brought excitement and high expectations that deep learning, or artificial intelligence (AI), can bring revolutionary changes in health care. Early studies of deep learning applied to lesion detection or classification have reported performance superior to that of conventional techniques, or even better than radiologists in some tasks. The potential of applying deep-learning-based medical image analysis to computer-aided diagnosis (CAD), thus providing decision support to clinicians and improving the accuracy and efficiency of various diagnostic and treatment processes, has spurred new research and development efforts in CAD. Despite the optimism in this new era of machine learning, the development and implementation of CAD or AI tools in clinical practice face many challenges. In this chapter, we discuss some of these issues and the efforts needed to develop robust deep-learning-based CAD tools and integrate these tools into the clinical workflow, thereby advancing towards the goal of providing reliable intelligent aids for patient care.
The roles of physicists in medical imaging have expanded over the years, from the study of imaging systems (sources and detectors) and dose to the assessment of image quality and perception, the development of image processing techniques, and the development of image analysis methods to assist in detection and diagnosis. The latter is a natural extension of medical physicists' goals in developing imaging techniques to help physicians acquire diagnostic information and improve clinical decisions. Studies indicate that radiologists do not detect all abnormalities on images that are visible on retrospective review, and they do not always correctly characterize abnormalities that are found. The potential use of computers for the analysis of radiographic abnormalities has been considered since the 1950s. It was not until the mid-1980s, however, that medical physicists and radiologists began major research efforts for computer-aided detection or computer-aided diagnosis (CAD), that is, using the computer output as an aid to radiologists (as opposed to a completely automatic computer interpretation), focusing initially on methods for the detection of lesions on chest radiographs and mammograms. Since then, extensive investigations of computerized image analysis for detection or diagnosis of abnormalities in a variety of 2D and 3D medical images have been conducted. The growth of CAD over the past 20 years has been tremendous: from the early days of time-consuming film digitization and CPU-intensive computations on a limited number of cases to its current status, in which developed CAD approaches are evaluated rigorously on large, clinically relevant databases.
CAD research by medical physicists includes many aspects: collecting relevant normal and pathological cases; developing computer algorithms appropriate for the medical interpretation task, including those for segmentation, feature extraction, and classifier design; developing methodology for assessing CAD performance; validating the algorithms using appropriate cases to measure performance and robustness; conducting observer studies to evaluate radiologists in the diagnostic task without and with the use of the computer aid; and ultimately assessing performance with a clinical trial. Medical physicists also have an important role in quantitative imaging, by validating the quantitative integrity of scanners and by developing imaging techniques and image analysis tools that extract quantitative data in a more accurate and automated fashion. As imaging systems become more complex and the need for better quantitative information from images grows, the future will include combined research efforts from physicists working in CAD and those working on quantitative imaging systems, to readily yield information on morphology, function, molecular structure, and more, from animal imaging research to clinical patient care. A historical review of CAD and a discussion of challenges for the future are presented here, along with the extension to quantitative image analysis.
We are developing a computer-aided diagnosis (CAD) system to classify malignant and benign lung nodules found on CT scans. A fully automated system was designed to segment the nodule from its surrounding structured background in a local volume of interest (VOI) and to extract image features for classification. Image segmentation was performed with a three-dimensional (3D) active contour (AC) method. A data set of 96 lung nodules (44 malignant, 52 benign) from 58 patients was used in this study. The 3D AC model is based on the two-dimensional AC with the addition of three new energy components to take advantage of 3D information: (1) 3D gradient, which guides the active contour to seek the object surface, (2) 3D curvature, which imposes a smoothness constraint in the z direction, and (3) mask energy, which penalizes contours that grow beyond the pleura or thoracic wall. The search for the best energy weights in the 3D AC model was guided by a simplex optimization method. Morphological and gray-level features were extracted from the segmented nodule. The rubber band straightening transform (RBST) was applied to the shell of voxels surrounding the nodule. Texture features based on run-length statistics were extracted from the RBST image. A linear discriminant analysis classifier with stepwise feature selection was designed using a second simplex optimization to select the most effective features. Leave-one-case-out resampling was used to train and test the CAD system. The system achieved a test area under the receiver operating characteristic curve (Az) of 0.83 ± 0.04. Our preliminary results indicate that use of the 3D AC model and the 3D texture features surrounding the nodule is a promising approach to the segmentation and classification of lung nodules with CAD. The segmentation performance of the 3D AC model trained with our data set was evaluated with 23 nodules available in the Lung Image Database Consortium (LIDC).
The lung nodule volumes segmented by the 3D AC model for best classification were generally larger than those outlined by the LIDC radiologists using visual judgment of nodule boundaries.
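The simplex search for the energy weights described above can be illustrated with a generic Nelder-Mead optimization (the modern form of the simplex method available in SciPy). The objective below is a toy stand-in for the real segmentation-quality score, and the target weight values are made up for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the segmentation-quality objective; in the actual system,
# each evaluation would run the 3D active contour with the candidate energy
# weights and score the resulting segmentation against reference contours.
TARGET = np.array([1.0, 0.5, 0.25])  # hypothetical "best" weights

def objective(weights):
    return float(np.sum((weights - TARGET) ** 2))

# Nelder-Mead is the classic derivative-free simplex search, suited to
# objectives (like segmentation quality) with no usable gradient.
res = minimize(objective, x0=np.zeros(3), method="Nelder-Mead")
best_weights = res.x
```

The same machinery would apply to the second simplex optimization mentioned for classifier design, with a different objective.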
The authors investigated the classification of regions of interest (ROIs) on mammograms as either mass or normal tissue using a convolution neural network (CNN). A CNN is a backpropagation neural network with two-dimensional (2-D) weight kernels that operate on images. A generalized, fast, and stable implementation of the CNN was developed. The input images to the CNN were obtained from the ROIs using two techniques. The first technique employed averaging and subsampling. The second technique employed texture feature extraction methods applied to small subregions inside the ROI. Features computed over different subregions were arranged as texture images, which were subsequently used as CNN inputs. The effects of CNN architecture and texture feature parameters on classification accuracy were studied. Receiver operating characteristic (ROC) methodology was used to evaluate the classification accuracy. A data set consisting of 168 ROIs containing biopsy-proven masses and 504 ROIs containing normal breast tissue was extracted from 168 mammograms by radiologists experienced in mammography. This data set was used for training and testing the CNN. With the best combination of CNN architecture and texture feature parameters, the area under the test ROC curve reached 0.87, which corresponded to a true-positive fraction of 90% at a false-positive fraction of 31%. The authors' results demonstrate the feasibility of using a CNN for classification of masses and normal tissue on mammograms.
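The first input-preparation technique, averaging and subsampling, amounts to a block average: the ROI is partitioned into factor x factor blocks and each block is replaced by its mean, shrinking the image to the CNN input size. The function below is a generic illustration of that operation, not the authors' exact implementation:

```python
import numpy as np

def average_subsample(roi, factor):
    """Average-and-subsample an ROI by an integer factor (which must divide
    both ROI dimensions): each factor x factor block becomes one output
    pixel holding the block mean."""
    h, w = roi.shape
    blocks = roi.reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))
```

Block averaging before subsampling suppresses noise that plain decimation would alias into the reduced image.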
We are developing a computer-aided diagnosis (CAD) system for lung nodule detection on thoracic helical computed tomography (CT) images. In the first stage of this CAD system, lung regions are identified by a k-means clustering technique. Each lung slice is classified as belonging to the upper, middle, or lower part of the lung volume. Within each lung region, structures are segmented again using weighted k-means clustering. These structures may include true lung nodules and normal structures consisting mainly of blood vessels. Rule-based classifiers are designed to distinguish nodules and normal structures using 2D and 3D features. After rule-based classification, linear discriminant analysis (LDA) is used to further reduce the number of false positive (FP) objects. We performed a preliminary study using 1454 CT slices from 34 patients with 63 lung nodules. When only LDA classification was applied to the segmented objects, the sensitivity was 84% (53/63) with 5.48 (7961/1454) FP objects per slice. When rule-based classification was used before LDA, the free response receiver operating characteristic (FROC) curve improved over the entire sensitivity and specificity ranges of interest. In particular, the FP rate decreased to 1.74 (2530/1454) objects per slice at the same sensitivity. Thus, compared to FP reduction with LDA alone, the inclusion of rule-based classification led to an improvement in detection accuracy for the CAD system. These preliminary results demonstrate the feasibility of our approach to lung nodule detection and FP reduction on CT images.
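A rule-based classifier of the kind described can be sketched as simple thresholds on candidate-object features, discarding objects that look vessel-like (highly elongated) or too small to be nodules. The feature names and threshold values below are hypothetical, chosen only to illustrate the pattern; the actual system uses many 2D and 3D features tuned on training data:

```python
# Hypothetical thresholds for illustration; the real rules and their values
# are tuned on training data and are not reproduced here.
def rule_based_filter(candidates, min_volume=50.0, max_elongation=4.0):
    """Keep candidate objects whose simple shape features are nodule-like:
    large enough in volume and not strongly elongated (elongated objects
    are most often blood vessels)."""
    return [c for c in candidates
            if c["volume"] >= min_volume and c["elongation"] <= max_elongation]
```

Objects surviving this cheap filter would then proceed to the LDA stage for further false-positive reduction.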
A new rubber band straightening transform (RBST) is introduced for characterization of mammographic masses as malignant or benign. The RBST maps a band of pixels surrounding a segmented mass onto the Cartesian plane (the RBST image). In the RBST image, the border of a mammographic mass appears approximately as a horizontal line, and possible spiculations resemble vertical lines. In this study, the effectiveness of a set of directional texture features extracted from the RBST images was compared with that of features extracted from the images before the RBST. A database of 168 mammograms containing biopsy-proven malignant and benign breast masses was digitized at a pixel size of 100 microns x 100 microns. Regions of interest (ROIs) containing the biopsied mass were extracted from each mammogram by an experienced radiologist. A clustering algorithm was employed for automated segmentation of each ROI into a mass object and background tissue. Texture features extracted from spatial gray-level dependence matrices and run-length statistics matrices were evaluated for three different regions and representations: (i) the entire ROI; (ii) a band of pixels surrounding the segmented mass object in the ROI; and (iii) the RBST image. Linear discriminant analysis was used for classification, and receiver operating characteristic (ROC) analysis was used to evaluate the classification accuracy. Using the ROC curves as the performance measure, features extracted from the RBST images were found to be significantly more effective than those extracted from the original images. Features extracted from the RBST images yielded an area under the ROC curve (Az) of 0.94 for classification of mammographic masses as malignant or benign.
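The idea of the RBST can be illustrated for the simplest case of a circular boundary: at each boundary point, pixels are sampled outward along the normal and stacked column by column, so the border maps to a horizontal line in the output. A real mass requires an ordered boundary of arbitrary shape and interpolated sampling; this sketch assumes a circle and uses nearest-neighbour sampling:

```python
import numpy as np

def rbst_circle(image, center, radius, n_angles=64, band=8):
    """Sketch of the rubber band straightening transform for a circular
    boundary: sample `band` pixels outward along the normal at each of
    `n_angles` boundary points, producing a (band x n_angles) image in
    which the boundary becomes the top horizontal row."""
    cy, cx = center
    out = np.zeros((band, n_angles))
    for j, theta in enumerate(np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)):
        for i in range(band):
            r = radius + i  # step outward along the (radial) normal
            y = int(round(cy + r * np.sin(theta)))
            x = int(round(cx + r * np.cos(theta)))
            out[i, j] = image[y, x]  # nearest-neighbour sampling
    return out
```

In this straightened coordinate system, spiculations radiating from the mass become near-vertical streaks, which is what makes directional texture features effective on the RBST image.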
We are developing new computer vision techniques for characterization of breast masses on mammograms. We had previously developed a characterization method based on texture features. The goal of the present work was to improve our characterization method by making use of morphological features. Toward this goal, we have developed a fully automated, three-stage segmentation method that includes clustering, active contour, and spiculation detection stages. After segmentation, morphological features describing the shape of the mass were extracted. Texture features were also extracted from a band of pixels surrounding the mass. Stepwise feature selection and linear discriminant analysis were employed in the morphological, texture, and combined feature spaces for classifier design. The classification accuracy was evaluated using the area (Az) under the receiver operating characteristic curve. A data set containing 249 films from 102 patients was used. When the leave-one-case-out method was applied to partition the data set into training and test sets, the average test Az for the task of classifying the mass on a single mammographic view was 0.83 ± 0.02, 0.84 ± 0.02, and 0.87 ± 0.02 in the morphological, texture, and combined feature spaces, respectively. The improvement obtained by supplementing texture features with morphological features in classification was statistically significant (p = 0.04). For classifying a mass as malignant or benign, we combined the leave-one-case-out discriminant scores from different views of a mass to obtain a summary score. In this task, the test Az value using the combined feature space was 0.91 ± 0.02. Our results indicate that combining texture features with morphological features extracted from automatically segmented mass boundaries is an effective approach for computer-aided characterization of mammographic masses.
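The linear discriminant analysis stage used across these studies can be sketched as a two-class Fisher discriminant: project feature vectors onto the axis that best separates the class means relative to the pooled within-class scatter. Stepwise feature selection and the leave-one-case-out resampling protocol are omitted here for brevity:

```python
import numpy as np

def fisher_lda_scores(X_train, y_train, X_test):
    """Minimal two-class Fisher linear discriminant. Returns a scalar
    discriminant score per test sample; higher scores indicate class 1.
    (The studies additionally use stepwise feature selection, not shown.)"""
    X0 = X_train[y_train == 0]
    X1 = X_train[y_train == 1]
    # Pooled within-class scatter (sum of per-class covariance estimates).
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    # Discriminant axis: Sw^{-1} (mu1 - mu0).
    w = np.linalg.solve(Sw, X1.mean(axis=0) - X0.mean(axis=0))
    return X_test @ w
```

A threshold on these scores, or the scores themselves fed to ROC analysis, yields the Az values reported above.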