Several computer-aided diagnosis (CAD) systems have been developed for mammography. They are widely used in countries such as the U.S., where mammography studies are conducted frequently; however, they are not yet globally adopted for clinical use because of their inconsistent performance, which can be attributed to their reliance on hand-crafted features. Hand-crafted features are difficult to apply to mammogram images, which vary with factors such as patient breast density and differences in imaging devices. To address these problems, several studies have leveraged deep convolutional neural networks, which do not require hand-crafted features. Among recent object detectors, RetinaNet is particularly promising: it is a simple one-stage detector that is fast and efficient while achieving state-of-the-art performance. RetinaNet has proven effective on conventional object detection tasks but has not been tested on detecting masses in mammograms. Thus, we propose a mass detection model based on RetinaNet. To validate its performance in diverse use cases, we construct several experimental setups using the public dataset INbreast and the in-house dataset GURO. In addition to training and testing on the same dataset (i.e., training and testing on INbreast), we evaluate our mass detection model in setups with additional training data (i.e., training on INbreast + GURO and testing on INbreast) and in setups with pre-trained weights (i.e., training and testing on INbreast with weights pre-trained on GURO). In all experiments, our mass detection model achieves performance comparable to or better than that of more complex state-of-the-art models, including two-stage object detectors. The results also show that initializing with weights pre-trained on a dataset achieves performance similar to using that dataset directly in the training phase.
Therefore, we make our mass detection model’s weights, pre-trained on both GURO and INbreast, publicly available. We expect that researchers training RetinaNet for mass detection on their own in-house datasets can use our pre-trained weights to leverage the features learned from both datasets.
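RetinaNet's one-stage efficiency rests on the focal loss, which down-weights easy background anchors so the rare positive (mass) anchors dominate the gradient. The abstract does not detail this, so the following is a minimal NumPy sketch of the standard binary focal loss formulation, with the commonly used default values for α and γ, not the authors' exact training code:

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).

    p: predicted foreground probabilities for each anchor.
    y: binary labels (1 = anchor matched to a mass, 0 = background).
    """
    p = np.clip(p, 1e-7, 1 - 1e-7)            # numerical stability
    p_t = np.where(y == 1, p, 1 - p)          # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return -alpha_t * (1 - p_t) ** gamma * np.log(p_t)

# A confidently classified background anchor contributes far less loss
# than a hard, misclassified one, so the many easy negatives in a dense
# detector do not swamp the signal from the few mass anchors.
easy_bg = focal_loss(np.array([0.01]), np.array([0]))
hard_bg = focal_loss(np.array([0.90]), np.array([0]))
```

With γ = 0 and α = 0.5, the expression reduces (up to the factor 0.5) to the ordinary cross-entropy, which is a quick sanity check on an implementation.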
Background
Accurately detecting and examining lung nodules early is key to diagnosing lung cancer and thus one of the best ways to prevent lung cancer deaths. Radiologists spend countless hours detecting small spherical nodules in computed tomography (CT) images. Moreover, even after detecting nodule candidates, considerable effort and time are required to determine whether they are real nodules. The aim of this paper is to introduce a high-performance nodule classification method that uses three-dimensional deep convolutional neural networks (DCNNs) and an ensemble method to distinguish nodules from non-nodules.

Methods
In this paper, we use a three-dimensional deep convolutional neural network (3D DCNN) with shortcut connections and a 3D DCNN with dense connections for lung nodule classification. The shortcut and dense connections alleviate the vanishing-gradient problem by allowing the gradient to pass quickly and directly, and they help the deep networks learn both general and distinctive features of lung nodules. Moreover, we increase the dimension of the DCNNs from two to three to capture 3D features. Compared with the shallow 3D CNNs used in previous studies, deep 3D CNNs more effectively capture the features of spherical nodules. In addition, we use an alternative ensemble method, the checkpoint ensemble method, to boost performance.

Results
The performance of our nodule classification method is compared with that of the state-of-the-art methods used in the LUng Nodule Analysis 2016 (LUNA16) Challenge. Our method achieves higher competition performance metric (CPM) scores than the state-of-the-art deep learning methods.
In the experimental setup ESB-ALL, the 3D DCNN with shortcut connections and the 3D DCNN with dense connections, combined with the checkpoint ensemble method, achieved the highest CPM score of 0.910.

Conclusion
The results demonstrate that our method, which combines a 3D DCNN with shortcut connections, a 3D DCNN with dense connections, and the checkpoint ensemble method, effectively captures the 3D features of nodules and distinguishes nodules from non-nodules.
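The checkpoint ensemble method described above combines models saved at different points of a single training run rather than training several networks independently. A minimal NumPy sketch, under the assumption that the ensemble averages per-candidate nodule probabilities over the saved checkpoints (the candidate scores below are illustrative, not the paper's data):

```python
import numpy as np

def checkpoint_ensemble(checkpoint_predictions):
    """Average per-candidate nodule probabilities over saved checkpoints.

    checkpoint_predictions: list of 1-D arrays, one per checkpoint,
    each holding the predicted nodule probability for every candidate.
    """
    stacked = np.stack(checkpoint_predictions)  # (n_checkpoints, n_candidates)
    return stacked.mean(axis=0)

# Three hypothetical checkpoints scoring the same four nodule candidates.
preds = [
    np.array([0.90, 0.20, 0.55, 0.10]),
    np.array([0.80, 0.30, 0.60, 0.05]),
    np.array([0.85, 0.25, 0.65, 0.15]),
]
ensembled = checkpoint_ensemble(preds)
```

Averaging checkpoints from one run costs no extra training time, which is the usual motivation for this style of ensembling over training multiple independent models.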