2016
DOI: 10.1155/2016/6215085

Pulmonary Nodule Classification with Deep Convolutional Neural Networks on Computed Tomography Images

Abstract: Computer-aided detection (CAD) systems can assist radiologists by offering a second opinion on the early diagnosis of lung cancer. Classification and feature representation play critical roles in false-positive reduction (FPR) in lung nodule CAD. We design a deep convolutional neural network method for nodule classification, which has the advantages of automatically learning feature representations and strong generalization ability. A specified network structure for nodule images is proposed to solve the recognition of three types of…
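The abstract describes a CNN that learns nodule feature representations directly from CT patches and distinguishes three nodule types. As a rough illustration only (the paper's actual architecture, patch size, and framework are not stated in this excerpt), a minimal PyTorch-style sketch of a small three-class patch classifier might look like this:

```python
# Minimal sketch (not the authors' exact architecture): a small 2D CNN that
# classifies CT nodule patches into three classes, as the abstract describes.
# The framework (PyTorch), patch size (32x32), and layer widths are assumptions.
import torch
import torch.nn as nn

class NodulePatchCNN(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # single-channel CT patch
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                             # 16x16 -> 8x8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 64),
            nn.ReLU(inplace=True),
            nn.Linear(64, num_classes),                  # logits for 3 nodule types
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = NodulePatchCNN()
    dummy_patch = torch.randn(4, 1, 32, 32)   # batch of 4 hypothetical 32x32 patches
    print(model(dummy_patch).shape)           # torch.Size([4, 3])
```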

Cited by 154 publications (90 citation statements)
References 32 publications (31 reference statements)
Citing publications: 2017 to 2023
“…As summarized in Table 2, most works employ simple…

Table 2 entries (as quoted):
- Anavi et al (2015), Image retrieval: Combines classical features with those from a pre-trained CNN for image retrieval using SVM
- Bar et al (2015), Pathology detection: Features from a pre-trained CNN and low-level features are used to detect various diseases
- Anavi et al (2016), Image retrieval: Continuation of Anavi et al (2015), adding age and gender as features
- Bar et al (2016), Pathology detection: Continuation of Bar et al (2015), more experiments and adding feature selection
- Cicero et al (2016), Pathology detection: GoogLeNet CNN detects five common abnormalities, trained and validated on a large data set
- Tuberculosis detection: Processes entire radiographs with a pre-trained, fine-tuned network with 6 convolution layers
- Kim and Hwang (2016), Tuberculosis detection: MIL framework produces heat map of suspicious regions via deconvolution
- Shin et al (2016a), Pathology detection: CNN detects 17 diseases, large data set (7k images), recurrent networks produce short captions
- Rajkomar et al (2017), Frontal/lateral classification: Pre-trained CNN performs frontal/lateral classification task
- Yang et al (2016c), Bone suppression: Cascade of CNNs at increasing resolution learns bone images from gradients of radiographs
- Wang et al (2016a), Nodule classification: Combines classical features with CNN features from a pre-trained ImageNet CNN
- Used a standard feature extractor and a pre-trained CNN to classify detected lesions as benign peri-fissural nodules
- van …: Detects nodules with pre-trained CNN features from orthogonal patches around the candidate, classified with SVM
- Shen et al (2015b): Three CNNs at different scales estimate nodule malignancy scores of radiologists (LIDC-IDRI data set)
- Chen et al (2016e): Combines features from CNN, SDAE, and classical features to characterize nodules from the LIDC-IDRI data set
- Ciompi et al (2016): Multi-stream CNN to classify nodules into subtypes: solid, part-solid, non-solid, calcified, spiculated, perifissural
- Dou et al (2016b): Uses 3D CNN around nodule candidates; ranks #1 in the LUNA16 nodule detection challenge
- Li et al (2016a): Detects nodules with a 2D CNN that processes small patches around a nodule
- Setio et al (2016): Detects nodules with an end-to-end trained multi-stream CNN with 9 patches per candidate
- Shen et al (2016): 3D CNN classifies the volume centered on a nodule as benign/malignant; results are combined to a patient-level prediction
- Sun et al (2016b): Same dataset as Shen et al (2015b)…”
Section: Eye
Citation type: Mentioning (confidence: 99%)
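Several entries in the excerpt above follow the same pattern: features taken from a pre-trained ImageNet CNN are classified with a classical model such as an SVM. A minimal sketch of that pattern, assuming a ResNet-18 backbone and random stand-in data (the cited works used their own extractors and clinical data), could look as follows:

```python
# Sketch of one recurring pattern in the table above: pre-trained CNN features
# fed to an SVM. Backbone choice (ResNet-18), patch shape, and the random data
# are illustrative assumptions, not details from the cited papers.
import numpy as np
import torch
import torchvision.models as models
from sklearn.svm import SVC

def extract_features(patches: np.ndarray) -> np.ndarray:
    """patches: (N, 3, 224, 224) nodule patches replicated to 3 channels."""
    backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    backbone.fc = torch.nn.Identity()      # drop the ImageNet classification head
    backbone.eval()
    with torch.no_grad():
        feats = backbone(torch.from_numpy(patches).float())
    return feats.numpy()                   # (N, 512) feature vectors

# Hypothetical data: random patches with binary nodule / non-nodule labels.
patches = np.random.rand(20, 3, 224, 224).astype(np.float32)
labels = np.random.randint(0, 2, size=20)

features = extract_features(patches)
clf = SVC(kernel="rbf").fit(features, labels)   # SVM on the CNN features
print(clf.predict(features[:5]))
```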
“…With the increasing improvement of CAD systems, the majority of studies have demonstrated that CAD systems could detect more nodules than radiologists, even after double reading. Moreover, in comparison with most CAD systems based on supervised machine learning algorithms, multiple studies have shown that deep learning-based CAD systems (DL-CAD) have superior detection rates and further reduce false positive rates. However, CAD systems are far from perfect and thus require further development to be improved.…”
Section: Introduction
Citation type: Mentioning (confidence: 99%)
“…Figure 3 shows examples of CNNs, which can be used to classify data types as diverse as purchasing preferences and satellite images. Examples include tumor grading with MRI, 100 classification of pulmonary nodules with 2D CT, 113 and predicting response to neoadjuvant therapy with positron emission tomography (PET) images. Because images contain many common features that are relevant to classification (eg, edges, shapes, colors), the core layers of CNNs relating to these can be transferred to other classification tasks.…”
Section: Deep Learning
Citation type: Mentioning (confidence: 99%)
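The excerpt above notes that the core layers of a CNN learn generic features (edges, shapes, colors) that can be transferred to other classification tasks. A hedged sketch of that transfer-learning recipe, assuming a ResNet-18 backbone and a hypothetical two-class medical task (none of which are specified in the cited text):

```python
# Sketch of the transfer idea in the excerpt: reuse the core convolutional
# layers of an ImageNet-pre-trained CNN and train only a new task head.
# ResNet-18 and the two-class head are assumptions for illustration.
import torch
import torch.nn as nn
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the transferred core layers (generic edge/shape/texture filters).
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a task-specific head, e.g. benign vs. malignant.
model.fc = nn.Linear(model.fc.in_features, 2)

# Only the new head's parameters are optimized.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One hypothetical training step on a dummy batch of 224x224 images.
images, targets = torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), targets)
loss.backward()
optimizer.step()
print(float(loss))
```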
“…Even using only 2D images, CNNs have been shown to improve accuracy compared with conventional radiomic approaches. Examples include tumor grading with MRI, 100 classification of pulmonary nodules with 2D CT, 113 and predicting response to neoadjuvant therapy with positron emission tomography (PET) images. 114 To better use volumetric information from the 3D medical images, information from 2D patches of the 3 orthogonal views (axial, sagittal, and coronal) can be used, which is a method referred to as 2.5D.…”
Section: Deep Learning
Citation type: Mentioning (confidence: 99%)
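The 2.5D approach described in the excerpt can be illustrated with a short sketch: cut 2D patches from the three orthogonal views (axial, coronal, sagittal) around a candidate and feed them to a CNN. The patch size, the channel-stacking fusion, and the toy network below are assumptions, not details from the cited papers:

```python
# Sketch of the "2.5D" idea: three orthogonal 2D patches around a nodule
# candidate, stacked as channels of a small CNN. All sizes are illustrative.
import torch
import torch.nn as nn

def orthogonal_patches(volume: torch.Tensor, center: tuple, size: int = 32) -> torch.Tensor:
    """volume: (D, H, W) CT volume; center: (z, y, x) candidate voxel.
    Returns a (3, size, size) tensor of axial, coronal, and sagittal patches."""
    z, y, x = center
    h = size // 2
    axial    = volume[z, y - h:y + h, x - h:x + h]   # fixed z
    coronal  = volume[z - h:z + h, y, x - h:x + h]   # fixed y
    sagittal = volume[z - h:z + h, y - h:y + h, x]   # fixed x
    return torch.stack([axial, coronal, sagittal])

# A simple CNN that treats the three views as input channels; keeping separate
# per-view streams and merging them later is the other common fusion option.
net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 2),              # nodule vs. non-nodule logits
)

volume = torch.randn(64, 128, 128)               # hypothetical CT volume
patches = orthogonal_patches(volume, center=(32, 64, 64))
print(net(patches.unsqueeze(0)).shape)           # torch.Size([1, 2])
```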