2017 Eighth International Conference on Intelligent Computing and Information Systems (ICICIS)
DOI: 10.1109/intelcis.2017.8260029
Lung nodule segmentation and detection in computed tomography

Cited by 24 publications (13 citation statements)
References 25 publications
“…Please note that "TA" denotes a traditional algorithm, "ML" machine learning, and "CNN" a convolutional neural network. In addition, "V -, fps=n" means that at n false positives per scan the corresponding recall rate is no greater than V. For example, the entry for El-Regaily (V=0.705 -, n=4) indicates that at 4 false positives per scan the recall rate is less than or equal to 0.705 (the original paper [15] reports only that the recall rate is 0.705 at 4.1 false positives per scan).…”
Section: Experimental Comparison
confidence: 99%
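The "recall at n false positives per scan" readout described above is one operating point on a FROC curve. A minimal sketch of how such a point could be computed from score-ranked detections follows; the `recall_at_fps` helper and its inputs are illustrative assumptions, not code from the cited papers:

```python
from typing import List, Tuple

def recall_at_fps(detections: List[Tuple[float, bool]], num_scans: int,
                  max_fps_per_scan: float) -> float:
    """Recall at the operating point where false positives per scan
    do not exceed max_fps_per_scan (FROC-style readout).

    detections: (confidence score, is_true_positive) pairs over all scans.
    """
    total_positives = sum(1 for _, is_tp in detections if is_tp)
    # Sweep the confidence threshold from high to low by walking
    # detections in descending score order.
    ranked = sorted(detections, key=lambda d: d[0], reverse=True)
    tp = fp = 0
    best_recall = 0.0
    for _score, is_tp in ranked:
        if is_tp:
            tp += 1
        else:
            fp += 1
        # Record recall only while the FP-per-scan budget is respected.
        if fp / num_scans <= max_fps_per_scan:
            best_recall = tp / total_positives
    return best_recall
```

Tightening `max_fps_per_scan` can only lower (never raise) the achievable recall, which is why the citing paper reads the 4.1 fps result as an upper bound at fps=4.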
“…From our experiments, we achieved 99.23% accuracy, which is significantly better than the previously reported scores of 96.6% [2], 70.5% [9], 88.0% [14], 98.9% [18], 97.2% [32], and 93.25% [28]. The resulting sensitivity rate of 96.875% across the different classifiers is also dominant, since it is higher than in previous methodologies.…”
Section: Results
confidence: 64%
“…The resulting sensitivity rate of 96.875% across the different classifiers is also dominant, since it is higher than in previous methodologies, whose sensitivities are 96.6% [2], 77.7% [9], 84.6% [14], 98.4% [18], 96.0% [32], and 93.12% [28].…”
Section: Results
confidence: 93%
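For context, sensitivity (recall) is the fraction of actual nodules that are detected; the 96.875% figure quoted above is exactly 31/32, i.e. it could arise from 31 of 32 nodules found. A one-line sketch with illustrative counts (not taken from the cited paper):

```python
def sensitivity(true_positives: int, false_negatives: int) -> float:
    # Sensitivity (recall): detected nodules / all actual nodules.
    return true_positives / (true_positives + false_negatives)

# Illustrative counts: 31 nodules detected, 1 missed.
sens = sensitivity(31, 1)  # 0.96875, i.e. 96.875%
```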
“…On top of these learning methods are the deep learning methods [26,27]. Many different deep learning models have been introduced, such as stacked auto-encoders (SAE), deep belief nets (DBN), convolutional neural networks (CNNs), and deep Boltzmann machines (DBM) [28-31]. The superiority of deep learning models in terms of accuracy has been established.…”
Section: Introduction
confidence: 99%