2020, DOI: 10.47839/ijc.19.4.1986
Improved Predictive Sparse Decomposition Method With Densenet for Prediction of Lung Cancer

Abstract: Lung cancer is the second most common form of cancer in both men and women. It is responsible for at least 25% of all cancer-related deaths in the United States alone. Accurate and early diagnosis of this form of cancer can increase the rate of survival. Computed tomography (CT) imaging is one of the most accurate techniques for diagnosing the disease. In order to improve the classification accuracy of pulmonary lesions indicating lung cancer, this paper presents an improved method for training a densely conne…

Cited by 16 publications (2 citation statements)
References 34 publications
“…Hence, deep learning techniques such as RNNs that consider the client's shopping behaviour and transaction sequences can be vital in detecting important fraud patterns [8], [47], [48]. For example, Benchaji et al. [8] proposed a credit card fraud detection method using an LSTM network to achieve sequential modelling and ensure improved fraud detection.…”
Section: Related Work
confidence: 99%
“…The network learns a hidden correlation between the input features and reconstructs it at the output. Furthermore, a sparsity constraint can be imposed on the hidden units, enabling the network to better represent the input [19,21,22] and thereby allowing the supervised learning algorithm to perform better classification. Meanwhile, determining the optimal parameters to train the feature learning algorithm is essential to achieving excellent deep feature learning [23,24].…”
Section: Introduction
confidence: 99%
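The sparsity constraint mentioned in the excerpt above is commonly implemented as a Kullback-Leibler divergence penalty between a target activation level and each hidden unit's observed mean activation. A minimal sketch of that penalty term, assuming a standard formulation (the function name and the target value 0.05 are illustrative, not taken from the cited paper):

```python
import math

def kl_sparsity_penalty(rho, mean_activations):
    """Sum over hidden units of KL(rho || rho_hat_j), the sparsity
    penalty commonly added to a sparse autoencoder's loss.

    rho              -- target mean activation, 0 < rho < 1 (e.g. 0.05)
    mean_activations -- observed mean activation of each hidden unit,
                        each strictly between 0 and 1
    """
    penalty = 0.0
    for rho_hat in mean_activations:
        # KL divergence between two Bernoulli distributions with
        # means rho (target) and rho_hat (observed).
        penalty += (rho * math.log(rho / rho_hat)
                    + (1 - rho) * math.log((1 - rho) / (1 - rho_hat)))
    return penalty
```

The penalty is zero when every unit's mean activation equals the target and grows as activations drift away from it, so adding it (scaled by a weight) to the reconstruction loss pushes the hidden layer toward sparse firing.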