2020
DOI: 10.1016/j.compeleceng.2020.106624
Combination of loss functions for robust breast cancer prediction

Cited by 31 publications (13 citation statements)
References 19 publications
“…The CNN was noted as the second-best model with 96% accuracy, and the RNN was the third-best model. The reason for the excellent performance of the ANN is that it is very robust against noise in the training data and can implicitly capture complex nonlinear relationships between dependent and independent variables [24].…”
Section: Results
confidence: 99%
“…They used precision, recall, F1-score, and accuracy to evaluate the performance of the proposed objective function. However, the new method was evaluated through experiments on the Wisconsin Breast Cancer Diagnosis (WBCD) dataset, which is not the case for our study [7].…”
Section: Existing Literature
confidence: 99%
“…To gain a clear understanding of how well our proposed approach behaves, each performance metric described above is computed using the formula in Equation (7).…”
Section: True Negative (TN)
confidence: 99%
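The citing work evaluates its approach with standard confusion-matrix metrics (precision, recall, F1-score, accuracy). A minimal sketch of those textbook formulas follows; the function name and the example counts are illustrative, not taken from the cited paper:

```python
def classification_metrics(tp, tn, fp, fn):
    """Standard confusion-matrix metrics from TP/TN/FP/FN counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Illustrative counts, not results from the cited study.
acc, prec, rec, f1 = classification_metrics(tp=80, tn=90, fp=10, fn=20)
```

With these counts, accuracy is (80+90)/200 = 0.85 and recall is 80/100 = 0.80.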
“…The smaller the loss function, the more robust the model. Common loss functions include the 0-1 loss function; the logarithmic loss function, namely the cross-entropy loss, based on minimizing the negative log-likelihood; the squared loss function, based on ordinary least squares; the absolute-value loss function, which emphasizes the degree of deviation; the exponential loss function, used in the AdaBoost algorithm; the hinge loss function, used in the support vector machine (SVM) [53]; etc. They have different strengths and weaknesses.…”
Section: Loss Function
confidence: 99%
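The loss functions enumerated in the statement above can be sketched for a single example as follows; this is an illustrative summary of the standard definitions, not code from the cited paper (margin-based losses take y in {-1, +1}, the cross-entropy form takes y in {0, 1}):

```python
import math

def zero_one_loss(y, y_pred):
    # 0-1 loss: 1 if the predicted label is wrong, else 0.
    return float(y != y_pred)

def cross_entropy_loss(y, p):
    # Log loss for y in {0, 1} and predicted probability p in (0, 1);
    # minimizing it maximizes the (log-)likelihood.
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def squared_loss(y, f):
    # Squared error, as in ordinary least squares.
    return (y - f) ** 2

def absolute_loss(y, f):
    # Absolute deviation, emphasizing the magnitude of the error.
    return abs(y - f)

def exponential_loss(y, f):
    # Exponential loss used by AdaBoost; y in {-1, +1}, f a real score.
    return math.exp(-y * f)

def hinge_loss(y, f):
    # Hinge loss from SVMs; zero once the margin y*f reaches 1.
    return max(0.0, 1.0 - y * f)
```

The differing shapes explain the trade-offs the statement alludes to: squared loss penalizes outliers heavily, absolute loss is more robust to them, and hinge loss ignores examples already classified with sufficient margin.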