2013
DOI: 10.1016/j.engappai.2013.06.013

Robust predictive model for evaluating breast cancer survivability

Cited by 109 publications (57 citation statements)
References 28 publications

“…It is also an ensemble learning method. Unlike AdaBoost, which adjusts the weights of the samples after each iteration, GBM uses the difference between the last iteration's output and the target value as the new target for the next iteration [12]. LightGBM is a novel GBM method that implements the Gradient-based One-Side Sampling (GOSS) and Exclusive Feature Bundling (EFB) techniques.…”
Section: Machine Learning Methods (mentioning)
confidence: 99%
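
As a hedged illustration of the residual-fitting idea described in the statement above (each round's new target is the difference between the target value and the current ensemble output), the following minimal Python sketch fits squared-error gradient boosting with scikit-learn decision trees. The function names, tree depth, and learning rate are illustrative assumptions, not details taken from the cited work.

    # Minimal sketch of residual-based gradient boosting (squared error):
    # each new tree is fit to y minus the current ensemble prediction.
    # All names and parameters are illustrative, not from the cited paper.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def gbm_fit(X, y, n_rounds=100, learning_rate=0.1):
        base = y.mean()                     # constant initial prediction
        pred = np.full(len(y), base)
        trees = []
        for _ in range(n_rounds):
            residual = y - pred             # new target for this iteration
            tree = DecisionTreeRegressor(max_depth=3).fit(X, residual)
            pred += learning_rate * tree.predict(X)
            trees.append(tree)
        return base, trees

    def gbm_predict(X, base, trees, learning_rate=0.1):
        pred = np.full(len(X), base)
        for tree in trees:
            pred += learning_rate * tree.predict(X)
        return pred

LightGBM itself would replace this explicit loop with its histogram-based trees plus GOSS and EFB, for example through lightgbm.LGBMRegressor.
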
“…Machine learning techniques have been widely applied to predict outcomes for medical purposes [8][9][10][11][12]. Ensemble learning methods, which train a number of weak base learners and then combine their outputs, are popular in medical prediction research [22].…”
Section: Related Work (mentioning)
confidence: 99%
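
To make the quoted description of ensemble learning concrete, here is a small, hedged sketch in which several base learners are trained and their predictions combined by majority vote. The dataset and the choice of base learners are assumptions made only for illustration, not the setup used in the citing study.

    # Illustrative ensemble: train several base learners and combine their
    # outputs by majority vote. Dataset and models are assumptions.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    ensemble = VotingClassifier(
        estimators=[
            ("tree", DecisionTreeClassifier(max_depth=3)),
            ("logreg", LogisticRegression(max_iter=1000)),
            ("nb", GaussianNB()),
        ],
        voting="hard",  # majority vote over the base learners' predictions
    )
    print("5-fold CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())
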
“…Our dividing co-training algorithm is designed to efficiently add the most confident unlabeled data to the labeled data pool, and it is based on a graph-based semi-supervised learning algorithm [27][28]. Compared to the existing algorithms [27][28], our proposed similarity function (Eq. 3) was modified and the weighted data were incorporated into the similarity function.…”
Section: Stage Three: Dividing Co-training Data Labeling (mentioning)
confidence: 99%
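
The dividing co-training procedure and its modified similarity function (Eq. 3) are not reproduced in this excerpt, but the general step it describes, repeatedly moving the most confidently predicted unlabeled samples into the labeled pool, can be sketched as below. The classifier, batch size, and helper name are hypothetical choices for illustration only.

    # Hypothetical sketch of confidence-based labeling: in each round, the
    # most confident predictions on unlabeled data are added to the labeled
    # pool. This is NOT the cited dividing co-training algorithm or its
    # Eq. 3 similarity function, only an illustration of the general idea.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def confident_labeling(X_lab, y_lab, X_unlab, rounds=5, per_round=10):
        X_lab, y_lab, X_unlab = X_lab.copy(), y_lab.copy(), X_unlab.copy()
        for _ in range(rounds):
            if len(X_unlab) == 0:
                break
            clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
            proba = clf.predict_proba(X_unlab)
            pseudo = clf.classes_[proba.argmax(axis=1)]   # predicted labels
            confidence = proba.max(axis=1)
            top = np.argsort(confidence)[-per_round:]     # most confident rows
            X_lab = np.vstack([X_lab, X_unlab[top]])
            y_lab = np.concatenate([y_lab, pseudo[top]])
            X_unlab = np.delete(X_unlab, top, axis=0)
        return X_lab, y_lab, X_unlab
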
“…Thus, the mortality rate for this cancer has declined in recent years [11]. Using methods that decrease the maximum memory requirements can also help deploy predictive systems on smartphones and, as a result, improve their application performance.…”
Section: Related Work (mentioning)
confidence: 99%