2007 IEEE International Conference on Signal Processing and Communications (ICSPC 2007)
DOI: 10.1109/icspc.2007.4728281
AdaBoost Parallelization on PC Clusters with Virtual Shared Memory for Fast Feature Selection

Cited by 5 publications (4 citation statements)
References 8 publications
“…Recent works have proposed parallel versions of AdaBoost for classification problems, either by modifying the algorithm structure [16,18] to make it compatible with a parallel framework or by using the original AdaBoost in parallel hardware environments [10]. The method proposed in [18] works in two stages.…”
Section: Introduction
confidence: 99%
“…Parallel implementations of machine learning algorithms have gained importance [5,10,18,25] because dataset sizes appear to be growing considerably faster than computational capabilities. Recent works have proposed parallel versions of AdaBoost for classification problems, either by modifying the algorithm structure [16,18] to make it compatible with a parallel framework or by using the original AdaBoost in parallel hardware environments [10].…”
Section: Introduction
confidence: 99%
“…Also, many parallel machine learning algorithms for classification problems have been proposed [17,18,19,20,21,22,23]. However, parallel algorithms are usually analyzed using metrics such as speedup or efficiency.…”
Section: Motivation
confidence: 99%
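For reference, the metrics named in that quote are straightforward: speedup is the serial run time divided by the parallel run time, and efficiency is the speedup divided by the number of processors. A tiny illustration with made-up timings (not measurements from the paper):

```python
# Illustrative computation of the speedup and efficiency metrics mentioned
# above. The timings are hypothetical numbers, not results from the paper.
def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    return speedup(t_serial, t_parallel) / n_procs

t1, t8 = 120.0, 20.0          # hypothetical run times (seconds) on 1 and 8 PCs
print(speedup(t1, t8))        # 6.0x speedup
print(efficiency(t1, t8, 8))  # 0.75, i.e. 75% efficiency
```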
“…The additional element x₀ in x is always set to 1 and is needed to compute the additional element β₀ in β, which is the bias factor, analogous to the y-intercept in a linear regression model. Thus Equation 1.17 can be rewritten as Equation (1.19), and the problem of learning the model reduces to picking the values of the vector β, based on the values in the training set, such that p is as close as possible to the actual Y value for all samples in the training set.…”
Section: Logistic Regression
confidence: 99%
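The quoted passage describes the usual trick of prepending a constant x₀ = 1 to each sample so that the bias β₀ becomes just another component of β. A minimal sketch of the resulting model, with hypothetical names and data (not code from the citing work):

```python
# Minimal sketch of the logistic model described in the quote: a constant
# x0 = 1 is prepended so the bias beta_0 is just another weight in beta.
# Names and numbers are hypothetical, not taken from the citing work.
import numpy as np

def predict_proba(beta, x):
    """p = sigmoid(beta . [1, x]) -- probability that the sample is positive."""
    x_aug = np.concatenate(([1.0], x))        # x0 = 1 carries the bias beta_0
    return 1.0 / (1.0 + np.exp(-beta @ x_aug))

beta = np.array([-0.5, 2.0, 1.0])             # [beta_0 (bias), beta_1, beta_2]
x = np.array([0.3, -0.2])
print(predict_proba(beta, x))                 # a probability in (0, 1)
```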