2013
DOI: 10.1016/j.neunet.2013.07.006

Fully corrective boosting with arbitrary loss and regularization

Abstract: We propose a general framework for analyzing and developing fully corrective boosting-based classifiers. The framework accepts any convex objective function, and allows any convex (for example, p-norm, p ≥ 1) regularization term. By placing the wide variety of existing fully corrective boosting-based classifiers on a common footing, and considering the primal and dual problems together, the framework allows direct comparison between apparently disparate methods. By solving the primal rather than the dual the …
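Read literally, the abstract points at primal problems of the usual fully corrective boosting form, sketched below as a hedged reconstruction; the symbols H, w, λ, and ℓ are my notation, not necessarily the paper's:

\min_{w \ge 0} \; \sum_{i=1}^{m} \ell\!\left(y_i, (Hw)_i\right) + \lambda \, \|w\|_p, \qquad p \ge 1,

where H is the m × n matrix of weak-learner outputs on the m training samples, ℓ is any convex loss, and the p-norm of the weak-learner weights w is the regularizer.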

Cited by 9 publications (12 citation statements)
References 31 publications
“…In standard boosting, one has a large number of variables while in SSVM, one has a large number of constraints. For the moment, let us put aside the difficulty of the large number of constraints, and focus on how to iteratively solve for w using column generation as in boosting methods [8], [28], [29]. We derive the Lagrange dual of the optimization of (35) as:…”
Section: Structured Boosting
confidence: 99%
“…The subproblem is to add new variables (or constraints for the dual form) into the master problem. With the primal-dual pair of (35) and (37) and following the general framework of column generation based boosting [8], [28], [29], we can obtain our StructBoost as follows: Iterate the following two steps until convergence: 1) Solve the following subproblem, which generates the best weak structured learner by finding the most violated constraint in the dual:…”
Section: Structured Boosting
confidence: 99%
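The two-step iteration quoted above is the generic column-generation ("fully corrective") boosting loop. The sketch below shows that scheme under stated assumptions: the `solve_master` projected-gradient stand-in, the exponential-loss dual refresh, and the tolerance test are illustrative choices of mine, not the algorithm of [8] or the citing paper.

```python
import numpy as np

def solve_master(preds, y, lr=0.1, steps=200):
    """Fully corrective step: re-optimize ALL selected weights at once by
    projected gradient descent on the exponential loss (a simple stand-in
    for the convex solver a real implementation would use)."""
    w = np.zeros(preds.shape[1])
    for _ in range(steps):
        margins = y * (preds @ w)
        grad = -(preds * (y * np.exp(-margins))[:, None]).mean(axis=0)
        w = np.maximum(w - lr * grad, 0.0)   # keep weights nonnegative
    return w

def column_generation_boosting(X, y, pool, max_iters=50, tol=1e-6):
    """Generic column-generation boosting loop (illustrative sketch).

    X: (m, d) training data; y: (m,) labels in {-1, +1}.
    pool: candidate weak learners, each a callable h(X) -> {-1, +1}^m.
    """
    m = X.shape[0]
    u = np.ones(m) / m                       # dual variables / sample weights
    selected, w = [], np.zeros(0)
    for _ in range(max_iters):
        # Step 1 (subproblem): pick the weak learner that most violates the
        # dual constraint, i.e. has the largest edge sum_i u_i y_i h(x_i).
        edges = [float(u @ (y * h(X))) for h in pool]
        best = int(np.argmax(edges))
        if edges[best] < tol:                # no violated constraint: stop
            break
        selected.append(pool[best])
        # Step 2 (restricted master): re-solve the primal over all selected
        # columns, then refresh the duals from the new margins (the
        # exp(-margin) update is the exponential-loss special case).
        preds = np.column_stack([h(X) for h in selected])
        w = solve_master(preds, y)
        u = np.exp(-y * (preds @ w))
        u /= u.sum()
    return selected, w
```

The point of the structure is the one the excerpt makes: the subproblem generates one new column (weak learner) per round, while the master problem re-weights every column selected so far, which is what distinguishes fully corrective boosting from stagewise methods.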
“…It can be interpreted as forward fitting an additive model in functional space by stagewise optimizing the expected risk based on the exponential loss function φ_e(y f(x)) = exp(−y f(x)) [5], [6]. In particular, AdaBoost maintains a sample distribution D_t over the training samples that is initialized from the uniform distribution D_t(i) = 1/m.…”
confidence: 99%
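For concreteness, here is a minimal sketch of the AdaBoost weight maintenance the excerpt describes: uniform initialization D_1(i) = 1/m and reweighting by the exponential loss. The `fit_stump(X, y, D)` interface returning a callable classifier is an assumption of mine, not part of the quoted paper.

```python
import numpy as np

def adaboost(X, y, fit_stump, T=50):
    """Discrete AdaBoost. Maintains a distribution D_t over the m training
    samples, initialized uniformly (D_1(i) = 1/m), and stagewise fits an
    additive model under the exponential loss exp(-y f(x))."""
    m = X.shape[0]
    D = np.ones(m) / m                       # D_1(i) = 1/m
    hs, alphas = [], []
    for _ in range(T):
        h = fit_stump(X, y, D)               # weak learner on weighted data
        pred = h(X)
        eps = float(D[pred != y].sum())      # weighted training error
        if eps <= 0.0 or eps >= 0.5:         # perfect, or no better than chance
            break
        alpha = 0.5 * np.log((1.0 - eps) / eps)
        hs.append(h)
        alphas.append(alpha)
        D = D * np.exp(-alpha * y * pred)    # exponential-loss reweighting
        D = D / D.sum()                      # renormalize to a distribution
    return lambda Z: np.sign(sum(a * h(Z) for a, h in zip(alphas, hs)))
```

The multiplicative update D_t+1(i) ∝ D_t(i) exp(−α_t y_i h_t(x_i)) is exactly the stagewise exponential-loss view the excerpt refers to: misclassified samples gain weight, so the next weak learner concentrates on them.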