2014
DOI: 10.1109/tpami.2014.2315792
StructBoost: Boosting Methods for Predicting Structured Output Variables

Abstract: Boosting is a method for learning a single accurate predictor by linearly combining a set of less accurate weak learners. Recently, structured learning has found many applications in computer vision. Inspired by structured support vector machines (SSVM), here we propose a new boosting algorithm for structured output prediction, which we refer to as StructBoost. StructBoost supports nonlinear structured learning by combining a set of weak structured learners. As SSVM generalizes SVM, our StructBoost ge…
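The abstract's opening sentence describes classical boosting: a strong predictor built as a weighted linear combination of weak learners. A minimal, self-contained sketch of that idea (plain AdaBoost on 1-D decision stumps; illustrative only, not the StructBoost algorithm, and all names here are ours):

```python
# Boosting as a weighted linear combination of weak learners.
# Illustrative AdaBoost sketch, NOT the StructBoost method itself.
import math

def stump(threshold, sign):
    # Weak learner: predicts +1/-1 from a 1-D feature via a threshold.
    return lambda x: sign if x > threshold else -sign

def boost(data, labels, stumps, rounds=3):
    n = len(data)
    w = [1.0 / n] * n          # per-example weights
    ensemble = []              # (alpha, weak learner) pairs
    for _ in range(rounds):
        # Pick the weak learner with the lowest weighted error.
        errs = [sum(wi for wi, x, y in zip(w, data, labels) if h(x) != y)
                for h in stumps]
        err, h = min(zip(errs, stumps), key=lambda t: t[0])
        err = max(err, 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)   # learner weight
        ensemble.append((alpha, h))
        # Re-weight examples: emphasize the misclassified ones.
        w = [wi * math.exp(-alpha * y * h(x))
             for wi, x, y in zip(w, data, labels)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    # Strong predictor: sign of the linear combination of weak learners.
    return 1 if sum(a * h(x) for a, h in ensemble) > 0 else -1

data = [0.1, 0.4, 0.6, 0.9]
labels = [-1, -1, 1, 1]
stumps = [stump(t, 1) for t in (0.25, 0.5, 0.75)]
ens = boost(data, labels, stumps)
print([predict(ens, x) for x in data])
```

Per the abstract, StructBoost keeps this combine-weak-learners structure but replaces the binary weak learners with weak *structured* learners, so the ensemble predicts structured outputs rather than a single label.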

Cited by 12 publications (14 citation statements) · References 50 publications
“…Moreover, compared to the above-mentioned works of [5] and [24], we achieve nonlinear learning on both the unary and the pairwise terms, while theirs are limited to nonlinear unary potential learning. The recent work of Shen et al. [31] generalizes standard boosting methods to structured learning, which shares similarities with our work here. However, our method bears critical differences from theirs: 1) We design a column generation method for nonlinear tree potential learning in CRFs directly from the SSVM formulation.…”
supporting
confidence: 72%
“…3) We learn class-wise decision trees (potentials) for each object that appears in the image. This is different from [31]. The work of decision tree fields [28] is close to ours in that they also use decision trees to model the pairwise potentials.…”
mentioning
confidence: 89%
“…Here we exploit the stage-wise learning strategy to speed up training. In column-generation-based totally-corrective boosting methods [29, 30, 31], all the hash function weights w are updated during each column generation iteration. In contrast, in stage-wise boosting, e.g., AdaBoost, only the weight of the newly added weak learner is updated in the current boosting iteration, and the weights of all previous weak learners are fixed.…”
Section: Stage-wise Training
mentioning
confidence: 99%
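The distinction drawn in this citation statement can be sketched in code: a stage-wise round freezes all earlier learner weights and fits only the new one, whereas a totally-corrective round re-optimizes every weight after each new learner is added. A hedged sketch with hypothetical callback names (`fit_alpha` and `refit_all` are ours, standing in for whatever weight-fitting routine a given method uses):

```python
# Sketch of the two update styles contrasted above; names are illustrative.

def stagewise_round(ensemble, new_learner, fit_alpha):
    # Stage-wise (AdaBoost-style): weights of previous weak learners
    # stay fixed; only the newly added learner gets a weight.
    alpha = fit_alpha(ensemble, new_learner)
    return ensemble + [(alpha, new_learner)]

def totally_corrective_round(ensemble, new_learner, refit_all):
    # Totally corrective (column-generation style): after adding the
    # new learner, re-optimize the weights of ALL learners jointly.
    learners = [h for _, h in ensemble] + [new_learner]
    alphas = refit_all(learners)
    return list(zip(alphas, learners))

# Toy usage with dummy learners and trivial weight-fitting callbacks.
ens = stagewise_round([(1.0, "h1")], "h2", lambda e, h: 0.5)
print(ens)   # h1 keeps weight 1.0; h2 gets a fresh weight

ens2 = totally_corrective_round([(1.0, "h1")], "h2",
                                lambda ls: [0.3] * len(ls))
print(ens2)  # both weights re-fit jointly
```

The trade-off the quoted passage exploits: stage-wise updates make each boosting iteration much cheaper (one scalar fit instead of a joint optimization over all weights), at the cost of the weights no longer being jointly optimal.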