2008 IEEE Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2008.4587630

Classification using intersection kernel support vector machines is efficient

Cited by 813 publications (541 citation statements). References 11 publications.
“…• We learn an Intersection Kernel Support Vector Machine (IKSVM [16,22]) based on the positive and negative sets for each component, and the model is bootstrapped once by data-mining hard negatives. The SVM scores are then converted to probabilities through a sigmoid function whose parameters are also learned from data.…”
Section: Training Components (mentioning; confidence: 99%)
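The training recipe in the quote above (an intersection-kernel SVM whose scores are converted to probabilities through a learned sigmoid) can be sketched with scikit-learn. This is a minimal illustration, not the cited authors' pipeline: the histogram features and labels are synthetic, the hard-negative mining round is omitted, and `probability=True` stands in for the separately learned sigmoid (Platt scaling on the SVM scores).

```python
import numpy as np
from sklearn.svm import SVC

def intersection_kernel(X, Y):
    # Histogram intersection kernel: K(x, y) = sum_d min(x_d, y_d),
    # evaluated for every pair of rows in X and Y.
    return np.array([[np.minimum(x, y).sum() for y in Y] for x in X])

rng = np.random.default_rng(0)
# Toy "histogram" features: non-negative, L1-normalised rows.
X = rng.random((60, 8))
X /= X.sum(axis=1, keepdims=True)
y = (X[:, 0] > X[:, 1]).astype(int)  # synthetic labels for illustration

# SVM with the intersection kernel; probability=True fits a sigmoid on
# the SVM scores (Platt scaling), analogous to the quote's
# score-to-probability step.  Hard-negative mining is not shown here.
clf = SVC(kernel=intersection_kernel, probability=True, random_state=0)
clf.fit(X, y)
probs = clf.predict_proba(X[:5])  # calibrated class probabilities
```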
“…Since we need a binary categorization, we use an SVM in a binary classification setting with a linear kernel, which is efficient to learn and evaluate compared to nonlinear ones. Also, with sophisticated features [26,27], the accuracy of a linear kernel is comparable to that of nonlinear kernels. Once the hyperplane W for the query q is given, the classification score for any sample X is computed as follows.…”
Section: Retrieval With Eigen Queries (mentioning; confidence: 58%)
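The linear-kernel score the quote refers to is simply the inner product of each sample with the hyperplane (plus a bias). A minimal sketch, where `W`, `b`, and the samples are invented for illustration:

```python
import numpy as np

def classification_score(W, b, X):
    # Linear-kernel SVM score: f(x) = W . x + b.  The sign gives the
    # predicted class; the magnitude acts as a confidence.
    return X @ W + b

W = np.array([0.5, -1.0, 2.0])   # hypothetical hyperplane for a query q
b = -0.25
X = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
scores = classification_score(W, b, X)  # → array([ 1.25, -1.25])
```

Evaluating this score is O(d) per sample regardless of the training-set size, which is why the quoted authors prefer a linear kernel for retrieval.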
“…To handle imbalance in the number of positive versus negative training examples, we fixed the weights of the positive and negative classes by estimating the prior probabilities of the classes on training data. We used the histogram intersection kernel and its efficient approximation as suggested by Maji et al [30]. For difference coded BOWs, we used a linear kernel [19].…”
Section: Visual Features (mentioning; confidence: 99%)
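The "efficient approximation" credited to Maji et al. [30] rests on the observation that an intersection-kernel decision function decomposes into a sum of per-dimension piecewise-linear functions, which can be tabulated once and then evaluated by table lookup in time independent of the number of support vectors. A sketch with hypothetical data (random support vectors and coefficients stand in for a trained model):

```python
import numpy as np

rng = np.random.default_rng(1)
m, d = 50, 16                      # support vectors, feature dimensions
S = rng.random((m, d))             # hypothetical support vectors
a = rng.standard_normal(m)         # hypothetical alpha_i * y_i coefficients
b = 0.1

def f_exact(x):
    # f(x) = sum_i a_i * sum_d min(x_d, S_id) + b  -- O(m*d) per sample.
    return a @ np.minimum(S, x).sum(axis=1) + b

# Key observation: f(x) = sum_d h_d(x_d) + b, where each h_d is
# piecewise linear in x_d.  Tabulate h_d on a fixed grid once, then
# interpolate -- roughly O(d) per sample, independent of m.
grid = np.linspace(0.0, 1.0, 100)
tables = np.array([[a @ np.minimum(S[:, j], g) for g in grid]
                   for j in range(d)])          # shape (d, len(grid))

def f_fast(x):
    return sum(np.interp(x[j], grid, tables[j]) for j in range(d)) + b

x = rng.random(d)
# f_fast(x) agrees with f_exact(x) up to the table resolution.
```

Finer grids shrink the approximation error; the paper shows this lookup scheme makes intersection-kernel SVM classification nearly as fast as a linear SVM.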