2013
DOI: 10.1007/s10994-013-5429-5

A theoretical and empirical analysis of support vector machine methods for multiple-instance classification

Abstract: The standard support vector machine (SVM) formulation, widely used for supervised learning, possesses several intuitive and desirable properties. In particular, it is convex and assigns zero loss to solutions if, and only if, they correspond to consistent classifying hyperplanes with some nonzero margin. The traditional SVM formulation has been heuristically extended to multiple-instance (MI) classification in various ways. In this work, we analyze several such algorithms and observe that all MI techniques lac…
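For reference, a minimal sketch of the formulation the abstract alludes to, in standard soft-margin SVM notation (the regularization parameter C and slack variables ξ_i are the usual conventions, not taken from the paper itself):

\[
\min_{\mathbf{w},\,b,\,\boldsymbol{\xi}} \;\; \frac{1}{2}\lVert \mathbf{w} \rVert^{2} + C \sum_{i=1}^{n} \xi_{i}
\quad \text{s.t.} \quad y_{i}\bigl(\mathbf{w}^{\top}\mathbf{x}_{i} + b\bigr) \ge 1 - \xi_{i}, \qquad \xi_{i} \ge 0 .
\]

The objective is convex, and the slack (loss) term vanishes exactly when some hyperplane classifies every training point correctly with functional margin at least 1, i.e., a nonzero geometric margin of 1/||w|| — the zero-loss property the abstract highlights.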

Cited by 69 publications (49 citation statements)
References 20 publications
“…Additionally, it allows one to explicitly consider all the user profile images for classification in an elegant manner. Concretely, extensions to the SVM classifiers for MIL have been proposed [13], [18], [19]. These classifiers are interesting because they keep the desirable properties of SVMs.…”
Section: Multiple Instance Learning Methods (mentioning)
confidence: 99%
“…In a standard MIL framework, instance labels in each positive bag are treated as hidden variables with the constraint that at least one of them should be positive. MI-SVM and mi-SVM [2] are two popular methods for MIL, and have been widely adapted for many weakly supervised computer vision problems, achieving state-of-the-art results in many different applications [7,13]. In these methods, images in each bag inherit the label of the bag and an SVM is trained to classify images.…”
Section: Related Work (mentioning)
confidence: 99%
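As a rough, self-contained illustration of the alternating scheme described in the statement above (instance labels in positive bags treated as hidden variables, with at least one positive instance kept per positive bag), a mi-SVM-style sketch in scikit-learn might look as follows. This is not the exact algorithm of [2]; the function name, linear kernel, and stopping rule are assumptions made for illustration.

```python
import numpy as np
from sklearn.svm import SVC

def mi_svm_sketch(bags, bag_labels, n_iter=10, C=1.0):
    """Illustrative mi-SVM-style alternation.

    bags       : list of (n_i, d) numpy arrays (one array of instances per bag)
    bag_labels : list of +1 / -1 bag labels
    """
    X = np.vstack(bags)
    # Initialise: every instance inherits its bag's label.
    y = np.concatenate([np.full(len(b), lbl) for b, lbl in zip(bags, bag_labels)])
    offsets = np.cumsum([0] + [len(b) for b in bags])

    clf = SVC(kernel="linear", C=C)
    for _ in range(n_iter):
        clf.fit(X, y)
        scores = clf.decision_function(X)
        y_new = np.where(scores >= 0, 1, -1)
        for i, lbl in enumerate(bag_labels):
            lo, hi = offsets[i], offsets[i + 1]
            if lbl == -1:
                y_new[lo:hi] = -1                         # negative bags stay all-negative
            elif not np.any(y_new[lo:hi] == 1):
                y_new[lo + np.argmax(scores[lo:hi])] = 1  # keep >= 1 positive per positive bag
        if np.array_equal(y_new, y):                      # imputed labels have settled
            break
        y = y_new
    return clf
```

Roughly speaking, mi-SVM imputes labels for every instance as above, whereas MI-SVM instead selects a single "witness" instance per positive bag; both alternate between label assignment and retraining the SVM.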
“…The k-NN classifier has been adapted for MIL by defining the distance between bags [49]. Later on, kernel methods have also been adapted to work with MI data such as [4,19]; a complete review on such approaches is available in [12]. More recently, algorithms that involve boosting [55], embedding the data into a different feature space [7], or treating the data in bags as graphs [57] have been proposed.…”
Section: Related Work (mentioning)
confidence: 99%
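The "distance between bags" idea mentioned in the last statement can be made concrete with a set-level distance. The sketch below uses the minimal Hausdorff distance (the closest pair of instances across two bags), one common choice in Citation-kNN-style methods; the function names and the simple majority-vote rule are illustrative assumptions, not the method of [49].

```python
import numpy as np
from scipy.spatial.distance import cdist

def min_hausdorff(bag_a, bag_b):
    """Distance between the closest pair of instances across two bags
    (bag_a, bag_b are (n, d) numpy arrays)."""
    return cdist(bag_a, bag_b).min()

def knn_bag_predict(train_bags, train_labels, test_bag, k=3):
    """Label a test bag by majority vote over its k nearest training bags
    (labels assumed to be +1 / -1)."""
    dists = np.array([min_hausdorff(test_bag, b) for b in train_bags])
    nearest = np.argsort(dists)[:k]
    votes = np.asarray(train_labels)[nearest]
    return 1 if (votes == 1).sum() >= (votes == -1).sum() else -1
```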