Proceedings of the 22nd International Conference on Machine Learning - ICML '05 2005
DOI: 10.1145/1102351.1102439

Supervised versus multiple instance learning


Cited by 182 publications
(156 citation statements)
references
References 12 publications
“…Multi-instance learning originated from Dietterich et al (1997)'s research on drug activity prediction; since then it has been studied by many researchers and many algorithms have been developed, such as Diverse Density (Maron & Lozano-Pérez, 1998) and EM-DD (Zhang & Goldman, 2002), the k-nearest neighbor algorithm Citation-kNN (Wang & Zucker, 2000), the decision tree algorithms RELIC (Ruffo, 2000) and ID3-MI (Chevaleyre & Zucker, 2001), the rule learning algorithm RIPPER-MI (Chevaleyre & Zucker, 2001), the SVM algorithms MI-SVM and mi-SVM (Andrews et al, 2003) and DD-SVM (Chen & Wang, 2004), the ensemble algorithms MI-Ensemble (Zhou & Zhang, 2003) and MIBoosting (Xu & Frank, 2004), the logistic regression algorithm MI-LR (Ray & Craven, 2005), etc. Many of those algorithms were developed by adapting a single-instance supervised learning algorithm to multi-instance learning by shifting its focus from discrimination on the instances to discrimination on the bags (Zhou & Zhang, 2003).…”
Section: Multi-instance Learning (mentioning)
confidence: 99%
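The adaptation described in the excerpt above, shifting discrimination from instances to bags, can be sketched in a few lines. This is a minimal illustration of the standard multi-instance assumption (a bag is positive iff at least one of its instances is positive), not any particular cited algorithm; the helper names are hypothetical.

```python
import numpy as np

def bag_label_from_instances(instance_labels):
    """Standard MI assumption: the bag label is the OR (max) of its
    instance labels -- one positive instance makes the bag positive."""
    return int(max(instance_labels))

def bag_score(instance_scores):
    """Lift a single-instance scorer to bag level: score a bag by its
    most positive instance, so the learner discriminates on bags."""
    return float(np.max(instance_scores))

# toy bag with three instances; only the second scores positive
scores = np.array([-0.7, 0.4, -0.2])
print(bag_score(scores))                    # 0.4 -> bag predicted positive
print(bag_label_from_instances([0, 1, 0]))  # 1
```

Under this view, any instance-level classifier can be reused at the bag level by aggregating its per-instance outputs with a max.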
“…The results are tabulated in Table 1, together with those of (Chen et al, 2006), MI-LR (Ray & Craven, 2005), MIBoosting (Xu & Frank, 2004), DD-SVM (Chen & Wang, 2004), mi-SVM and MI-SVM (Andrews et al, 2003), RIPPER-MI (Chevaleyre & Zucker, 2001), RELIC (Ruffo, 2000), Citation-kNN (Wang & Zucker, 2000), Diverse Density (Maron & Lozano-Pérez, 1998), MULTINST (Auer, 1997) and Iterated-discrim APR (Dietterich et al, 1997). Table 1 shows that on the Musk data MissSVM is competitive with state-of-the-art multi-instance learning algorithms.…”
Section: Drug Activity Prediction (mentioning)
confidence: 99%
“…These two candidate values (i.e., (19) and (20)) are then compared, and the larger value is the solution of the l-th subproblem in (16). With g features, there are thus a total of 2g candidates for d.…”
Section: Finding a Violated Constraint (mentioning)
confidence: 99%
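The per-feature enumeration in the excerpt above can be sketched as follows. Since Eqs. (16), (19), and (20) are not reproduced here, the two candidate functions below are hypothetical placeholders; the point is only the structure of the search: two candidates per feature, the larger one solving that feature's subproblem, giving 2g candidates overall.

```python
def candidate_a(feature):
    """Placeholder standing in for the candidate value of Eq. (19)."""
    return feature ** 2

def candidate_b(feature):
    """Placeholder standing in for the candidate value of Eq. (20)."""
    return 2.0 * feature

def solve_subproblems(features):
    """One subproblem per feature: compare the two candidate values and
    keep the larger, reducing 2*g candidates to g per-feature winners."""
    return [max(candidate_a(x), candidate_b(x)) for x in features]

print(solve_subproblems([0.5, 3.0, -1.0]))  # [1.0, 9.0, 1.0]
```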
“…The first is the Diverse Density (DD) algorithm [15] and its variants, e.g., EM-DD [26] and multi-instance logistic regression [19]. These methods apply gradient search with multiple restarts to identify an instance which maximizes the diverse density, that is, an instance close to every positive bag while far from all negative bags.…”
Section: Introduction (mentioning)
confidence: 99%
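The diverse-density objective described above can be sketched with the commonly used noisy-OR model, assuming a Gaussian-like instance probability Pr(t | x) = exp(-||x - t||^2). This is an illustrative sketch, not the exact formulation of [15]; for brevity the "multiple restarts" are reduced to evaluating the objective at every positive instance and keeping the best start, rather than running a full gradient search from each.

```python
import numpy as np

def instance_prob(t, x):
    """Gaussian-like probability that instance x is the target concept t."""
    return np.exp(-np.sum((x - t) ** 2))

def diverse_density(t, pos_bags, neg_bags):
    """High when t is close to some instance in EVERY positive bag and
    far from ALL instances of every negative bag (noisy-OR model)."""
    dd = 1.0
    for bag in pos_bags:   # noisy-OR over the instances of a positive bag
        dd *= 1.0 - np.prod([1.0 - instance_prob(t, x) for x in bag])
    for bag in neg_bags:   # every negative instance must be far from t
        dd *= np.prod([1.0 - instance_prob(t, x) for x in bag])
    return dd

def best_start(pos_bags, neg_bags):
    """Restart from every positive instance; keep the best start point."""
    starts = [x for bag in pos_bags for x in bag]
    return max(starts, key=lambda t: diverse_density(t, pos_bags, neg_bags))

# two positive bags sharing a concept near 0.0; a negative bag rules out 5 and 9
pos_bags = [[np.array([0.0]), np.array([5.0])],
            [np.array([0.1]), np.array([9.0])]]
neg_bags = [[np.array([5.0]), np.array([9.1])]]
print(best_start(pos_bags, neg_bags))  # the point near 0.0 shared by both bags
```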
“…This is a scenario with latent variables in training, as the label of an individual instance in a bag is not observed; only the label of the whole bag is. The model for β = 1 and no error terms recovers the MI/LR from [18]; for β → ∞ the model reduces to the MI-SVM [19]. We construct a one-dimensional synthetic dataset which illustrates the deficiencies of the MI-SVM.…”
Section: Multiple Instance Learning (mentioning)
confidence: 99%
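The β-controlled behavior the excerpt above alludes to can be illustrated with a soft-max (log-mean-exp) aggregation of per-instance scores into a bag score. The exact model of the cited paper is not reproduced here; this sketch only shows the β → ∞ limit, in which the aggregation reduces to the hard max used by MI-SVM.

```python
import numpy as np

def soft_bag_score(scores, beta):
    """Bag score (1/beta) * log mean exp(beta * s_i), computed in a
    numerically stable way; approaches max(scores) as beta -> infinity."""
    s = np.asarray(scores, dtype=float)
    m = s.max()
    return m + np.log(np.mean(np.exp(beta * (s - m)))) / beta

scores = [-1.0, 0.2, 0.5]
for beta in (1.0, 10.0, 1000.0):
    print(beta, soft_bag_score(scores, beta))
# as beta grows the soft score approaches max(scores) = 0.5
```

The soft aggregation keeps every instance's score in play at small β, while at large β only the most positive instance matters, mirroring the MI-SVM reduction described in the excerpt.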