2020
DOI: 10.1162/neco_a_01262

Classification from Triplet Comparison Data

Abstract: Learning from triplet comparison data has been extensively studied in the context of metric learning, where we want to learn a distance metric between two instances, and ordinal embedding, where we want to learn an embedding in a Euclidean space of the given instances that preserves the comparison order as well as possible. Unlike fully labeled data, triplet comparison data can be collected in a more accurate and human-friendly way. Although learning from triplet comparison data has been considered in many appl…
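To make the setting concrete, here is a minimal sketch of what triplet comparison data looks like when simulated from Euclidean distances; the function and variable names are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def make_triplet_comparisons(X, n_triplets, seed=0):
    """Sample index triplets (a, b, c) such that X[a] is closer to X[b]
    than to X[c] under the Euclidean distance (illustrative only)."""
    rng = np.random.default_rng(seed)
    triplets = []
    for _ in range(n_triplets):
        a, b, c = rng.choice(len(X), size=3, replace=False)
        if np.linalg.norm(X[a] - X[b]) < np.linalg.norm(X[a] - X[c]):
            triplets.append((a, b, c))
        else:
            triplets.append((a, c, b))  # swap so the stated order holds
    return np.asarray(triplets)

# Example: 100 random 2-D instances, 500 triplet comparisons.
X = np.random.default_rng(0).normal(size=(100, 2))
T = make_triplet_comparisons(X, 500)
```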

Cited by 14 publications (6 citation statements)
References 16 publications
“…This assumption was commonly used in previous studies (Carbonneau et al., 2018; Cui et al., 2020; Zhang et al., 2020; Bao et al., 2022). It can be implied in the data collection process, e.g., when we first collect (x, y)-pairs but …

Classification from … | Size m | Aggregate Information | Aggregate Function g
pairwise similarity | m = 2 | whether y_1 and y_2 belong to the same class | g(y_1, y_2) = I[y_1 = y_2]
triplet comparison | m = 3 | whether d(y_1, y_2) is smaller than d(y_1, y_3) | g(y_{1:3}) = I[d(y_1, y_2) < d(y_1, y_3)]
multiple instances | m ≥ 2 | whether at least one positive label exists in y_{1:m} (k = 2) | g(y_{1:m}) = max(y_{1:m})
label proportion | m ≥ 2 | proportion of data from each class in the group | g_j(y_{1:m}) = (∑_{i=1}^{m} I[y_i = j]) / m
ordinal rank | m = 2 | whether y_1 is larger than y_2, i.e., y_1 ≥ y_2 | …”
Section: Classification From Aggregate Observations
confidence: 93%
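Decoded from the table above, the aggregate functions g admit a compact sketch in Python; the function names and the label distance d below are illustrative assumptions, not notation from the cited papers.

```python
import numpy as np

def g_pairwise(y1, y2):
    # pairwise similarity: 1 if the two labels agree, else 0
    return int(y1 == y2)

def g_triplet(y1, y2, y3, d=lambda a, b: abs(a - b)):
    # triplet comparison: 1 if y1 is closer to y2 than to y3 under
    # a placeholder distance d over labels
    return int(d(y1, y2) < d(y1, y3))

def g_multiple_instances(ys):
    # multiple instances (binary labels): 1 if any positive label exists
    return max(ys)

def g_label_proportion(ys, j):
    # label proportion: fraction of the group carrying class j
    return float(np.mean(np.asarray(ys) == j))

def g_ordinal_rank(y1, y2):
    # ordinal rank: 1 if y1 is larger than (or equal to) y2
    return int(y1 >= y2)
```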
“…Bao et al. (2018a) studied classification from pairwise similarities, where the aggregate information is whether two instances in the group belong to the same class (similar) or not (dissimilar). Cui et al. (2020) studied classification from triplet comparisons, where the aggregate information is whether one instance is more similar to a second instance than to a third.…”
Section: Introduction
confidence: 99%
“…Schroff et al. [29] discussed the successes of similarity learning in the re-identification of human beings using facial images. A significant number of loss functions have been adopted in experiments for training similarity learning models, namely contrastive loss, triplet loss, and Proxy-NCA [30,31,32].…”
Section: Related Work
confidence: 99%
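Of the losses named in the quote, the triplet loss has the simplest closed form. Below is a minimal NumPy sketch; the margin value, the Euclidean distance, and the example embeddings are assumptions for illustration, not a specific library's API.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-form triplet loss: push d(anchor, positive) below
    d(anchor, negative) by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

# Example: embeddings of a face, another image of the same person,
# and an image of a different person.
a = np.array([0.1, 0.9])
p = np.array([0.2, 0.8])
n = np.array([1.0, 0.0])
print(triplet_loss(a, p, n))  # prints 0.0: the margin is satisfied
```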
“…Therefore, unlike some studies [19,31] that assume the positive confidence is known, we only assume that the labeling system has access to the full labels. Specifically, we adopt the assumption [23] that weakly supervised examples are first sampled from the true data distribution, but the labels are accessible only to the labeling system, not to us. Then, the labeling system provides us with weakly supervised information (i.e., pairwise comparison information) according to the received labels.…”
Section: Data Generation Process
confidence: 99%
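A minimal sketch of the generation process the quote describes, under the assumption that the released weak supervision is an ordinal comparison of the hidden labels; the sampler and the interface below are hypothetical, not the cited paper's protocol.

```python
import numpy as np

def generate_pairwise_comparisons(sample_xy, n_pairs, seed=0):
    """Draw (x, y) pairs from the true distribution, keep the labels
    visible only to the 'labeling system', and release just the
    pairwise comparison of the two hidden labels."""
    rng = np.random.default_rng(seed)
    data = []
    for _ in range(n_pairs):
        x1, y1 = sample_xy(rng)  # labels seen only by the labeling system
        x2, y2 = sample_xy(rng)
        data.append((x1, x2, int(y1 >= y2)))  # released: which label is larger
    return data

# Toy sampler: 1-D instances with a thresholded label (hypothetical).
def sample_xy(rng):
    x = rng.normal()
    return x, int(x > 0)

pairs = generate_pairwise_comparisons(sample_xy, 5)
```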
“…In many real-world scenarios, it may be too difficult to collect such data. To alleviate this issue, a large number of weakly supervised learning problems [1] have been extensively studied, including semi-supervised learning [2,3,4], multi-instance learning [5,6,7], noisy-label learning [8,9,10], partial-label learning [11,12,13], complementary-label learning [14,15,16,17], positive-unlabeled classification [18], positive-confidence classification [19], similar-unlabeled classification [20], unlabeled-unlabeled classification [21,22], and triplet classification [23].…”
Section: Introduction
confidence: 99%