2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.01437
RankMI: A Mutual Information Maximizing Ranking Loss

Cited by 28 publications (14 citation statements). References 24 publications.
“…We follow DGI and use InfoNCE [27] as our learning objective to maximize the hierarchical mutual information. But we find that, compared with the binary cross-entropy loss, the pairwise ranking loss, which has also proven effective in mutual information estimation [18], is more compatible with the recommendation task. We then define the objective function of the self-supervised task as follows:…”
Section: Enhancing MHCN With Self-Supervised Learning (mentioning)
confidence: 93%
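The contrast drawn in this statement is between a softmax-normalized InfoNCE objective and a margin-based pairwise ranking objective computed over the same positive/negative discriminator scores. Below is a minimal PyTorch sketch of that contrast; the function names, the hinge-margin formulation, and the score shapes are illustrative assumptions, not the exact objective defined in [18] or in the citing paper.

```python
import torch
import torch.nn.functional as F

def infonce_loss(pos_scores, neg_scores):
    """InfoNCE: classify the positive pair against K negatives.

    pos_scores: (B,) discriminator scores for positive pairs.
    neg_scores: (B, K) discriminator scores for negative pairs.
    """
    logits = torch.cat([pos_scores.unsqueeze(1), neg_scores], dim=1)  # (B, K+1)
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)  # positive sits at index 0

def pairwise_ranking_loss(pos_scores, neg_scores, margin=1.0):
    """Margin-based ranking: each positive should outscore each negative.

    Unlike InfoNCE, there is no softmax normalization over candidates;
    the loss depends only on the pairwise ordering of the scores.
    """
    return F.relu(margin - pos_scores.unsqueeze(1) + neg_scores).mean()

# Toy usage: scores would come from some node/summary discriminator f(h, s).
pos = torch.randn(32)      # 32 positive pairs
neg = torch.randn(32, 5)   # 5 negatives per positive
print(infonce_loss(pos, neg).item(), pairwise_ranking_loss(pos, neg).item())
```

One plausible reading of why the citing authors find the ranking form “more compatible with the recommendation task” is that recommenders are evaluated on top-N ranking, which a margin objective over score orderings targets more directly than a classification-style loss.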
“…For example, LLE was the best when the dimension was 32, and Isomap was the best when the dimension was 1024. When the embedding size was 128, P²ML-PA showed an about 1% higher recall rate than RankMI [19] and PADS [58]. This shows that P²ML scales well across embedding sizes.…”
Section: B. Ablation Study (mentioning)
confidence: 70%
“…Although P²ML-Tri was designed based on the basic Trip-semi, it showed 2.0% higher performance on CUB200 than DSML [21], which analyzed class variability in terms of SNR. Also, at an embedding size of 128, P²ML-Tri was comparable to RankMI [19], the state-of-the-art method. On the large-scale SOP dataset, the generalization performance of P²ML is worth noting, as it is superior to RankMI.…”
Section: A. Performance Evaluation (mentioning)
confidence: 84%