2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)
DOI: 10.1109/iccvw.2019.00493

A Decoder-Free Approach for Unsupervised Clustering and Manifold Learning with Random Triplet Mining

Cited by 11 publications (5 citation statements) | References 18 publications
“…Quantitative comparison. We first compare the proposed DCCS with several baseline methods as well as other state-of-the-art clustering approaches based on deep learning, as shown in Table 2 and Table 3.

Method               MNIST                 Fashion-MNIST
                     ACC    NMI    ARI     ACC    NMI    ARI
K-means [33]         0.572  0.500  0.365   0.474  0.512  0.348
SC [40]              0.696  0.663  0.521   0.508  0.575  -
AC [11]              0.695  0.609  0.481   0.500  0.564  0.371
NMF [3]              0.545  0.608  0.430   0.434  0.425  -
DEC [36]             0.843  0.772  0.741   0.590  0.601  0.446
JULE [37]            0.964  0.913  0.927   0.563  0.608  -
VaDE [18]            0.945  0.876  -       0.578  0.630  -
DEPICT [9]           0.965  0.917  -       0.392  0.392  -
IMSAT [16]           0.984  0.956  0.965   0.736  0.696  0.609
DAC [4]              0.978  0.935  0.949   0.615  0.632  0.502
SpectralNet [32]     0.971  0.924  0.936   0.533  0.552  -
ClusterGAN [27]      0.950  0.890  0.890   0.630  0.640  0.500
DLS-Clustering [8]   0.975  0.936  -       0.693  0.669  -
DualAE [39]          0.978  0.941  -       0.662  0.645  -
RTM [29]             0.968  0.933  0.932   0.710  0.685  0.578
NCSC [41]            0.941  0.861  0.875   0.721  0.686  0.592
IIC [17]             0.992  0.978  0.983   0.657  0.637  0.523
DCCS (Proposed)      0.989  0.970  0.976   0.756  0.704  0.623

DCCS outperforms all the other methods by large margins on Fashion-MNIST, CIFAR-10, STL-10 and ImageNet-10.…”
Section: Results (mentioning, confidence: 99%)
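The six scores per method in the excerpt above are ACC, NMI, and ARI on two benchmarks. For reference, here is a minimal Python sketch of how these clustering metrics are conventionally computed, assuming numpy, scipy, and scikit-learn; it is illustrative only and not code from any of the cited papers. Because cluster indices are arbitrary, ACC first maps clusters to ground-truth labels with a Hungarian matching.

import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

def clustering_accuracy(y_true, y_pred):
    """Best-match accuracy: map cluster ids to labels via Hungarian matching."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    n = max(y_true.max(), y_pred.max()) + 1
    # Contingency counts: cost[i, j] = #samples in cluster i with true label j
    cost = np.zeros((n, n), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1
    # Maximizing matched samples = minimizing the negated counts
    row_ind, col_ind = linear_sum_assignment(-cost)
    return cost[row_ind, col_ind].sum() / y_true.size

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([1, 1, 0, 0, 2, 2])  # clusters permuted relative to labels
acc = clustering_accuracy(y_true, y_pred)           # 1.0 after matching
nmi = normalized_mutual_info_score(y_true, y_pred)  # permutation-invariant
ari = adjusted_rand_score(y_true, y_pred)           # chance-corrected
print(f"ACC={acc:.3f}  NMI={nmi:.3f}  ARI={ari:.3f}")

All three metrics are invariant to relabeling of clusters, which is why a permuted prediction in the usage example still scores 1.0.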
“…While we focus on embedding using variational autoencoders, an open direction for future work could involve embedding hierarchical structure using other representation learning methods [3,10,11,15,57,63,67,79,85]. Another direction is to better understand the similarities and differences between learned embeddings, comparison-based methods, and ordinal relations [22,28,37,39].…”
Section: Discussion (mentioning, confidence: 99%)
“…We aim to train the style embedding network such that the output embeddings of similar classes are close together. While Meshry et al. [25] used a triplet loss, selecting triplets with a style distance metric, a series of works has explored different triplet mining techniques and losses [26,27,28,29]. In this work, we train the style encoder using the Easy Positive Hard Negative triplet mining proposed by Xuan et al. [29].…”
Section: Framework (mentioning, confidence: 99%)
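For context on the mining strategy named in this excerpt, below is a minimal, hypothetical PyTorch sketch in the spirit of Easy Positive Hard Negative mining (Xuan et al. [29]): each anchor is paired with its nearest same-class embedding (the easy positive) and its nearest different-class embedding (the hard negative). The margin value and the use of a standard triplet hinge loss are assumptions for illustration, not details taken from the cited works.

import torch
import torch.nn.functional as F

def ephn_triplet_loss(embeddings, labels, margin=0.2):  # margin is assumed
    emb = F.normalize(embeddings, dim=1)   # unit-norm, so distance tracks cosine
    dist = torch.cdist(emb, emb)           # pairwise Euclidean distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool)

    # Easy positive: nearest neighbor with the same label (excluding self)
    pos_dist = dist.masked_fill(~same | eye, float("inf")).min(dim=1).values
    # Hard negative: nearest neighbor with a different label
    neg_dist = dist.masked_fill(same, float("inf")).min(dim=1).values

    # Keep only anchors that have at least one positive in the batch
    valid = torch.isfinite(pos_dist)
    return F.relu(pos_dist[valid] - neg_dist[valid] + margin).mean()

emb = torch.randn(8, 16, requires_grad=True)   # a batch of 8 style embeddings
labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
loss = ephn_triplet_loss(emb, labels)
loss.backward()
print(loss.item())

Pairing the closest positive with the closest negative avoids the collapsed or noisy gradients that hard-positive mining can produce, which is the motivation Xuan et al. give for the easy-positive choice.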