Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence 2021
DOI: 10.24963/ijcai.2021/133

Unsupervised Hashing with Contrastive Information Bottleneck

Abstract: Many unsupervised hashing methods are implicitly built on the idea of reconstructing the input data, which encourages the hash codes to retain as much information about the original data as possible. However, this requirement may force the model to spend much of its effort reconstructing irrelevant background information, while failing to preserve the discriminative semantic information that matters more for the hashing task. To tackle this problem, inspired by the recent success of contrastive learning…
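The truncated sentence above points to contrastive learning as the alternative to reconstruction. As a minimal, hypothetical sketch of that idea (the layer shapes, sigmoid relaxation, straight-through binarization, and NT-Xent loss below are assumptions, not the authors' exact model):

import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveHashHead(nn.Module):
    """Maps backbone features to K-bit codes; hypothetical layer sizes."""
    def __init__(self, feat_dim=512, n_bits=64):
        super().__init__()
        self.fc = nn.Linear(feat_dim, n_bits)

    def forward(self, feats):
        probs = torch.sigmoid(self.fc(feats))   # per-bit probabilities
        hard = (probs > 0.5).float()            # binarized {0, 1} codes
        # straight-through estimator: forward pass uses the hard bits,
        # gradients flow back through the soft probabilities
        return hard + probs - probs.detach()

def nt_xent(z1, z2, temperature=0.5):
    """Standard NT-Xent loss over two augmented views of the same batch."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)
    sim = z @ z.t() / temperature               # (2N, 2N) similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float('-inf'))       # drop self-similarity
    # positives: view i of image k matches view i+N of the same image
    targets = torch.cat([torch.arange(n, 2 * n),
                         torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

In this reading, loss = nt_xent(head(f1), head(f2)) over two augmented views replaces a reconstruction objective, so the codes only need to keep what distinguishes images from each other, not what is required to repaint their backgrounds.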

Cited by 41 publications (29 citation statements) · References 0 publications
“…Unsupervised Hashing: Deep hashing methods with Convolutional Neural Networks (CNNs) (Krizhevsky, Sutskever, and Hinton 2012; Simonyan and Zisserman 2015; He et al. 2016) usually perform better than non-deep hashing methods. Existing deep hashing methods can be categorized into generative (Dai et al. 2017; Duan et al. 2017; Zieba et al. 2018; Song et al. 2018; Dizaji et al. 2018; Shen, Liu, and Shao 2019; Shen et al. 2020; Li and van Gemert 2021; Qiu et al. 2021) or discriminative (Lin et al. 2016; Huang et al. 2017; Su et al. 2018; Chen, Cheung, and Wang 2018; Yang et al. 2018, 2019; Tu, Mao, and Wei 2020) approaches. Most of them impose various constraints (i.e., loss or regularization terms), such as pointwise constraints: (i) quantization error (Duan et al. 2017; Chen, Cheung, and Wang 2018), (ii) even bit distribution (Zieba et al. 2018; Shen, Liu, and Shao 2019), (iii) bit irrelevance (Dizaji et al. 2018), (iv) maximizing mutual information between features and codes (Li and van Gemert 2021; Qiu et al. 2021); and pairwise constraints: (v) preserving similarity among continuous feature vectors (Su et al. 2018; Yang et al. 2018, 2019; Tu, Mao, and Wei 2020).…”
Section: Related Work (mentioning)
confidence: 99%
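For concreteness, the pointwise and pairwise constraints (i)-(iii) and (v) enumerated in the quotation above can be read as simple loss terms. A hedged sketch, assuming relaxed codes in (-1, 1); the exact formulations differ across the cited papers, and (iv), mutual-information maximization, needs an estimator too involved for a short sketch:

import torch
import torch.nn.functional as F

def quantization_loss(codes):
    # (i) quantization error: pull relaxed codes in (-1, 1) toward the
    # binary vertices {-1, +1}
    return ((codes.abs() - 1.0) ** 2).mean()

def bit_balance_loss(codes):
    # (ii) even bit distribution: each bit should be +1 on roughly half
    # of the batch, i.e. its batch mean should be near zero
    return (codes.mean(dim=0) ** 2).mean()

def bit_irrelevance_loss(codes):
    # (iii) bit irrelevance: push the empirical bit-correlation matrix
    # toward the identity so bits carry non-redundant information
    n, k = codes.shape
    corr = codes.t() @ codes / n
    return ((corr - torch.eye(k, device=codes.device)) ** 2).mean()

def similarity_preserving_loss(codes, feats):
    # (v) pairwise constraint: match the cosine-similarity structure of
    # the codes to that of the original continuous features
    s_feat = F.normalize(feats, dim=1) @ F.normalize(feats, dim=1).t()
    s_code = F.normalize(codes, dim=1) @ F.normalize(codes, dim=1).t()
    return ((s_code - s_feat) ** 2).mean()

A typical training objective would add several of these, weighted, to the main hashing loss.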
“…We compared our SSCQ and SSCQ-p with three types of classic and state-of-the-art unsupervised image retrieval methods, including (1) shallow methods with input features extracted from an ImageNet pre-trained VGG16.…”

Results spilled from the accompanying comparison table (three settings per method; the column headers were lost in extraction):

Shallow (VGG16 features):
(unnamed shallow method): 48.9 / 53.0 / 62.7
PQ [19]: 65.4 / 67.4 / 68.6
ITQ† [11]: 68.0 / 70.9 / 72.8
OPQ [10]: 65.7 / 68.4 / 69.1

Deep pre-trained unsupervised:
DeepBit [29]: 39.2 / 40.3 / 42.9
GreedyHash [40]: 63.3 / 69.1 / 73.1
SSDH [46]: 58.0 / 59.3 / 61.0
CIBHash† [35]: 79.5 / 81.2 / 81.7
Bi-half [27]: 76.9 / 78.3 / 79.9
MeCoQ† [41]: 77.2 / 81.5 / 82.3

SSCQ-p (ours): 80.3 / 81.9 / 82.6
Section: Comparison With the State-of-the-art (mentioning)
confidence: 99%
“…Earlier studies rely heavily on manual annotations, which makes them difficult to apply in real-world scenarios due to high labor costs. As a result, unsupervised deep hashing [27,22,23,36] has gradually become the major research direction in this field, alongside the recent boom in unsupervised learning [3,13,4,26,2,12]. The key difficulty with unsupervised hashing is that, lacking supervised information, the ad-hoc encoding process does not extract the information that matters for hashing.…”
Section: Introduction (mentioning)
confidence: 99%
“…which constructs similar and dissimilar instances and learns discrete representations by encouraging the model to pull similar instances together and push dissimilar instances apart. Using only the most fundamental ideas of contrastive learning, existing contrastive methods [27,23,36] have achieved significant success. Despite this, most current methods focus mainly on adjusting the contrastive loss to fit the hash-learning criterion [27,23].…”
Section: Introduction (mentioning)
confidence: 99%
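The pull-together/push-apart mechanics described in the quotation above amount to: make two stochastic augmentations of each image the "similar" pair, and treat every other image in the batch as "dissimilar". A hypothetical training step, reusing the ContrastiveHashHead and nt_xent sketches given after the abstract (the augmentation recipe and wiring are assumptions, not a specific cited method):

import torch
from torchvision import transforms

# Two stochastic views of each image form the "similar" pair; all other
# images in the batch act as "dissimilar" instances.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
    transforms.ToTensor(),
])

def training_step(backbone, head, pil_images, optimizer):
    # hypothetical wiring; head is a ContrastiveHashHead and nt_xent is
    # the loss sketched earlier in this report
    v1 = torch.stack([augment(img) for img in pil_images])
    v2 = torch.stack([augment(img) for img in pil_images])
    z1, z2 = head(backbone(v1)), head(backbone(v2))
    loss = nt_xent(z1, z2)  # pull positives together, push the rest apart
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()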