2016
DOI: 10.1007/978-3-319-46454-1_14
Learning to Hash with Binary Deep Neural Network

Abstract: This work proposes deep network models and learning algorithms for unsupervised and supervised binary hashing. Our novel network design constrains one hidden layer to directly output the binary codes. This addresses a challenging issue in some previous works: optimizing non-smooth objective functions due to binarization. Moreover, we incorporate independence and balance properties in the direct and strict forms in the learning. Furthermore, we include similarity preserving property in our objective function. O…
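The independence and balance properties mentioned in the abstract can be expressed as penalty terms on a matrix of binary codes. The snippet below is a minimal sketch under assumed notation (B as an n-by-L matrix of codes in {-1, +1}); it illustrates the general idea, not the paper's exact formulation:

```python
import numpy as np

def balance_penalty(B):
    # Balanced bits: each bit is -1 on half the samples, so every
    # column of B should sum to (approximately) zero.
    return float(np.sum(B.sum(axis=0) ** 2))

def independence_penalty(B):
    # Independent bits: the normalized Gram matrix B^T B / n should be
    # close to the identity (distinct bits are uncorrelated).
    n, L = B.shape
    G = B.T @ B / n
    return float(np.linalg.norm(G - np.eye(L), "fro") ** 2)

# Demo on random codes: n = 1000 samples, L = 16 bits.
rng = np.random.default_rng(0)
B = rng.choice([-1.0, 1.0], size=(1000, 16))
print(balance_penalty(B), independence_penalty(B))
```

A perfectly balanced and independent code (e.g. the rows of a Hadamard matrix repeated over the dataset) drives both penalties to zero.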


Cited by 146 publications (145 citation statements)
References 20 publications
“…On the basis of DH, Deepbit adds another constraint, rotation invariance, to the final loss function, and achieves a slight boost in performance. Do et al. [41] propose a Binary Deep Neural Network (BDNN), which has both unsupervised and supervised versions. BDNN introduces one hidden layer to directly output the binary codes and then utilises the binary code layer to reconstruct the original feature.…”
Section: A. Deep Hashing Methods
confidence: 99%
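The reconstruction idea described in the statement above, using the binary code layer's output to recover the input features, can be sketched in a few lines. Here a linear decoder is fit to map codes back to features; the variable names and the closed-form least-squares decoder are illustrative assumptions, not BDNN's actual network:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 8))           # original features: n samples, d dims
B = np.sign(rng.normal(size=(100, 4)))  # binary codes in {-1, +1}: L = 4 bits

# With the codes held fixed, the best linear decoder W minimizes
# ||X - B W||_F^2, an ordinary least-squares problem with a closed form.
W, *_ = np.linalg.lstsq(B, X, rcond=None)
recon_error = float(np.linalg.norm(X - B @ W) ** 2)
```

In an autoencoder-style hashing model this reconstruction error would be one term of the training objective, encouraging the binary codes to retain the information in the input.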
“…We systematically compare our method with eight state-of-the-art non-deep methods: ITQ [22], PCAH [20], LSH [13], DSH [55], SpH [44], SH [19], AGH [21], and SELVE [56], and three deep methods: DH [39], Deepbit [40], and UH-BDNN [41], on the retrieval task; all eleven methods are unsupervised. All of the non-deep methods and UH-BDNN [41] in our experiments use the same VGG [58] fc7 feature as our method, while DH [39] and Deepbit [40] use the same settings as in their original papers.…”
Section: B. Experimental Settings, 1) Baselines
confidence: 99%
“…Do et al. [25] have addressed the problem of learning binary hash codes for large-scale image search using a deep model that tries to preserve similarity, balance, and independence of images. Two sub-optimizations during the learning process make it possible to handle the binary constraints efficiently.…”
Section: Neural Network
confidence: 99%
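The "two sub-optimizations" mentioned above can be sketched generically as alternating between a closed-form binary step and a continuous update of the network parameters. The linear encoder, penalty formulation, and step size below are illustrative assumptions, not the algorithm of Do et al.:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 6))        # input features: 50 samples, 6 dims
W = 0.1 * rng.normal(size=(6, 3))   # hypothetical linear "encoder" (3 bits)
step = 0.05                         # step size for the continuous update

def binarization_gap(W, X):
    # Squared distance between the continuous output X W and its
    # nearest binary code, i.e. the penalty ||X W - sign(X W)||_F^2.
    H = X @ W
    B = np.where(H >= 0, 1.0, -1.0)
    return float(np.linalg.norm(H - B) ** 2)

gap_before = binarization_gap(W, X)
for _ in range(30):
    H = X @ W
    # Sub-problem 1: with W fixed, the optimal binary codes are obtained
    # in closed form by elementwise sign.
    B = np.where(H >= 0, 1.0, -1.0)
    # Sub-problem 2: with B fixed, take one gradient step on the
    # smooth penalty ||X W - B||_F^2.
    W -= step * 2.0 * X.T @ (X @ W - B) / len(X)
gap_after = binarization_gap(W, X)
```

Alternating in this way keeps each sub-problem tractable: the binary step never requires differentiating through a sign function, and the continuous step is ordinary smooth optimization.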