2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2015.7298947

Simultaneous feature learning and hash coding with deep neural networks

Abstract: Similarity-preserving hashing is a widely used method for nearest neighbour search in large-scale image retrieval tasks. In most existing hashing methods, an image is first encoded as a vector of hand-engineered visual features, followed by a separate projection or quantization step that generates the binary codes. However, such visual feature vectors may not be optimally compatible with the coding process, thus producing sub-optimal hash codes. In this paper, we propose a deep architecture for supervised…
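For concreteness, the "two-stage" pipeline the abstract criticises can be sketched as follows (a minimal illustration, not the paper's method: the feature dimensionality, bit count, and LSH-style random projection are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def two_stage_hash(features: np.ndarray, n_bits: int = 48) -> np.ndarray:
    """Two-stage baseline: features come from a separate, fixed extractor,
    then are binarized by an independent random projection (LSH-style).
    Nothing in the coding step adapts to the features, which is the
    incompatibility the paper targets."""
    d = features.shape[1]
    projection = rng.standard_normal((d, n_bits))  # fixed, not learned jointly
    return (features @ projection > 0).astype(np.uint8)

# e.g. 512-d hand-engineered descriptors (GIST, bag-of-words, ...)
feats = rng.standard_normal((4, 512))
codes = two_stage_hash(feats)  # shape (4, 48), entries in {0, 1}
```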

Cited by 790 publications (712 citation statements).
References 16 publications.
Citation types: 5 supporting, 696 mentioning, 0 contrasting.
“…It was shown in [34] that replacing a convolutional layer that has a large kernel with multiple convolutional layers that have smaller kernels may not only reduce the number of parameters but also create more nonlinear mappings and enhance the expressive ability of the network. In this paper, the VGG-16 (VGG network with 16 layers) model is taken as the basic framework, and three identical VGG-16 structures make up the feature extraction network.…”
Section: Deep Hash Network Structure With Distance Constraint and Qua… (mentioning)
confidence: 99%
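The parameter saving described here is easy to check. A small sketch (PyTorch assumed; the channel count of 64 is illustrative) compares one 5×5 convolution against two stacked 3×3 convolutions covering the same receptive field:

```python
import torch.nn as nn

c = 64  # illustrative channel count
# One 5x5 convolution vs. two stacked 3x3 convolutions: same 5x5
# receptive field, fewer parameters, and one extra nonlinearity.
big = nn.Conv2d(c, c, kernel_size=5, padding=2)
small = nn.Sequential(
    nn.Conv2d(c, c, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(c, c, kernel_size=3, padding=1), nn.ReLU(inplace=True),
)

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(big))    # 5*5*c*c + c     = 102464
print(count(small))  # 2*(3*3*c*c + c) =  73856
```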
“…The other is the "one-stage" network form: Lai et al. modified the CNNH (Convolutional Neural Network Hashing) algorithm and proposed the DNNH (Deep Neural Network Hashing) algorithm, which incorporates hash-code generation into the network framework, performs image feature learning and hash function learning simultaneously, and optimizes overall performance through feedback. Compared with "two-stage" deep hashing algorithms, such a method often yields better results [34].…”
Section: Introduction (mentioning)
confidence: 99%
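A minimal sketch of this "one-stage" idea follows (the tiny trunk, the sigmoid relaxation, and the triplet ranking loss are illustrative stand-ins, not the actual DNNH architecture):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OneStageHashNet(nn.Module):
    """Feature learning and hash coding in one network: a CNN trunk
    followed by a hash layer squashed into (0, 1) by a sigmoid."""
    def __init__(self, n_bits: int = 48):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.hash = nn.Linear(32 * 4 * 4, n_bits)

    def forward(self, x):
        # relaxed codes in (0, 1); thresholding at 0.5 gives the binary code
        return torch.sigmoid(self.hash(self.trunk(x)))

def triplet_ranking_loss(q, pos, neg, margin=1.0):
    # Push the query code closer to the similar image's code than to the
    # dissimilar one's, by at least `margin`.
    return F.relu(margin + (q - pos).pow(2).sum(1) - (q - neg).pow(2).sum(1)).mean()

net = OneStageHashNet()
anchor, similar, dissimilar = (torch.rand(2, 3, 32, 32) for _ in range(3))
loss = triplet_ranking_loss(net(anchor), net(similar), net(dissimilar))
loss.backward()  # one gradient step updates features and hash layer jointly
```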
“…Following [19], we use only the images associated with the 21 most frequent classes. Each of these classes contains at least 5,000 images.…”
Section: Datasets (mentioning)
confidence: 99%
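A small sketch of this filtering step, assuming the dataset is available as (image_id, label-set) pairs in the NUS-WIDE multi-label style (the helper name is hypothetical):

```python
from collections import Counter

def keep_top_classes(samples, k=21):
    """samples: list of (image_id, label_set) pairs."""
    counts = Counter(label for _, labels in samples for label in labels)
    top = {c for c, _ in counts.most_common(k)}
    # keep images carrying at least one of the k most frequent classes,
    # restricting each surviving image's labels to those classes
    return [(img, labels & top) for img, labels in samples if labels & top]

demo = [("a", {"sky", "water"}), ("b", {"sky"}), ("c", {"rare"})]
print(keep_top_classes(demo, k=2))  # "c" is dropped if "rare" misses the top 2
```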
“…Owing to their advantages in speed and storage, many hashing methods [9–21] have been proposed to improve the performance of approximate nearest neighbor search. Data-independent hashing methods use unlabeled data to learn a set of hash functions.…”
Section: Related Work (mentioning)
confidence: 99%
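The speed and storage advantage mentioned here comes from packing each code into a few machine words and comparing codes with XOR plus popcount; a toy sketch (Python 3.10+ for int.bit_count) is given below.

```python
def hamming(a: int, b: int) -> int:
    # XOR marks the differing bits; popcount counts them.
    return (a ^ b).bit_count()  # int.bit_count() needs Python >= 3.10

database = [0b1011_0010, 0b1101_0000, 0b0000_0011]  # toy 8-bit codes
query = 0b1011_0000
nearest = min(database, key=lambda code: hamming(query, code))
print(f"{nearest:08b}")  # 10110010, one bit away from the query
```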
“…To overcome this limitation, CNN-based hashing methods [9,10,18–23] have been proposed. The methods of [9,10,19,22,23] guide the network to learn approximately binary codes that preserve the similar/dissimilar relations of input image triplets. [18] first decomposes the semantic similarity matrix of the training images into approximate hash codes, and then uses these approximate codes together with the image labels to train a deep convolutional network.…”
Section: Related Work (mentioning)
confidence: 99%
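As a sketch of the decomposition step attributed to [18]: the original CNNH fits approximate codes by coordinate descent, but a truncated eigendecomposition (a stand-in used here for brevity, with illustrative data) conveys the same idea of factoring a ±1 similarity matrix into approximate binary codes.

```python
import numpy as np

def codes_from_similarity(S: np.ndarray, n_bits: int) -> np.ndarray:
    """Factor S (entries +1 similar / -1 dissimilar) so that code inner
    products roughly reproduce S, then binarize by sign."""
    vals, vecs = np.linalg.eigh(S.astype(float))
    top = np.argsort(vals)[-n_bits:]                # keep largest eigenvalues
    H = vecs[:, top] * np.sqrt(np.clip(vals[top], 0.0, None))
    return np.where(H >= 0, 1, -1).astype(np.int8)  # approximate +/-1 codes

labels = np.array([0, 0, 1, 1])                     # two illustrative classes
S = np.where(labels[:, None] == labels[None, :], 1, -1)
print(codes_from_similarity(S, n_bits=2))           # rows agree within classes
```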