2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI)
DOI: 10.1109/ictai.2019.00090

Triplet Deep Hashing with Joint Supervised Loss for Fast Image Retrieval

Cited by 3 publications (8 citation statements). References 16 publications.
“…5) Universality on different hashing models: We argue that our proposed attack and defense algorithms are generic to the most popular hashing models with different backbones. To verify this point, we carry out non-targeted attacks (HAG and SDHA) on other hashing methods, including DPSH [4], HashNet [5] and CSQ [7]. The results are summarized in Table VIII.…”
Section: Analysis on Hyper-parameters
confidence: 99%
“…(Semi-)supervised methods (KSH [33], SDH [34]) utilize labeled information to improve the binary codes. Recently, inspired by powerful deep networks, several deep hashing methods (CNNH [35], DPSH [36], SSGAH [37], ABML [38]) have been proposed and achieve much better performance. They usually utilize a CNN to extract meaningful features, formulate the hashing function as a fully-connected layer with a tanh/sigmoid activation function, and quantize the features with the sign function.…”
Section: Hashing Algorithm
confidence: 99%
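The pipeline this statement describes (CNN features, a fully-connected hash layer with tanh, sign quantization) can be made concrete. Below is a minimal PyTorch sketch under assumed details: the `DeepHashNet` name, the ResNet-18 backbone, and the 48-bit code length are illustrative choices, not taken from the cited papers.

```python
# Minimal sketch of the deep hashing pattern described above (hypothetical
# details, not any specific paper's architecture): a CNN backbone extracts
# features, a fully-connected "hash layer" with tanh produces relaxed codes
# for training, and sgn quantizes them to binary codes for retrieval.
import torch
import torch.nn as nn
from torchvision import models

class DeepHashNet(nn.Module):
    def __init__(self, code_length: int = 48):
        super().__init__()
        backbone = models.resnet18(weights=None)  # any CNN feature extractor
        backbone.fc = nn.Identity()               # drop the classification head
        self.backbone = backbone
        self.hash_layer = nn.Linear(512, code_length)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Relaxed codes in (-1, 1), differentiable for training.
        return torch.tanh(self.hash_layer(self.backbone(x)))

    @torch.no_grad()
    def encode(self, x: torch.Tensor) -> torch.Tensor:
        # Binary codes in {-1, +1} for indexing and Hamming-distance retrieval.
        return torch.sign(self.forward(x))
```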
“…(2) is discrete and the Hamming distance in Eq. (2) is not differentiable; a natural relaxation [36] is utilized in Eq. (5) by replacing sgn with tanh and changing the Hamming distance to the inner-product distance.…”
Section: Code Pyramid
confidence: 99%
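The relaxation referenced here rests on the identity d_H(b_i, b_j) = (K - b_i^T b_j) / 2 for K-bit codes in {-1, +1}^K, so minimizing Hamming distance is equivalent to maximizing the inner product. A short sketch of this standard form follows; the function names are illustrative.

```python
# Sketch of the standard relaxation: tanh replaces the non-differentiable
# sgn, and the Hamming distance is expressed through the inner product via
#   d_H(b_i, b_j) = (K - <b_i, b_j>) / 2   for b_i, b_j in {-1, +1}^K.
import torch

def relaxed_codes(logits: torch.Tensor) -> torch.Tensor:
    # Differentiable surrogate for sgn(logits); approaches sgn as the
    # magnitudes of the logits grow.
    return torch.tanh(logits)

def hamming_via_inner_product(u: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    # Exact for binary codes; a smooth approximation for relaxed codes.
    K = u.shape[-1]
    return 0.5 * (K - (u * v).sum(dim=-1))
```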
“…As the first deep hashing algorithm, CNNH [3] consists of two independent stages, i.e., designing approximate hash codes for the training data and learning feature representations through a DNN. Recent hashing methods [4], [5], [7]-[12] focused on the design of end-to-end strategies and loss functions to improve the efficacy of hashing learning. For example, DPSH [4] integrated image representation and hash coding in a unified framework and adopted a pairwise loss to preserve the semantic similarity between data objects.…”
Section: Related Work: Deep Hashing-Based Retrieval
confidence: 99%
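To make the DPSH-style pairwise objective concrete, here is a hedged sketch of its commonly cited form: a negative pairwise log-likelihood with theta_ij = 0.5 * u_i^T u_j plus a quantization penalty. The function name and the eta default are illustrative assumptions, not the authors' code.

```python
# Hedged sketch of a DPSH-style pairwise loss (assumed form): maximize the
# likelihood of pairwise similarity labels s_ij in {0, 1} under
# theta_ij = 0.5 * <u_i, u_j>, and penalize the gap between relaxed codes
# and their binary quantization.
import torch
import torch.nn.functional as F

def pairwise_hashing_loss(u: torch.Tensor, s: torch.Tensor,
                          eta: float = 0.1) -> torch.Tensor:
    # u: (N, K) relaxed codes from the network; s: (N, N) labels in {0, 1}.
    theta = 0.5 * (u @ u.t())
    log_likelihood = s * theta - F.softplus(theta)   # log p(s_ij | u_i, u_j)
    quantization = (u - torch.sign(u)).pow(2).sum()  # pull codes toward {-1, +1}
    return -log_likelihood.sum() + eta * quantization
```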
“…Recent hashing methods [4], [5], [7]-[12] focused on the design of end-to-end strategies and loss functions to improve the efficacy of hashing learning. For example, DPSH [4] integrated image representation and hash coding in a unified framework and adopted a pairwise loss to preserve the semantic similarity between data objects. HashNet [5] proposed a continuous scale strategy to tackle the optimization problem in the discrete Hamming space, and alleviated the data imbalance with a weighted pairwise loss.…”
Section: Related Work: Deep Hashing-Based Retrieval
confidence: 99%
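HashNet's two ideas can likewise be sketched: continuation, training with tanh(beta * z) and growing beta so the activation approaches sgn, and a weighted pairwise loss that up-weights the rarer similar pairs. The details below (the weighting scheme, the alpha scaling) are assumptions based on the commonly cited formulation, not code from the paper.

```python
# Hedged sketch of a HashNet-style objective (assumed formulation): a
# continuation parameter beta sharpens tanh toward sgn over training, and
# class-imbalance weights rebalance similar vs. dissimilar pairs.
import torch
import torch.nn.functional as F

def weighted_pairwise_loss(z: torch.Tensor, s: torch.Tensor,
                           beta: float, alpha: float = 1.0) -> torch.Tensor:
    # z: (N, K) raw hash-layer outputs; s: (N, N) similarity labels in {0, 1}.
    h = torch.tanh(beta * z)          # continuation: beta grows during training
    theta = alpha * (h @ h.t())
    n_sim = s.sum().clamp(min=1.0)    # number of similar pairs
    n_dis = (1.0 - s).sum().clamp(min=1.0)
    weights = torch.where(s > 0, s.numel() / n_sim, s.numel() / n_dis)
    # Weighted negative log-likelihood of the pairwise similarities.
    return (weights * (F.softplus(theta) - s * theta)).mean()
```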