2020 13th International Conference on Intelligent Computation Technology and Automation (ICICTA) 2020
DOI: 10.1109/icicta51737.2020.00066
A deep hashing method based on attention module for image retrieval

Cited by 2 publications (2 citation statements)
References 7 publications
“…Jin et al. [37] put forward deep ordinal hashing (DOH), which uses effective spatial attention to emphasize relevant information in local regions. Long et al. [26] combined an attention mechanism with a deep hashing retrieval algorithm; spatial and channel attention modules are embedded in the CNN to improve the expressiveness of image features. Yang et al. [38] presented a deep hashing algorithm with parameter-free attention to improve the extraction of image semantics, where an energy function is used to assign attention weights to feature maps.…”
Section: Supervised Hashing
confidence: 99%
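The parameter-free attention mentioned in this statement assigns weights through an energy function rather than learned parameters. As a rough illustration only (not necessarily the exact formulation of [38]), the following PyTorch-style sketch reweights each activation by how strongly it deviates from its channel mean, in the spirit of energy-based, parameter-free attention:

import torch

def energy_based_attention(x, eps=1e-4):
    # x: feature maps of shape (N, C, H, W)
    n = x.shape[2] * x.shape[3] - 1
    # squared deviation of every activation from its channel mean
    d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)
    # channel-wise variance estimate over the spatial dimensions
    v = d.sum(dim=(2, 3), keepdim=True) / n
    # inverse energy: distinctive activations (large deviation) receive larger weights
    e_inv = d / (4 * (v + eps)) + 0.5
    # reweight the feature maps; no learnable parameters are introduced
    return x * torch.sigmoid(e_inv)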
“…The accuracy of the hash code is reduced by the quantization error that inevitably arises when the hash function maps continuous features to binary codes. To compensate for deficiencies in feature extraction, many researchers introduce attention mechanisms [2,22,26,27].…”
Section: Introduction
confidence: 99%
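For context on the quantization error referred to above: deep hashing networks typically relax binary codes to continuous outputs during training and then map them to {-1, +1} with a sign function; the gap between the relaxed outputs and their signs is the quantization error. A minimal sketch, illustrative only and not tied to any cited method, assuming PyTorch:

import torch

def binarize_with_quantization_error(features):
    # features: continuous hash-layer outputs of shape (N, K), e.g. tanh-activated
    codes = torch.sign(features)              # binary mapping to {-1, +1}
    # quantization error: discrepancy between relaxed outputs and binary codes;
    # many deep hashing losses add a penalty on this term during training
    q_err = (features - codes).pow(2).mean()
    return codes, q_err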