2021
DOI: 10.3390/info12070285
Deep Hash with Improved Dual Attention for Image Retrieval

Abstract: Recently, deep learning to hash has been applied extensively to image retrieval because of its low storage cost and fast query speed. However, when existing hashing methods use a convolutional neural network (CNN) to extract image semantic features, the features are insufficient and imbalanced: they carry no contextual information and lack relevance to one another. Furthermore, relaxing the hash codes during training leads to an inevitable quantization error. In order to sol…
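The quantization error the abstract points to can be made concrete. The sketch below is a hypothetical PyTorch illustration, not the paper's code; `HashHead`, `feat_dim`, and `n_bits` are assumed names. It relaxes binary codes to a tanh output during training and measures the gap to the sign-binarized codes used at query time:

```python
import torch
import torch.nn as nn

class HashHead(nn.Module):
    """Maps backbone features to n_bits continuous codes in (-1, 1)."""
    def __init__(self, feat_dim: int = 2048, n_bits: int = 48):
        super().__init__()
        self.fc = nn.Linear(feat_dim, n_bits)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Continuous relaxation of the binary codes {-1, +1}.
        return torch.tanh(self.fc(feats))

feats = torch.randn(8, 2048)          # stand-in for CNN features
u = HashHead()(feats)                 # relaxed codes in (-1, 1)
b = torch.sign(u)                     # binary codes used at query time
quant_err = (b - u).pow(2).mean()     # the quantization error the abstract mentions
print(f"quantization error: {quant_err:.4f}")
```

Methods in this line of work typically add a penalty like `quant_err` to the retrieval loss so the relaxed codes are pushed toward the binary vertices.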

Cited by 5 publications (4 citation statements) · References 38 publications
“…Early on, Xia et al. 32 proposed learning semantic features and hash codes separately, with no feedback between the two stages. Recent supervised hashing methods instead design an end-to-end framework that learns features and hash codes simultaneously, such as 31 – 34 . On this basis, Cao et al. 18 selected an activation function that makes the network output continuous hash codes.…”
Section: Related Work (mentioning, confidence: 99%)
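One common realization of such an activation, sketched here under that assumption rather than taken from the cited paper, is a tanh with a slope that is annealed upward during training so the continuous output converges toward binary codes; `scaled_tanh` and `beta` are illustrative names:

```python
import torch

def scaled_tanh(z: torch.Tensor, beta: float) -> torch.Tensor:
    # tanh(beta * z) approaches sign(z) elementwise as beta grows,
    # so the network stays differentiable while its outputs become binary.
    return torch.tanh(beta * z)

z = torch.randn(4, 16)                # pre-activation hash logits
for beta in (1.0, 5.0, 20.0):         # beta is annealed upward in training
    codes = scaled_tanh(z, beta)
    gap = (torch.sign(z) - codes).abs().max()
    print(f"beta={beta:>4}: max |sign(z) - tanh(beta*z)| = {gap:.4f}")
```

For a fixed input the elementwise gap to `sign(z)` shrinks monotonically as `beta` increases, which is why annealing lets the end-to-end network hand over nearly binary codes at the end of training.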
“…Hence, Li et al. 19 embedded channel attention and spatial attention into the CNN to obtain sufficient semantic features. Yang et al. 34 improved the feature map in the dual attention module and combined it with the backbone network. However, these modules increase the complexity of the network model and slow down training.…”
Section: Related Work (mentioning, confidence: 99%)
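For concreteness, below is a minimal sketch of channel and spatial attention in the CBAM style this statement alludes to. It is an illustrative module, not the exact design of Li et al. 19 or Yang et al. 34; all class and parameter names are assumptions:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Reweights feature channels using pooled spatial statistics."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))      # average-pooled descriptor
        mx = self.mlp(x.amax(dim=(2, 3)))       # max-pooled descriptor
        w = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * w                             # channel reweighting

class SpatialAttention(nn.Module):
    """Reweights spatial positions using pooled channel statistics."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)        # pool over channels
        mx = x.amax(dim=1, keepdim=True)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w                              # spatial reweighting

x = torch.randn(2, 64, 32, 32)                    # feature map from a CNN stage
out = SpatialAttention()(ChannelAttention(64)(x)) # channel then spatial attention
```

The extra multiply-accumulates and the sequential dependence of the two branches illustrate the statement's caveat: the attention modules add model complexity and slow training.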