2020
DOI: 10.1007/s11432-019-2721-0

Multi-attention based cross-domain beauty product image retrieval

Cited by 7 publications (5 citation statements)
References 7 publications
“…Next, we assess the contributions of each component in our method to verify their effectiveness. Finally, we compare our approach with state-of-the-art methods, conducting evaluations on three benchmark datasets (CUB200-2011 [24], Perfect500k [25], and Stanford Online Products [26]) to demonstrate the superiority of our method.…”
Section: Results
confidence: 99%
“…It is worth noting that the text reconstructed by the proposed method captures the key information of the text descriptions, compensating for the interference caused by redundant or irrelevant information and noise. In the future, we plan to apply the attention mechanism [36] to improve the accuracy of the text-reconstruction network in key information extraction [37].…”
Section: Results
confidence: 99%
“…This section describes the multi-attention mechanism used in our framework, which has shown a high capability for extracting features and dependencies from modalities (Zhang et al. 2022; Wang et al. 2020; Song et al. 2022).…”
Section: Modality Feature Extraction and Fusion
confidence: 99%
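The statements above reference the multi-attention mechanism only at a high level. As a rough illustration of how multi-head attention can fuse feature vectors from several branches or modalities into a single retrieval embedding, a minimal PyTorch sketch follows. The class name MultiAttentionPool, the dimensions, the residual-plus-norm pooling, and the use of torch.nn.MultiheadAttention are illustrative assumptions, not the architecture published in the cited paper.

import torch
import torch.nn as nn

class MultiAttentionPool(nn.Module):
    # Illustrative multi-head attention pooling over a set of feature branches.
    # Each branch (e.g. a spatial region or a modality) contributes one feature
    # vector; attention lets every branch attend to the others before the
    # results are averaged into a single embedding. Sizes are assumptions.
    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=num_heads,
                                          batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, branch_feats: torch.Tensor) -> torch.Tensor:
        # branch_feats: (batch, num_branches, dim)
        attended, _ = self.attn(branch_feats, branch_feats, branch_feats)
        fused = self.norm(branch_feats + attended)  # residual connection + norm
        return fused.mean(dim=1)                    # (batch, dim) pooled embedding

if __name__ == "__main__":
    # Fuse 4 hypothetical branch descriptors for a batch of 2 images.
    feats = torch.randn(2, 4, 512)
    print(MultiAttentionPool()(feats).shape)  # torch.Size([2, 512])

In this sketch the pooled vector could serve as the image descriptor for nearest-neighbour retrieval; the actual fusion and retrieval pipeline in the cited work may differ.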