2021 IEEE International Conference on Big Data (Big Data)
DOI: 10.1109/bigdata52589.2021.9671546
GLOW: Global Weighted Self-Attention Network for Web Search

Cited by 7 publications (4 citation statements)
References 15 publications
“…Since embedding-based retrieval solves the above three challenges, it has been widely used in modern information systems to facilitate new state-of-the-art retrieval quality and performance. Numerous prior studies have concentrated on deep embedding models, from DSSM [21], CDSSM [46], LSTM-RNN [38], and ARC-I [20] to transformer-based embedding models [10,16,39,40,45,53,54]. They have shown impressive gains with brute-force nearest neighbor embedding search on some small datasets as compared with traditional keyword matching.…”
Section: Background and Related Work, 2.1 Web Scale Information Retrieval (mentioning)
confidence: 99%
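The statement above names brute-force nearest-neighbor embedding search as the baseline these deep embedding models are compared with. Below is a minimal sketch of that technique, assuming queries and documents have already been encoded into dense vectors by some embedding model; all names and dimensions are illustrative and not taken from the GLOW paper.

```python
# Minimal sketch of brute-force (exact) nearest-neighbor embedding
# retrieval. Assumes documents and the query are already encoded into
# dense vectors by an upstream model (e.g., a DSSM- or transformer-style
# encoder). Names and dimensions are illustrative.
import numpy as np

def brute_force_search(query_vec: np.ndarray,
                       doc_matrix: np.ndarray,
                       top_k: int = 10) -> list[tuple[int, float]]:
    """Return indices and cosine scores of the top_k closest documents."""
    # Normalize so that the dot product equals cosine similarity.
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_matrix / np.linalg.norm(doc_matrix, axis=1, keepdims=True)
    scores = d @ q                       # one dot product per document
    top = np.argsort(-scores)[:top_k]    # exact ranking, no approximation
    return [(int(i), float(scores[i])) for i in top]

# Usage: 1,000 documents with 128-dimensional embeddings.
rng = np.random.default_rng(0)
docs = rng.normal(size=(1000, 128))
query = rng.normal(size=128)
print(brute_force_search(query, docs, top_k=3))
```

Exact search of this kind scales linearly with corpus size, which is consistent with the statement's observation that the reported gains are on small datasets; web-scale systems typically substitute an approximate nearest-neighbor index.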
“…Zhan et al [96] presented an efficient technique for training dense retrieval models using a combination of hard and soft negative sampling. Meanwhile, Shan et al [97] employed a global weighted self-attention network for web search. Other researchers have explored optimization techniques and innovative methodologies to enhance the performance of dense retrieval models in various scenarios [98], [99], [100], [101], [102], [103].…”
Section: Approximate Nearest Neighbor Search and Negative Contrasti… (mentioning)
confidence: 99%
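The statement above characterizes Shan et al.'s GLOW as a global weighted self-attention network for web search. Below is a minimal sketch of one plausible reading of that idea, in which per-token global weights (for example, IDF- or BM25-derived scores) bias a standard scaled dot-product attention distribution; this is illustrative only and not the paper's exact formulation.

```python
# A minimal sketch of "globally weighted" self-attention: per-token
# global weights (e.g., IDF- or BM25-derived) bias the attention
# distribution toward corpus-important tokens. Illustrative reading,
# not GLOW's exact formulation.
import numpy as np

def weighted_self_attention(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """x: (seq_len, d) token embeddings; w: (seq_len,) global token weights."""
    d = x.shape[1]
    q, k, v = x, x, x                        # single head, no projections
    logits = q @ k.T / np.sqrt(d)            # standard scaled dot-product
    logits = logits + np.log(w + 1e-9)       # multiplicative bias toward heavy tokens
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)  # row-wise softmax
    return probs @ v

tokens = np.random.default_rng(1).normal(size=(5, 16))
idf = np.array([0.2, 2.5, 0.1, 1.8, 0.4])    # hypothetical per-token IDF weights
print(weighted_self_attention(tokens, idf).shape)  # (5, 16)
```

Adding log-weights to the logits is equivalent to multiplying the post-softmax attention by the weights before renormalization, so rare, informative tokens receive proportionally more attention mass.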
“…Even so, combinations of vectorizing and symbolic methods are not often discussed. BISON (Shan et al., 2020) adds a new layer of information through an attention model, assigning a different weight to each token using the BM25 measure. Although remaining in a vector representation while processing queries and online documents raises some of the problems mentioned above, this approach exploits the advantages of both methods: word importance in the corpus and documents, taking word context into account, text dimension reduction, etc.…”
Section: Related Work (mentioning)
confidence: 99%
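The quoted passage credits BISON with weighting tokens via the BM25 measure before feeding them to an attention model. Below is a minimal sketch of standard BM25 per-token weighting that such a layer could consume; the constants k1 and b are the conventional defaults, not values from the paper, and the corpus is a toy example.

```python
# Minimal sketch of BM25 per-token weighting over a toy corpus. Standard
# BM25 formula with the usual default constants; these weights could
# serve as the global token-importance signal described above.
import math
from collections import Counter

def bm25_weights(doc: list[str], corpus: list[list[str]],
                 k1: float = 1.2, b: float = 0.75) -> dict[str, float]:
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N          # average document length
    df = Counter(t for d in corpus for t in set(d))  # document frequency per term
    tf = Counter(doc)                                # term frequency in this doc
    weights = {}
    for term, f in tf.items():
        idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
        norm = f + k1 * (1 - b + b * len(doc) / avgdl)
        weights[term] = idf * f * (k1 + 1) / norm
    return weights

corpus = [["web", "search", "ranking"], ["dense", "retrieval"], ["web", "ranking"]]
print(bm25_weights(["web", "search"], corpus))
```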