2016
DOI: 10.1016/j.ipm.2015.09.002
A query term re-weighting approach using document similarity

Cited by 35 publications (13 citation statements)
References 6 publications
“…The queries prepared by the bioCADDIE organizers are relatively verbose, averaging 15.8 terms (words) per query. Query term weighting and keyword detection for such verbose queries have been shown to be effective in previous work (8, 9). We used one such technique, called Weighted Information Gain (WIG), introduced by Zhou and Croft in reference (10), to detect the most important keywords in the query and boost (assign a higher weight to) these terms.…”
Section: Methodology: Retrieval System Architecture and Implementation (mentioning)
confidence: 97%
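The WIG idea quoted above can be adapted to score individual query terms by how much more probable each term is in the top-ranked documents than in the collection as a whole. The sketch below is a hypothetical illustration of that term-level adaptation, not the cited paper's implementation; all function and parameter names (`wig_term_scores`, `mu`) are assumptions.

```python
import math
from collections import Counter

def wig_term_scores(query_terms, top_docs, collection, mu=2500):
    """Score each query term by a WIG-style signal: the average log-ratio of
    the term's (Dirichlet-smoothed) probability in the top-ranked documents
    to its probability in the whole collection. Higher scores suggest more
    important keywords that deserve a boosted weight."""
    coll_tokens = [t for doc in collection for t in doc]
    coll_counts = Counter(coll_tokens)
    coll_len = len(coll_tokens)
    vocab = len(coll_counts)
    scores = {}
    for term in query_terms:
        # Add-one smoothed collection (background) probability.
        p_coll = (coll_counts[term] + 1) / (coll_len + vocab)
        gain = 0.0
        for doc in top_docs:
            tf = doc.count(term)
            # Dirichlet-smoothed document language model.
            p_doc = (tf + mu * p_coll) / (len(doc) + mu)
            gain += math.log(p_doc / p_coll)
        scores[term] = gain / max(len(top_docs), 1)
    return scores
```

Terms with the highest scores would then receive boosted weights in the reformulated query.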
“…Pseudo-relevance feedback (PRF) is a method used in a branch of automatic query modification techniques. PRF assumes that the initially retrieved documents are relevant, and then uses these documents either to find terms more relevant to the query or simply to re-weight the original query terms (Karisani et al, 2016). Word embeddings (WE) is a common name for a set of techniques used to model languages and extract features of interest.…”
Section: Related Work (mentioning)
confidence: 99%
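The PRF re-weighting described in this excerpt can be illustrated with a classic Rocchio-style update: treat the top-ranked documents as relevant and shift the query's term weights toward their centroid. This is a minimal sketch of that general technique, not the method of the cited paper; the function name and the `alpha`/`beta` parameters are hypothetical.

```python
from collections import Counter

def rocchio_reweight(query_terms, feedback_docs, alpha=1.0, beta=0.75):
    """Rocchio-style pseudo-relevance feedback: keep the original query
    term weights (scaled by alpha) and add beta times the centroid of the
    length-normalized term frequencies of the feedback documents."""
    weights = Counter({t: alpha for t in query_terms})
    for doc in feedback_docs:
        for term, count in Counter(doc).items():
            # Normalize by document length and number of feedback documents.
            weights[term] += beta * count / (len(doc) * len(feedback_docs))
    return dict(weights)
```

Original query terms that also appear in the feedback documents end up with the highest weights, while new feedback terms enter with smaller weights.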
“…Finding neighbors. To find the neighbors of the input query, the document similarity measure [20] is used, in which the input query is matched against the feature library. Because it considers both the frequency and the similarity of the nearest documents, the similarity measure is applied to find similar values.…”
Section: Semantic Classifier Based Features (mentioning)
confidence: 99%
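The neighbor-finding step described above, matching a query against a feature library by document similarity, can be sketched with a standard bag-of-words cosine similarity. The excerpt does not specify the measure used in [20], so this is only an illustrative stand-in; the function names are hypothetical.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two token lists as bag-of-words vectors."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def nearest_neighbors(query_tokens, feature_library, k=3):
    """Return the k feature-library entries most similar to the query."""
    ranked = sorted(feature_library,
                    key=lambda doc: cosine(query_tokens, doc),
                    reverse=True)
    return ranked[:k]
```

Cosine similarity accounts for term frequency through the vector magnitudes, which matches the excerpt's point that both frequency and similarity of nearby documents matter.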