2015
DOI: 10.1002/asi.23351
Analysis of biomedical and health queries: Lessons learned from TREC and CLEF evaluation benchmarks

Abstract: A large body of research work has examined, from both the query side and the user behavior side, the characteristics of medical- and health-related searches. One of the core issues in medical information retrieval (IR) is the diversity of tasks, which leads to a diversity of categories of information needs and queries. From the evaluation perspective, another related and challenging issue is the limited availability of appropriate test collections allowing the experimental validation of medically task-oriented IR technique…

Cited by 13 publications (10 citation statements) · References 54 publications
“…In a clinical information search setting, the shorter the query in terms of words with low hierarchical specificity (refers to "is-a" specificity derived from a medical terminology), the more difficult it is [19].…”

Section: Research Contributions and Hypotheses
confidence: 99%
“…x is the set of Nc weighted concepts associated to query facet Qx resulting from Algorithm 1, SIM(c, d) is the cosine similarity between the TF-IDF vectors of document d and the preferred entry of concept c [19, 15]. With respect to the prioritized aggregation operator principle [11] and according to research hypothesis H3, we compute the PICO importance weights as follows:…”

Section: Computing the Document Relevance Scores
confidence: 99%
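The document–concept matching described in the statement above rests on cosine similarity between TF-IDF vectors. The following is a minimal sketch of that standard computation, not the cited papers' implementation; the toy corpus and function names are assumptions for illustration:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build a sparse TF-IDF vector (term -> weight) for each tokenized doc."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))            # document frequency per term
    idf = {t: math.log(n / df[t]) for t in df}
    vectors = []
    for doc in docs:
        tf = Counter(doc)              # raw term frequency
        vectors.append({t: tf[t] * idf[t] for t in tf})
    return vectors

def cosine_sim(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    if norm_u == 0.0 or norm_v == 0.0:
        return 0.0
    return dot / (norm_u * norm_v)

# Toy example: two overlapping clinical snippets and one unrelated one.
docs = [
    "myocardial infarction treatment aspirin".split(),
    "aspirin dosage myocardial infarction".split(),
    "diabetes insulin therapy".split(),
]
vecs = tfidf_vectors(docs)
print(cosine_sim(vecs[0], vecs[1]))  # shared weighted terms -> positive score
print(cosine_sim(vecs[0], vecs[2]))  # no shared terms -> 0.0
```

In the quoted approach, one of the two vectors would be built from the preferred entry of a concept rather than from a second document, but the similarity computation itself is the same.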
“…IR performance evaluation involves test collections, sampling, topics (queries, tasks) formation, and relevance evaluation, and as a general topic, this area has been widely studied (Corcoglioniti, Dragoni, Rospocher, & Aprosio, 2016; Cormack & Lynam, 2006; Hu, Huang, & Hu, 2012; Järvelin & Kekäläinen, 2002; Koopman, Bruza, Sitbon, & Lawley, 2011; Liu, An, & Huang, 2015; Tamine, Chouquet, & Palmer, 2015; Waitelonis, Exeler, & Sack, 2015; Yilmaz, Kanoulas, & Aslam, 2008). In this article, we study relevance evaluation, and particularly, novelty and diversity evaluation in biomedical IR.…”

Section: Related Work
confidence: 99%
“…Several information retrieval (IR) studies (Hauff, Azzopardi, & Hiemstra, ; Tamine, Chouquet, & Palmer, ) have adopted features such as term frequency and query length to predict the effectiveness of query and retrieval systems. Ayadi et al () and Bashir and Rauber () used these features to predict a correlation between query and retrieval function; Burges et al (), Can, Croft, and Manmatha (), Cao, Qin, Liu, Tsai, and Li (), and Ye and Huang () used them to learn to rank, and Xu, Xu, Wang, and Wang () used them to re-rank.…”

Section: Related Work: Query Features
confidence: 99%