Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval 2017
DOI: 10.1145/3077136.3080707

A Test Collection for Evaluating Retrieval of Studies for Inclusion in Systematic Reviews

Abstract: This version is available at https://strathprints.strath.ac.uk/62696/ Strathprints is designed to allow users to access the research output of the University of Strathclyde. Unless otherwise explicitly stated on the manuscript, Copyright © and Moral Rights for the papers on this site are retained by the individual authors and/or other copyright owners. Please check the manuscript for details of any other licences that may have been applied. You may not engage in further distribution of the material for any pro…

Cited by 29 publications (28 citation statements) | References 11 publications

“…Larger corpora for EBM tasks have been derived using (noisy) automated annotation approaches. This approach has been used to build, e.g., datasets to facilitate work on Information Retrieval (IR) models for biomedical texts (Scells et al., 2017; Chung, 2009; Boudin et al., 2010). Similar approaches have been used to 'distantly supervise' annotation of full-text articles describing clinical trials.…”
Section: NLP for EBM (mentioning)
confidence: 99%
“…Recently the CLEF eHealth track on Technology Assisted Reviews in Empirical Medicine [9,20] developed datasets containing 72 topics created from diagnostic test accuracy systematic reviews produced by the Cochrane Collaboration. Another test collection has also been derived from 94 Cochrane reviews [18]. However, none of these datasets focus on the review updates.…”
Section: Related Work (mentioning)
confidence: 99%
“…There is a recently published dataset for evaluating the retrieval process in systematic reviews [7]. However, this dataset does not contain information (e.g., pmids) of all candidate documents of its SRs.…”
Section: Dataset (mentioning)
confidence: 99%
“…In addition, we evaluate a ranking model based on word embeddings, which we call average embedding similarity (AES). Word embedding is an unsupervised approach to learn continuous distributed vector representations of words.…”
Section: S-D R (mentioning)
confidence: 99%
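The averaged-embedding ranker described in that citation statement can be sketched in a few lines. This is a minimal illustration only, assuming pre-trained word vectors are available as a mapping from tokens to numpy arrays; the function names and toy vectors below are hypothetical and not taken from the cited paper's implementation.

```python
import numpy as np

# Toy pre-trained embeddings; real use would load e.g. word2vec or GloVe vectors.
embeddings = {
    "systematic": np.array([0.2, 0.1, 0.4]),
    "review":     np.array([0.3, 0.0, 0.5]),
    "retrieval":  np.array([0.1, 0.4, 0.2]),
}

def average_embedding(tokens, vectors):
    """Mean of the vectors for tokens that are in the vocabulary; zero vector if none are."""
    hits = [vectors[t] for t in tokens if t in vectors]
    if not hits:
        dim = len(next(iter(vectors.values())))
        return np.zeros(dim)
    return np.mean(hits, axis=0)

def aes_score(query_tokens, doc_tokens, vectors):
    """Cosine similarity between the averaged query and document embeddings."""
    q = average_embedding(query_tokens, vectors)
    d = average_embedding(doc_tokens, vectors)
    denom = np.linalg.norm(q) * np.linalg.norm(d)
    return float(q @ d / denom) if denom else 0.0

# Score a candidate document against a query; higher scores rank higher.
print(aes_score(["systematic", "review"], ["review", "retrieval"], embeddings))
```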