2018
DOI: 10.1007/978-3-319-98932-7_5
Learning-to-Rank and Relevance Feedback for Literature Appraisal in Empirical Medicine

Cited by 14 publications (6 citation statements)
References 17 publications
“…There has been significant interest in the development of techniques to support the identification of studies for inclusion in systematic reviews by applying text mining techniques, for example, [9][10][11][12][13][14][15][16]. The vast majority of this work has focused on the identification of studies for new reviews and only a few papers explored the problem of identifying relevant studies for review updates [9,17,18].…”
Section: Related Work
mentioning confidence: 99%
“…For the implementation of the AUTO-TAR BMI, we followed Algorithm 1 as described in Cormack and Grossman (2018). The Hybrid variation method is similar to the one presented in Lagopoulos et al (2018) which uses TF-IDF instead of the Sent2Vec for the representation of the documents in the intra-review model. Finally, for the PubMed method, we query the PubMed database 9 , using the Entrez Programming Utilities, with the title and the objective of each SLR.…”
Section: Comparison With State-of-the-art Methods and a Baseline
mentioning confidence: 99%
“…This work is an extension of a previously published conference paper (Lagopoulos et al, 2018). Specifically, we extended our previous pipeline with a primary retrieval engine, fine-tuned the inter-review ranker features and adopted sentence embeddings for both inter-review and intra-review rankers.…”
Section: Introduction
mentioning confidence: 99%
“…This ranking of studies has come to be known as 'screening prioritisation', as popularised by the CLEF TAR tasks which aimed to automate these early stages of the systematic review creation pipeline [9,11,10]. As a result, in recent years there has been an uptake in Information Retrieval approaches to enable screening prioritisation [18,5,3,25,2,22,16,15,1,27,21]. The vast majority of screening prioritisation use a different representation than the original Boolean query for ranking.…”
Section: Related Work
mentioning confidence: 99%