2018
DOI: 10.1101/255760
Preprint

Machine learning algorithms for systematic review: reducing workload in a preclinical review of animal studies and reducing human screening error

Abstract: Background: Here we outline a method of applying existing machine learning (ML) approaches to aid citation screening in an on-going broad and shallow systematic review of preclinical animal studies, with the aim of achieving a high-performing algorithm comparable to human screening. Methods: We applied ML approaches to a broad systematic review of animal models of depression at the citation screening stage. We tested two independently developed ML approaches which used different classification models and feature se…
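As a rough illustration of the screening approach the abstract describes, the sketch below trains a text classifier on human-screened title/abstract records and ranks unscreened citations by predicted relevance. The TF-IDF features, logistic-regression classifier, and toy records are assumptions for illustration only; the review tested its own independently developed classification models and feature sets.

# Minimal sketch of ML-assisted citation screening: train on human-screened
# title/abstract records, then rank the unscreened pool so likely includes are
# reviewed first. Feature set and classifier are illustrative assumptions,
# not the authors' actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: text = title + abstract, label = include (1) / exclude (0)
train_texts = [
    "Chronic mild stress model of depression in rats treated with fluoxetine",
    "A review of surgical techniques for human knee replacement",
]
train_labels = [1, 0]

screener = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1, stop_words="english"),
    LogisticRegression(max_iter=1000, class_weight="balanced"),
)
screener.fit(train_texts, train_labels)

# Score unscreened citations; higher probability = more likely to be relevant.
unscreened = ["Learned helplessness in mice following chronic corticosterone"]
scores = screener.predict_proba(unscreened)[:, 1]
for text, score in sorted(zip(unscreened, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {text}")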


Cited by 31 publications (37 citation statements)
References 38 publications
“…If an adequate sample size of 40 cases is not reached, we will continue searching for promising preclinical therapies in order to identify additional phase II or later translated therapies. The CAMARADES group has used a machine learning algorithm [35] to identify a cohort of over 100,000 reports of in vivo research available on PubMed Central. From this we will randomly select reports of in vivo preclinical research and assess these for clinical translation.…”
Section: Methods (mentioning)
confidence: 99%
“…Meta-analyses are often performed in collaborations, and a recent feasibility study using crowd-sourcing for clinical study quality assessment suggests that this could be a way forward, since experts and novices obtained the same results (Pianta et al 2018). Combined with recently developed and highly promising machine learning algorithms (Bannach-Brown et al 2019), collaborative efforts could increase the pace and reduce human error in systematic reviews and meta-analysis.…”
Section: Working Together To Improve Nonclinical Data Reliability (mentioning)
confidence: 99%
“…This is particularly problematic in systematic reviews because low recall increases the risk of bias [9]. The lack of appropriate stopping criteria has therefore been identified as a research gap [10,11], although some approaches have been suggested. These have most commonly fallen into the following categories:…”
Section: Introduction (mentioning)
confidence: 99%
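To make the stopping-criteria point in the excerpt above concrete, the sketch below shows one simple heuristic sometimes used in prioritised screening: estimate recall against a small random validation sample of known-relevant records and stop once a chosen target is met. The 95% target, helper names, and toy data are illustrative assumptions, not a rule taken from the cited work.

# Illustrative sketch of a heuristic stopping rule for prioritised screening:
# stop once estimated recall on a known-relevant validation sample reaches a
# chosen target. Threshold and data are assumptions for illustration.
def estimated_recall(found_ids, validation_relevant_ids):
    """Fraction of known-relevant validation records already found by screening."""
    if not validation_relevant_ids:
        return 0.0
    hits = sum(1 for rid in validation_relevant_ids if rid in found_ids)
    return hits / len(validation_relevant_ids)

def should_stop(found_ids, validation_relevant_ids, target_recall=0.95):
    """Return True when the estimated recall meets the chosen target."""
    return estimated_recall(found_ids, validation_relevant_ids) >= target_recall

# Example: 18 of 20 validation records found so far -> estimated recall 0.90, keep screening.
found = set(range(18))
validation = set(range(20))
print(estimated_recall(found, validation))  # 0.9
print(should_stop(found, validation))       # False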