2021
DOI: 10.1002/jrsm.1541
Machine learning in systematic reviews: Comparing automated text clustering with Lingo3G and human researcher categorization in a rapid review

Abstract: Systematic reviews are resource‐intensive. The machine learning tools being developed mostly focus on the study identification process, but tools to assist in analysis and categorization are also needed. One possibility is to use unsupervised automatic text clustering, in which each study is automatically assigned to one or more meaningful clusters. Our main aim was to assess the usefulness of an automated clustering method, Lingo3G, in categorizing studies in a simplified rapid review, then compare performance […]
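Lingo3G itself is a proprietary clustering engine from Carrot Search, and its algorithm is not reproduced here. As a rough sketch of what unsupervised text clustering of study records looks like in general, the snippet below uses TF-IDF and k-means from scikit-learn as an open-source stand-in; the sample abstracts, cluster count, and term-labeling heuristic are illustrative assumptions, and unlike Lingo3G this stand-in assigns each study to exactly one cluster rather than one or more.

```python
# Minimal sketch of unsupervised text clustering of study records.
# NOTE: this is NOT Lingo3G (a proprietary engine); TF-IDF + k-means
# is a generic open-source stand-in, and all inputs here are toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

abstracts = [
    "Randomized trial of school feeding and attendance outcomes.",
    "Cash transfer programme effects on child nutrition.",
    "Community health workers improving vaccination uptake.",
    "Cluster-randomized trial of deworming in primary schools.",
]

# Represent each abstract as a TF-IDF vector.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(abstracts)

# Partition the studies into k clusters (k chosen by the reviewer).
k = 2
model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

# Label each cluster with its highest-weight terms as a crude topic hint.
terms = vectorizer.get_feature_names_out()
for cluster_id in range(k):
    top = model.cluster_centers_[cluster_id].argsort()[::-1][:4]
    print(f"cluster {cluster_id}:", [terms[i] for i in top])

# Show which cluster each study landed in.
for label, text in zip(model.labels_, abstracts):
    print(label, text)
```

In a review workflow, a researcher would then inspect each cluster's term labels and member studies, merging or renaming clusters to arrive at the final categories.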

Cited by 13 publications (10 citation statements)
References 39 publications
“…Automatic classification and exclusion of non-randomized designs with a study design classifier saved Cochrane Crowd from manually screening more than 40% of identified references in 2018 [25]. We have also reported that categorizing studies using automated clustering used 33% of the time compared to manual categorization [26].…”
Section: Evidence Synthesis and Machine Learning (mentioning; confidence: 99%)
“…In the third most used strategy, review updates, all included papers and excluded records of a published review are used for training, and the aim is to predict the inclusion of a record from new search results in the updated review (N = 12/89, 13.5%) 34,46,81,83,89,93,112,124,125,129,152,153 . The priority ranking strategy (N = 10/89, 11.2%) 33,36,38,65,66,75,91,132,134,139 was used least often. This strategy predicts the priority of records after a single training round.…”
Section: Record Screening (mentioning; confidence: 99%)
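As a concrete illustration of the review-update and priority-ranking strategies described in the excerpt above, here is a minimal sketch, assuming TF-IDF features and logistic regression (common choices, but not necessarily those used by the cited studies): a classifier is trained once on the include/exclude decisions of the published review, and records from the update search are then ranked by predicted inclusion probability.

```python
# Hedged sketch of the "review update" screening strategy: train on a
# published review's include/exclude decisions, then rank records from
# the update search by predicted inclusion probability. The feature and
# model choices (TF-IDF, logistic regression) are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Titles/abstracts and screening decisions from the original review (toy data).
old_records = [
    "Randomized controlled trial of drug A in adults.",
    "News item announcing a forthcoming study protocol.",
    "Double-blind RCT of drug B versus placebo.",
    "Narrative commentary on treatment guidelines.",
]
old_labels = [1, 0, 1, 0]  # 1 = included, 0 = excluded

vectorizer = TfidfVectorizer(stop_words="english")
X_old = vectorizer.fit_transform(old_records)
clf = LogisticRegression(max_iter=1000).fit(X_old, old_labels)

# Records retrieved by the update search, ranked most-likely-included first
# (a single training round, as in the priority ranking strategy).
new_records = [
    "Follow-up randomized trial of drug A in adolescents.",
    "Editorial commentary on publication trends.",
]
X_new = vectorizer.transform(new_records)
scores = clf.predict_proba(X_new)[:, 1]
for score, record in sorted(zip(scores, new_records), reverse=True):
    print(f"{score:.2f}  {record}")
```

Reviewers would then screen from the top of the ranked list downward, so the most probable includes are seen first and screening effort on unlikely records can be deferred or sampled.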
“…The off-the-shelf or freeware screening automation software were Abstrackr 77,90,107,108,112,114,131 , EPPI Reviewer 50,114,139,147 , RobotAnalyst 80,87,90,112 , Distiller SR 90,109,132 , Rayyan 87,120 , Systematic Review Accelerator 18,119 , RCT Tagger 141,143 , SWIFT Review 91,132 , SyRF 84,89 , ASR (Automated Systematic Review) 95 , ASReview 146 , Aggregator 52 , ATCER 79 , Cochrane RCT Classifier 126 , Covidence 87 , Curious Snake 41 , DoCTER 86 , GAP Screener 35 , MetaPreg 130 , Research Screener 118 , revtools 100 , RobotAnalyst, and TeMMPo 75 . The detailed description of these tools is beyond the scope of this study.…”
Section: Record Screening (mentioning; confidence: 99%)
“…At the Norwegian Institute of Public Health (NIPH), a key focus has been on how to reduce the lead time for our commissions without compromising methodological quality. To achieve this goal, a lean-inspired (1) project to improve workflows was conducted, and a machine learning team was founded to evaluate tools to speed up the review process (2,3,4,5).…”
Section: Rationale For Implementing the Intensive Team Pilot (mentioning; confidence: 99%)