2021
DOI: 10.1016/j.jclinepi.2021.01.006
An evaluation of Cochrane Crowd found that crowdsourcing produced accurate results in identifying randomized trials

Abstract: Background and Objectives: Filtering the deluge of new research to facilitate evidence synthesis has proven to be unmanageable using current paradigms of search and retrieval. Crowdsourcing, a way of harnessing the collective effort of a “crowd” of people, has the potential to support evidence synthesis by addressing this information overload created by the exponential growth in primary research outputs. Cochrane Crowd, Cochrane's citizen science platform, offers a range of tasks aimed at identifying studies…

Cited by 69 publications (14 citation statements) | References 30 publications
“…One of the main strengths of this study is the quality of the three data sets. We were able to use highly representative records for each stage, with a high level of confidence in the quality of each, derived as they were from the Cochrane Centralised Search Service team and Cochrane Crowd [7]. In addition, the training data set was fairly large (n = 59,513), made up of both the class of interest (‘included’) and non-eligible records (‘excluded’).…”
Section: Discussion (mentioning)
confidence: 99%
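The passage above describes a supervised set-up: a large set of title-abstract records labelled ‘included’ or ‘excluded’ used as training data. As a minimal sketch of how such a binary screening classifier could be built (not the pipeline the cited study actually used; the placeholder records, field handling, and model choice here are assumptions), a TF-IDF plus logistic-regression baseline in scikit-learn:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder records standing in for the 59,513 labelled title-abstract
# records described above; 1 = 'included', 0 = 'excluded'.
texts = [
    "A randomised controlled trial of drug X versus placebo in adults.",
    "A narrative review of qualitative studies on clinic waiting times.",
]
labels = [1, 0]

# TF-IDF features over unigrams and bigrams feeding a logistic regression.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

print(model.predict(["Double-blind randomised trial of vaccine Y in children."]))
```

With a class-imbalanced corpus like the one described (far more ‘excluded’ than ‘included’ records), a real pipeline would also need calibration or class weighting, which this sketch omits.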
“…As the CCSR’s other sources were trial registers (not bibliographic title-abstract records), most of the training set records were from PubMed. These records had originally been identified using conventional Boolean searches of selected electronic bibliographic databases and trials registries, before being manually screened and labelled as either ‘included’ (eligible for the CCSR) or ‘excluded’ (ineligible) by Cochrane information specialists or the Cochrane Crowd [7]. The search strategies used can be seen on the About page of the CCSR [6].…”
Section: Methods (mentioning)
confidence: 99%
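The Boolean searches mentioned here run against bibliographic databases such as PubMed. A hedged sketch of what such a search might look like programmatically, using NCBI's public E-utilities esearch endpoint; the query string is purely illustrative and is not the CCSR strategy (those strategies are on the CCSR About page):

```python
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

# Illustrative Boolean query only; not the actual CCSR search strategy.
query = "(randomized controlled trial[pt] OR randomised[tiab]) AND covid-19[tiab]"

resp = requests.get(
    ESEARCH,
    params={"db": "pubmed", "term": query, "retmode": "json", "retmax": 20},
    timeout=30,
)
resp.raise_for_status()
pmids = resp.json()["esearchresult"]["idlist"]
print(f"Retrieved {len(pmids)} PMIDs, e.g. {pmids[:3]}")
```

The retrieved PMIDs would then feed the manual screening and labelling step the quote describes.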
“…The results are then de-duplicated and screened. A subset of results (those retrieved from Embase) is sent to Cochrane Crowd, Cochrane's citizen science platform; 5 the rest are screened by the core register team. 6,7 The screening process involves an assessment of record eligibility based on titles and abstracts.…”
Section: Introduction (mentioning)
confidence: 99%
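De-duplication across sources is typically keyed on a stable identifier where one exists, falling back to a normalised title. A minimal sketch, assuming records are dicts with hypothetical 'doi' and 'title' fields (the register's actual de-duplication logic is not described in the excerpt):

```python
import re

def normalise(title: str) -> str:
    # Lower-case and collapse punctuation/whitespace so trivially different
    # renderings of the same title compare equal.
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def deduplicate(records: list[dict]) -> list[dict]:
    # Keep the first record seen for each key; key on DOI when present,
    # otherwise on the normalised title.
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or normalise(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"doi": None, "title": "Crowdsourcing for evidence synthesis."},
    {"doi": None, "title": "Crowdsourcing for Evidence Synthesis"},  # same study, second source
]
print(len(deduplicate(records)))  # -> 1
```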
“…A complementary process of evaluating and classifying references in the CCSR segment is conducted by contributors to COVID Quest, a citizen science task hosted on Cochrane Crowd (crowd.cochrane.org). 5 Cochrane Crowd contributors are community volunteers who assist by assessing references for eligibility in the CCSR and providing study classifications. The production process of the CCSR is depicted in Figure 1.…”
Section: Introduction (mentioning)
confidence: 99%
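The excerpt does not spell out how Cochrane Crowd turns many individual volunteer assessments into a single classification. One common rule for this kind of citizen-science task is to resolve a record once the same label has been given some number of times in a row; the sketch below assumes that rule and a threshold of 3 purely for illustration:

```python
from typing import Optional

def resolve(classifications: list[str], needed: int = 3) -> Optional[str]:
    # Assumed rule: a record is resolved once `needed` consecutive
    # contributors agree on the same label; threshold is illustrative.
    streak_label, streak = None, 0
    for label in classifications:
        if label == streak_label:
            streak += 1
        else:
            streak_label, streak = label, 1
        if streak >= needed:
            return streak_label
    return None  # not enough agreement yet; escalate to an expert resolver

print(resolve(["RCT", "RCT", "Not RCT", "RCT", "RCT", "RCT"]))  # -> "RCT"
```

A disagreement (as in the third vote above) resets the streak, so contested records take longer to resolve or fall through to expert review.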