Background Crowdsourcing involves obtaining ideas, needed services, or content by soliciting Web-based contributions from a crowd. The 4 types of crowdsourced tasks (problem solving, data processing, surveillance or monitoring, and surveying) can be applied in the 3 categories of health (promotion, research, and care). Objective This study aimed to map the different applications of crowdsourcing in health, to assess the fields of health in which crowdsourcing is used, and to identify the types of crowdsourced tasks involved. We also describe the logistics of crowdsourcing and the characteristics of crowd workers. Methods MEDLINE, EMBASE, and ClinicalTrials.gov were searched for available reports from inception to March 30, 2016, with no restriction on language or publication status. Results We identified 202 relevant studies that used crowdsourcing, including 9 randomized controlled trials, of which only one had posted results at ClinicalTrials.gov. Crowdsourcing was used in health promotion (91/202, 45.0%), research (73/202, 36.1%), and care (38/202, 18.8%). The 4 most frequent areas of application were public health (67/202, 33.2%), psychiatry (32/202, 15.8%), surgery (22/202, 10.9%), and oncology (14/202, 6.9%). Half of the reports (99/202, 49.0%) referred to data processing, 34.6% (70/202) to surveying, 10.4% (21/202) to surveillance or monitoring, and 5.9% (12/202) to problem solving. Labor market platforms (eg, Amazon Mechanical Turk) were used in most studies (190/202, 94.1%). The crowd workers' characteristics were poorly reported, and crowdsourcing logistics were missing from two-thirds of the reports. When reported, the median size of the crowd was 424 (first and third quartiles: 167-802), and crowd workers' median age was 34 years (32-36). Crowd workers were mainly recruited nationally, particularly in the United States.
For many studies (119/202, 58.9%), previous experience in crowdsourcing was required, whereas passing a qualification test or completing training was seldom needed (24/202, 11.9%). Monetary incentives were mentioned for half of the studies, mostly less than US $1 per task. The time needed to perform the task was mostly less than 10 min (119/202, 58.9%). Data quality validation was used in 54/202 studies (26.7%), mainly through attention-check questions or by having several crowd workers replicate the same task. Conclusions The use of crowdsourcing, which gives access to a large pool of participants while saving data-collection time, lowering costs, and speeding innovation, is increasing in health promotion, research, and care. However, descriptions of crowdsourcing logistics and crowd workers' characteristics are frequently missing from study reports and need to be precisely reported so that study findings can be properly interpreted and replicated.
Background Multiple treatments are frequently available for a given condition, and clinicians and patients need a comprehensive, up-to-date synthesis of evidence for all competing treatments. We aimed to quantify the waste of research related to the failure of systematic reviews to provide a complete and up-to-date evidence synthesis over time. Methods We performed a series of systematic overviews and networks of randomized trials assessing the gap between evidence covered by systematic reviews and available trials of second-line treatments for advanced non-small cell lung cancer. We searched the Cochrane Database of Systematic Reviews, Database of Abstracts of Reviews of Effects, MEDLINE, EMBASE, and other resources sequentially by year from 2009 to March 2, 2015. We sequentially compared the amount of evidence missing from systematic reviews to the randomized evidence available for inclusion each year. We constructed cumulative networks of randomized evidence over time and evaluated the proportion of trials, patients, treatments, and treatment comparisons not covered by systematic reviews on December 31 each year from 2009 to 2015. Results We identified 77 trials (28,636 patients) assessing 47 treatments with 54 comparisons and 29 systematic reviews (13 published after 2013). From 2009 to 2015, the evidence covered by existing systematic reviews was consistently incomplete: 45% to 70% of trials, 30% to 58% of patients, 40% to 66% of treatments, and 38% to 71% of comparisons were missing. In the cumulative networks of randomized evidence, 10% to 17% of treatment comparisons were partially covered by systematic reviews and 55% to 85% were partially or not covered. Conclusions We illustrate how systematic reviews of a given condition provide a fragmented, out-of-date panorama of the evidence for all treatments.
This waste of research might be reduced by the development of live cumulative network meta-analyses. Electronic supplementary material The online version of this article (doi:10.1186/s12916-016-0555-0) contains supplementary material, which is available to authorized users.
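The year-by-year coverage comparison described above can be sketched as a simple set computation. This is a minimal illustration with invented trial and review names (the data here are hypothetical, not from the study): for each year, compare the trials published by then with the trials covered by any systematic review available by then.

```python
# Hypothetical data for illustration only.
trials = {"T1": 2009, "T2": 2010, "T3": 2011, "T4": 2012}        # trial -> publication year
reviews = {"R1": (2010, {"T1"}),                                  # review -> (year, trials covered)
           "R2": (2013, {"T1", "T3"})}

def uncovered_by_year(trials, reviews, years):
    """Proportion of published trials not covered by any review, per year."""
    gaps = {}
    for year in years:
        available = {t for t, y in trials.items() if y <= year}
        covered = set()
        for review_year, covered_trials in reviews.values():
            if review_year <= year:
                covered |= covered_trials
        gaps[year] = len(available - covered) / len(available) if available else 0.0
    return gaps
```

With the hypothetical data above, the gap shrinks as reviews appear but never closes, which mirrors the persistent incompleteness the study reports.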
Many recently FDA-approved cancer drugs did not have high clinical benefit as measured by current scales. We found no relation between the price of these drugs and their benefit to society and patients.
Purpose We aimed to compare treatment effect sizes between overall survival (OS) and progression-free survival (PFS) in trials of US Food and Drug Administration-approved oncology immunotherapy drugs with results posted at ClinicalTrials.gov. Methods We searched ClinicalTrials.gov for phase II to IV cancer trials of Food and Drug Administration-approved immunotherapy drugs and selected those reporting results for both OS and PFS. For each trial, we extracted the hazard ratios (HRs) with 95% CIs for both outcomes and evaluated the differences by a ratio of HRs (rHR): the HR for PFS to that for OS. We performed a random effects meta-analysis across trials to obtain a summary rHR. We also evaluated surrogacy of PFS for OS by the coefficient of determination and the surrogacy threshold effect, the minimal value of HR for PFS to predict a non-null effect on OS. Results We identified 51 trials assessing 14 drugs across 15 conditions. Treatment effect sizes were 17% greater, on average, with PFS than with OS (rHR, 0.83; 95% CI, 0.79 to 0.88; I² = 34.4%; P = .01; τ² = 0.0129). Nearly half of the trials (n = 23, 45%) showed statistically significant benefits for PFS but not for OS. Differences were largest for trials of obinutuzumab (rHR, 0.21; 95% CI, 0.08 to 0.54), bevacizumab (rHR, 0.75; 95% CI, 0.67 to 0.84), and rituximab (rHR, 0.79; 95% CI, 0.64 to 0.98). The coefficient of determination was 38% and the surrogacy threshold effect was 0.50. Conclusion Treatment effect sizes in trials of immunotherapy drugs were greater for PFS than for OS, with important differences for some drugs, which is consistent with surrogacy metrics. Caution must be taken when interpreting PFS in the absence of OS data.
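The rHR analysis above combines per-trial ratios of hazard ratios in a random effects meta-analysis. The following is a hedged sketch of that arithmetic, not the authors' code: each trial's log rHR and its standard error are derived from the reported HRs and 95% CIs (treating the two HRs as independent, a simplifying assumption), then pooled with the standard DerSimonian-Laird estimator.

```python
import math

def rhr_from_cis(hr_pfs, ci_pfs, hr_os, ci_os):
    """Per-trial log rHR (HR_PFS / HR_OS) and its approximate SE, with the
    SE of each log HR recovered from the width of its 95% CI."""
    log_rhr = math.log(hr_pfs) - math.log(hr_os)
    se_pfs = (math.log(ci_pfs[1]) - math.log(ci_pfs[0])) / (2 * 1.96)
    se_os = (math.log(ci_os[1]) - math.log(ci_os[0])) / (2 * 1.96)
    return log_rhr, math.sqrt(se_pfs**2 + se_os**2)  # assumes independence

def random_effects_pool(effects):
    """DerSimonian-Laird random effects pooling of (log_effect, se) pairs.
    Returns the pooled ratio, its 95% CI, and tau^2 (between-trial variance)."""
    w = [1 / se**2 for _, se in effects]
    fixed = sum(wi * y for wi, (y, _) in zip(w, effects)) / sum(w)
    q = sum(wi * (y - fixed)**2 for wi, (y, _) in zip(w, effects))
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    w_star = [1 / (se**2 + tau2) for _, se in effects]
    pooled = sum(wi * y for wi, (y, _) in zip(w_star, effects)) / sum(w_star)
    se_pooled = math.sqrt(1 / sum(w_star))
    lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
    return math.exp(pooled), (math.exp(lo), math.exp(hi)), tau2
```

A pooled rHR below 1 indicates larger apparent treatment effects on PFS than on OS, which is the pattern the study reports (summary rHR 0.83).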
Background Inadequate planning, selective reporting, and incomplete reporting of outcomes in randomized controlled trials (RCTs) contribute to the problem of waste of research. We aimed to describe this waste and to examine to what extent it could be avoided. Methods This research-on-research study was based on RCTs included in Cochrane reviews with a summary of findings (SoF) table. We considered the outcomes reported in the SoF tables as surrogates for important outcomes for patients and other decision makers. We used a three-step approach. (1) First, in each review, we identified, for each important outcome, RCTs that were excluded from the corresponding meta-analysis. (2) Then, for these RCTs, we systematically searched for registrations and protocols to distinguish between inadequate planning (an important outcome was not reported in registries or protocols), selective reporting (an important outcome was reported in registries or protocols but not in publications), and incomplete reporting (an important outcome was incompletely reported in publications). (3) Finally, we assessed, with the consensus of five experts, the feasibility and cost of measuring the important outcomes that were not planned. We considered inadequately planned or selectively or incompletely reported important outcomes as avoidable waste if the outcome could have been easily measured at no additional cost based on expert evaluation. Results Of the 2711 RCTs included in the main comparison of 290 reviews, 2115 (78%) were excluded from at least one meta-analysis of important outcomes. On average, each trial contributed to 55% of the meta-analyses of important outcomes. Of the 310 RCTs published in 2010 or later, 156 were registered. Inadequate planning affected 79% of these RCTs, whereas incomplete and selective reporting affected 41% and 15%, respectively.
For 63% of RCTs, we found at least one missing important outcome for which the waste was avoidable, and for 30%, the waste was avoidable for all important outcomes. Conclusions Most of the RCTs in our sample did not contribute to all the important outcomes in meta-analyses, mostly because of inadequate planning or incomplete reporting. A large part of this waste of research seemed to be avoidable. Electronic supplementary material The online version of this article (10.1186/s12916-018-1083-x) contains supplementary material, which is available to authorized users.
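The three-way distinction in step (2) above amounts to a simple decision rule per (trial, outcome) pair. The sketch below encodes it from hypothetical boolean flags; the labels follow the definitions in the abstract, not the authors' actual classification code.

```python
def classify_waste(planned_in_registry: bool,
                   reported_in_publication: bool,
                   completely_reported: bool) -> str:
    """Classify why an important outcome is unusable for meta-analysis,
    following the abstract's definitions (hypothetical flags)."""
    if not planned_in_registry:
        return "inadequate planning"   # outcome never pre-specified
    if not reported_in_publication:
        return "selective reporting"   # planned, but absent from the paper
    if not completely_reported:
        return "incomplete reporting"  # reported, but not usable as data
    return "usable"
```

The rule is ordered: planning is checked before reporting, so an outcome missing from the registry is counted as inadequate planning even if it is also missing from the publication, matching the mutually exclusive categories described above.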
A systematic review of the literature, including our six cases, identified forty-nine cases of crizotinib-associated interstitial lung disease (ILD). Two types of adverse lung reactions may be observed, with different presentations, prognoses, and treatments. Their potential mechanisms should be clarified. Nine patients with the less severe form of ILD were safely retreated.
To make healthcare decisions, patients, clinicians, clinical practice guideline developers, researchers, policy-makers and health system managers need a comprehensive, critical, accessible, actionable and up-to-date synthesis of all available evidence in a given condition. Systematic reviews and meta-analyses are a cornerstone of healthcare decisions. However, despite the increasing number of published systematic reviews of therapeutic interventions, the current evidence synthesis ecosystem is not properly addressing stakeholders' needs. The current production process leads to a series of disparate systematic reviews, owing to erratic and inefficient planning and a process that is not always comprehensive and is prone to bias. Evidence synthesis depends on the quality of primary research, so primary research that is unavailable, biased or selectively reported raises important concerns. Moreover, the lack of interaction between the community of primary research producers and systematic reviewers impedes the optimal use of data. The context has evolved considerably, with ongoing research innovations, a new medical approach that moves beyond one-size-fits-all care, more available data, and new patient expectations. All these changes must be incorporated into the future evidence ecosystem. Dramatic changes are needed for this future ecosystem to become user-driven, user-oriented and more useful for decision-making.
To become user-driven and more useful for decision-making, the current evidence synthesis ecosystem requires significant changes (Paper 1, Future of Evidence Ecosystem series). Reviewers have access to new sources of data (clinical trial registries, protocols, and clinical study reports from regulatory agencies or pharmaceutical companies) that provide more information on randomized controlled trials. With all these newly available data, managing multiple and scattered trial reports is even more challenging. New types of data are also becoming available: individual patient data and routinely collected data. With the increasing number of diverse sources to be searched and the amount of data to be extracted, the process needs to be rethought. New approaches and tools, such as automation technologies and crowdsourcing, should help accelerate it. Implementing these new approaches and methods requires substantial rethinking and redesign of the current evidence synthesis ecosystem. The concept of a "living" evidence synthesis enterprise, with living systematic reviews and living network meta-analyses, has recently emerged. Such an evidence synthesis ecosystem implies conceptualizing evidence synthesis as a continuous process built around a clinical question of interest, no longer as a small team independently answering a specific clinical question at a single point in time.