Crowdsourcing is emerging as an alternative outsourcing strategy that is gaining increasing attention in the software engineering community. However, crowdsourced software development involves complex tasks that differ significantly from the micro-tasks found on crowdsourcing platforms such as Amazon Mechanical Turk, which are much shorter in duration, typically very simple, and free of task interdependencies. To achieve the potential benefits of crowdsourcing in the software development context, companies need to understand how this strategy works and what factors might affect crowd participation. We present a multi-method qualitative and quantitative theory-building research study. First, we derive a set of key concerns from the crowdsourcing literature as an initial analytical framework for an exploratory case study in a Fortune 500 company. We complement the case study findings with an analysis of 13,602 crowdsourcing competitions held over a ten-year period on the popular Topcoder crowdsourcing platform. Drawing on our empirical findings and the crowdsourcing literature, we propose a theoretical model of crowd interest and actual participation in crowdsourcing competitions, which we evaluate using Structural Equation Modeling. Among the findings: neither the prize level nor the duration of a competition significantly increases crowd interest.
Purpose
To date, there has been no extensive analysis of the outcomes of biomarker use in oncology.
Methods
Data were pooled across four oncology indications, drawing upon trial outcomes from http://www.clinicaltrials.gov: breast cancer, non-small cell lung cancer (NSCLC), melanoma, and colorectal cancer, from 1998 to 2017. We compared the likelihood that drugs would progress through the stages of clinical trial testing to approval based on biomarker status. This was done with multi-state Markov models, tools that describe a stochastic process in which subjects move among a finite number of states.
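The idea behind a multi-state Markov model of drug development can be illustrated with a minimal sketch: each drug occupies one of a finite set of states (trial phases plus two absorbing outcomes, approval and termination) and moves between them with fixed transition probabilities. The states and probabilities below are purely hypothetical, chosen for illustration; the study estimated its models from ClinicalTrials.gov data, and the true models are continuous-time with biomarker status as a covariate.

```python
import random

# Hypothetical transition probabilities (illustrative only; not the
# values estimated in the study). Keys are current states, values map
# each possible next state to its transition probability.
TRANSITIONS = {
    "phase1": {"phase2": 0.6, "terminated": 0.4},
    "phase2": {"phase3": 0.4, "terminated": 0.6},
    "phase3": {"approved": 0.5, "terminated": 0.5},
}

ABSORBING = {"approved", "terminated"}


def simulate_drug(rng: random.Random, state: str = "phase1") -> str:
    """Walk one drug through the chain until it hits an absorbing state."""
    while state not in ABSORBING:
        r, acc, nxt = rng.random(), 0.0, None
        for candidate, p in TRANSITIONS[state].items():
            acc += p
            if r < acc:
                nxt = candidate
                break
        # Guard against floating-point shortfall in the cumulative sum.
        state = nxt if nxt is not None else "terminated"
    return state


def approval_rate(n: int, seed: int = 0) -> float:
    """Monte Carlo estimate of the overall probability of approval."""
    rng = random.Random(seed)
    approved = sum(simulate_drug(rng) == "approved" for _ in range(n))
    return approved / n
```

With these illustrative numbers the analytic approval probability is 0.6 × 0.4 × 0.5 = 0.12, and `approval_rate` converges to roughly that value for large `n`. Adding a covariate such as biomarker status, as in the study, amounts to letting the transition probabilities (or hazards, in the continuous-time setting) differ between the biomarker and no-biomarker groups.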
Results
Over 10,000 trials were screened, yielding 745 drugs. Including biomarker status as a covariate significantly improved the fit of the Markov model describing drug trajectories through the clinical trial testing stages. Hazard ratios based on the Markov models showed that drugs with biomarkers were nearly five times more likely to be approved across all indications combined; hazard ratios of 12, 8, and 7 were observed for breast cancer, melanoma, and NSCLC, respectively. Markov models with exploratory biomarkers outperformed Markov models with no biomarkers.
Conclusion
This is the first systematic statistical evidence that biomarkers clearly increase clinical trial success rates in three different oncology indications. Moreover, exploratory biomarkers, long before they are properly validated, appear to improve success rates in oncology. This supports early and aggressive adoption of biomarkers in oncology clinical trials.