IMPORTANCE Radiation therapy (RT) is a critical cancer treatment, but the existing radiation oncologist workforce does not meet growing global demand. One key physician task in RT planning involves tumor segmentation for targeting, which requires substantial training and is subject to significant interobserver variation.

OBJECTIVE To determine whether crowd innovation could be used to rapidly produce artificial intelligence (AI) solutions that replicate the accuracy of an expert radiation oncologist in segmenting lung tumors for RT targeting.

DESIGN, SETTING, AND PARTICIPANTS We conducted a 10-week, prize-based, online, 3-phase challenge (prizes totaled $55 000). A well-curated data set, including computed tomographic (CT) scans and lung tumor segmentations generated by an expert for clinical care, was used for the contest (CT scans from 461 patients; median, 157 images per scan; 77 942 images in total; 8144 images with tumor present). Contestants were provided a training set of 229 CT scans with accompanying expert contours to develop their algorithms, and they were given feedback on their performance throughout the contest, including from the expert clinician.

MAIN OUTCOMES AND MEASURES The AI algorithms generated by contestants were automatically scored on an independent data set that was withheld from contestants, and performance was ranked using quantitative metrics that evaluated the overlap of each algorithm's automated segmentations with the expert's segmentations. Performance was further benchmarked against human expert interobserver and intraobserver variation.

RESULTS A total of 564 contestants from 62 countries registered for this challenge, and 34 (6%) submitted algorithms. The automated segmentations produced by the top 5 AI algorithms, when combined using an ensemble model, had an accuracy (Dice coefficient = 0.79) that was within the benchmark of mean interobserver variation measured among 6 human experts. In phase 1, the top 7 algorithms had average custom segmentation scores (S scores) on the holdout data set ranging from 0.15 to 0.38, with suboptimal performance on relative measures of error. The average S scores for phase 2 increased to 0.53 to 0.57, with similar improvement in other performance metrics. In phase 3, the performance of the top algorithm increased by an additional 9%. Combining the top 5 algorithms from phases 2 and 3 using an ensemble model yielded an additional 9% to 12% improvement in performance, with a final S score reaching 0.68.

CONCLUSIONS AND RELEVANCE A combined crowd innovation and AI approach rapidly produced automated algorithms that replicated the skills of a highly trained physician for a critical task in radiation therapy. These AI algorithms could improve cancer care globally by transferring the skills of expert clinicians to under-resourced health care settings.
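To make the abstract's two central quantitative ideas concrete, the following is a minimal sketch of the Dice overlap metric and a majority-vote ensemble over binary tumor masks. The array shapes, the randomly generated test masks, and the voting rule are illustrative assumptions; the contest's actual scoring also used a custom S score that is not reproduced here.

```python
# Sketch of (1) the Dice coefficient for overlap between an automated and an
# expert segmentation, and (2) a majority-vote ensemble combining several
# algorithms' masks. Illustrative only; not the contest's scoring code.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|A∩B| / (|A| + |B|) for binary masks; 1.0 means perfect overlap."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

def majority_vote_ensemble(masks: list[np.ndarray]) -> np.ndarray:
    """Label a voxel as tumor when more than half of the masks agree."""
    stacked = np.stack([m.astype(bool) for m in masks])
    return stacked.sum(axis=0) > (len(masks) / 2)

# Example: three hypothetical masks, each a noisy copy of the expert contour.
rng = np.random.default_rng(0)
expert = rng.random((64, 64)) > 0.7
algos = [np.logical_xor(expert, rng.random((64, 64)) > 0.9) for _ in range(3)]
combined = majority_vote_ensemble(algos)
print(f"ensemble Dice vs. expert: {dice_coefficient(combined, expert):.3f}")
```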
Open data science and algorithm development competitions offer a unique avenue for the rapid discovery of better computational strategies. We highlight three examples in computational biology and bioinformatics research in which the use of competitions has yielded significant performance gains over established algorithms. These include algorithms for antibody clustering, imputing gene expression data, and querying the Connectivity Map (CMap). Performance gains are evaluated quantitatively using realistic, albeit sanitized, data sets. The solutions produced through these competitions are then examined with respect to their utility and the prospects for implementation in the field. We present the decision process and competition design considerations that led to these successful outcomes as a model for researchers who want to use competitions and non-domain crowds as collaborators to further their research.
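The abstract's quantitative comparison against established algorithms comes down to a simple relative-gain calculation. A hedged sketch follows; the scores and names are hypothetical, since each cited competition used its own domain-specific metric.

```python
# Hedged sketch of reporting a contest entry's gain over an incumbent method.
# The 0.88 and 0.80 scores are hypothetical placeholders, not results from
# the antibody-clustering, imputation, or CMap competitions.
def relative_gain(contest_score: float, baseline_score: float) -> float:
    """Fractional improvement of the winning entry over the established algorithm."""
    return (contest_score - baseline_score) / baseline_score

# e.g., a winning imputation entry scoring 0.88 against a 0.80 baseline:
print(f"{relative_gain(0.88, 0.80):.1%} improvement")  # -> 10.0%
```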
Data analysis requires new approaches in many domains for evaluating tools and techniques, particularly as data sets grow larger and more complex. Evaluation-as-a-service (EaaS) was coined as a term for evaluation approaches based on APIs, virtual machines, or source code submission, in contrast to the common paradigm of distributing a test collection and tasks and evaluating submitted result files. Such new approaches become necessary when data sets are extremely large, contain confidential information, or change quickly over time. The workshop on cloud-based evaluation (CBE) took place in Boston, MA, USA, on November 5, 2015, and explored several approaches for data analysis evaluation and frameworks in this field. The objective was to include several stakeholders, from academic partners and companies to funding agencies, to cover various interests and viewpoints in the discussion of evaluation infrastructures. The workshop focused on the biomedical domain, but the results are easily applicable to many domains of information analysis and retrieval.
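The core EaaS inversion, code travels to the data rather than data to the contestants, can be illustrated with a minimal scoring harness. This is a sketch under stated assumptions: the function names (load_hidden_test_set, contestant_predict) and the accuracy metric are invented for illustration; real EaaS platforms typically sandbox submissions in VMs or containers.

```python
# Sketch of the EaaS idea: a contestant submits code, the organizer runs it
# against data that never leaves the evaluation server, and only an
# aggregate score is returned. All names here are illustrative assumptions.
from typing import Callable

def load_hidden_test_set() -> tuple[list[int], list[int]]:
    # Stand-in for the organizer's confidential data, which stays server-side.
    inputs = [0, 1, 2, 3, 4]
    labels = [x % 2 for x in inputs]
    return inputs, labels

def evaluate_submission(predict: Callable[[int], int]) -> float:
    inputs, labels = load_hidden_test_set()
    predictions = [predict(x) for x in inputs]
    # Only this scalar is disclosed to the contestant, not the data itself.
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)

# A contestant's submitted model, seen by the server only as code:
def contestant_predict(x: int) -> int:
    return x % 2

print(f"accuracy: {evaluate_submission(contestant_predict):.2f}")
```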
OVERVIEW: Innovation managers rarely use crowdsourcing as an innovation instrument despite extensive academic and theoretical research. The lack of tools available to compare and measure crowdsourcing, specifically contests, against traditional methods of procuring goods and services is one barrier to adoption. Using ethnographic research to understand how managers solved their problems, we find that the crowdsourcing model produces higher costs in the framing phase but yields savings in the solving phase, whereas traditional procurement is downstream cost-intensive. Two case study examples with the National Aeronautics and Space Administration (NASA) and the United States Department of Energy demonstrate potential total cost savings of 27 percent and 33 percent, respectively, using innovation contests. We provide a comprehensive evaluation framework for crowdsourcing contests developed from a high-tech industry perspective, which is applicable to other industries.

Senior executives, leaders, and managers constantly look for new ways to increase efficiencies, maximize value, and solve perceived unsolvable problems for their firms, but many face resistance due to rigid organizational structures, lack of resources, or inability to effectively measure and compare the value of new methods.
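The article's cost framework reduces to comparing phase-wise totals across the two sourcing models. The sketch below is a hedged, illustrative reconstruction: the dollar figures are hypothetical, and only the roughly 27 to 33 percent savings range comes from the article's case studies.

```python
# Illustrative reconstruction of the framing-vs-solving cost comparison:
# contests front-load cost into the framing phase, while traditional
# procurement is downstream (solving) cost-intensive. Figures are hypothetical.
def total_cost(framing: float, solving: float) -> float:
    return framing + solving

traditional = total_cost(framing=20_000, solving=130_000)  # cheap to frame, costly to solve
contest = total_cost(framing=60_000, solving=45_000)       # costly to frame, cheap to solve

savings = 1 - contest / traditional
print(f"contest saves {savings:.0%} vs. traditional procurement")  # -> 30%
```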