In recent years, AI research has become increasingly computationally demanding. In natural language processing (NLP), this tendency is reflected in the emergence of large language models (LLMs) like GPT-3. These powerful neural network-based models can be used for a range of NLP tasks, and their language generation capabilities have become so sophisticated that it can be very difficult to distinguish their outputs from human language. LLMs have raised concerns over their demonstrable biases, heavy environmental footprints, and future social ramifications. In December 2020, critical research on LLMs led Google to fire Timnit Gebru, co-lead of the company’s AI Ethics team, which sparked a major public controversy around LLMs and the growing corporate influence over AI research. This article explores the role LLMs play in the political economy of AI as infrastructural components for AI research and development. Retracing the technical developments that have led to the emergence of LLMs, we point out how they are intertwined with the business model of big tech companies and further shift power relations in their favour. This becomes visible through the Transformer, the underlying architecture of most LLMs today, which started the race for ever bigger models when it was introduced by Google in 2017. Using the example of GPT-3, we shed light on recent corporate efforts to commodify LLMs through paid API access and exclusive licensing, raising questions around monopolization and dependency in a field that is increasingly divided by access to large-scale computing power.
Artificial Intelligence (AI) is quickly being taken up across scientific disciplines, and medical imaging is no exception. To stimulate development and facilitate the scientific evaluation of new approaches, AI-based research in medical imaging is increasingly organised in a competitive manner through digital machine-learning development platforms such as Kaggle and Grand Challenge, two of the leading platforms in the field. For medical image analysis, such competitions constitute an important research infrastructure, steering global research and development in this dedicated AI subfield. Yet little is known about how these platform-based infrastructures, which operate across the medical AI research pipeline, shape the conditions for model production and evaluation. This paper addresses this issue through a critical empirical case study of 120 medical imaging competitions held on Kaggle and Grand Challenge between 2017 and 2022. We show that platforms as well as competition organisers shape power relations in medical AI research in several distinct ways, at the level of data and task design, model production, and evaluation. Taken together, and because competitions play a central role within the field, these findings highlight how these powerful actors steer medical imaging AI research directions as they see fit and influence the types of models that are implemented in clinical settings.