In the academic year of 2017-18, one of the editors of this volume convened a course on gender and sexuality at a UK university. The course elicited overwhelmingly positive feedback from students. However, following examinations an invigilator expressed concern, communicated via management, with the language some students used in their answers. Specifically, the invigilator took issue with students employing the acronym 'TERF' (Trans-Exclusionary Radical Feminist) to criticise a range of ideological positions, because they considered the acronym a misogynist slur. The course convenor's line manager subsequently asked whether the term was used within teaching materials. The convenor had not, in fact, used the TERF acronym at all in any of their teaching, nor explicitly engaged with questions of 'pro' or 'anti' trans positions within feminism. A lecture on trans feminism had focused specifically on understanding transphobia as a manifestation of misogyny, drawing on the work of writers such as Julia Serano (2007), and media analysis of films, including Silence of the Lambs and Ace Ventura. It was the students themselves who applied what they had learned from contemporary popular discourse to their exam scripts. They had chosen to use the acronym to reference a series of increasingly fraught disputes over how feminism should conceptualise and respond to trans identities and experiences, and did so because 'TERF' was part of their everyday vernacular in discussing the politics of gender, sex and inclusion/exclusion in feminism. The invigilator's objection to the acronym, meanwhile, is indicative of wider
In recent years, discourses around “personalized,” “stratified,” and “precision” medicine have proliferated. These concepts broadly refer to the translational potential carried by new data-intensive biomedical research modes. Each describes expectations about the future of medicine and healthcare that data-intensive innovation promises to bring forth. The definitions and uses of the concepts are, however, plural, contested and characterized by diverse ideas about the kinds of futures that are desired and desirable. In this paper, we unpack key disputes around the “personalized,” “stratified,” and “precision” terms, and map the epistemic, political and economic contexts that structure them, as well as the different roles attributed to patients and citizens in competing future imaginaries. We show the ethical and value baggage embedded within the promises that are manufactured through terminological choices, and argue that the contextual and future-oriented nature of these choices helps us understand how data-intensive biomedical innovations are made socially meaningful.
The ‘digital era’ of informatics and knowledge integration has changed the roles and experiences of patients, research participants and health consumers. No longer figured (merely) as passive recipients of healthcare services or as beneficiaries of top-down biomedical information, individuals are increasingly seen as active contributors in healthcare and research. They are positioned in multiple roles that are experienced simultaneously by those who access and co-produce digital content that can easily be transformed into data. This is contextualised by ‘big data’ technologies that have altered biomedicine, enabling the collation and analysis of myriad data, from digitised records to personal mobile data. Social media facilitate new formations of communities and knowledge enacted online, while novel kinds of commercial value emerge from digital networks that enable health data commodification. In this paper, we draw on exemplary digital era shifts towards participatory medicine to cast light on the rapprochements between patienthood, participation and consumption, and we explore how these rapprochements are mediated by, and materialise through, the use of participatory digital technologies and big data. We argue that there is a need for new conceptual tools that account for the multiple roles and experiences of patient–participant–consumers that co-emerge through digital technologies. We must also ethically re-assess the rights and responsibilities of individuals in the digital era, and the implications of digital era changes for the future of biomedicine and healthcare.
Population-level biomedical research offers new opportunities to improve population health, but also raises new challenges to traditional systems of research governance and ethical oversight. Partly in response to these challenges, various models of public involvement in research are being introduced. Yet, the ways in which public involvement should meet governance challenges are not well understood. We conducted a qualitative study with 36 experts and stakeholders using the World Café method to identify key governance challenges and explore how public involvement can meet them. This brief report discusses four cross-cutting themes from the study: the need to move beyond individual consent; issues in benefit and data sharing; the challenge of delineating and understanding publics; and the goal of clarifying justifications for public involvement. The report aims to provide a starting point for making sense of the relationship between public involvement and the governance of population-level biomedical research, showing connections, potential solutions and issues arising at their intersection. We suggest that, in population-level biomedical research, there is a pressing need to shift away from conventional governance frameworks focused on the individual and towards a focus on collectives, to foreground ethical issues around social justice, and to develop ways to address cultural diversity, value pluralism and competing stakeholder interests. There are many unresolved questions around how this shift could be realised, but these questions should form the basis for developing justificatory accounts and frameworks for suitable collective models of public involvement in population-level biomedical research governance.
This paper scrutinises how AI and robotic technologies are transforming the relationships between people and machines in new affective, embodied and relational ways. Through investigating what it means to exist as human ‘in relation’ to AI across health and care contexts, we aim to make three main contributions. (1) We start by highlighting the complexities of philosophical issues surrounding the concepts of “artificial intelligence” and “ethical machines.” (2) We outline some potential challenges and opportunities that the creation of such technologies may bring in health and care settings. We focus on AI applications that interface with health and care via examples where AI is explicitly designed as an ‘augmenting’ technology that can overcome human bodily and cognitive as well as socio-economic constraints. We focus on three dimensions of ‘intelligence’ (physical, interpretive, and emotional) using the examples of robotic surgery, digital pathology, and robot caregivers, respectively. Through investigating these areas, we interrogate the social context and implications of human-technology interaction in the interrelational sphere of care practice. (3) We argue, in conclusion, that there is a need for an interdisciplinary mode of theorising ‘intelligence’ as relational and affective in ways that can accommodate the fragmentation of both conceptual and material boundaries between human and AI, and human and machine. Our aim in investigating these sociological, philosophical and ethical questions is primarily to explore the relationship between affect, relationality and ‘intelligence,’ and the intersection and integration of ‘human’ and ‘artificial’ intelligence, through an examination of how AI is used across different dimensions of intelligence.
This allows us to scrutinise how ‘intelligence’ is ultimately conveyed, understood and (technologically or algorithmically) configured in practice through emerging relationships that go beyond the conceptual divisions between humans and machines, and humans vis-à-vis artificial intelligence-based technologies.