In 2012 the United Kingdom's General Medical Council (GMC) commissioned research to develop guidance for medical schools on how best to support students with mental illness. One of the key findings from medical student focus groups in the study was students' strong belief that medical schools excluded students on mental health grounds. Students believed mental illness was a fitness to practice matter that led to eventual dismissal, although neither personal experience nor empirical evidence supported this belief. The objective of the present study was a deeper exploration of this belief and its underlying social mechanisms. This included any other beliefs that influenced medical students' reluctance to disclose a mental health problem, the factors that reinforced these beliefs, and the feared consequences of revealing a mental illness. The study involved a secondary analysis of qualitative data from seven focus groups involving 40 student participants across five UK medical schools in England, Scotland, and Wales. Student beliefs clustered around (1) the unacceptability of mental illness in medicine, (2) punitive medical school support systems, and (3) the view that becoming a doctor is the only successful career outcome. Reinforcing mechanisms included pressure from senior clinicians, a culture of "presenteeism," distrust of medical school staff, and expectations about conduct. Feared consequences centered on regulatory "fitness to practice" proceedings that would lead to expulsion, reputational damage, and failure to meet parents' expectations. The study's findings provide useful information for veterinary medical educators interested in creating a culture that encourages the disclosure of mental illness and contributes to the debate about mental illness within the veterinary profession.
Artificial intelligence (AI) and machine learning (ML) techniques occupy a prominent role in medical research in terms of the innovation and development of new technologies. However, while many perceive AI as a technology of promise and hope, one enabling earlier and more accurate diagnosis, the acceptance of AI and ML technologies in hospitals remains low. A major reason for this is the lack of transparency associated with these technologies, in particular epistemic transparency, which results in AI disturbing or troubling established knowledge practices in clinical contexts. In this article, we describe the development process of one AI application for a clinical setting. We show how epistemic transparency is negotiated and co-produced in close collaboration between AI developers, clinicians, and biomedical scientists, forming the context in which AI is accepted as an epistemic operator. Drawing on qualitative research with collaborative researchers developing an AI technology for the early diagnosis of a rare respiratory disease, pulmonary hypertension (PH), this paper examines how including clinicians and clinical scientists in the collaborative practices of AI developers de-troubles transparency. Our research shows how de-troubling transparency occurs in three dimensions of AI development relating to PH: querying data sets, building software, and training the model. The close collaboration results in an AI application that is at once social and technological: it integrates and inscribes into the technology the knowledge processes of the different participants in its development. We suggest that it is a misnomer to call these applications ‘artificial’ intelligence, and that they would be better developed and implemented if they were reframed as forms of sociotechnical intelligence.
The role of Artificial Intelligence (AI) in clinical decision-making raises issues of trust. One issue concerns the conditions for trusting the AI, which tend to be based on validation. However, little attention has been given to how validation is formed, how comparisons come to be accepted, and how AI algorithms are trusted in decision-making. Drawing on interviews with collaborative researchers developing three AI technologies for the early diagnosis of pulmonary hypertension (PH), we show how validation of the AI is jointly produced, so that trust in the algorithm is built up through the negotiation of criteria and terms of comparison during interactions. These processes build up interpretability and interrogation, and co-constitute trust in the technology. As they do so, it becomes difficult to sustain a strict distinction between artificial and human/social intelligence.
The staged model, derived from analysis of existing interventions, provides a framework for evaluation of current provision and comparison of different methods of delivery. Moreover, it provides a framework for future research.
A digital twin is a computer-based “virtual” representation of a complex system, updated using data from the “real” twin. Digital twins are established in product manufacturing, aviation, and infrastructure, and are attracting significant attention in medicine, where they hold great promise to improve the prevention of cardiovascular disease and enable personalised health care through a range of Internet of Things (IoT) devices that collect patient data in real time. However, such new technologies often face technical, scientific, social, and ethical challenges; if these challenges are not overcome, the technology is less likely to be adopted by stakeholders. The purpose of this work is to identify the facilitators and barriers to the implementation of digital twins in cardiovascular medicine. Using the Non-adoption, Abandonment, Scale-up, Spread, and Sustainability (NASSS) framework, we conducted a document analysis of policy reports, industry websites, online magazines, and academic publications on digital twins in cardiovascular medicine, identifying potential facilitators and barriers to adoption. Our results show key facilitating factors for implementation: preventing cardiovascular disease, in silico simulation and experimentation, and personalised care. Key barriers to implementation included: establishing real-time data exchange, perceived specialist skills required, high demand for patient data, and ethical risks related to privacy and surveillance. Furthermore, the lack of empirical research on the attributes of digital twins by different research groups, on the characteristics and behaviour of adopters, and on the nature and extent of social, regulatory, economic, and political contexts in the planning and development of these technologies is perceived as a major hindering factor for future implementation.
The disclosure of absences from professional sporting activities to the media is a routine and generally unproblematic part of a sporting career. However, when the reason for the absence relates to mental health concerns, players can encounter difficulties in trying to define, describe, and conceptualise their own issues while attempting to maintain privacy as they undergo assessment and treatment. Drawing on ethnomethodology and conversation analysis principles and methods, this paper explores initial public mental health disclosure narratives produced by players and sporting organizations across several professional sports via media interviews, press statements, and social media posts. The analysis focuses on (in)voluntary accounts produced by teams or players themselves during their careers and examines the different communication strategies they employ to categorise and explain their predicament. The analysis reveals how some players provide partial or proxy public disclosure announcements (due to a desire to mask issues or to delayed help-seeking and assessment), whereas others prefer fuller disclosure of the problems experienced, including diagnoses and ongoing treatment and therapy regimes. The paper outlines the consequences of these disclosure strategies and considers the implications they can have for a player’s wellbeing in these stressful circumstances.