The prospect of improved clinical outcomes and more efficient health systems has fueled a rapid rise in the development and evaluation of AI systems over the last decade. Because most AI systems within healthcare are complex interventions designed as clinical decision support systems, rather than autonomous agents, the interactions among the AI systems, their users, and the implementation environments are defining components of the AI interventions' overall potential effectiveness. Therefore, bringing AI systems from mathematical performance to clinical utility requires an adapted, stepwise implementation and evaluation pathway that addresses the complexity of this collaboration between two independent forms of intelligence, beyond measures of effectiveness alone [1]. Despite indications that some AI-based algorithms now match the accuracy of human experts within preclinical in silico studies [2], there
This American College of Physicians position paper aims to inform ethical decision making surrounding participation in short-term global health clinical care experiences. Although the positions are primarily intended for practicing physicians, they may apply to other health care professionals and should inform how institutions, organizations, and others structure short-term global health experiences. The primary goal of short-term global health clinical care experiences is to improve the health and well-being of the individuals and communities where they occur. In addition, potential benefits for participants in global health include increased awareness of global health issues, new medical knowledge, enhanced physical diagnosis skills when practicing in low-technology settings, improved language skills, enhanced cultural sensitivity, a greater capacity for clinical problem solving, and an improved sense of self-satisfaction or professional satisfaction. However, these activities involve several ethical challenges. Addressing these challenges is critical to protecting patient welfare in all geographic locales, promoting fair and equitable care globally, and maintaining trust in the profession. This paper describes 5 core positions that focus on ethics and the clinical care context and provides case scenarios to illustrate them.
Background: Patient, public, consumer, and community (P2C2) engagement in organization-, community-, and system-level healthcare decision-making is increasing globally, but its formal evaluation remains challenging. To define a taxonomy of possible P2C2 engagement metrics and compare existing evaluation tools against this taxonomy, we conducted a systematic review. Methods: A broad search strategy was developed for English language publications available from January 1962 through April 2015 in PubMed, Embase, Sociological Abstracts, PsycINFO, EconLit, and the gray literature. A publication was excluded if: (1) the setting was not healthcare delivery (ie, we excluded non-health sectors, such as urban planning; research settings; and public health settings not involving clinical care delivery); (2) the P2C2 engagement was episodic; or (3) the concept of evaluation or possible evaluation metrics was absent. To be included as an evaluation tool, publications had to contain an evaluative instrument that could be employed with minimal modification by a healthcare organization. Results: A total of 199 of 3953 publications met the inclusion criteria. These were qualitatively analyzed using inductive content analysis to create a comprehensive taxonomy of 116 possible metrics for evaluating P2C2 engagement. Forty-four outcome metrics were grouped into three domains (internal, external, and aggregate outcomes) that included six subdomains: impact on engagement participants, impact on services provided by the healthcare organization, impact on the organization itself, influence on the broader public, influence on population health, and engagement cost-effectiveness. The 72 process metrics formed four domains (direct process metrics, surrogate process metrics, aggregate process metrics, and preconditions for engagement) that comprised sixteen subdomains. We identified 23 potential tools for evaluating P2C2 engagement.
The identified tools were published between 1973 and 2015 and varied in their coverage of the taxonomy, methodology used (qualitative, quantitative, or mixed), and intended evaluators (organizational leaders, P2C2 participants, external evaluators, or some combination). Parts of the metric taxonomy were absent from all tools. Conclusions: By comprehensively mapping potential outcome and process metrics as well as existing P2C2 engagement tools, this review supports high-quality P2C2 engagement globally by informing the selection of existing evaluation tools and identifying gaps where new tools are needed. Systematic Review Registration: PROSPERO registration number CRD42015020317.
Increasing recognition of biases in artificial intelligence (AI) algorithms has motivated the quest to build fair models, free of biases. However, building fair models may be only half the challenge. A seemingly fair model could involve, directly or indirectly, what we call “latent biases.” Just as latent errors are generally described as errors “waiting to happen” in complex systems, latent biases are biases waiting to happen. Here we describe 3 major challenges related to bias in AI algorithms and propose several ways of managing them. There is an urgent need to address latent biases before the widespread implementation of AI algorithms in clinical practice.
Background: Twitter is home to many health professionals who send messages about a variety of health-related topics. Amid concerns about physicians posting inappropriate content online, more in-depth knowledge about these messages is needed to understand health professionals' behavior on Twitter. Objective: Our goal was to characterize the content of Twitter messages, specifically focusing on health professionals and their tweets relating to health. Methods: We performed an in-depth content analysis of 700 tweets. Qualitative content analysis was conducted on tweets by health users on Twitter. The primary objective was to describe the general type of content (ie, health-related versus non-health related) on Twitter authored by health professionals, and further to describe health-related tweets on the basis of the type of statement made. Specific attention was given to whether a tweet was personal (as opposed to professional) or made a claim that users would expect to be supported by some level of medical evidence (ie, a "testable" claim). A secondary objective was to compare content types among different users, including patients, physicians, nurses, health care organizations, and others. Results: Health-related users are posting a wide range of content on Twitter. Among health-related tweets, 53.2% (184/346) contained a testable claim. Of health-related tweets by providers, 17.6% (61/346) were personal in nature; 61% (59/96) made testable statements. While organizations and businesses use Twitter to promote their services and products, patient advocates are using this tool to share their personal experiences with health. Conclusions: Twitter users in health-related fields tweet about both testable claims and personal experiences. Future work should assess the relationship between testable tweets and the actual level of evidence supporting them, including how Twitter users—especially patients—interpret the content of tweets posted by health providers.
Physicians are increasingly counted among Facebook's 1 billion users and Twitter's 500 million members. Beyond these social media platforms, other innovative social media tools are being used in medical practice, including for online consultation,1 in the conduct of clinical research,2 and in medical school curricula.3 Social media content is brief, characterized as "many-to-many" communication, and able to spread rapidly across the Internet beyond a person's control. These and other features of social media create new dimensions to traditional ethical issues, particularly around maintaining appropriate boundaries between physicians and patients.