Background: Healthcare is a rapidly expanding area of application for Artificial Intelligence (AI). Although there is considerable excitement about its potential, there are also substantial concerns about the negative impacts of these technologies. Since screening and diagnostic AI tools now have the potential to fundamentally change the healthcare landscape, it is important to understand how these tools are being represented to the public via the media.
Methods: Using a framing theory approach, we analysed how screening and diagnostic AI was represented in the media and the frequency with which media articles addressed the benefits and the ethical, legal, and social implications (ELSIs) of screening and diagnostic AI.
Results: All the media articles coded (n = 136) fit into at least one of three frames: social progress (n = 131), economic development (n = 59), and alternative perspectives (n = 9). Most of the articles were positively framed, with 135 of the articles discussing benefits of screening and diagnostic AI, and only 9 articles discussing the ethical, legal, and social implications.
Conclusions: We found that media reporting of screening and diagnostic AI predominantly framed the technology as a source of social progress and economic development. Screening and diagnostic AI may be represented more positively in the mass media than AI in general. This represents an opportunity for health journalists to provide publics with deeper analysis of the ethical, legal, and social implications of screening and diagnostic AI, and to do so now before these technologies become firmly embedded in everyday healthcare delivery.
• This is the first study in Australia that explores the impact planned and achieved in research projects from a large-scale prospective cohort study, the 45 and Up Study.
• Most projects were intended to achieve policy and practice impact. However, a gap was identified between study planning and achieving impact, because the impact was potentially achieved after project completion and outside of the study reporting period.
• Future research would benefit from a more targeted approach to impact planning.
Objectives: Applications of artificial intelligence (AI) have the potential to improve aspects of healthcare. However, studies have shown that healthcare AI algorithms also have the potential to perpetuate existing inequities in healthcare, performing less effectively for marginalised populations. Studies on public attitudes towards AI outside of the healthcare field have tended to show higher levels of support for AI among socioeconomically advantaged groups that are less likely to be sufferers of algorithmic harms. We aimed to examine the sociodemographic predictors of support for scenarios related to healthcare AI.
Methods: The Australian Values and Attitudes toward AI survey was conducted in March 2020 to assess Australians' attitudes towards AI in healthcare. An innovative weighting methodology involved weighting a non-probability web-based panel against results from a shorter omnibus survey distributed to a representative sample of Australians. We used multinomial logistic regression to examine the relationship between support for AI and a suite of sociodemographic variables in various healthcare scenarios.
Results: While support for AI in general was predicted by measures of socioeconomic advantage such as education, household income and Socio-Economic Indexes for Areas index, the same variables were not predictors of support for the healthcare AI scenarios presented. Variables associated with support for healthcare AI included being male, having computer science or programming experience and being aged between 18 and 34 years. Other Australian studies suggest that these groups may have a higher level of perceived familiarity with AI.
Conclusion: Our findings suggest that while support for AI in general is predicted by indicators of social advantage, these same indicators do not predict support for healthcare AI.
Background: In recent years, innovations in artificial intelligence (AI) have led to the development of new healthcare AI (HCAI) technologies. Whilst some of these technologies show promise for improving the patient experience, ethicists have warned that AI can introduce and exacerbate harms and wrongs in healthcare. It is important that HCAI reflects the values that are important to people. However, involving patients and publics in substantive conversations about AI ethics remains challenging due to relatively limited awareness of HCAI technologies. This scoping review aims to map how the existing literature on publics' attitudes toward HCAI addresses key issues in AI ethics and governance.
Methods: We developed a search query to conduct a comprehensive search of PubMed, Scopus, Web of Science, CINAHL, and Academic Search Complete from January 2010 onwards. We will include primary research studies which document publics' or patients' attitudes toward HCAI. A coding framework has been designed and will be used to capture qualitative and quantitative data from the articles. Two reviewers will code a proportion of the included articles, and any discrepancies will be discussed amongst the team, with changes made to the coding framework accordingly. Final results will be reported quantitatively and qualitatively, examining how each AI ethics issue has been addressed by the included studies.
Discussion: If HCAI is to be implemented ethically and legitimately, publics and patients must be included in important conversations about HCAI ethics. This review will explore how ethical issues are addressed in the literature examining publics' and patients' attitudes toward HCAI. We aim to describe how publics and patients have been successfully consulted on HCAI ethics, and to identify any areas of HCAI ethics where more work is needed to include publics and patients in research and discussions.
Introduction: Precision public health is an emerging and evolving field. Academic communities are divided regarding terminology and definitions, and what the scope, parameters and goals of precision public health should include. This protocol summarises the procedure for a scoping review which aims to identify and describe definitions, terminology, uses of the term and concepts in current literature.
Methods and analysis: A scoping review will be undertaken to gather existing literature on precision public health. We will search CINAHL, PubMed, Scopus, Web of Science and Google Scholar, and include all documents published in English that mention precision public health. A critical discourse analysis of the resulting papers will generate an account of precision public health terminology, definitions, uses of the term, and the use and meaning of language. The analysis will occur in stages: first, descriptive information will be extracted and descriptive statistics will be calculated in order to characterise the literature. Second, occurrences of the phrase 'precision public health' and alternative terms in documents will be enumerated and mapped, and definitions collected. The third stage of discourse analysis will involve analysis and interpretation of the meaning of precision public health, including the composition, organisation and function of discourses. Finally, discourse analysis of alternative phrases to precision public health will be undertaken. This will include analysis and interpretation of what alternative phrases to precision public health are used to mean, how the phrases relate to each other and how they are compared or contrasted to precision public health. Results will be grouped under headings according to how they answer the research questions.
Ethics and dissemination: No ethical approval will be required for the scoping review. Results of the scoping review will be used as part of a doctoral thesis, and may be published in journals, conference proceedings or elsewhere.