Wearing face masks is recommended as part of personal protective equipment and as a public health measure to prevent the spread of coronavirus disease 2019 (COVID-19) during the pandemic. Their use, however, is deeply connected to social and cultural practices and has acquired a variety of personal and social meanings. This article aims to identify the diversity of sociocultural, ethical, and political meanings attributed to face masks, how they might impact public health policies, and how they should be considered in health communication. In May 2020, we asked 29 experts from an interdisciplinary research network on health and society to provide testimonies on the use of face masks in 20 European and 2 Asian countries (China and South Korea). They reflected on regulations in the corresponding jurisdictions as well as on the personal and social aspects of face mask wearing. We analyzed these testimonies thematically, employing the method of qualitative descriptive analysis. The analysis identified four dimensions of the societal and personal practices of wearing (or not wearing) face masks: individual perceptions of infection risk, personal interpretations of responsibility and solidarity, cultural traditions and religious imprinting, and the need to express self-identity. Our study points to the importance of an in-depth understanding of the cultural and sociopolitical considerations around the personal and social meaning of mask wearing in different contexts as a necessary prerequisite for assessing the effectiveness of face masks as a public health measure. Improving the understanding of citizens' behaviors and attitudes, both personal and collective, appears essential for designing more effective health communication about the COVID-19 pandemic and other global crises in the future.

To wear a face mask or not to wear a face mask? Nowadays, this question has become analogous to the famous line from Shakespeare's Hamlet: “To be or not to be, that is the question.” This is a bit allegorical, but certainly not far from the current circumstances, in which a deadly virus is spreading amongst us... Vanja Kopilaš, Croatia.
The application of machine learning (ML) technologies in medicine generally, and in radiology more specifically, is hoped to improve clinical processes and the provision of healthcare. A central motivation in this regard is to advance patient treatment by reducing human error and increasing the accuracy of prognosis, diagnosis, and therapy decisions. There is, however, also increasing awareness of bias in ML technologies and its potentially harmful consequences. Biases refer to systematic distortions of datasets, algorithms, or human decision making. These systematic distortions are understood to have negative effects on the quality of an outcome in terms of accuracy, fairness, or transparency. But biases are not only a technical problem that requires a technical solution. Because they often also have a social dimension, the ‘distorted’ outcomes they yield often have implications for equity. This paper assesses different types of biases that can emerge within applications of ML in radiology and discusses in which cases such biases are problematic. Drawing upon theories of equity in healthcare, we argue that while some biases are harmful and should be acted upon, others might be unproblematic and even desirable, precisely because they can contribute to overcoming inequities.
Biomedical data, both in ‘traditional’, analogue forms and in the form of digital ‘big’ data, are contingent social products. They reflect the categories and practices that structure our societies. We illustrate this by discussing gender biases in data stemming from clinical trials and electronic health records (EHR), and consider how biomedical data are prone to bias in different phases of data work, from data capture and representation, through category building and analysis, to the use of outputs. We argue that developments such as ‘Personalised’ and ‘Precision Medicine’, made possible by ‘big data’ analyses, could be seen as a shift away from the male ‘standard patient’ in that they try to comprehensively and objectively represent many different aspects of patients' lives and bodies. At the same time, the very promises of comprehensiveness and objectivity are problematic: the data generated and collected, as well as the infrastructures and analytic tools used to do this, reflect the social realities – including the injustices and inequities – within which they were developed. The knowledge created on the basis of this ‘evidence’ can thus perpetuate existing biases. While we do not subscribe to a view of the world that considers truly objective, neutral, and, in this sense, ‘unbiased’ knowledge possible or even desirable, we suggest a number of ways in which gender bias in biomedical data should be made visible, reflected upon, and in certain instances acted upon.