The radiation fields in space pose tangible risks to astronaut health, and significant work in rodent models has shown that a variety of exposure paradigms compromise central nervous system (CNS) functionality. Despite our current knowledge, sex differences in the risks of space radiation exposure to cognitive function remain poorly understood, which is potentially problematic given that 30% of astronauts are women. While work from us and others has demonstrated pronounced cognitive decrements in male mice exposed to charged particle irradiation, here we show that female mice exhibit significant resistance to the adverse neurocognitive effects of space radiation. The present findings indicate that male mice exposed to low doses (≤30 cGy) of energetic (400 MeV/n) helium ions (⁴He) show significantly higher levels of neuroinflammation and more extensive cognitive deficits than females. Twelve weeks following ⁴He ion exposure, irradiated male mice demonstrated significant deficits in object and place recognition memory accompanied by activation of microglia, marked upregulation of hippocampal Toll-like receptor 4 (TLR4), and increased expression of the pro-inflammatory marker high mobility group box 1 protein (HMGB1). Additionally, we determined that exposure to ⁴He ions caused a significant decline in the number of dendritic branch points and total dendritic length of hippocampal neurons in female mice. Interestingly, only male mice showed a significant decline in dendritic spine density following irradiation. These data indicate that fundamental differences in inflammatory cascades between male and female mice may drive divergent CNS radiation responses that differentially impact the structural plasticity of neurons and neurocognitive outcomes following cosmic radiation exposure.
This cross-sectional study examines public beliefs about the coronavirus disease 2019 (COVID-19) pandemic in response to President Trump’s social media posts during and after his infection with the virus.
The COVID-19 pandemic and its related policies (e.g., stay-at-home and social-distancing orders) have increased people's use of digital technology, such as social media. Researchers have, in turn, utilized artificial intelligence (AI) to analyze social media data for public health surveillance. For example, through machine learning and natural language processing, they have monitored social media data to examine public knowledge and behavior. This paper explores the ethical considerations of using artificial intelligence to monitor social media to understand the public's perspectives and behaviors surrounding COVID-19, including potential risks and benefits of an AI-driven approach. Importantly, investigators and ethics committees have a role in ensuring that researchers adhere to the ethical principles of respect for persons, beneficence, and justice in a way that moves science forward while ensuring public safety and confidence in the process.
Components of artificial intelligence (AI) for analysing social big data, such as natural language processing (NLP) algorithms, have improved the timeliness and robustness of health data. NLP techniques have been implemented to analyse large volumes of text from social media platforms to gain insights on disease symptoms, understand barriers to care and predict disease outbreaks. However, AI-based decisions may contain biases that could misrepresent populations, skew results or lead to errors. Bias, within the scope of this paper, is defined as the difference between an algorithm's predicted values and the true values it models. Bias within algorithms may lead to inaccurate healthcare outcomes and exacerbate health disparities when results derived from these biased algorithms are applied to health interventions. Researchers who implement these algorithms must consider when and how bias may arise. This paper explores algorithmic biases arising from the data collection, labelling and modelling of NLP algorithms. Researchers have a role in ensuring that efforts towards combating bias are enforced, especially when drawing health conclusions from social media posts that are linguistically diverse. Through the implementation of open collaboration, auditing processes and the development of guidelines, researchers may be able to reduce bias in NLP algorithms and improve health surveillance.
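The bias definition above (the gap between predicted and true values, disaggregated by population) can be sketched in a few lines of Python. This is a minimal illustration, not the paper's method: the `group_bias` function, the toy sentiment labels, and the "std"/"dialect" group tags are all invented here to show how a per-group audit could surface systematic under-prediction for a linguistically distinct subgroup.

```python
def group_bias(y_true, y_pred, groups):
    """Mean (prediction - truth) per group; a nonzero gap flags systematic bias."""
    totals = {}
    for t, p, g in zip(y_true, y_pred, groups):
        s, n = totals.get(g, (0.0, 0))
        totals[g] = (s + (p - t), n + 1)
    return {g: s / n for g, (s, n) in totals.items()}

# Hypothetical binary sentiment labels from an NLP classifier, split by
# the posters' language variety (illustrative data, not a real dataset):
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 0]
groups = ["std", "std", "dialect", "std", "dialect", "dialect"]

print(group_bias(y_true, y_pred, groups))
# → {'std': 0.0, 'dialect': -0.6666666666666666}
```

The aggregate error here would look modest, but disaggregating by group shows the classifier never errs on "std" posts while systematically under-predicting "dialect" posts, which is exactly the kind of disparity the auditing processes discussed above are meant to catch.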