Objective. A major coronavirus disease 2019 (COVID-19) outbreak occurred in Northeastern France in spring 2020. This single-center retrospective observational cohort study aimed to compare patients with severe COVID-19 and those with non-severe COVID-19 (survivors vs. non-survivors, ICU patients vs. non-ICU patients) and to describe extrapulmonary complications. Patients and methods. We included all patients with a confirmed diagnosis of COVID-19 admitted to Colmar Hospital in March 2020. Results. We examined 600 patients (median age 71.09 years; median body mass index: 26.9 kg/m2); 57.7% were males, 86.3% had at least one comorbidity, 153 (25.5%) required ICU hospitalization, and 115 (19.1%) died. Baseline independent factors associated with death were older age (>75 vs. ≤75 years), male sex, oxygen supply, chronic neurological, renal, and pulmonary diseases, diabetes, cancer, low platelet counts and hemoglobin levels, and high levels of C-reactive protein (CRP) and serum creatinine. Factors associated with ICU hospitalization were age <75 years, oxygen supply, chronic pulmonary disease, absence of dementia, and high levels of CRP, hemoglobin, and serum creatinine. Among the 600 patients, 80 (13.3%) had an acute renal injury, 33 (5.5%) had a cardiovascular event, 27 (4.5%) had an acute liver injury, 24 (4%) had venous thromboembolism, eight (1.3%) had a neurological event, five (0.8%) had rhabdomyolysis, and one had acute pancreatitis. Most extrapulmonary complications occurred in ICU patients. Conclusion. This study highlighted the main risk factors for ICU hospitalization and death caused by severe COVID-19 and the frequency of numerous extrapulmonary complications in France.
Music is a tool used in daily life to mitigate negative emotions and enhance positive ones. Listeners may orientate their engagement with music around its ability to facilitate particular emotional responses and to subsequently regulate mood. Existing scales have aimed to gauge both individual coping orientations in response to stress and individual use of music for mood regulation. This study utilised pre-validated scales through an online survey (N = 233) to measure whether music's use in mood regulation is influenced by coping orientations and/or demographic variables in response to the lockdown measures imposed in the United Kingdom as a consequence of the COVID-19 pandemic. Whilst factor analyses showed that the existing theoretical structure of the COPE model was a poor fit for clustered coping orientations, a subsequent five-factor structure was determined for coping orientations in response to lockdown. Among listeners' coping strategies, positive reframing and active coping (Positive Outlook) were strong predictors of music use in mood regulation, as was Substance Use. Higher age had a negative effect on music's use in mood regulation, whilst factors such as gender were not significantly related to the use of music in mood regulation within this context. These results provide insight into how individuals have engaged with music-orientated coping strategies in response to a unique stressor.
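The predictor analysis described above can be sketched as an ordinary least-squares regression of mood-regulation scores on coping-factor scores. This is a minimal illustration only: the variable names, effect sizes, and synthetic data below are assumptions chosen to mirror the reported directions of effect, not the study's actual dataset or model.

```python
# Illustrative OLS sketch, NOT the study's analysis: synthetic factor
# scores with effects in the directions the abstract reports
# (positive for Positive Outlook and Substance Use, negative for age).
import numpy as np

rng = np.random.default_rng(0)
n = 233  # matches the survey's sample size

# Hypothetical standardized predictor scores
positive_outlook = rng.normal(size=n)
substance_use = rng.normal(size=n)
age = rng.normal(size=n)

# Hypothetical outcome: music use in mood regulation
music_mood_reg = (0.5 * positive_outlook + 0.3 * substance_use
                  - 0.2 * age + rng.normal(scale=0.5, size=n))

# Design matrix with an intercept column; solve least squares.
X = np.column_stack([np.ones(n), positive_outlook, substance_use, age])
beta, *_ = np.linalg.lstsq(X, music_mood_reg, rcond=None)
print(beta)  # [intercept, b_positive_outlook, b_substance_use, b_age]
```

On synthetic data like this, the fitted coefficients recover the assumed signs: positive for the two coping predictors and negative for age.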
An abundance of studies on emotional experiences in response to music has been published over the past decades; however, most have been carried out in controlled laboratory settings and rely on subjective reports. Facial expressions have occasionally been assessed, but measured using intrusive methods such as facial electromyography (fEMG). The present study investigated the emotional experiences of fifty participants in a live concert. Our aims were to explore whether automated face analysis could detect facial expressions of emotion in a group of people in an ecologically valid listening context, to determine whether emotions expressed by the music predicted specific facial expressions, and to examine whether facial expressions of emotion could be used to predict subjective ratings of pleasantness and activation. During the concert, participants were filmed and their facial expressions were subsequently analyzed with automated face analysis software. Self-reports of participants' subjective experience of pleasantness and activation were collected after the concert for all pieces (two happy, two sad). Our results show that the pieces that expressed sadness elicited more facial expressions of sadness (compared to happiness), whereas the pieces that expressed happiness elicited more facial expressions of happiness (compared to sadness). Differences for other facial expression categories (anger, fear, surprise, disgust, and neutral) were not found. Independent of the musical piece or the emotion expressed in the music, facial expressions of happiness predicted ratings of subjectively felt pleasantness, whilst facial expressions of sadness and disgust predicted low and high ratings of subjectively felt activation, respectively. Together, our results show that non-invasive measurements of audience facial expressions in a naturalistic concert setting are indicative of both the emotions expressed by the music and the subjective experiences of the audience members themselves.
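The piece-level comparison described above amounts to tallying per-frame expression labels from the face-analysis software and comparing their frequencies between pieces. The sketch below illustrates that aggregation step only; the frame labels are invented stand-ins, not output from the study's actual software or data.

```python
# Illustrative aggregation of per-frame facial-expression labels,
# using invented labels rather than real automated-face-analysis output.
from collections import Counter

def expression_proportions(frame_labels):
    """Return each expression category's share of the labeled frames."""
    counts = Counter(frame_labels)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

# Hypothetical frame-by-frame labels for one sad and one happy piece
sad_piece = ["sadness", "sadness", "neutral", "sadness", "happiness"]
happy_piece = ["happiness", "happiness", "neutral", "sadness", "happiness"]

p_sad = expression_proportions(sad_piece)
p_happy = expression_proportions(happy_piece)
print(p_sad["sadness"], p_happy["happiness"])  # 0.6 0.6
```

Comparing these proportions across pieces (e.g., more sadness expressions during sad pieces than happy ones) mirrors the between-piece contrast reported in the abstract.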
This report proposes a new approach to studying music-induced emotion, namely a method using facial expressions of emotion. The suggested procedure could avoid some of the flaws associated with other methods. Additionally, I suggest that this approach improves the ecological validity of the experimental setting in which music-induced emotions are studied, which subsequently yields results expected to be more reliable in representing a subjectively felt experience in response to music, rather than how emotions in music are perceived. Facial expressions of emotion can be defined as spontaneous and involuntary manifestations of an emotional experience and are displayed mainly subconsciously. Because of these features, facial displays can be used to infer an emotional experience in response to music. In this study, I evaluate four technologies that can be used for measuring facial expressions: optical motion capture, electromyography, manual coding, and automatic face analysis. I discuss the advantages and disadvantages of these methods in an experimental setting, and evaluate them in terms of ecological validity, reliability of results, flexibility of the set-up, and the time required to conduct research using each technology.